VideoHelp Forum
  1. hmm, audio? Maybe later, using the source filter "bas"?
    If joining videos, the preference would naturally be no crossfades, so joining seems to be no problem, even when mixing videos and images (quiet audio would be added for the images).

    I wanted to fade in or fade out audio for the obvious reason, like NLEs do (to avoid abrupt sound changes and clicking), but that turned out not to be possible, because audio in VapourSynth does not allow FrameEval() for audio samples; FrameEval() only works with vs.VideoNode. So other modules are an option. I quickly checked pydub, which might work, but did not go any further with it.
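For what it's worth, a fade is only per-sample gain arithmetic, which is why a module like pydub, or plain Python over decoded samples, can do it outside VapourSynth. A minimal sketch with a hypothetical fade_in helper over float samples (not tied to any particular module):

```python
def fade_in(samples, fade_len):
    # apply a linear 0..1 gain ramp over the first fade_len samples;
    # a fade-out is the same ramp reversed at the tail of the clip
    out = list(samples)
    for i in range(min(fade_len, len(out))):
        out[i] = out[i] * (i / fade_len)
    return out
```

A crossfade would then just be a fade-out of one clip's tail summed with a fade-in of the next clip's head.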

    So without crossfades: concatenating videos and images, and rotating video (looks like mp4 or mov only) and images if needed.
    The approach would be to have an object, called clip for example, that carries both video and audio and allows certain actions: trimming, slicing, adding, multiplying, printing, which would be carried out on both video and audio at once. A class Clip() could provide that, as a separate module called pairs.py:
    Code:
    class Clip:
        '''
         Creates class that carries both vs.VideoNode and vs.AudioNode.
         Using the same syntax as for vs.VideoNode on this class object for: trim/slice, add, multiply or print,
         the same operation is also done to the audio at the same time
         examples:
         from vapoursynth import core
         video = core.lsmas.LibavSMASHSource('video.mp4')
         audio = core.bas.Source('video.mp4')
         clip = Clip(video, audio)
         clip = clip[0] + clip[20:1001] + clip[1500:2000] + clip[2500:]
     #clip = clip.trim(first=0, length=1) + clip.trim(first=20, last=1000) + clip.trim(1500, length=500) + clip.trim(2500)
         clip = clip*2
         clip = clip + clip
         print(clip)
         clip.video.set_output(0)
         clip.audio.set_output(1)
         '''
    
        def __init__(self, video = None, audio=None, attribute_audio_path=None):
            self.video = video
            self.audio = audio
            if self.video is None:
                self.video = core.std.BlankClip()
            if self.audio is None:
                if attribute_audio_path is None:
                raise ValueError('argument attribute_audio_path is needed to get default audio for images (could be a really short video)')
                attr_audio = core.bas.Source(attribute_audio_path)
                length = int(attr_audio.sample_rate/self.video.fps*self.video.num_frames)
                self.audio = attr_audio.std.BlankAudio(length=length)
    
        def trim(self, first=0, last=None, length=None):
            afirst  = self.to_samples(first)    if first  is not None else None
            alast   = self.to_samples(last+1)-1 if last   is not None else None
            alength = self.to_samples(length)   if length is not None else None
            return Clip( self.video.std.Trim(first=first, last=last, length=length),
                         self.audio.std.AudioTrim(first=afirst,last=alast,length=alength)
                        )
        def to_samples(self, frame):
            return int((self.audio.sample_rate/self.video.fps)*frame)
    
        def __add__(self, other):
            return Clip(self.video + other.video, self.audio + other.audio)
    
        def __mul__(self, multiple):
            return Clip(self.video*multiple, self.audio*multiple)
    
        def __getitem__(self, val):
            if isinstance(val, slice):
                if val.step is not None:
                    raise ValueError('Using steps while slicing AudioNode together with VideoNode makes no sense')
                start = self.to_samples(val.start) if val.start is not None else None
                stop =  self.to_samples(val.stop)  if val.stop  is not None else None
                return Clip( self.video.__getitem__(val),
                             self.audio.__getitem__(slice(start,stop))
                             )
            elif isinstance(val, int):
                start = self.to_samples(val)
                stop = int(start + self.audio.sample_rate/self.video.fps)
                return Clip( self.video[val],
                             self.audio.__getitem__(slice(start,stop))
                             )        
        def __repr__(self):
            return '{}\n{}\n{}'.format('Clip():\n-------', repr(self.video), repr(self.audio))
    
        def __str__(self):
            return '{}\n{}\n{}'.format('Clip():\n-------', str(self.video), str(self.audio))
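The frame-to-sample conversion the class leans on can be checked in isolation; a standalone sketch, assuming 25 fps video and 48 kHz audio:

```python
def to_samples(frame, sample_rate=48000, fps=25):
    # same arithmetic as Clip.to_samples(): samples per frame times frame index
    return int(sample_rate / fps * frame)

# at 25 fps and 48 kHz one video frame spans exactly 1920 audio samples
assert to_samples(1) == 1920
assert to_samples(100) == 192000
```

With rates that do not divide evenly (e.g. 29.97 fps) the int() truncation rounds down slightly, which is one reason to recompute from the frame index each time rather than accumulate per-frame sample counts.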
    So the get_clip function might look like this:
    Code:
    import pairs
    ATTRIBUTE_AUDIO_PATH = r'D:\test2\some_very_short_video.m2ts'
    
    def get_clip(path):
        #get video
        data = LOADER.get_data([str(path),])[0] 
        video = data.clip
        if data.load_isError:
            raise ValueError(f'{data.load_log}\nload.py could not create vs.VideoNode from {path}')
        video = rotation_check(video, path, save_rotated_image=SAVE_ROTATED_IMAGES)
        video = video.std.AssumeFPS(fpsnum=FPSNUM, fpsden=FPSDEN)
        if BOX:  video = boxing(video, WIDTH, HEIGHT)
        else:    video = resize_clip(video, WIDTH, HEIGHT)
        video = video[0]*LENGTH if len(video)<5 else video
        
        #get_audio
        try:
            audio = core.bas.Source(str(path))
        except AttributeError:
            raise ImportError('Vapoursynth audio source plugin "BestAudioSource.dll" could not be loaded\n'
                              'download: https://github.com/vapoursynth/bestaudiosource/releases/tag/R1')
        except vs.Error as e:
            print(f'BestAudioSource could not load audio for: {path}\n{e}')
            clip = pairs.Clip(video, attribute_audio_path=ATTRIBUTE_AUDIO_PATH) #will generate quiet clip with desired parameters
        except Exception as e:
            print(f'Error loading audio for: {path}\n{e}')
            clip = pairs.Clip(video, attribute_audio_path=ATTRIBUTE_AUDIO_PATH)
        else:
            clip = pairs.Clip(video, audio)    
        return clip
    I used my load.py, which loads any filepath and uses the proper source plugin; ffms2 indexes to a temp folder, out of the way (indexes could be deleted later). It also remembers indexes, so it does not index the same file again. I uploaded it here; you just need load.py and viewfunc.py, both can be downloaded. If you just run: import load; load.settings() it will launch a GUI where you can change preferences, and it will always remember those settings. But you'd need to have tkinter installed, which comes with python, but not with portable python. Without tkinter the settings UI will not work; load.py itself will still work, but if you need to change plugin source preferences, DEFAULT_PLUGIN_MAP within load.py would need to be changed manually.

    Then the main loop code to work with clips; you had it a bit different, but this is just how it works with that class Clip():
    Code:
    import load
    
    LOADER = load.Sources()
    paths = Path(DIRECTORY).glob("*.*")
    path = get_path(paths)
    if path is None:
        raise ValueError(f'Extensions: {EXTENSIONS}, not found in {DIRECTORY}')
    clips = get_clip(path)    #loads obj class Clip() ! not vs.VideoNode
    print('wait ...')
    while 1:
        path = get_path(paths)
        if path is None:
            break
        clip = get_clip(path)   #again , it always loads obj class Clip(), not vs.VideoNode
        clips += clip  #class Clip() can add, where internally it adds video and audio
    
    clips.video.set_output(0)
    clips.audio.set_output(1)
    The code also needs to define an attribute video filepath that VapourSynth gets the audio parameters from for the quiet audio (when an image is loaded). For some reason I cannot change the samplerate from integer to float and cannot set a default BlankAudio with a float.
    Code:
    ATTRIBUTE_AUDIO_PATH = r'D:\test2\some_very_short_video.m2ts'
    I hope I did not forget anything, except maybe this: I had all scripts in portable directory/python_modules and used this script to encode the output video, where it mixed images, mp4 and m2ts videos and rotated images; this was encode.py:
    Code:
    import vapoursynth as vs
    from vapoursynth import core
    from subprocess import Popen, PIPE
    from pathlib import Path
    import sys
    import os
    
    def main(script, output_path):
    
        PATH        = Path(os.getcwd()) #Path(sys._MEIPASS) #if frozen
        script      = str(PATH / 'python_modules' / f'{script}')
        VSPipe      = str(PATH / 'VSPipe.exe')
        x264        = str(PATH / 'tools' / 'x264.exe')
        neroAacEnc  = str(PATH / 'tools' / 'neroAacEnc.exe')
        Mp4box      = str(PATH / 'tools' / 'MP4Box.exe')
        ##import loadDLL #if frozen
        ##VS_PLUGINS_DIR = PATH / 'vapoursynth64/plugins'
        ##isDLLS_LOADED, DLLS_LOG = loadDLL.vapoursynth_dlls(core, VS_PLUGINS_DIR)
        path        = Path(output_path)
        temp_video = str(path.parent / f'{path.stem}.264')
        temp_audio = str(path.parent / f'{path.stem}.m4a')
    
        vspipe_video = [VSPipe, '--container',  'y4m',
                                script,
                                '-']
        
        x264_cmd = [x264,       #'--frames',           f'{len(video)}',
                                '--demuxer',           'y4m',  
                                '--crf',               '18',
                                '--vbv-maxrate',       '30000',
                                '--vbv-bufsize',       '30000',
                                '--keyint',            '60',
                                '--tune',              'film',
                                '--colorprim',         'bt709',
                                '--transfer',          'bt709',
                                '--colormatrix',       'bt709',
                                '--output',             temp_video,
                                '-']
    
        vspipe_audio = [VSPipe, '--outputindex', '1',
                                '--container',  'wav',
                                script,
                                '-']
    
        aac_cmd = [neroAacEnc,  '-ignorelength',
                                '-lc',
                                '-cbr', '96000',
                                '-if', '-',
                                '-of', temp_audio]
    
    
        mp4box_cmd = [Mp4box,   '-add' , f'{temp_video}',
                                '-add',  f'{temp_audio}#audio',
                                '-new',  output_path]
    
        p1 = Popen(vspipe_video, stdout=PIPE, stderr=PIPE)
        p2 = Popen(x264_cmd, stdin=p1.stdout, stdout=PIPE, stderr=PIPE)
        p1.stdout.close()
        p2.communicate()
    
        p1 = Popen(vspipe_audio, stdout=PIPE, stderr=PIPE)
        p2 = Popen(aac_cmd, stdin=p1.stdout, stdout=PIPE, stderr=PIPE)
        p1.stdout.close()
        p2.communicate()
    
        p = Popen(mp4box_cmd)
        p.communicate()
    
    if __name__=='__main__':
        #scripts and outputs have full paths
        #python "encode.py" "media_to_show.py" "output.mp4"
        if len(sys.argv) > 2: 
            main(sys.argv[1], sys.argv[2])
    Last edited by _Al_; 21st Jan 2023 at 12:06.
  2. Member hydra3333's Avatar
    Join Date
    Oct 2009
    Location
    Australia
    Search Comp PM
    Beaut !
    Some more code to look into, thanks, and now I'm happy playing with code like back in my nerdy youth.
    I may take some time out to also learn a bit of python, having been flying on first principles and shaggy memories of fortran77.
  3. I remember fortran too; actually I don't remember it much, just those times: first writing the code, then making the punch cards, and only then running the code using those cards.
  4. Edit: There have been quite a few posts I didn't see before I submitted this one, so it mightn't be too relevant now, but anyway....

    hydra3333,
    I had another thought.... MJPEG.
    Instead of converting to bitmaps and having to convert those to video, why not just convert to jpeg and stick them in a container?
    I guess it'll depend on a player's mjpeg support. My 11yo Samsung TV supports it, but only at a low resolution. If you want to test though, there's samples attached....

    I put them into an MKV using AnotherGUI. I added the command line myself, so it mightn't be especially clever, but it works. I saved the command line as a bat file, for reference.

    Code:
    "C:\Program Files\AnotherGUI\ffmpeg.exe" -start_number 1 -report -framerate 0.5 -y -i "D:\%%d.jpg" -threads 1 -vcodec copy "D:\Test JPEG 444.mkv"
    PAUSE
    After messing around to get it right for 1080p, I think I finally understand how IrfanView's text placement works. The MKVs below all contain jpegs saved at 80% quality.
    IrfanView does have an option (plugin) to save a slideshow as an MP4, but it doesn't work for me so I know nothing about it. That's probably an XP thing.

    Image
    [Attachment 68774]


    I guess if you're wanting to add clips you're going to have to re-encode the pictures, and you'll probably need to duplicate them so they'll display for "X" seconds and match the video frame rate too. You could open an MKV like the ones below and do that in a script while appending video etc, but maybe there's a better way, or it's not going to be the easiest method if there's also video involved. It was just another idea that might come in handy, or not....
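In a VapourSynth script the duplication described above is just multiplication of a one-frame clip (clip[0] * n); the only arithmetic is the frame count. A plain-Python sketch (frames_for is a made-up helper):

```python
def frames_for(seconds, fps):
    # number of duplicated frames needed for a still to stay on screen
    return round(seconds * fps)

# a 3-second hold at 25 fps needs 75 copies of the image frame
assert frames_for(3, 25) == 75
```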
    Image Attached Files
    Last edited by hello_hello; 21st Jan 2023 at 01:13.
  5. Member hydra3333's Avatar
    Originally Posted by hello_hello View Post
    I had another thought.... MJPEG.
    Instead of converting to bitmaps and having to convert those to video, why not just convert to jpeg and stick them in a container?
    I guess it'll depend on a player's mjpeg support. My 11yo Samsung TV supports it, but only at a low resolution. If you want to test though, there's samples attached....
    Hello, hello_hello, thank you for the proposal and the very handy information, what a great approach.
    It certainly has a place in dealing with the issue, and may turn out to be the "right" solution in many circumstances.
    Nice sample clips too, I smiled watching them.
    I'll post some stuff too, once I can find items absent family faces, since privacy is apparently a thing nowadays.

    For now, and also with a view to keeping the few remaining grey cells in use, I will first fiddle with _Al_'s gear.

    Looks like I'll have to bite the bullet and run the python installer and move away from direct vpy-into-ffmpeg as inability to pass parameters and run it standalone can be a tad disconcerting. Yes I could create and use a parameter file however "it's not as easy".
    Be interesting to see what if any the vspipe overheads may be vs vpy-into-ffmpeg.
  6. Member hydra3333's Avatar
    edit: oops
  7. if you use that load.py,
    I actually uploaded it with an active custom imwri_Read function for reading images, and that is a beta of sorts, only good when loading all images into a sequence, parts of them, etc. If that plugin is used, it loads images matching the same filename pattern as one clip, which is not desirable when loading images from the same camera in a directory; it would have a tendency to load them all into one clip.

    Use regular core.imwri.Read(), which I did not add there. So either download that load.py again (I replaced it, fixed it and set it as the default),
    or
    launch the load settings UI (double click load.py), fill the "Add plugin" box with: imwri.Read, then press the "Add" button. Then add the extensions and kwargs for that plugin (copy those from the custom imwri_Read plugin). Then delete those extensions for the previous imwri_Read. Save it.
    Last edited by _Al_; 22nd Jan 2023 at 23:44.
  8. Member hydra3333's Avatar
    Thank you. Have been away for some days. Surgery early next week. My progress may be a tad slow
  9. Member hydra3333's Avatar
    Originally Posted by hello_hello View Post
    I haven't read the thread from top to bottom, but some of the things you're wanting to achieve can be done with IrfanView's batch mode
    Thank you, yes it can.
    I just tried it and it works fine for images, padding even works with a black canvas set to the target size (eg 1920x1080).
    Made a .BAT script which parsed folders one at a time and converted images to a destination tree on another drive, correctly scaled to 1920x1088 whilst maintaining aspect ratio regardless of the original image dimensions. Very happy with it.

    The same exercise failed miserably with native ffmpeg "-f concat" using the "scale" and "pad" filters ... as soon as it saw an odd dimension, the image was stretched either horizontally or vertically no matter what options I tried ... and I tried a lot.
    It was also less than forgiving, delivering bt470bg colorspace etc. and an inconsistent "range" (TV or PC), perhaps depending on the first image it encountered.
    There's plenty of advice on the net with basically the same options, however the authors must not have tried a range of old, new and odd-dimensioned images.
    So far I haven't spent enough time to find out how to subtitle each image with aspects of its path and name.

    Ended up giving up on ffmpeg -f concat et al directly, and moved to trying your suggestion of IrfanView to resize all the images correctly and THEN using ffmpeg -f concat on those, with an input file of filenames.
    Still issues with inconsistent colorspace etc. and inconsistent "range" (TV or PC), but better than it was. Also still haven't looked into subtitling each image with aspects of its path and name.
    And there's no possibility of including the first few seconds of any video clips (of arbitrary sizes, e.g. old/new/portrait/landscape etc) in the mix.
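For reference, the ffmpeg idiom usually recommended for fitting arbitrary (including odd) dimensions without stretching chains scale with force_original_aspect_ratio and pad. This is a sketch only, not the exact options tried above, and force_divisible_by needs a reasonably recent ffmpeg:

```python
def fit_filter(w, h):
    # scale to fit inside w x h while keeping aspect ratio, keep dimensions
    # even for yuv420, then letterbox/pillarbox the remainder with pad
    return (f"scale={w}:{h}:force_original_aspect_ratio=decrease:force_divisible_by=2,"
            f"pad={w}:{h}:(ow-iw)/2:(oh-ih)/2")

# would be passed to ffmpeg as: -vf "<this string>"
```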

    Next stop is to look at _Al_'s stuff in https://forum.videohelp.com/threads/408230-ffmpeg-avc-from-jpgs-of-arbitrary-dimension...e2#post2678789 and https://github.com/UniversalAl/load since I'm still alive

    P.S. don't look here: https://github.com/hydra3333/vpy_slideshow/tree/main/Rabbit_Hole_number_01 unless you want to be horrified at a kludged .bat script
    Last edited by hydra3333; 3rd Mar 2023 at 17:50. Reason: add link
  10. Member hydra3333's Avatar
    for posterity and reproducibility, uploaded

    - the .bat which uses "-f concat" only (no irfanview to pre-adjust images) (it has hard-coded paths to .exes)
    - a few images in a folder tree

    which demonstrates the scale issue.

    if someone could fix it somehow, that'd be great.

    edit: or https://github.com/hydra3333/vpy_slideshow/tree/main/Rabbit_Hole_number_00
    Image Attached Files
    Last edited by hydra3333; 4th Mar 2023 at 03:04.
  11. Member hydra3333's Avatar
    Originally Posted by _Al_ View Post
    I hope I did not forgot anything
    Looks cool.

    There's a reference to
    Code:
     view2.py
    ... is that the one at https://github.com/pbthong/view2 ?

    I notice you had a view.py repository, but the reference in viewfunc said view2.py
    helper functions for vapoursynth view2.py previewer but could be used outside of this package
    constructed to work with API3 or API4
  12. There should not be any reference to view2.py, sorry, I will look into it; I might have forgotten to take that out of the top of the script.
    view2.py is a new package to preview and compare videos, which I have not posted on github yet.

    I also might fix it to be more user friendly: not checking after each clip load whether there was an error, but just throwing the error right away, or something.
    Last edited by _Al_; 4th Mar 2023 at 11:33.
  13. Sorry, download that load.py again (viewfunc.py perhaps also), now it should be ok.
    To be sure, delete the directory where it stores defaults: C:\Users\*** your user name ***\AppData\Roaming\Load

    Double click load.py and it will launch the settings GUI, if you have tkinter installed; set what you need. If you do not have tkinter installed, pass the d2vwitch or ffmsindex directory while instantiating. If those executables are in the windows PATH or the current directory, they will be found without passing their directory.

    Code:
    import load
    LOADER = load.Sources(d2vwitch_dir='F:\\tools', ffmsindex_dir='F:\\tools') #pass indexing executables if not sorting this in GUI
    .
    .
    
    def get_clip(path):
        data = LOADER.get_data([str(path),])[0] 
        video = data.clip
        if data.load_isError:
            raise ValueError(f'{data.load_log}\n{data.load_log_error}\nno vs.VideoNode created from {path}')
        .
        .
        .
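The executable lookup described above (explicit directory first, otherwise PATH or the current directory) can be sketched with stdlib tools; find_tool is a hypothetical helper, not part of load.py:

```python
import os
import shutil

def find_tool(name, extra_dir=None):
    # prefer an explicitly passed directory, then fall back to a PATH lookup
    # (on Windows, shutil.which also searches the current directory)
    if extra_dir:
        candidate = os.path.join(extra_dir, name)
        if os.path.isfile(candidate):
            return candidate
    return shutil.which(name)
```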
    That load.py was set up for GUI use though: which extensions should be used with which source plugin; adding source plugins (and again adding the extensions they should be used with); selecting whether to always reuse an already made index file for the same path, or temporarily turning that off (to always make a new index); selecting the indexing directory; etc.
    Last edited by _Al_; 4th Mar 2023 at 23:45.
  14. Member hydra3333's Avatar
  14. Took me a while to figure out that mediainfo (for videos) returns rotation in degrees clockwise ... while PIL (for images) returns a code interpreted as anti-clockwise degrees

    ... common rotation code that is unaware of that doesn't go very well
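One way to reconcile the two conventions is to normalize every reading to clockwise degrees before picking the transform; a sketch (hypothetical helper, not from the actual script):

```python
def normalize_rotation(degrees, direction='clockwise'):
    # fold any clockwise/anti-clockwise reading into clockwise degrees [0, 360)
    if direction == 'counterclockwise':
        degrees = -degrees
    return degrees % 360

# a mediainfo-style 90 CW and a PIL-style 270 CCW describe the same turn
assert normalize_rotation(90) == 90
assert normalize_rotation(270, 'counterclockwise') == 90
```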
  15. that was a hiccup too: turn left, which is to rotate counterclockwise 90 degrees:
    Image
    [Attachment 69781]

    PIL: image = image.transpose(Image.ROTATE_270) #turn left
    Vapoursynth: clip = clip.std.Transpose().std.FlipHorizontal() #turn left
    PIL's logic is perhaps to always rotate clockwise, so you need to land on the value 270 to turn left by 90 degrees. Avisynth uses the simple logic of turning left or right, perhaps thinking that rotating by arbitrary degrees is rarely used; there is some rotate function if I remember correctly, not sure what logic is used there.

    edit: oh I see, there is also PIL's rotate function, which has the logic of rotating counterclockwise for positive values, so they have two different logics in their code
    https://pythonexamples.org/python-pillow-rotate-image-90-180-270-degrees/
    image = image.rotate(90, expand=True) #turn left
    Last edited by _Al_; 14th Mar 2023 at 10:50.
  16. Member hydra3333's Avatar
    Still fiddling with it.

    Sometimes ffms2 doesn't report video specs, e.g. matrix, transfer etc ... I have some code to get those via mediainfo, however it seems that if ffms2 doesn't find them then neither does mediainfo, which is sort of expected. So I have some more rough code to make guesses about those specs. I got the error about there being no way to go from one colorspace to another during resizing, which is why I have to make the guesses.

    As expected, direct input to ffmpeg with "-f vapoursynth" starts off well enough, then continually drops to circa 1.9-2 fps.
    The same script with vspipe->NUL is 271 fps.
    The same script with vspipe->ffmpeg is 7.9 fps.

    Have some examples of images and videos with rotations, will post them later after culling some for privacy etc.

    PS I like your transpose diagrams, mine were scribbled on the back of an envelope
    Last edited by hydra3333; 15th Mar 2023 at 23:38. Reason: like
  17. To get the matrix I use functions from viewfunc.py (same page as load.py on github). It tries to get the matrix from VapourSynth props (if ffms2, or another source plugin, loaded it as you mentioned), or defaults by size as a last resort. Also, mostly there is a value but it is not usable, like 2 (unspecified), so that is taken care of as well. There is a function to get the color range as well, if it is needed for resize; if sources are unknown and video could be full range, that is perhaps a good idea as well. There is a catch: the VapourSynth "_ColorRange" frame prop and the zimg resize color range values are swapped (_ColorRange=1 is range_in=0 in zimg resize and vice versa), and that is taken care of too.
    Code:
    from viewfunc import get_matrix, get_zimg_range
    matrix, matrix_s = get_matrix(clip)
    range, range_s   = get_zimg_range(clip)
    example of resizing RGB to YUV:
    https://github.com/UniversalAl/load/blob/main/viewfunc.py#L581
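    That swap fits in one tiny helper (a sketch; the helper name is made up, the value mapping follows the VapourSynth docs: _ColorRange 0=full/1=limited, while resize range_in uses 0=limited/1=full):

```python
def props_to_zimg_range(color_range):
    # translate a _ColorRange frame prop into a zimg/resize range_in value
    if color_range is None:
        return None        # unknown; let the resizer default
    return 1 - color_range

assert props_to_zimg_range(1) == 0  # limited stays limited
assert props_to_zimg_range(0) == 1  # full stays full
```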
    These scripts are in constant flux; I slowly update and integrate load.py and viewfunc.py, but hopefully the versions I posted are ok.

    I do not use ffmpeg with -f vapoursynth, so I do not know what is going on there
    Last edited by _Al_; 16th Mar 2023 at 11:44.
  18. Member hydra3333's Avatar
    Yes, thank you, I did see those functions ... and sort of extended the guessing method slightly, which didn't really help as it turns out

    ffmpeg-direct-vpy-input is unusable due to low fps throughput - there's a thread around that here: https://forum.videohelp.com/threads/397728-ffmpeg-accepting-vapoursynth-vpy-input-dire...elerated-speed

    Oh dear. Looks like I'm having an issue with vspipe.

    Using identical input and an identical commandline (other than "-i" related stuff): (a) via vspipe to ffmpeg and (b) vpy directly into ffmpeg ...

    It appears that (a) with vspipe crashes and burns, whilst (b) direct-vpy-input works ...
    The ffmpeg error (refer below) suggests at least some of the colourspace stuff is not getting passed along to ffmpeg and/or ffmpeg may be ignoring it.

    For clarity, here's the code grabbing the specs of the final clip output by the script:
    Code:
    	clip_specs = {}
    	clip_specs["width"] = clip.width
    	clip_specs["height"] = clip.height
    	clip_specs["num_frames"] = clip.num_frames
    	clip_specs["fps"] = clip.fps
    	clip_specs["format_name"] = clip.format.name
    	clip_specs["color_family"] = clip.format.color_family.value		# .value
    	clip_specs["sample_type"] = clip.format.sample_type.value		# .value	If the format is integer or floating point based.
    	clip_specs["bits_per_sample"] = clip.format.bits_per_sample
    	clip_specs["bytes_per_sample"] = clip.format.bytes_per_sample
    	clip_specs["num_planes"] = clip.format.num_planes
    	clip_specs["subsampling_w"] = clip.format.subsampling_w
    	clip_specs["subsampling_h"] = clip.format.subsampling_h
    	with clip.get_frame(0) as f:
    		# frame properties may be absent; props.get() returns None in that case
    		for key in ("_Matrix", "_Transfer", "_Primaries", "_ColorRange",
    		            "_ChromaLocation", "_DurationDen", "_DurationNum",
    		            "_FieldBased", "_PictType"):
    			clip_specs[key] = f.props.get(key)
    and here's debug output from running it with vspipe alone showing the output clip has the "right" properties; note
    Code:
    '_Matrix': 1, '_Transfer': 1, '_Primaries': 1, '_ColorRange': 0
    .
    Code:
    DEBUG: OUTPUT VIDEO: clip and frame properties: w=1920 h=1080 specs=
    {'width': 1920,
     'height': 1080,
     'num_frames': 8986,
     'fps': Fraction(25, 1),
     'format_name': 'YUV420P8',
     'color_family': 3,
     'sample_type': 0,
     'bits_per_sample': 8,
     'bytes_per_sample': 1,
     'num_planes': 3,
     'subsampling_w': 1,
     'subsampling_h': 1,
     '_Matrix': 1,
     '_Transfer': 1,
     '_Primaries': 1,
     '_ColorRange': 0,
     '_ChromaLocation': 0,
     '_DurationDen': 25,
     '_DurationNum': 1,
     '_FieldBased': None,
     '_PictType': None,
     'auto_rotation_degrees': 0,
     'auto_rotation_direction': 'clockwise',
     'guessed_Matrix': False,
     'proposed_Matrix': 1,
     'guessed_Primaries': False,
     'proposed_Primaries': 1,
     'guessed_Transfer': False,
     'proposed_Transfer': 1,
     'guessed_ColorRange': False,
     'proposed_ColorRange': 0,
     'possible_colour_source': 'PAL'}
    Script evaluation done in 35.07 seconds

    Here's the commandline and log which fails (vspipe into ffmpeg) with the identical script:
    Code:
    "C:\SOFTWARE\Vapoursynth-x64\VSPipe.exe" --progress --filter-time --container y4m "G:\DVD\PAT-SLIDESHOWS\_AI_05_in_development\_AI_08.vpy" -  | "C:\SOFTWARE\Vapoursynth-x64\ffmpeg_OpenCL.exe" -hide_banner -v debug -stats -f yuv4mpegpipe -i pipe: -probesize 200M -analyzeduration 200M -sws_flags lanczos+accurate_rnd+full_chroma_int+full_chroma_inp -filter_complex "colorspace=all=bt709:space=bt709:trc=bt709:primaries=bt709:range=pc:format=yuv420p:fast=0,format=yuv420p,setdar=16/9" -colorspace bt709 -color_primaries bt709 -color_trc bt709 -color_range pc -strict experimental -c:v h264_nvenc -preset p7 -multipass fullres -forced-idr 1 -g 25 -coder:v cabac -spatial-aq 1 -temporal-aq 1 -dpb_size 0 -bf:v 3 -b_ref_mode:v 0 -rc:v vbr -b:v 9000000 -minrate:v 3000000 -maxrate:v 15000000 -bufsize 15000000 -profile:v high -level 5.2 -movflags +faststart+write_colr -an -y "G:\DVD\PAT-SLIDESHOWS\_AI_05_in_development\_AI_08.mp4"  
    Splitting the commandline.
    Reading option '-hide_banner' ... matched as option 'hide_banner' (do not show program banner) with argument '1'.
    Reading option '-v' ... matched as option 'v' (set logging level) with argument 'debug'.
    Reading option '-stats' ... matched as option 'stats' (print progress report during encoding) with argument '1'.
    Reading option '-f' ... matched as option 'f' (force format) with argument 'yuv4mpegpipe'.
    Reading option '-i' ... matched as input url with argument 'pipe:'.
    Reading option '-probesize' ... matched as AVOption 'probesize' with argument '200M'.
    Reading option '-analyzeduration' ... matched as AVOption 'analyzeduration' with argument '200M'.
    Reading option '-sws_flags' ... matched as AVOption 'sws_flags' with argument 'lanczos+accurate_rnd+full_chroma_int+full_chroma_inp'.
    Reading option '-filter_complex' ... matched as option 'filter_complex' (create a complex filtergraph) with argument 'colorspace=all=bt709:space=bt709:trc=bt709:primaries=bt709:range=pc:format=yuv420p:fast=0,format=yuv420p,setdar=16/9'.
    Reading option '-colorspace' ... matched as AVOption 'colorspace' with argument 'bt709'.
    Reading option '-color_primaries' ... matched as AVOption 'color_primaries' with argument 'bt709'.
    Reading option '-color_trc' ... matched as AVOption 'color_trc' with argument 'bt709'.
    Reading option '-color_range' ... matched as AVOption 'color_range' with argument 'pc'.
    Reading option '-strict' ...Routing option strict to both codec and muxer layer
     matched as AVOption 'strict' with argument 'experimental'.
    Reading option '-c:v' ... matched as option 'c' (codec name) with argument 'h264_nvenc'.
    Reading option '-preset' ... matched as AVOption 'preset' with argument 'p7'.
    Reading option '-multipass' ... matched as AVOption 'multipass' with argument 'fullres'.
    Reading option '-forced-idr' ... matched as AVOption 'forced-idr' with argument '1'.
    Reading option '-g' ... matched as AVOption 'g' with argument '25'.
    Reading option '-coder:v' ... matched as AVOption 'coder:v' with argument 'cabac'.
    Reading option '-spatial-aq' ... matched as AVOption 'spatial-aq' with argument '1'.
    Reading option '-temporal-aq' ... matched as AVOption 'temporal-aq' with argument '1'.
    Reading option '-dpb_size' ... matched as AVOption 'dpb_size' with argument '0'.
    Reading option '-bf:v' ... matched as AVOption 'bf:v' with argument '3'.
    Reading option '-b_ref_mode:v' ... matched as AVOption 'b_ref_mode:v' with argument '0'.
    Reading option '-rc:v' ... matched as AVOption 'rc:v' with argument 'vbr'.
    Reading option '-b:v' ... matched as option 'b' (video bitrate (please use -b:v)) with argument '9000000'.
    Reading option '-minrate:v' ... matched as AVOption 'minrate:v' with argument '3000000'.
    Reading option '-maxrate:v' ... matched as AVOption 'maxrate:v' with argument '15000000'.
    Reading option '-bufsize' ... matched as AVOption 'bufsize' with argument '15000000'.
    Reading option '-profile:v' ... matched as option 'profile' (set profile) with argument 'high'.
    Reading option '-level' ... matched as AVOption 'level' with argument '5.2'.
    Reading option '-movflags' ... matched as AVOption 'movflags' with argument '+faststart+write_colr'.
    Reading option '-an' ... matched as option 'an' (disable audio) with argument '1'.
    Reading option '-y' ... matched as option 'y' (overwrite output files) with argument '1'.
    Reading option 'G:\DVD\PAT-SLIDESHOWS\_AI_05_in_development\_AI_08.mp4' ... matched as output url.
    Finished splitting the commandline.
    Parsing a group of options: global .
    Applying option hide_banner (do not show program banner) with argument 1.
    Applying option v (set logging level) with argument debug.
    Applying option stats (print progress report during encoding) with argument 1.
    Applying option filter_complex (create a complex filtergraph) with argument colorspace=all=bt709:space=bt709:trc=bt709:primaries=bt709:range=pc:format=yuv420p:fast=0,format=yuv420p,setdar=16/9.
    Applying option y (overwrite output files) with argument 1.
    Successfully parsed a group of options.
    Parsing a group of options: input url pipe:.
    Applying option f (force format) with argument yuv4mpegpipe.
    Successfully parsed a group of options.
    Opening an input file: pipe:.
    [yuv4mpegpipe @ 000001c16e110000] Opening 'pipe:' for reading
    [pipe @ 000001c16e110380] Setting default whitelist 'crypto,data'
    [yuv4mpegpipe @ 000001c16e110000] Before avformat_find_stream_info() pos: 54 bytes read:4096 seeks:0 nb_streams:1
    [yuv4mpegpipe @ 000001c16e110000] All info found
    [yuv4mpegpipe @ 000001c16e110000] After avformat_find_stream_info() pos: 3110460 bytes read:3112960 seeks:0 frames:1
    Input #0, yuv4mpegpipe, from 'pipe:':
      Duration: N/A, start: 0.000000, bitrate: N/A
      Stream #0:0, 1, 1/25: Video: rawvideo, 1 reference frame (I420 / 0x30323449), yuv420p(progressive, center), 1920x1080, 0/1, 25 fps, 25 tbr, 25 tbn
    Successfully opened the file.
    [AVFilterGraph @ 000001c16e121b80] Setting 'all' to value 'bt709'
    [AVFilterGraph @ 000001c16e121b80] Setting 'space' to value 'bt709'
    [AVFilterGraph @ 000001c16e121b80] Setting 'trc' to value 'bt709'
    [AVFilterGraph @ 000001c16e121b80] Setting 'primaries' to value 'bt709'
    [AVFilterGraph @ 000001c16e121b80] Setting 'range' to value 'pc'
    [AVFilterGraph @ 000001c16e121b80] Setting 'format' to value 'yuv420p'
    [AVFilterGraph @ 000001c16e121b80] Setting 'fast' to value '0'
    [AVFilterGraph @ 000001c16e121b80] Setting 'pix_fmts' to value 'yuv420p'
    [AVFilterGraph @ 000001c16e121b80] Setting 'dar' to value '16/9'
    Parsing a group of options: output url G:\DVD\PAT-SLIDESHOWS\_AI_05_in_development\_AI_08.mp4.
    Applying option c:v (codec name) with argument h264_nvenc.
    Applying option b:v (video bitrate (please use -b:v)) with argument 9000000.
    Applying option profile:v (set profile) with argument high.
    Applying option an (disable audio) with argument 1.
    Successfully parsed a group of options.
    Opening an output file: G:\DVD\PAT-SLIDESHOWS\_AI_05_in_development\_AI_08.mp4.
    [file @ 000001c16e121fc0] Setting default whitelist 'file,crypto,data'
    Successfully opened the file.
    Stream mapping:
      Stream #0:0 (rawvideo) -> colorspace:default
      setdar:default -> Stream #0:0 (h264_nvenc)
    [vost#0:0/h264_nvenc @ 000001c16e125240] cur_dts is invalid [init:0 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
    [rawvideo @ 000001c16e1213c0] PACKET SIZE: 3110400, STRIDE: 2880
    [AVFilterGraph @ 000001c16e121b80] Setting 'all' to value 'bt709'
    [AVFilterGraph @ 000001c16e121b80] Setting 'space' to value 'bt709'
    [AVFilterGraph @ 000001c16e121b80] Setting 'trc' to value 'bt709'
    [AVFilterGraph @ 000001c16e121b80] Setting 'primaries' to value 'bt709'
    [AVFilterGraph @ 000001c16e121b80] Setting 'range' to value 'pc'
    [AVFilterGraph @ 000001c16e121b80] Setting 'format' to value 'yuv420p'
    [AVFilterGraph @ 000001c16e121b80] Setting 'fast' to value '0'
    [AVFilterGraph @ 000001c16e121b80] Setting 'pix_fmts' to value 'yuv420p'
    [AVFilterGraph @ 000001c16e121b80] Setting 'dar' to value '16/9'
    detected 24 logical cores
    [graph 0 input from stream 0:0 @ 000001c16e192600] Setting 'video_size' to value '1920x1080'
    [graph 0 input from stream 0:0 @ 000001c16e192600] Setting 'pix_fmt' to value '0'
    [graph 0 input from stream 0:0 @ 000001c16e192600] Setting 'time_base' to value '1/25'
    [graph 0 input from stream 0:0 @ 000001c16e192600] Setting 'pixel_aspect' to value '0/1'
    [graph 0 input from stream 0:0 @ 000001c16e192600] Setting 'frame_rate' to value '25/1'
    [graph 0 input from stream 0:0 @ 000001c16e192600] w:1920 h:1080 pixfmt:yuv420p tb:1/25 fr:25/1 sar:0/1
    [format @ 000001c16e192900] Setting 'pix_fmts' to value 'yuv420p|nv12|p010le|yuv444p|p016le|yuv444p16le|bgr0|bgra|rgb0|rgba|x2rgb10le|x2bgr10le|gbrp|gbrp16le|cuda|d3d11'
    [AVFilterGraph @ 000001c16e121b80] query_formats: 6 queried, 5 merged, 0 already done, 0 delayed
    [Parsed_setdar_2 @ 000001c16e18f2c0] w:1920 h:1080 dar:16/9 sar:0/1 -> dar:16/9 sar:1/1
    [Parsed_colorspace_0 @ 000001c16e185400] Unsupported input primaries 2 (unknown)
    Error while filtering: Invalid argument
    Failed to inject frame into filter network: Invalid argument
    Error while processing the decoded data for stream #0:0
    [AVIOContext @ 000001c16e140640] Statistics: 0 bytes written, 0 seeks, 0 writeouts
    [in#0/yuv4mpegpipe @ 000001c16c388300] Terminating demuxer thread
    [AVIOContext @ 000001c16e120580] Statistics: 9334784 bytes read, 0 seeks
    Conversion failed!
    Lines of interest in the "failed" log:
    Code:
      Stream #0:0 (rawvideo) -> colorspace:default
    [Parsed_colorspace_0 @ 000001c16e185400] Unsupported input primaries 2 (unknown)
    Error while filtering: Invalid argument
    Failed to inject frame into filter network: Invalid argument
    Error while processing the decoded data for stream #0:0
    Now I'm somewhat stuck: vspipe -> ffmpeg fails, and ffmpeg's direct .vpy input is unusable.

    I suppose we could remove the "-filter_complex" and keep just "-colorspace bt709 -color_primaries bt709 -color_trc bt709 -color_range pc", assuming the script always outputs that; that works.

    I guess it's possible "--container y4m" has something to do with it.
    Is there documentation anywhere on the vspipe container options, "--container <y4m/wav/w64> Add headers for the specified format to the output"?

    Advice welcomed!
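    A possible workaround, untried here: ffmpeg's colorspace filter has input-override options (iall, ispace, iprimaries, itrc, irange), so it can be told what the piped y4m actually is instead of failing on "Unsupported input primaries 2 (unknown)". A sketch with generic file names and the encoder options trimmed:

    ```
    "C:\SOFTWARE\Vapoursynth-x64\VSPipe.exe" --progress --container y4m "script.vpy" - | ^
    "C:\SOFTWARE\Vapoursynth-x64\ffmpeg_OpenCL.exe" -hide_banner -v verbose -stats ^
    -f yuv4mpegpipe -i pipe: ^
    -filter_complex "colorspace=iall=bt709:all=bt709:range=pc:format=yuv420p:fast=0,format=yuv420p,setdar=16/9" ^
    -c:v h264_nvenc -an -y "out.mp4"
    ```

    With iall set, the filter no longer depends on the y4m stream carrying any colour flags.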

    For comparison, here's the commandline and ffmpeg debug log which works (no vspipe) with the identical script:
    Code:
    "C:\SOFTWARE\Vapoursynth-x64\ffmpeg_OpenCL.exe" -hide_banner -v debug -stats -f vapoursynth -i "G:\DVD\PAT-SLIDESHOWS\_AI_05_in_development\_AI_08.vpy" -probesize 200M -analyzeduration 200M -sws_flags lanczos+accurate_rnd+full_chroma_int+full_chroma_inp -filter_complex "colorspace=all=bt709:space=bt709:trc=bt709:primaries=bt709:range=pc:format=yuv420p:fast=0,format=yuv420p,setdar=16/9" -colorspace bt709 -color_primaries bt709 -color_trc bt709 -color_range pc -strict experimental -c:v h264_nvenc -preset p7 -multipass fullres -forced-idr 1 -g 25 -coder:v cabac -spatial-aq 1 -temporal-aq 1 -dpb_size 0 -bf:v 3 -b_ref_mode:v 0 -rc:v vbr -b:v 9000000 -minrate:v 3000000 -maxrate:v 15000000 -bufsize 15000000 -profile:v high -level 5.2 -movflags +faststart+write_colr -an -y "G:\DVD\PAT-SLIDESHOWS\_AI_05_in_development\_AI_08.mp4"  
    Splitting the commandline.
    Reading option '-hide_banner' ... matched as option 'hide_banner' (do not show program banner) with argument '1'.
    Reading option '-v' ... matched as option 'v' (set logging level) with argument 'debug'.
    Reading option '-stats' ... matched as option 'stats' (print progress report during encoding) with argument '1'.
    Reading option '-f' ... matched as option 'f' (force format) with argument 'vapoursynth'.
    Reading option '-i' ... matched as input url with argument 'G:\DVD\PAT-SLIDESHOWS\_AI_05_in_development\_AI_08.vpy'.
    Reading option '-probesize' ... matched as AVOption 'probesize' with argument '200M'.
    Reading option '-analyzeduration' ... matched as AVOption 'analyzeduration' with argument '200M'.
    Reading option '-sws_flags' ... matched as AVOption 'sws_flags' with argument 'lanczos+accurate_rnd+full_chroma_int+full_chroma_inp'.
    Reading option '-filter_complex' ... matched as option 'filter_complex' (create a complex filtergraph) with argument 'colorspace=all=bt709:space=bt709:trc=bt709:primaries=bt709:range=pc:format=yuv420p:fast=0,format=yuv420p,setdar=16/9'.
    Reading option '-colorspace' ... matched as AVOption 'colorspace' with argument 'bt709'.
    Reading option '-color_primaries' ... matched as AVOption 'color_primaries' with argument 'bt709'.
    Reading option '-color_trc' ... matched as AVOption 'color_trc' with argument 'bt709'.
    Reading option '-color_range' ... matched as AVOption 'color_range' with argument 'pc'.
    Reading option '-strict' ...Routing option strict to both codec and muxer layer
     matched as AVOption 'strict' with argument 'experimental'.
    Reading option '-c:v' ... matched as option 'c' (codec name) with argument 'h264_nvenc'.
    Reading option '-preset' ... matched as AVOption 'preset' with argument 'p7'.
    Reading option '-multipass' ... matched as AVOption 'multipass' with argument 'fullres'.
    Reading option '-forced-idr' ... matched as AVOption 'forced-idr' with argument '1'.
    Reading option '-g' ... matched as AVOption 'g' with argument '25'.
    Reading option '-coder:v' ... matched as AVOption 'coder:v' with argument 'cabac'.
    Reading option '-spatial-aq' ... matched as AVOption 'spatial-aq' with argument '1'.
    Reading option '-temporal-aq' ... matched as AVOption 'temporal-aq' with argument '1'.
    Reading option '-dpb_size' ... matched as AVOption 'dpb_size' with argument '0'.
    Reading option '-bf:v' ... matched as AVOption 'bf:v' with argument '3'.
    Reading option '-b_ref_mode:v' ... matched as AVOption 'b_ref_mode:v' with argument '0'.
    Reading option '-rc:v' ... matched as AVOption 'rc:v' with argument 'vbr'.
    Reading option '-b:v' ... matched as option 'b' (video bitrate (please use -b:v)) with argument '9000000'.
    Reading option '-minrate:v' ... matched as AVOption 'minrate:v' with argument '3000000'.
    Reading option '-maxrate:v' ... matched as AVOption 'maxrate:v' with argument '15000000'.
    Reading option '-bufsize' ... matched as AVOption 'bufsize' with argument '15000000'.
    Reading option '-profile:v' ... matched as option 'profile' (set profile) with argument 'high'.
    Reading option '-level' ... matched as AVOption 'level' with argument '5.2'.
    Reading option '-movflags' ... matched as AVOption 'movflags' with argument '+faststart+write_colr'.
    Reading option '-an' ... matched as option 'an' (disable audio) with argument '1'.
    Reading option '-y' ... matched as option 'y' (overwrite output files) with argument '1'.
    Reading option 'G:\DVD\PAT-SLIDESHOWS\_AI_05_in_development\_AI_08.mp4' ... matched as output url.
    Finished splitting the commandline.
    Parsing a group of options: global .
    Applying option hide_banner (do not show program banner) with argument 1.
    Applying option v (set logging level) with argument debug.
    Applying option stats (print progress report during encoding) with argument 1.
    Applying option filter_complex (create a complex filtergraph) with argument colorspace=all=bt709:space=bt709:trc=bt709:primaries=bt709:range=pc:format=yuv420p:fast=0,format=yuv420p,setdar=16/9.
    Applying option y (overwrite output files) with argument 1.
    Successfully parsed a group of options.
    Parsing a group of options: input url G:\DVD\PAT-SLIDESHOWS\_AI_05_in_development\_AI_08.vpy.
    Applying option f (force format) with argument vapoursynth.
    Successfully parsed a group of options.
    Opening an input file: G:\DVD\PAT-SLIDESHOWS\_AI_05_in_development\_AI_08.vpy.
    [vapoursynth @ 00000232a653c340] Opening 'G:\DVD\PAT-SLIDESHOWS\_AI_05_in_development\_AI_08.vpy' for reading
    [file @ 00000232a653c600] Setting default whitelist 'file,crypto,data'
    
    <snip huge script debug log>
    
    [vapoursynth @ 00000232a653c340] VS format YUV420P8 -> pixfmt yuv420p
    [vapoursynth @ 00000232a653c340] Before avformat_find_stream_info() pos: 126495 bytes read:126495 seeks:0 nb_streams:1
    [vapoursynth @ 00000232a653c340] All info found
    [vapoursynth @ 00000232a653c340] After avformat_find_stream_info() pos: 126495 bytes read:126495 seeks:0 frames:1
    Input #0, vapoursynth, from 'G:\DVD\PAT-SLIDESHOWS\_AI_05_in_development\_AI_08.vpy':
      Duration: 00:05:59.44, start: 0.000000, bitrate: 2 kb/s
      Stream #0:0, 1, 1/25: Video: wrapped_avframe, 1 reference frame, yuv420p, 1920x1080, 0/1, 25 tbr, 25 tbn
    Successfully opened the file.
    [AVFilterGraph @ 00000232a786a800] Setting 'all' to value 'bt709'
    [AVFilterGraph @ 00000232a786a800] Setting 'space' to value 'bt709'
    [AVFilterGraph @ 00000232a786a800] Setting 'trc' to value 'bt709'
    [AVFilterGraph @ 00000232a786a800] Setting 'primaries' to value 'bt709'
    [AVFilterGraph @ 00000232a786a800] Setting 'range' to value 'pc'
    [AVFilterGraph @ 00000232a786a800] Setting 'format' to value 'yuv420p'
    [AVFilterGraph @ 00000232a786a800] Setting 'fast' to value '0'
    [AVFilterGraph @ 00000232a786a800] Setting 'pix_fmts' to value 'yuv420p'
    [AVFilterGraph @ 00000232a786a800] Setting 'dar' to value '16/9'
    Parsing a group of options: output url G:\DVD\PAT-SLIDESHOWS\_AI_05_in_development\_AI_08.mp4.
    Applying option c:v (codec name) with argument h264_nvenc.
    Applying option b:v (video bitrate (please use -b:v)) with argument 9000000.
    Applying option profile:v (set profile) with argument high.
    Applying option an (disable audio) with argument 1.
    Successfully parsed a group of options.
    Opening an output file: G:\DVD\PAT-SLIDESHOWS\_AI_05_in_development\_AI_08.mp4.
    [file @ 00000232a786ff40] Setting default whitelist 'file,crypto,data'
    Successfully opened the file.
    Stream mapping:
      Stream #0:0 (wrapped_avframe) -> colorspace:default
      setdar:default -> Stream #0:0 (h264_nvenc)
    Press [q] to stop, [?] for help
    [vost#0:0/h264_nvenc @ 00000232a786ddc0] cur_dts is invalid [init:0 i_done:0 finish:0] (this is harmless if it occurs once at the start per stream)
    [AVFilterGraph @ 00000232a786d180] Setting 'all' to value 'bt709'
    [AVFilterGraph @ 00000232a786d180] Setting 'space' to value 'bt709'
    [AVFilterGraph @ 00000232a786d180] Setting 'trc' to value 'bt709'
    [AVFilterGraph @ 00000232a786d180] Setting 'primaries' to value 'bt709'
    [AVFilterGraph @ 00000232a786d180] Setting 'range' to value 'pc'
    [AVFilterGraph @ 00000232a786d180] Setting 'format' to value 'yuv420p'
    [AVFilterGraph @ 00000232a786d180] Setting 'fast' to value '0'
    [AVFilterGraph @ 00000232a786d180] Setting 'pix_fmts' to value 'yuv420p'
    [AVFilterGraph @ 00000232a786d180] Setting 'dar' to value '16/9'
    detected 24 logical cores
    [graph 0 input from stream 0:0 @ 00000232a653ec80] Setting 'video_size' to value '1920x1080'
    [graph 0 input from stream 0:0 @ 00000232a653ec80] Setting 'pix_fmt' to value '0'
    [graph 0 input from stream 0:0 @ 00000232a653ec80] Setting 'time_base' to value '1/25'
    [graph 0 input from stream 0:0 @ 00000232a653ec80] Setting 'pixel_aspect' to value '0/1'
    [graph 0 input from stream 0:0 @ 00000232a653ec80] Setting 'frame_rate' to value '25/1'
    [graph 0 input from stream 0:0 @ 00000232a653ec80] w:1920 h:1080 pixfmt:yuv420p tb:1/25 fr:25/1 sar:0/1
    [format @ 00000232a78c6cc0] Setting 'pix_fmts' to value 'yuv420p|nv12|p010le|yuv444p|p016le|yuv444p16le|bgr0|bgra|rgb0|rgba|x2rgb10le|x2bgr10le|gbrp|gbrp16le|cuda|d3d11'
    [AVFilterGraph @ 00000232a786d180] query_formats: 6 queried, 5 merged, 0 already done, 0 delayed
    [Parsed_setdar_2 @ 00000232a78c4680] w:1920 h:1080 dar:16/9 sar:0/1 -> dar:16/9 sar:1/1
    [h264_nvenc @ 00000232a786e380] Loaded lib: nvcuda.dll
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuInit
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuDeviceGetCount
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuDeviceGet
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuDeviceGetAttribute
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuDeviceGetName
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuDeviceComputeCapability
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuCtxCreate_v2
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuCtxSetLimit
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuCtxPushCurrent_v2
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuCtxPopCurrent_v2
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuCtxDestroy_v2
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuMemAlloc_v2
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuMemAllocPitch_v2
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuMemAllocManaged
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuMemsetD8Async
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuMemFree_v2
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuMemcpy
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuMemcpyAsync
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuMemcpy2D_v2
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuMemcpy2DAsync_v2
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuMemcpyHtoD_v2
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuMemcpyHtoDAsync_v2
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuMemcpyDtoH_v2
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuMemcpyDtoHAsync_v2
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuMemcpyDtoD_v2
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuMemcpyDtoDAsync_v2
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuGetErrorName
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuGetErrorString
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuCtxGetDevice
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuDevicePrimaryCtxRetain
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuDevicePrimaryCtxRelease
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuDevicePrimaryCtxSetFlags
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuDevicePrimaryCtxGetState
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuDevicePrimaryCtxReset
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuStreamCreate
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuStreamQuery
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuStreamSynchronize
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuStreamDestroy_v2
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuStreamAddCallback
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuEventCreate
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuEventDestroy_v2
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuEventSynchronize
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuEventQuery
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuEventRecord
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuLaunchKernel
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuLinkCreate
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuLinkAddData
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuLinkComplete
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuLinkDestroy
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuModuleLoadData
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuModuleUnload
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuModuleGetFunction
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuModuleGetGlobal
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuTexObjectCreate
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuTexObjectDestroy
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuGLGetDevices_v2
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuGraphicsGLRegisterImage
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuGraphicsUnregisterResource
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuGraphicsMapResources
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuGraphicsUnmapResources
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuGraphicsSubResourceGetMappedArray
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuGraphicsResourceGetMappedPointer_v2
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuDeviceGetUuid
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuImportExternalMemory
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuDestroyExternalMemory
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuExternalMemoryGetMappedBuffer
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuExternalMemoryGetMappedMipmappedArray
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuMipmappedArrayGetLevel
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuMipmappedArrayDestroy
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuImportExternalSemaphore
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuDestroyExternalSemaphore
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuSignalExternalSemaphoresAsync
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuWaitExternalSemaphoresAsync
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuArrayCreate_v2
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuArray3DCreate_v2
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuArrayDestroy
    [h264_nvenc @ 00000232a786e380] Cannot load optional cuEGLStreamProducerConnect
    [h264_nvenc @ 00000232a786e380] Cannot load optional cuEGLStreamProducerDisconnect
    [h264_nvenc @ 00000232a786e380] Cannot load optional cuEGLStreamConsumerDisconnect
    [h264_nvenc @ 00000232a786e380] Cannot load optional cuEGLStreamProducerPresentFrame
    [h264_nvenc @ 00000232a786e380] Cannot load optional cuEGLStreamProducerReturnFrame
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuD3D11GetDevice
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuD3D11GetDevices
    [h264_nvenc @ 00000232a786e380] Loaded sym: cuGraphicsD3D11RegisterResource
    [h264_nvenc @ 00000232a786e380] Loaded lib: nvEncodeAPI64.dll
    [h264_nvenc @ 00000232a786e380] Loaded sym: NvEncodeAPICreateInstance
    [h264_nvenc @ 00000232a786e380] Loaded sym: NvEncodeAPIGetMaxSupportedVersion
    [h264_nvenc @ 00000232a786e380] Loaded Nvenc version 12.0
    [h264_nvenc @ 00000232a786e380] Nvenc initialized successfully
    [h264_nvenc @ 00000232a786e380] 1 CUDA capable devices found
    [h264_nvenc @ 00000232a786e380] [ GPU #0 - < NVIDIA GeForce RTX 2060 SUPER > has Compute SM 7.5 ]
    [h264_nvenc @ 00000232a786e380] supports NVENC
    [h264_nvenc @ 00000232a786e380] AQ enabled.
    [h264_nvenc @ 00000232a786e380] Temporal AQ enabled.
    [h264_nvenc @ 00000232a786e380] Lookahead enabled: depth 28, scenecut enabled, B-adapt enabled.
    Output #0, mp4, to 'G:\DVD\PAT-SLIDESHOWS\_AI_05_in_development\_AI_08.mp4':
      Metadata:
        encoder         : Lavf60.4.100
      Stream #0:0, 0, 1/12800: Video: h264 (High), 1 reference frame (avc1 / 0x31637661), yuv420p(pc, bt709, progressive), 1920x1080 (0x0) [SAR 1:1 DAR 16:9], 0/1, q=2-31, 9000 kb/s, 25 fps, 12800 tbn
        Metadata:
          encoder         : Lavc60.6.101 h264_nvenc
        Side data:
          cpb: bitrate max/min/avg: 15000000/0/9000000 buffer size: 15000000 vbv_delay: N/A
    [vost#0:0/h264_nvenc @ 00000232a786ddc0] Clipping frame in rate conversion by 0.000008
    frame=    0 fps=0.0 q=0.0 size=       0kB time=-577014:32:22.77 bitrate=  -0.0kbits/s speed=N/A    
    frame=    1 fps=0.0 q=11.0 size=       0kB time=-00:00:00.08 bitrate=N/A speed=N/A    
    frame=   43 fps= 42 q=14.0 size=     256kB time=00:00:01.60 bitrate=1311.0kbits/s speed=1.58x    
    frame=   73 fps= 48 q=10.0 size=     768kB time=00:00:02.80 bitrate=2247.1kbits/s speed=1.83x    
    <snip>
    frame= 8821 fps= 42 q=10.0 size=  270848kB time=00:05:52.72 bitrate=6290.5kbits/s speed=1.67x    
    frame= 8847 fps= 42 q=9.0 size=  271104kB time=00:05:53.76 bitrate=6277.9kbits/s speed=1.67x    
    frame= 8900 fps= 42 q=24.0 size=  275200kB time=00:05:55.88 bitrate=6334.8kbits/s speed=1.68x    
    [in#0/vapoursynth @ 00000232a48d8280] EOF while reading input
    [in#0/vapoursynth @ 00000232a48d8280] Terminating demuxer thread
    [out_0_0 @ 00000232a78c6bc0] EOF on sink link out_0_0:default.
    No more output streams to write to, finishing.
    [out#0/mp4 @ 00000232a65d6840] All streams finished
    [out#0/mp4 @ 00000232a65d6840] Terminating muxer thread
    [mp4 @ 00000232a786d240] Starting second pass: moving the moov atom to the beginning of the file
    [mp4 @ 00000232a786d240] Opening 'G:\DVD\PAT-SLIDESHOWS\_AI_05_in_development\_AI_08.mp4' for reading
    [file @ 00000236c27d2440] Setting default whitelist 'file,crypto,data'
    [AVIOContext @ 00000236c28f1c80] Statistics: 286141176 bytes read, 0 seeks
    [AVIOContext @ 00000232a78b10c0] Statistics: 572371133 bytes written, 4 seeks, 2187 writeouts
    frame= 8986 fps= 42 q=10.0 Lsize=  279521kB time=00:05:59.28 bitrate=6373.4kbits/s speed=1.69x    
    video:279435kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.031054%
    Input file #0 (G:\DVD\PAT-SLIDESHOWS\_AI_05_in_development\_AI_08.vpy):
      Input stream #0:0 (video): 8986 packets read (4313280 bytes); 8986 frames decoded; 
      Total: 8986 packets (4313280 bytes) demuxed
    Output file #0 (G:\DVD\PAT-SLIDESHOWS\_AI_05_in_development\_AI_08.mp4):
      Output stream #0:0 (video): 8986 frames encoded; 8986 packets muxed (286141128 bytes); 
      Total: 8986 packets (286141128 bytes) muxed
    8986 frames successfully decoded, 0 decoding errors
    [h264_nvenc @ 00000232a786e380] Nvenc unloaded
    [AVIOContext @ 00000232a653c7c0] Statistics: 126495 bytes read, 0 seeks
  19. Member hydra3333 (Join Date: Oct 2009; Location: Australia)
    Ah.

    Looking at y4m in https://wiki.multimedia.cx/index.php/YUV4MPEG2 ... none of the colour flags are carried in that container.

    I guess I'll have to output fixed colour stuff every time. Oh well.
  20. Member hydra3333 (Join Date: Oct 2009; Location: Australia)
    OK. A solution of sorts.

    Fixing the script output to BT.709 etc. and then using this ffmpeg commandline worked.
    Code:
    "C:\SOFTWARE\Vapoursynth-x64\VSPipe.exe" --progress --container y4m "G:\DVD\PAT-SLIDESHOWS\_AI_05_in_development\_AI_08.vpy" -  | ^
    "C:\SOFTWARE\Vapoursynth-x64\ffmpeg_OpenCL.exe" -hide_banner -v verbose -stats ^
    -colorspace bt709 -color_primaries bt709 -color_trc bt709 -color_range pc ^
    -f yuv4mpegpipe -i pipe: -probesize 200M -analyzeduration 200M ^
    -sws_flags lanczos+accurate_rnd+full_chroma_int+full_chroma_inp ^
    -filter_complex "colorspace=all=bt709:space=bt709:trc=bt709:primaries=bt709:range=pc:format=yuv420p:fast=0,format=yuv420p,setdar=16/9" ^
    -strict experimental ^
    -c:v h264_nvenc -preset p7 -multipass fullres -forced-idr 1 -g 25 -coder:v cabac -spatial-aq 1 -temporal-aq 1 -dpb_size 0 -bf:v 3 -b_ref_mode:v 0 ^
    -rc:v vbr -b:v 9000000 -minrate:v 3000000 -maxrate:v 15000000 -bufsize 15000000 -profile:v high -level 5.2 ^
    -movflags +faststart+write_colr ^
    -an ^
    -y "G:\DVD\PAT-SLIDESHOWS\_AI_05_in_development\_AI_08.mp4"
    Most of the "-filter_complex" is unnecessary; the issue was discovered during debugging while those filters were in there.

    Two NOTES for the unwary:

    1. ffmpeg only worked (i.e. did not abort) when "-colorspace bt709 -color_primaries bt709 -color_trc bt709 -color_range pc" was placed before the input options "-f yuv4mpegpipe -i pipe:"


    2. a handy post by jagabo (11 Jan 2023), https://forum.videohelp.com/threads/408170-Colors-Information-missing-on-a-video#post2677695, suggests we may also need to add
    Code:
    -bsf:v h264_metadata=colour_primaries=1:transfer_characteristics=1:matrix_coefficients=1:video_full_range_flag=1
    after the encoding parameters on the ffmpeg commandline
    Originally Posted by jagabo View Post
    Don't those stats have to be added at the bitstream filter level when remuxing?
    https://ffmpeg.org/ffmpeg-bitstream-filters.html#h264_005fmetadata
    something like:
    Code:
    ffmpeg -i input.mp4 -c copy -bsf:v h264_metadata=colour_primaries=1:transfer_characteristics=1:matrix_coefficients=1:video_full_range_flag=0 output.mp4
    I don't know what values are correct. You'll have to look them up (or just experiment). And if it's not h.264 you'll have to specify the correct bitstream filter (hevc_metadata, prores-metadata, etc.).

    <edit>
    After a little experimentation: 1 is BT.709 for color_primaries, transfer_characteristics, and matrix_coefficients. 0 is limited range for video_full_range_flag. Values may be different for different bsf's. Above command line modified to reflect this.
    MediaInfo report from an unflagged h.264 video remuxed with the above command line:
    Code:
    Color range                              : Limited
    Color primaries                          : BT.709
    Transfer characteristics                 : BT.709
    Matrix coefficients                      : BT.709
    </edit>
    However, without those extra bitstream-filter options, mediainfo already reports the below for the commandline at top, suggesting they are not needed in this case.
    Code:
    colour_description_present               : Yes
    colour_description_present_Source        : Container / Stream
    Color range                              : Full
    colour_range_Source                      : Container / Stream
    Color primaries                          : BT.709
    colour_primaries_Source                  : Container / Stream
    Transfer characteristics                 : BT.709
    transfer_characteristics_Source          : Container / Stream
    Matrix coefficients                      : BT.709
    matrix_coefficients_Source               : Container / Stream
    So my final commandline with fixed bt.709 input and output would be:
    Code:
    "C:\SOFTWARE\Vapoursynth-x64\VSPipe.exe" --progress --container y4m "G:\DVD\PAT-SLIDESHOWS\_AI_05_in_development\_AI_08.vpy" -  | ^
    "C:\SOFTWARE\Vapoursynth-x64\ffmpeg_OpenCL.exe" -hide_banner -v verbose -stats ^
    -colorspace bt709 -color_primaries bt709 -color_trc bt709 -color_range pc ^
    -f yuv4mpegpipe -i pipe: -probesize 200M -analyzeduration 200M ^
    -sws_flags lanczos+accurate_rnd+full_chroma_int+full_chroma_inp ^
    -filter_complex "format=yuv420p,setdar=16/9" ^
    -strict experimental ^
    -c:v h264_nvenc -preset p7 -multipass fullres -forced-idr 1 -g 25 -coder:v cabac -spatial-aq 1 -temporal-aq 1 -dpb_size 0 -bf:v 3 -b_ref_mode:v 0 ^
    -rc:v vbr -b:v 9000000 -minrate:v 3000000 -maxrate:v 15000000 -bufsize 15000000 -profile:v high -level 5.2 ^
    -movflags +faststart+write_colr ^
    -an ^
    -y "G:\DVD\PAT-SLIDESHOWS\_AI_05_in_development\_AI_08.mp4"
    Cheers
  21. Member hydra3333's Avatar
    Join Date
    Oct 2009
    Location
    Australia
    Search Comp PM
    Hello. There seems to be a minor glitch with crossfades: they look funny, with the last few frames of each video clip repeated ... likely I am doing something wrong.

    In any case, it turns out there's Public Domain crossfade python software over at https://github.com/OrangeChannel/vs-transitions

    I don't know whether it works; I'll give it a try to see if it does anything.

    Cheers

    Code:
    # Public Domain software: vs_transitions
    #		https://github.com/OrangeChannel/vs-transitions
    # vs-transitions SOURCE CODE:
    #		https://raw.githubusercontent.com/OrangeChannel/vs-transitions/master/vs_transitions/__init__.py
    # vs-transitions DOCUMENTATION:
    #		https://vapoursynth-transitions.readthedocs.io/en/latest/api.html
    # modified and saved as vs_transitions.py from
    #		https://raw.githubusercontent.com/OrangeChannel/vs-transitions/master/vs_transitions/__init__.py
    import vs_transitions
    #dir(vs_transitions)
    edit: for some transitions, the vs_transitions input clip needs to be 4:4:4, e.g. YUV444P8, so one may need to work in 4:4:4 space.

    edit 2:
    yes, one does need to work in 4:4:4 for vs_transitions to work (plus make the vs APIv4 mod below)
    Last edited by hydra3333; 20th Mar 2023 at 06:05.
  22. Yes, transitions for audio do not work in VapourSynth; I mentioned that it cannot evaluate audio samples (FrameEval() only accepts video). That variable for the cross length has to be 0 or 1, I am not sure now.
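    Since FrameEval() cannot touch audio, any audio crossfade would have to happen outside VapourSynth, over raw samples. A minimal numpy sketch of the idea (the function name and the 1-D float-array representation are assumptions for illustration, not any VapourSynth API):
    Code:
    import numpy as np

    def crossfade_samples(a, b, overlap):
        """Linearly crossfade the tail of a into the head of b (1-D float arrays)."""
        ramp = np.linspace(0.0, 1.0, overlap)
        # fade a out while fading b in over the overlapping samples
        mixed = a[-overlap:] * (1.0 - ramp) + b[:overlap] * ramp
        return np.concatenate([a[:-overlap], mixed, b[overlap:]])
    The result is len(a) + len(b) - overlap samples long, with no abrupt level jump at the join.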
  23. Member hydra3333's Avatar
    Hello. In viewfunc,
    https://github.com/UniversalAl/load/blob/main/viewfunc.py
    I notice this code which uses 'get_write_array':
    Code:
        def get_vsFrame(n, f, img):
            vsFrame = f.copy()
            [np.copyto(np.asarray(vsFrame.get_write_array(i)), img[:, :, i]) for i in [2,1,0]]
            return vsFrame
    I also note this comment in https://forum.doom9.org/showthread.php?p=1954375#post1954375
    get_read_array <= was deprecated, either use the new memory view or get_read_ptr
    which sort of implies perhaps 'get_write_array' is also deprecated.

    I also note 'get_read_array' has an APIv4 solution (yours, I believe !) https://forum.doom9.org/showthread.php?p=1955818#post1955818
    Code:
    if API4: img =  np.dstack([np.asarray(f[p]) for p in [2,1,0]])
    else:    img =  np.dstack([np.asarray(f.get_read_array(p)) for p in [2,1,0]])
    Currently, in vs_transitions which uses 'get_write_array', I have been seeing this error:
    Code:
    Python exception: 'vapoursynth.VideoFrame' object has no attribute 'get_write_array'
    Presumably vs_transitions worked previously (it looks pre-APIv4), so 'get_write_array' may no longer be available; it could still be a bug, but even if it is, APIv4 code would be best.

    Would you be able to suggest the apiv4 equivalent to replace 'fout.get_write_array(0)' below ?
    Code:
            import numpy as np
            def frame_writer(n, f):
                if n is not None:
                    pass
                fout = f.copy()
                ptr = np.asarray(fout.get_write_array(0))
                ptr[0] = np.linspace(0, 1, 1 << 16)
                return fout
            mask = blank_clip.std.ModifyFrame([blank_clip], frame_writer)
    And perhaps for viewfunc ?

    Thanks

    edit: I see in https://github.com/vapoursynth/vapoursynth/blob/master/APIV4%20changes.txt
    Code:
    Frame data access has been reworked and the broken get_read_array and get_write_array functions have been dropped.
    They're replaced by frame[plane/channel] which will return a python array to access the underlying data directly without
    a risk of access violations.
    So I GUESS in this case with 'get_write_array'
    Code:
    ptr = np.asarray(fout.get_write_array(0))
    becomes
    Code:
    ptr = np.asarray(fout[0])
    edit 2: yes, that seemed to do the trick.
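    For reference, the reason np.asarray(fout[0]) works is that APIv4 frame planes expose the buffer protocol, so numpy wraps them zero-copy and writes land directly in the frame. A stand-alone sketch of the same pattern, with a memoryview over a bytearray standing in for the plane (the stand-in is an assumption for the demo; real code would use fout[0]):
    Code:
    import numpy as np

    buf = bytearray(16)                       # stand-in for one 16-pixel 8-bit row
    plane = memoryview(buf).cast('B', (1, 16))
    ptr = np.asarray(plane)                   # zero-copy wrap, writable
    ptr[0] = np.arange(16)                    # writes go straight into buf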

    edit 3: and, vs_transitions is ... beautiful (added a random transition chooser around which vs_transition to use per image/video changeover)
    Last edited by hydra3333; 20th Mar 2023 at 06:06.
  24. that API4 broke lots of things; I have to check that get_write_array() or change it

    great find those transitions
  25. Originally Posted by hydra3333 View Post
    Code:
    # Public Domain software: vs_transitions
    #		https://github.com/OrangeChannel/vs-transitions
    # vs-transitions SOURCE CODE:
    #		https://raw.githubusercontent.com/OrangeChannel/vs-transitions/master/vs_transitions/__init__.py
    # vs-transitions DOCUMENTATION:
    #		https://vapoursynth-transitions.readthedocs.io/en/latest/api.html
    # modified and saved as vs_transitions.py from
    #		https://raw.githubusercontent.com/OrangeChannel/vs-transitions/master/vs_transitions/__init__.py
    import vs_transitions
    #dir(vs_transitions)
    that's really great, it works well so far. Implementation was almost non-existent; I just replaced that crossfade function with:
    Code:
    import vs_transitions          
    def crossfade(a, b, duration):
        return vs_transitions.wipe(a, b, frames=duration)
    and it works; that crossfade uses exactly the same arguments as vs_transitions: clips a and b plus a duration
  26. ok, that was for the wipe transition, but testing the rest of the transitions, it started to give errors in the previewer:
    Crop: cropped area needs to have mod 2 width offset
    and some rendered frames gave errors when the cropping did not follow the subsampling, so for example, going into the source code for def reveal(), instead of:
    Code:
    w = math.floor(progress * clipa.width)
    h = math.floor(progress * clipa.height)
    I had to put:
    Code:
    w = progress * clipa.width
    w = w - w % 2
    h = progress * clipa.height
    h = h - h % 2
    and that worked for YUV420P8, so one would have to go manually through each filter and make sure the cropping values do not fall outside the particular subsampling. Not necessarily mod 2, but 1, 2 or 4 (4 for the width with 4:1:1).
    Might do it, but perhaps sometime later. Or using RGB or YUV444 for those transitions would most likely work.
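    The mod-rounding pattern above generalises to any subsampling. A tiny helper as a sketch (the name is made up):
    Code:
    def round_to_mod(value, mod=2):
        """Round down to the nearest multiple of mod, keeping crop/resize
        dimensions legal for the clip's chroma subsampling."""
        value = int(value)
        return value - value % mod
    E.g. round_to_mod(w) with mod=2 for 4:2:0 widths, or mod=4 for 4:1:1 widths.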
    Last edited by _Al_; 21st Mar 2023 at 15:40.
  27. ok, now I cannot replicate it, it just works; I tried to replicate those settings, but whatever I do, it just works even with odd numbers. Heck, I do not know what was happening.
  28. Member hydra3333's Avatar
    me too. I've been down a couple of tracks with errors like these

    Code:
    Error: Failed to retrieve frame 88 with error: StackVertical: clip format and width must match
    Error: Failed to retrieve frame 2096 with error: FrameEval: Function didn't return a clip
    and am still fiddling to see if I can find out what's happening.

    I did see some suggestion that it had to be 4:4:4 for some functions, so I converted the clips as they came in, then once at the end back to 4:2:0 ... but that didn't fix it.

    Given I pre-size all material to 1920x1080, I would have expected that if one clip worked then they all would, all being 4:4:4 1920x1080, but no.
    If I get a clip in the middle of a bunch that fails, then it consistently fails (I think). I need to do some more work on it.
    VSPipe just quits sometimes.
    Last edited by hydra3333; 24th Mar 2023 at 09:17.
  29. Not sure where that StackVertical or FrameEval comes from; are those your additions?

    That error, that the clip must be 4:4:4, is for the "linear_boundary" transition, which is the general case; a bunch of others like "push" use it with specific clipa_movement and clipb_movement. So I do not use that general "linear_boundary", but those other dependents are OK, because it passes correct defaults; nothing has to be passed while calling those transitions.

    In the example above I used RGB24 and it worked; that is why I was confused that it worked afterwards.
    But using 4:4:4 it works: I added that conversion to get_clip(), and at the end there is a YUV420P8 conversion. To be sure, I post the whole script here. It just rotates through all transitions to test them all (except fade to/from black plus the general linear_boundary):
    Code:
    import vapoursynth as vs
    from vapoursynth import core
    from functools import partial
    from pathlib import Path
    import itertools
    import sys
    import os
    
    from PIL import Image, ExifTags, UnidentifiedImageError   #pip install Pillow
    ##sys.path.append(str(Path(__file__).parent / 'python_modules'))
    sys.path.append(str(Path(__file__).parent))
    import load
    import vs_transitions
    
    transitions = [
        "cover",
        "cube_rotate",
        "curtain_cover",
        "curtain_reveal",
        "fade",
    ##    "fade_from_black",
    ##    "fade_to_black",
    ##    "linear_boundary",
        "poly_fade",
        "push",
        "reveal",
        "slide_expand",
        "squeeze_expand",
        "squeeze_slide",
        "wipe",
    ]
    
    #neverending cycling from list
    transition_generator = itertools.cycle(transitions)
    
    DIRECTORY       = r'D:\paths_to_tests\test2'
    EXTENSIONS      = [".jpg"]
    WIDTH           = 1920
    HEIGHT          = 1080
    LENGTH          = 52
    CROSS_DUR       = 25
    FPSNUM          = 60000
    FPSDEN          = 1001
    UPSIZE_KERNEL   = 'Lanczos'
    DOWNSIZE_KERNEL = 'Spline36'
    BOX             = True    # True initiates letterboxing or pillarboxing. False fills to WIDTH,HEIGHT
    MODX            = 2       # mods for letterboxing calculations, example, for 411 YUV as an extreme
    MODY            = 2       # mods would have to be MODX=4, MODY=1 as minimum
    SAVE_ROTATED_IMAGES = False   #saves image to disk with name suffix: "_rotated" using PIL module
    
    
    def rotation_check(clip, path, save_rotated_image=False):
        #PIL module loads an image, checks if EXIF data, checks for 'Orientation'
        try:
            image = Image.open(str(path))
        except UnidentifiedImageError:
            return clip
        except PermissionError:
            print(f'PIL, Permission denied to load: {path}')
            return clip
        except Exception as e:
            print(f'PIL, {e}')
            return clip
        try:        
            for key in ExifTags.TAGS.keys():
                if ExifTags.TAGS[key] == 'Orientation':
                    break
            exif = dict(image.getexif().items())
            value = exif[key]
        except (AttributeError, KeyError, IndexError):
            # no getexif
            return clip
        else:
            if   value == 3: clip=clip.std.Turn180()
            elif value == 8: clip=clip.std.Transpose().std.FlipVertical()
            elif value == 6: clip=clip.std.Transpose().std.FlipHorizontal()
            if save_rotated_image and value in [3,8,6]:
                #rotation degrees are in counterclockwise direction!
                rotate = {3:Image.ROTATE_180, 6:Image.ROTATE_270, 8:Image.ROTATE_90}
                image = image.transpose(rotate[value])
                path = path.parent / f'{path.stem}_rotated{path.suffix}'
                image.save(str(path))
        image.close()    
        return clip
    
    def boxing(clip, W=WIDTH, H=HEIGHT):
        cw, ch = clip.width, clip.height
        if W/H > cw/ch:
            w = cw*H/ch
            x = int((W-w)/2)
            x = x - x%MODX
            x = max(0, x)
            clip = resize_clip(clip, W-2*x, H)
            if x: return clip.std.AddBorders(left=x, right=x, color=(16,128,128))  #RGB is out then (16,16,16)
            else: return clip
        else:
            h = ch*W/cw
            y = int((H-h)/2)
            y = y - y%MODY
            y = max(0, y)
            clip = resize_clip(clip, W, H-2*y)
            if y: return clip.std.AddBorders(top=y, bottom=y, color=(16,128,128))
            else: return clip
          
    def resize_clip(clip,w,h, W=WIDTH, H=HEIGHT):
        if w>W or h>H: resize = getattr(clip.resize, DOWNSIZE_KERNEL)
        else:          resize = getattr(clip.resize, UPSIZE_KERNEL)
        if clip.format.color_family==vs.RGB:
            #rgb to YUV, perhaps only for png images, figure out what matrix out is needed
            return resize(width=w, height=h, format=vs.YUV420P8, matrix_s='709')
        else:
            #YUV to YUV
            return resize(width=w, height=h, format=vs.YUV420P8)  
    
    def get_clip(path):
        #get video
        data = LOADER.get_data([str(path),])[0] 
        video = data.clip
        if data.load_isError:
            raise ValueError(f'{data.load_log}{data.load_log_error}\nload.py could not create vs.VideoNode from {path}')
        video = rotation_check(video, path, save_rotated_image=SAVE_ROTATED_IMAGES)
        video = video.std.AssumeFPS(fpsnum=FPSNUM, fpsden=FPSDEN)
        if BOX:  video = boxing(video, WIDTH, HEIGHT)
        else:    video = resize_clip(video, WIDTH, HEIGHT)
        video = video.resize.Bicubic(format = vs.YUV444P8, matrix_in_s='709')
        clip = video[0]*LENGTH if len(video)<5 else video
        return clip
    
    def get_path(path_generator):
        #get path of desired extensions from generator
        while 1:
            try:
                path = next(path_generator)
            except StopIteration:
                return None
            if path.suffix.lower() in EXTENSIONS:
                print(f'{path}')
                return path
              
    def clip_transition(a, b, duration, transition='fade'):
        return getattr(vs_transitions, transition)(a, b, frames=duration)
    
    LOADER = load.Sources()
    CROSS_DUR = max(1,CROSS_DUR)
    paths = Path(DIRECTORY).glob("*.*")
    paths1, paths2 = itertools.tee(paths, 2)
    print('wait loading paths ...')
    path = get_path(paths1)
    if path is None:
        raise ValueError(f'Extensions: {EXTENSIONS}, not found in {DIRECTORY}')
    starter = get_clip(path)
    clips = starter[0: starter.num_frames-CROSS_DUR]
    left_clip = None
    while 1:
        path = get_path(paths1)
        if path is None:
            if left_clip is None: clips = starter
            break
        right_clip = get_clip(path)
        left_clip = get_clip(get_path(paths2))
        transition_clip = clip_transition(left_clip, right_clip, CROSS_DUR, next(transition_generator))
        right  = right_clip[CROSS_DUR:-CROSS_DUR]
        clips = clips + transition_clip + right
    clips = clips.std.AssumeFPS(fpsnum=FPSNUM, fpsden=FPSDEN)
    clips = clips.resize.Bicubic(format = vs.YUV420P8, matrix_in_s='709')
    print('done')
    
    clips.set_output()
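    The letterbox/pillarbox arithmetic in boxing() above can be checked stand-alone. A pure-Python sketch of the same geometry (box_dims is a hypothetical helper, not part of the script), returning the resize target and the border sizes rounded down to the subsampling mods:
    Code:
    def box_dims(cw, ch, W=1920, H=1080, modx=2, mody=2):
        """Return (resize_w, resize_h, border_x, border_y) for boxing a
        cw x ch clip into a W x H frame, mirroring boxing() above."""
        if W / H > cw / ch:              # target wider than clip -> pillarbox
            w = cw * H / ch              # width after scaling to full height
            x = int((W - w) / 2)
            x = max(0, x - x % modx)     # keep border legal for subsampling
            return W - 2 * x, H, x, 0
        else:                            # letterbox
            h = ch * W / cw              # height after scaling to full width
            y = int((H - h) / 2)
            y = max(0, y - y % mody)
            return W, H - 2 * y, 0, y
    For example, a 1440x1080 (4:3) image in a 16:9 frame gets 240-pixel pillars on each side.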
    Does the vsedit previewer stop previewing when it encounters the first frame error? If not, you can roughly see whether there are errors while previewing the transitions.
    Or this can be used to check whether there are errors before encoding:
    Or this can be used to check if there is an error before encoding:
    Code:
    .
    .
    clips.set_output()
    
    log = []
    print('test loading frames ...')
    for n in range(len(clips)):
        try:
            clips.get_frame(n)
        except Exception as e:
            log.append(f'frame {n}: {e}')
    log = '\n'.join(log) or 'No errors found'
    print(log)
    #raise ValueError(log) #this if previewer does not print
    comment it all out if encoding
    Last edited by _Al_; 24th Mar 2023 at 17:01.
  30. Member hydra3333's Avatar
    Originally Posted by _Al_ View Post
    Not sure where that StackVertical comes from or FrameEval, are those your additions?
    Not mine, and that's the funny thing.
    I did place a bunch of try/except around nearly everything, but it is not one of my messages. How strange.

    Originally Posted by _Al_ View Post
    That error, that the clip must be 4:4:4, is for the "linear_boundary" transition, which is the general case; a bunch of others like "push" use it with specific clipa_movement and clipb_movement. So I do not use that general "linear_boundary", but those other dependents are OK, because it passes correct defaults; nothing has to be passed while calling those transitions.
    Yes. I had a random transition generator which effectively used almost all combinations of function/direction/axis in a slideshow, which is where I noticed the issue; I think 4 functions use linear_boundary as you say.

    I refreshed from the original source, just adding messages around what gets called, plus wrappers around StackVertical / StackHorizontal / "raise ValueError", and added a (new) get_clip thing at the end similar to yours.
    I should also change the cropping to be like yours; however, I will stick with to/from 4:4:4 for now.
    The script ran every function, albeit with a fixed left direction (but random axis), just now ... everything worked, all frames output. Um, what? It had previously aborted on the same file-set
    Code:
    Finished processing 166 image/video files in "D:\ssTEST\TEST_VIDS_IMAGES\1TEST" num_frames=21677 crossfade="cube_rotate/left" RECURSIVE=True DEBUG_MODE=False SILENT_MODE=False  glob_var="**/*.*" 
          with Extensions=['.png', '.jpg', '.jpeg', '.gif', '.mp4', '.mpeg4', '.mpg', '.mpeg', '.avi', '.mjpeg', '.3gp', '.mov', '.m2ts']
    Start removing temporary *.ffindex files from directory "D:\ssTEST\TEST_VIDS_IMAGES\1TEST" with recursive="True" ...
    Finished removing 166 temporary .ffindex files with recursive="True"
    Done.
    Script evaluation done in 28.56 seconds
    .
    .
    .
    DEBUG: vs_transitions: Entered check_clips_preStack_and_abort
    DEBUG: vs_transitions: check_clips_preStack_and_abort: WARNING: inconsistent clip dimensions clip[0] width/height=33/1080 clip[1] width/height=1887/1080
    DEBUG: vs_transitions: Entered _rotate
    DEBUG: vs_transitions: Entered position
    DEBUG: vs_transitions: Entered _fitted
    DEBUG: vs_transitions: Entered rotation
    Output 21677 frames in 292.63 seconds (74.08 fps)
    I'll re-try random, and then separately script-run every combination on the same fileset.

    Thanks for the code. I really enjoy the way you go about it, sleek and elegant. Unfortunately I'm a 1970s-vintage worker, so my hacks are FORTRAN 77 style, like I'm paid by the line (I'm not).

    I must have assumed vs_transitions' behaviour; I had naively thought transitions merged the clips by "stealing" x frames from the left and right sides to overlap them, so adding the clips like that might not have worked (videos highlight such things).
    I had also decided to chop only the "necessary for transition" frames from the left and right and give only those to vs_transitions.
    Oh dear, I'll have to have another look tomorrow to fix my unfortunate mis-read.
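    For what it's worth, the frame accounting under the "stealing" model is simple: each transition consumes d frames from the tail of the left clip and d from the head of the right, so every changeover shortens the total by d frames. A sketch (the helper name is made up):
    Code:
    def slideshow_frame_count(clip_lengths, d):
        """Total frames after chaining clips with d-frame transitions:
        each changeover overlaps d tail frames with d head frames."""
        transitions = len(clip_lengths) - 1
        return sum(clip_lengths) - transitions * d
    Comparing this expected total against the spliced clip's num_frames is a quick sanity check that no frames were dropped or duplicated at the joins.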

    I like yours, will use that instead of what I had.
    Originally Posted by _Al_ View Post
    Or this can be used to check if there is an error before encoding:
    Code:
    .
    .
    clips.set_output()
    
    log = []
    print('test loading frames ...')
    for n in range(len(clips)):
        try:
            clips.get_frame(n)
        except Exception as e:
            log.append(f'frame {n}: {e}')
    log = '\n'.join(log) or 'No errors found'
    print(log)
    #raise ValueError(log) #this if previewer does not print
    comment it all out if encoding
    Last edited by hydra3333; 25th Mar 2023 at 08:15.


