VideoHelp Forum
+ Reply to Thread
Page 3 of 4
FirstFirst 1 2 3 4 LastLast
Results 61 to 90 of 115
Thread
  1. Originally Posted by hydra3333 View Post
    I must have assumed vs_transitions behaviour; I had naively thought transitions merged the clips at a transition "stealing" x frames from left and from right sides to overlap the clips, so adding the clips like that might not have worked (videos highlight such things).
    I had also decided to chop only the "necessary for transition" frames from left and right and only give those to vs_transitions.
    No, I think you have it right!
    It just eats up from given clips evenly from left and right.
    I did not elaborate about it much at all, because it does not matter if using images. Just modified couple of lines in old script and added vs_transitions

    So to let vs_transitions eat up whatever it wants by passing frames=length, which is CROSS_DUR, and therefore make everything work even for video would be like this (also adding fading from and to black at ends):
    Code:
    .
    .
    LOADER = load.Sources()
    CROSS_DUR = max(2,CROSS_DUR)
    paths = Path(DIRECTORY).glob("*.*")
    print('wait loading paths ...')
    path = get_path(paths)
    if path is None:
        raise ValueError(f'Extensions: {EXTENSIONS}, not found in {DIRECTORY}')
    clips = vs_transitions.fade_from_black(get_clip(path))
    while 1:
        path = get_path(paths)
        if path is None:
            break
        next_clip = get_clip(path)
        clips = clip_transition(clips, next_clip, CROSS_DUR, next(transition_generator))
    clips = vs_transitions.fade_to_black(clips)
    clips = clips.resize.Bicubic(format = vs.YUV420P8, matrix_in_s='709')
    print('done')
    
    clips.set_output()
    edit: but again, just realized, there needs to be audio with videos, so this modifications is again good for images only.
    There is a script on previous page for audio also, but no transitions, so it would need to be modified to use transition only for images and come up with correct quiet audio length. And if next clip is a video, then no transitions and use original audio.
    Last edited by _Al_; 24th Mar 2023 at 23:27.
    Quote Quote  
  2. There is a bug in vs_transitions, in "fade_to_black" and "fade_from_black" , where black clip is hardcoded to 24fps (using BlankClip), so if other clip has different fps, it will come out all right, but fps is "dynamic", so it could give a trouble later if fps is used in code, getting something like "ZeroDivisionError" or something else.

    So in __init__.py inside those two functions, just before return there should be added this line:
    Code:
    black_clip_resized = black_clip_resized.std.AssumeFPS(fpsnum=src_clip.fps.numerator, fpsden=src_clip.fps.denominator)
    https://github.com/OrangeChannel/vs-transitions/issues/1
    I got some response, there suppose to be a new package ready that is updated or currently being updated with a new link.
    Last edited by _Al_; 25th Mar 2023 at 18:42.
    Quote Quote  
  3. Images and videos using transitions and audio also.

    If image is loaded audio is silent, but to have audio when video is loaded.
    To define ATTRIBUTE_AUDIO_PATH is important, it could be any video that is going to be loaded. It is to create silent audio parts based on that clip audio attributes.
    Also FPSNUM and FPSDEN should match values in video clips loaded (if they are), set LENGTH or CROSS_DUR more properly, CROSS_DUR cannot be bigger that LENGTH. LENGTH is ignored if video is loaded. It uses video length.

    media_to_video.py
    Code:
    import vapoursynth as vs
    from vapoursynth import core
    from functools import partial
    from pathlib import Path
    import itertools
    import sys
    import os
    
    from PIL import Image, ExifTags, UnidentifiedImageError   #pip install Pillow
    #sys.path.append(str(Path(__file__).parent)) 
    
    import load
    import vs_transitions
    
    transitions = [
        "cover",
        "cube_rotate",
        "curtain_cover",
        "curtain_reveal",
        "fade",
    ##    "fade_from_black",
    ##    "fade_to_black",
    ##    "linear_boundary",
        "poly_fade",
        "push",
        "reveal",
        "slide_expand",
        "squeeze_expand",
        "squeeze_slide",
        "wipe",
    ]
    
    #neverending cycling from list
    TRANSITION_GENERATOR = itertools.cycle(transitions)
    
    DIRECTORY       = r'D:\path_to_tests\test2'
    EXTENSIONS      = [".jpg",".m2ts"]
    WIDTH           = 1920
    HEIGHT          = 1080
    LENGTH          = 56
    CROSS_DUR       = 26
    FPSNUM          = 60000,
    FPSDEN          = 1001
    UPSIZE_KERNEL   = 'Lanczos'
    DOWNSIZE_KERNEL = 'Spline36'
    BOX             = True    # True would initiate letterboxing or pillarboxing. False fills to WIDTH,HEIGTH
    MODX            = 2       # mods for letterboxing calculations, example, for 411 YUV as an extreme
    MODY            = 2       # mods would have to be MODX=4, MODY=1 as minimum
    SAVE_ROTATED_IMAGES = False   #saves image to disk with name suffix: "_rotated" using PIL module
    ATTRIBUTE_AUDIO_PATH = r'D:\path_to_tests\test2\20230131193501.m2ts'
    
    class Clip:
        def __init__(self, video = None, audio=None, attribute_audio_path=None):
            self.video = video
            self.audio = audio
            if self.video is None:
                self.video = core.std.BlankClip()
            if self.audio is None:
                if attribute_audio_path is None:
                    raise ValueError('arguent default_audio (could be very short) to get default audio for images')
                attr_audio = core.bas.Source(attribute_audio_path)
                length = int(attr_audio.sample_rate/self.video.fps*self.video.num_frames)
                self.audio = attr_audio.std.BlankAudio(length = length)
        def trim(self, first=0, last=None, length=None):
            afirst  = self.to_samples(first)    if first  is not None else None
            alast   = self.to_samples(last+1)-1 if last   is not None else None
            alength = self.to_samples(length)   if length is not None else None
            return Clip( self.video.std.Trim(first=first, last=last, length=length),
                         self.audio.std.AudioTrim(first=afirst,last=alast,length=alength)
                        )
        def to_samples(self, frame):
            return int((self.audio.sample_rate/self.video.fps)*frame)
    
        def __add__(self, other):
            return Clip(self.video + other.video, self.audio + other.audio)
    
        def __mul__(self, multiple):
            return Clip(self.video*multiple, self.audio*multiple)
    
        def __getitem__(self, val):
            if isinstance(val, slice):
                if val.step is not None:
                    raise ValueError('Using steps while slicing AudioNode together with VideoNode makes no sense')
                start = self.to_samples(val.start) if val.start is not None else None
                stop =  self.to_samples(val.stop)  if val.stop  is not None else None
                return Clip( self.video.__getitem__(val),
                             self.audio.__getitem__(slice(start,stop))
                             )
            elif isinstance(val, int):
                start = self.to_samples(val)
                stop = int(start + self.audio.sample_rate/self.video.fps)
                return Clip( self.video[val],
                             self.audio.__getitem__(slice(start,stop))
                             )        
        def __repr__(self):
            return '{}\n{}'.format(repr(self.video), repr(self.audio))
    
        def __str__(self):
            return '{}\n{}'.format(str(self.video), str(self.audio))
    
    
    def rotation_check(clip, path, save_rotated_image=False):
        #PIL module loads an image, checks if EXIF data, checks for 'Orientation'
        try:
            image = Image.open(str(path))
        except UnidentifiedImageError:
            return clip
        except PermissionError:
            print(f'PIL, Permission denied to load: {path}')
            return clip
        except Exception as e:
            print(f'PIL, {e}')
            return clip
        try:        
            for key in ExifTags.TAGS.keys():
                if ExifTags.TAGS[key] == 'Orientation':
                    break
            exif = dict(image.getexif().items())
            value = exif[key]
        except (AttributeError, KeyError, IndexError):
            # no getexif
            return clip
        else:
            if   value == 3: clip=clip.std.Turn180()
            elif value == 8: clip=clip.std.Transpose().std.FlipVertical()
            elif value == 6: clip=clip.std.Transpose().std.FlipHorizontal()
            if save_rotated_image and value in [3,8,6]:
                #rotation degrees are in counterclockwise direction!
                rotate = {3:Image.ROTATE_180, 6:Image.ROTATE_270, 8:Image.ROTATE_90}
                image = image.transpose(rotate[value])
                path = path.parent / f'{path.stem}_rotated{path.suffix}'
                image.save(str(path))
        image.close()    
        return clip
    
    def boxing(clip, W=WIDTH, H=HEIGHT):
        cw, ch = clip.width, clip.height
        if W/H > cw/ch:
            w = cw*H/ch
            x = int((W-w)/2)
            x = x - x%MODX
            x = max(0, x)
            clip = resize_clip(clip, W-2*x, H)
            if x: return clip.std.AddBorders(left=x, right=x, color=(16,128,128))  #RGB is out then (16,16,16)
            else: return clip
        else:
            h = ch*W/cw
            y = int((H-h)/2)
            y = y - y%MODY
            y = max(0, y)
            clip = resize_clip(clip, W, H-2*y)
            if y: return clip.std.AddBorders(top=y, bottom=y, color=(16,128,128))
            else: return clip
          
    def resize_clip(clip,w,h, W=WIDTH, H=HEIGHT):
        if w>W or h>H: resize = getattr(clip.resize, DOWNSIZE_KERNEL)
        else:          resize = getattr(clip.resize, UPSIZE_KERNEL)
        if clip.format.color_family==vs.RGB:
            #rgb to YUV, perhaps only for png images, figure out what matrix out is needed
            return resize(width=w, height=h, format=vs.YUV420P8, matrix_s='709')
        else:
            #YUV to YUV
            return resize(width=w, height=h, format=vs.YUV420P8)  
    
    def get_clip(path):
        #get video
        data = LOADER.get_data([str(path),])[0] 
        video = data.clip
        if data.load_isError:
            raise ValueError(f'{data.load_log}{data.load_log_error}\nload.py could not create vs.VideoNode from {path}')
        video = rotation_check(video, path, save_rotated_image=SAVE_ROTATED_IMAGES)
        video = video.std.AssumeFPS(fpsnum=FPSNUM, fpsden=FPSDEN)
        if BOX:  video = boxing(video, WIDTH, HEIGHT)
        else:    video = resize_clip(video, WIDTH, HEIGHT)
        if len(video)==1:    video = video[0]*LENGTH
        video = video.resize.Bicubic(format = vs.YUV444P8, matrix_in_s='709')
        
        #get audio  
        try:
            audio = core.bas.Source(str(path))
        except AttributeError:
            raise ImportError('Vapoursynth audio source plugin "BestAudioSource.dll" could not be loaded\n'
                              'download: https://github.com/vapoursynth/bestaudiosource/releases/tag/R1')
        except (vs.Error, Exception) as e:
            #quiet audio, could not load audio , either video is an image or some problem
            clip = Clip(video, attribute_audio_path=ATTRIBUTE_AUDIO_PATH) #will generate silent clip with desired parameters
        else:
            #audio loaded
            clip = Clip(video, audio)
        return clip
    
            
    def get_path(path_generator):
        #get path of desired extensions from generator
        while 1:
            try:
                path = next(path_generator)
            except StopIteration:
                return None
            if path.suffix.lower() in EXTENSIONS:
                print(f'{path}')
                return path
              
    def get_transition_clip(a, b, duration, transition='fade'):
        left_video  = a.video[-1] * duration
        right_video = b.video[0]  * duration
        transition_func = getattr(vs_transitions, transition)
        video_transition = transition_func(left_video, right_video, frames=duration)
        silent_transition_clip = Clip(video_transition,  attribute_audio_path=ATTRIBUTE_AUDIO_PATH)
        return silent_transition_clip
    
    LOADER = load.Sources()
    CROSS_DUR = max(2,CROSS_DUR + CROSS_DUR%2) #minimum 2 and mod2 to be sure
    paths = Path(DIRECTORY).glob("*.*")
    print('wait loading paths ...')
    path = get_path(paths)
    if path is None:
        raise ValueError(f'Extensions: {EXTENSIONS}, not found in {DIRECTORY}')
    clips = get_clip(path) #get_clip() loads video AND audio as well
    clips.video = vs_transitions.fade_from_black(clips.video, frames=CROSS_DUR)
    while 1:
        path = get_path(paths)
        if path is None:
            break
        next_clip = get_clip(path)
        silent_transition_clip = get_transition_clip( clips,
                                                      next_clip,
                                                      duration=CROSS_DUR,
                                                      transition=next(TRANSITION_GENERATOR) #or just put desired available transition:  transition="wipe"
                                                      )
        clips = clips + silent_transition_clip + next_clip
    clips.video = vs_transitions.fade_to_black(clips.video, frames=CROSS_DUR)
    clips.video = clips.video.resize.Bicubic(format = vs.YUV420P8, matrix_in_s='709')
    print('done')
    
    clips.video.set_output()
    clips.audio.set_output(1)
    command lines could be:
    Code:
    VSPipe.exe --outputindex 0 --container y4m  media_to_video.py - | x264.exe --demuxer y4m --crf 18 --vbv-maxrate 30000 --vbv-bufsize 30000 --keyint 60 --tune film --colorprim bt709 --transfer bt709 --colormatrix bt709 --output output.264 - 
    VSPipe.exe --outputindex 1 --container wav  media_to_video.py - | neroAacEnc.exe -ignorelength -lc -cbr 96000 -if - -of output.m4a
    Mp4box.exe   -add  output.264 -add  output.m4a#audio  -new output.mp4
    Last edited by _Al_; 25th Mar 2023 at 20:24.
    Quote Quote  
  4. Member hydra3333's Avatar
    Join Date
    Oct 2009
    Location
    Australia
    Search Comp PM
    Thank you I'll look at those.

    vs_transitions function within function "_squeeze_expand" has a bug too, referencing width instead of height.

    Partial fix to reference height may not work, not tested properly, so far "up" yields no direct errors
    Code:
    			elif direction in [Direction.UP, Direction.DOWN]:
    				h_inc = math.floor(scale * clipa.height)
    				h_dec = clipa.height - h_inc
    
    				if h_inc == 0:
    					return clipa_t_zone
    
    				if direction == Direction.UP:
    					return StackVertical_wrapper(ID, 
    						[clipa_t_zone.resize.Spline36(height=h_dec), clipb_t_zone.resize.Spline36(height=h_inc)]
    					)
    				elif direction == Direction.RIGHT:
    					return StackVertical_wrapper(ID, 
    						[clipb_t_zone.resize.Spline36(height=h_inc), clipa_t_zone.resize.Spline36(height=h_dec)]
    					)
    however "down" yields
    Code:
    2023-03-26.19:14:36.454997 DEBUG: vs_transitions: linear_boundary: Entered _squeeze_expand ID=14 clipa_movement=squeeze clipb_movement=expand direction=down
    2023-03-26.19:14:36.454997 DEBUG: vs_transitions: linear_boundary: Entered _squeeze_expand ID=14 clipa_movement=squeeze clipb_movement=expand direction=down
    2023-03-26.19:14:36.548749 Post set_output: Consistency Check: FAILED to get_frame at frame 101 (base 0) of 1649 (base 0): FrameEval: Function didn't return a clip
    2023-03-26.19:14:36.548749 Post set_output: Consistency Check: FAILED to get_frame at frame 102 (base 0) of 1649 (base 0): FrameEval: Function didn't return a clip
    2023-03-26.19:14:36.548749 Post set_output: Consistency Check: FAILED to get_frame at frame 103 (base 0) of 1649 (base 0): FrameEval: Function didn't return a clip
    2023-03-26.19:14:36.548749 Post set_output: Consistency Check: FAILED to get_frame at frame 104 (base 0) of 1649 (base 0): FrameEval: Function didn't return a clip

    edit: my bad, didn't notice "elif direction == Direction.RIGHT" needed to be changed to Direction.DOWN.
    Last edited by hydra3333; 27th Mar 2023 at 07:40.
    Quote Quote  
  5. Member hydra3333's Avatar
    Join Date
    Oct 2009
    Location
    Australia
    Search Comp PM
    he he, testing with a poor person's (non-audio) version of your scripts ... with an old amd 3900X (12 core) & 32gb mem, on a separate SSD,
    • 14 images/vids and random transitions yields ~184 fps (starting slow then increasing fps) from vspipe into ffmpeg libx264
    • 166 images/vids and random transitions yields ~2.5 fps from vspipe into ffmpeg libx264
    • 1,110 images/vids and random transitions yields ~1-2 fps (starting at 1 fps then slowly increasing)

    A tad on the slow-ish side Non-linearly.
    I suspect it's related to the number of files being opened/closed (mainly by ffms2) rather than to time processing each clip. But I don't know.
    I guess vspipe may be able to output some filter stats but I found them uncertain to interpret, last time I looked.

    Still, can't be unhappy with pointing it at folder trees and saying "go" with no intervention apart from 2 mins up-front config.

    I really must look at your audio processing, however admit to being afraid of what it may do to the fps
    Quote Quote  
  6. I see, so it drops down speed with number of images. So it becomes impractical with number of images.

    There might be a solution for that. I tested a while ago a dynamic piping of frames into a previewer, sequence of clips. I was afraid it would take a lots of RAM. It looks now vapoursynth is more set to not increase RAM but it could slow down considerably if loading lots of stuff. Which is basically very unusual. I could be wrong, it is just a guess.

    That dynamic loading solution, load only one clip at a time, would work only in linear fashion. Frame is loaded and then is gone, because one source plugin is working and opened at a time. No seeking available for that vapoursynth script. So if previewing it, it would go only linearly forward. For encoding that should not be a problem as it was not a problem for linear previewing.

    I have to find it in PC and set it up. I don't think it should be a problem. At most two source plugins would be open at a time because of transitions.
    Last edited by _Al_; 27th Mar 2023 at 10:47.
    Quote Quote  
  7. But for now I'd AUDIO test this solution:

    it has improved transitions, custom, depending what clip follows what. But that could be customized.
    There is a transition between images, but if there is a transition from an image to a video it is just fades outs, also from video to images. I was testing it and realized that transitions between two video clips are a nonsense. There is no transition between two videos. Script recognizes image and video clip. There is a new class called Transition that handles that.

    Also if there are videos with two DIFFERENT FPS, it would Error and tell fps discrepancy is present. All videos have to have same fps (different resolution is fine). That would be a whole different league to auto change fps.
    Code:
    import vapoursynth as vs
    from vapoursynth import core
    from functools import partial
    from pathlib import Path
    import itertools
    import sys
    import os
    from typing import Union, List
    from PIL import Image, ExifTags, UnidentifiedImageError   #pip install Pillow
    
    #sys.path.append(str(Path(__file__).parent))
    try:
        is_API4 = vs.__api_version__.api_major >= 4
    except AttributeError:
        is_API4 = False
    
    import load
    import vs_transitions
    
    TRANSITIONS = [
        "cover",
        "cube_rotate",
        "curtain_cover",
        "curtain_reveal",
        "fade",
    ##    "fade_from_black",
    ##    "fade_to_black",
    ##    "linear_boundary",
        "poly_fade",
        "push",
        "reveal",
        "slide_expand",
        "squeeze_expand",
        "squeeze_slide",
        "wipe",
    ]
    
    TRANSITION      = 'cycle'     #'cycle' will cycle list with transitions or put some concrete transition like 'fade'
    DIRECTORY       = r'D:\paths_to_tests\test2'
    EXTENSIONS      = ['.jpg','.m2ts']
    WIDTH           = 640
    HEIGHT          = 360
    LENGTH          = 100
    TRANSITION_DUR  = 35
    FPSNUM          = 60000,
    FPSDEN          = 1001
    UPSIZE_KERNEL   = 'Lanczos'
    DOWNSIZE_KERNEL = 'Spline36'
    BOX             = True    # True would initiate letterboxing or pillarboxing. False fills to WIDTH,HEIGTH
    MODX            = 2       # mods for letterboxing calculations, example, for 411 YUV as an extreme
    MODY            = 2       # mods would have to be MODX=4, MODY=1 as minimum
    SAVE_ROTATED_IMAGES = False   #saves image to disk with name suffix: "_rotated" using PIL module
    ATTRIBUTE_AUDIO_PATH = r'D:\paths_to_tests\test2\20190421085114.m2ts'
    
    
    
    class Clip:
        def __init__(self, video = None, audio=None, attribute_audio_path=None):
            self.video = video
            self.audio = audio
            if self.video is None:
                self.video = core.std.BlankClip()
            if self.audio is None:
                if attribute_audio_path is None:
                    raise ValueError('arguent default_audio (could be very short) to get default audio for images')
                attr_audio = core.bas.Source(attribute_audio_path)
                length = int(attr_audio.sample_rate/self.video.fps*self.video.num_frames)
                self.audio = attr_audio.std.BlankAudio(length = length)
    
        def trim(self, first=0, last=None, length=None):
            afirst  = self.to_samples(first)    if first  is not None else None
            alast   = self.to_samples(last+1)-1 if last   is not None else None
            alength = self.to_samples(length)   if length is not None else None
            return Clip( self.video.std.Trim(first=first, last=last, length=length),
                         self.audio.std.AudioTrim(first=afirst,last=alast,length=alength)
                        )
        def to_samples(self, frame):
            return int((self.audio.sample_rate/self.video.fps)*frame)
    
        def __add__(self, other):
            return Clip(self.video + other.video, self.audio + other.audio)
    
        def __mul__(self, multiple):
            return Clip(self.video*multiple, self.audio*multiple)
    
        def __getitem__(self, val):
            if isinstance(val, slice):
                if val.step is not None:
                    raise ValueError('Using steps while slicing AudioNode together with VideoNode makes no sense')
                start = self.to_samples(val.start) if val.start is not None else None
                stop =  self.to_samples(val.stop)  if val.stop  is not None else None
                return Clip( self.video.__getitem__(val),
                             self.audio.__getitem__(slice(start,stop))
                             )
            elif isinstance(val, int):
                start = self.to_samples(val)
                stop = int(start + self.audio.sample_rate/self.video.fps)
                return Clip( self.video[val],
                             self.audio.__getitem__(slice(start,stop))
                             )        
        def __repr__(self):
            return '{}\n{}'.format(repr(self.video), repr(self.audio))
    
        def __str__(self):
            return '{}\n{}'.format(str(self.video), str(self.audio))
    
    class Transition:
        '''
        -enveloping vs_transition module to create transitions for Clip class clips(video and audio)
        -clips a and b are extended using edge frames for transition duration needed
         because vapoursynth cannot merge audios
        -if passing vs.VideoNodes here (not class Clip) it also extends clips so better use vs_transitions directly with vs.VideoNode clips
         no need to use this class for it (unless you want to extend ends as well)
         '''
    
        CUSTOM1 = {
            'image_to_image': 'regular_transition',
            'image_to_video': 'fade_to_and_from_black',
            'video_to_image': 'fade_to_and_from_black',
            'video_to_video': 'no_transition'
            }
       
        def __init__(self,
                     a:           Union[Clip, vs.VideoNode],
                     b:           Union[Clip, vs.VideoNode],
                     duration:    int = 30,
                     transition:  str = 'fade',
                     **kwargs):
    
            self.a_orig = a
            self.b_orig = b
            if isinstance(a, Clip) and isinstance(b, Clip):
                self.clip_type = 'Clip'
                self.a = a.video
                self.b = b.video
            elif isinstance(a, vs.VideoNode) and isinstance(b, vs.VideoNode):
                self.clip_type = 'VideoNode'
                self.a = a
                self.b = b         
            else:
                raise ValueError('Transitions: both clips must the same Clip class or vs.VideoNode class')
            fps_a = round(self.a.fps.numerator/self.a.fps.denominator, 3) if self.a.fps.numerator else 'dynamic'
            fps_b = round(self.b.fps.numerator/self.b.fps.denominator, 3) if self.b.fps.numerator else 'dynamic'
            if 'dynamic' in [fps_a, fps_b] or  abs(fps_a-fps_b) > 0.01:
                raise ValueError(f'Transitions: both clips must have the same fps and cannot be "dynamic", got: {fps_a} fps and {fps_b} fps')
            self.duration = duration
            self.transition = transition
            self.kwargs = kwargs
            #extending ends for transition durations
            self.a = self.a[-1] * duration
            self.b = self.b[0] * duration
            
        def custom1(self):
            first  = 'image' if 'is_image' in self.a[-1].get_frame(0).props else 'video' #must be always last frame prop, 
            second = 'image' if 'is_image' in self.b.get_frame(0).props else 'video'
            return getattr(self, self.CUSTOM1[f'{first}_to_{second}'])()
            
        def regular_transition(self):
            transition_func = getattr(vs_transitions, self.transition)
            transition_videonode = transition_func(self.a, self.b, frames=self.duration)
            return self.out(transition_videonode)
            
        def fade_to_and_from_black(self):
            left  = vs_transitions.fade_to_black(self.a,   frames=self.duration)
            right = vs_transitions.fade_from_black(self.b, frames=self.duration)
            return self.out(left+right)
        
        def no_transition(self):
            return self.a_orig + self.b_orig
    
        def out(self, transition_videonode):
            if self.clip_type == 'Clip':
                self.attribute_audio_path = self.kwargs.pop('attribute_audio_path', ATTRIBUTE_AUDIO_PATH)
                silent_transition_clip = Clip(transition_videonode,  attribute_audio_path=self.attribute_audio_path)
                return self.a_orig + silent_transition_clip + self.b_orig
            elif self.clip_type == 'VideoNode':
                return self.a_orig + transition_videonode + self.b_orig
    
    
    def rotation_check(clip, path, save_rotated_image=False):
        #PIL module loads an image, checks if EXIF data, checks for 'Orientation'
        try:
            image = Image.open(str(path))
        except UnidentifiedImageError:
            return clip
        except PermissionError:
            print(f'PIL, Permission denied to load: {path}')
            return clip
        except Exception as e:
            print(f'PIL, {e}')
            return clip
        try:        
            for key in ExifTags.TAGS.keys():
                if ExifTags.TAGS[key] == 'Orientation':
                    break
            exif = dict(image.getexif().items())
            value = exif[key]
        except (AttributeError, KeyError, IndexError):
            # no getexif
            return clip
        else:
            if   value == 3: clip=clip.std.Turn180()
            elif value == 8: clip=clip.std.Transpose().std.FlipVertical()
            elif value == 6: clip=clip.std.Transpose().std.FlipHorizontal()
            if save_rotated_image and value in [3,8,6]:
                #rotation degrees are in counterclockwise direction!
                rotate = {3:Image.ROTATE_180, 6:Image.ROTATE_270, 8:Image.ROTATE_90}
                image = image.transpose(rotate[value])
                path = path.parent / f'{path.stem}_rotated{path.suffix}'
                image.save(str(path))
        image.close()    
        return clip
    
    def boxing(clip, W=WIDTH, H=HEIGHT):
        cw, ch = clip.width, clip.height
        if W/H > cw/ch:
            w = cw*H/ch
            x = int((W-w)/2)
            x = x - x%MODX
            x = max(0, x)
            clip = resize_clip(clip, W-2*x, H)
            if x: return clip.std.AddBorders(left=x, right=x, color=(16,128,128))  #RGB is out then (16,16,16)
            else: return clip
        else:
            h = ch*W/cw
            y = int((H-h)/2)
            y = y - y%MODY
            y = max(0, y)
            clip = resize_clip(clip, W, H-2*y)
            if y: return clip.std.AddBorders(top=y, bottom=y, color=(16,128,128))
            else: return clip
          
    def resize_clip(clip,w,h, W=WIDTH, H=HEIGHT):
        if w>W or h>H: resize = getattr(clip.resize, DOWNSIZE_KERNEL)
        else:          resize = getattr(clip.resize, UPSIZE_KERNEL)
        if clip.format.color_family==vs.RGB:
            #rgb to YUV, perhaps only for png images, figure out what matrix out is needed
            return resize(width=w, height=h, format=vs.YUV420P8, matrix_s='709')
        else:
            #YUV to YUV
            return resize(width=w, height=h, format=vs.YUV420P8)  
    
    def get_clip(path):
        #get video
        data = LOADER.get_data([str(path),])[0] 
        video = data.clip
        if data.load_isError:
            raise ValueError(f'{data.load_log}\nload.py could not create vs.VideoNode from {path}')
        video = rotation_check(video, path, save_rotated_image=SAVE_ROTATED_IMAGES)
        if BOX:  video = boxing(video, WIDTH, HEIGHT)
        else:    video = resize_clip(video, WIDTH, HEIGHT)
        if len(video)==1:
            if is_API4: video = video.std.SetFrameProps(is_image=1)
            else:       video = video.std.SetFrameProp(prop='is_image', intval=1)
            video = video[0]*LENGTH
            video = video.std.AssumeFPS(fpsnum=FPSNUM, fpsden=FPSDEN)
        video = video.resize.Bicubic(format = vs.YUV444P8, matrix_in_s='709')
       
        #get audio  
        try:
            audio = core.bas.Source(str(path))
        except AttributeError:
            raise ImportError('Vapoursynth audio source plugin "BestAudioSource.dll" could not be loaded\n'
                              'download: https://github.com/vapoursynth/bestaudiosource/releases/tag/R1')
        except (vs.Error, Exception) as e:
            #quiet audio, could not load audio , either video is an image or some problem
            clip = Clip(video, attribute_audio_path=ATTRIBUTE_AUDIO_PATH) #will generate silent clip with desired parameters
        else:
            #audio loaded
            clip = Clip(video, audio)
        return clip
    
            
    def get_path(path_generator):
        while 1:
            try:
                path = next(path_generator)
            except StopIteration:
                return None
            if path.suffix.lower() in EXTENSIONS:
                print(f'{path}')
                return path
    
                
    if TRANSITION == 'cycle':
        TRANSITION_GENERATOR = itertools.cycle(TRANSITIONS)
    else:
        TRANSITION_GENERATOR = itertools.cycle([TRANSITION])  #cycles always the same transition
    
    LOADER = load.Sources()
    TRANSITION_DUR = max(2,TRANSITION_DUR + TRANSITION_DUR%2) #minimum 2 and mod2 to be sure
    paths = Path(DIRECTORY).glob("*.*")
    print('wait loading paths ...')
    path = get_path(paths)
    if path is None:
        raise ValueError(f'Extensions: {EXTENSIONS}, not found in {DIRECTORY}')
    clips = get_clip(path) #get_clip() loads video AND audio as well
    
    while 1:
        path = get_path(paths)
        if path is None:
            break
        second_clip = get_clip(path)
        clips = Transition(clips, second_clip, duration=TRANSITION_DUR, transition=next(TRANSITION_GENERATOR)).custom1()
    
    clips.video = vs_transitions.fade_from_black(clips.video, frames=TRANSITION_DUR)
    clips.video = vs_transitions.fade_to_black(clips.video, frames=TRANSITION_DUR)
    clips.video = clips.video.resize.Bicubic(format = vs.YUV420P8, matrix_in_s='709')
    print('done')
    
    clips.video.set_output()
    clips.audio.set_output(1)
    Last edited by _Al_; 27th Mar 2023 at 10:53.
    Quote Quote  
  8. Member hydra3333's Avatar
    Join Date
    Oct 2009
    Location
    Australia
    Search Comp PM
    Originally Posted by _Al_ View Post
    But for now I'd AUDIO test this solution
    it has improved transitions, custom, depending what clip follows what. But that could be customized.
    OK and thanks, I will, a really nice job

    Originally Posted by _Al_ View Post
    There is a transition between images, but if there is a transition from an image to a video it is just fades outs, also from video to images. I was testing it and realized that transitions between two video clips are a nonsense. There is no transition between two videos. Script recognizes image and video clip. There is a new class called Transition that handles that.
    The world is a funny place, I tested it as well and thought inter-video transitions looked really cool (audio aside).
    Tested at image display time 4s, max video display time 15s (clipped), transition times 0.5s.

    Originally Posted by _Al_ View Post
    Also if there are videos with two DIFFERENT FPS, it would Error and tell fps discrepancy is present. All videos have to have same fps (different resolution is fine). That would be a whole different league to auto change fps.
    Ah. I did a review and found many different cameras and phones took photos and videos in our and the in-law's archives (please see a draft list below).
    All have different settings specifically including fps; some PAL some NTSC, some quite odd, some VFR (oh dear), users inevitably play with such settings.

    May I suggest pointing this vpy at an home pictures archive folder tree (probably its main function ?) would inevitably encounter that scenario.
    I'd encountered and considered it and chose not to convert fps, although apparently its doable (audio aside), just did an "assumefps" on the lot and some sped up and some slowed down ... since a slideshow could be considered to be a sampler and if one wanted the lot then one could go to the source
    Having said that, proper fps conversion, eg 30 to 25 (and audio "stays the same" ? Oh, not, since you say it's a real issue.) ... VFR probably makes that too hard (VFR definitely exists with "later" phone cameras eg S8+) ... hmm, I wonder if there's a "close enough" fps conversion given I'm only intending to display a nominated-seconds "short clip" of a video in a slideshow ...


    Code:
     'EXIF_Model': '',
     'EXIF_Model': '<KENOX S630  / Samsung S630>',
     'EXIF_Model': '1234',
     'EXIF_Model': '5300',
     'EXIF_Model': '5800 Xpres',
     'EXIF_Model': '5MP-9Q3',
     'EXIF_Model': '6120c',
     'EXIF_Model': '6288',
     'EXIF_Model': '6300',
     'EXIF_Model': 'A411',
     'EXIF_Model': 'C4100Z,C4000Z',
     'EXIF_Model': 'C8080WZ',
     'EXIF_Model': 'Canon DIGITAL IXUS 430',
     'EXIF_Model': 'Canon DIGITAL IXUS 50',
     'EXIF_Model': 'Canon DIGITAL IXUS 500',
     'EXIF_Model': 'Canon DIGITAL IXUS 980 IS',
     'EXIF_Model': 'Canon DIGITAL IXUS v3',
     'EXIF_Model': 'Canon EOS-1D',
     'EXIF_Model': 'Canon EOS-1D X',
     'EXIF_Model': 'Canon EOS-1Ds Mark II',
     'EXIF_Model': 'Canon EOS 10D',
     'EXIF_Model': 'Canon EOS 200D',
     'EXIF_Model': 'Canon EOS 20D',
     'EXIF_Model': 'Canon EOS 20D\x00',
     'EXIF_Model': 'Canon EOS 300D DIGITAL',
     'EXIF_Model': 'Canon EOS 30D',
     'EXIF_Model': 'Canon EOS 350D DIGITAL',
     'EXIF_Model': 'Canon EOS 40D',
     'EXIF_Model': 'Canon EOS 550D',
     'EXIF_Model': 'Canon EOS 5D',
     'EXIF_Model': 'Canon EOS 60D',
     'EXIF_Model': 'Canon EOS 6D',
     'EXIF_Model': 'Canon EOS 7D',
     'EXIF_Model': 'Canon EOS DIGITAL REBEL',
     'EXIF_Model': 'Canon EOS DIGITAL REBEL XT',
     'EXIF_Model': 'Canon EOS DIGITAL REBEL XTi',
     'EXIF_Model': 'Canon EOS Kiss Digital N',
     'EXIF_Model': 'Canon MG3600 series Network',
     'EXIF_Model': 'Canon PowerShot A3100 IS',
     'EXIF_Model': 'Canon PowerShot A3200 IS',
     'EXIF_Model': 'Canon PowerShot A3200 IS\x00\x00\x00\x00\x00\x00\x00',
     'EXIF_Model': 'Canon PowerShot A400',
     'EXIF_Model': 'Canon PowerShot A520',
     'EXIF_Model': 'Canon PowerShot A570 IS',
     'EXIF_Model': 'Canon PowerShot A620',
     'EXIF_Model': 'Canon PowerShot A720 IS',
     'EXIF_Model': 'Canon PowerShot A75',
     'EXIF_Model': 'Canon PowerShot A80',
     'EXIF_Model': 'Canon PowerShot A95',
     'EXIF_Model': 'Canon PowerShot A95\x00',
     'EXIF_Model': 'Canon PowerShot G5',
     'EXIF_Model': 'Canon PowerShot G6',
     'EXIF_Model': 'Canon PowerShot S1 IS',
     'EXIF_Model': 'Canon PowerShot S2 IS',
     'EXIF_Model': 'Canon PowerShot S3 IS',
     'EXIF_Model': 'Canon PowerShot S50',
     'EXIF_Model': 'CONTAX i4R    ',
     'EXIF_Model': 'COOLPIX P530',
     'EXIF_Model': 'COOLPIX P600',
     'EXIF_Model': 'COOLPIX S6100',
     'EXIF_Model': 'CYBERSHOT',
     'EXIF_Model': 'DC-3305    ',
     'EXIF_Model': 'Digimax 201',
     'EXIF_Model': 'DiMAGE A1',
     'EXIF_Model': 'DiMAGE Z5',
     'EXIF_Model': 'DMC-FT2',
     'EXIF_Model': 'DMC-FT3',
     'EXIF_Model': 'DMC-FT5',
     'EXIF_Model': 'DMC-FT6',
     'EXIF_Model': 'DMC-FX7',
     'EXIF_Model': 'DMC-FX8',
     'EXIF_Model': 'DMC-FZ30',
     'EXIF_Model': 'DMC-TS2',
     'EXIF_Model': 'DMC-TS3',
     'EXIF_Model': 'DSC-H2',
     'EXIF_Model': 'DSC-H5',
     'EXIF_Model': 'DSC-N1',
     'EXIF_Model': 'DSC-P10',
     'EXIF_Model': 'DSC-P92',
     'EXIF_Model': 'DSC-P93',
     'EXIF_Model': 'DSC-RX100',
     'EXIF_Model': 'DSC-T7',
     'EXIF_Model': 'DSC-TX20',
     'EXIF_Model': 'DSC-W1',
     'EXIF_Model': 'DSC-W40',
     'EXIF_Model': 'DSLR-A100',
     'EXIF_Model': 'DYNAX 5D',
     'EXIF_Model': 'E-300',
     'EXIF_Model': 'E-300           ',
     'EXIF_Model': 'E-500           ',
     'EXIF_Model': 'E-500           \x00',
     'EXIF_Model': 'E-510           ',
     'EXIF_Model': 'E4500',
     'EXIF_Model': 'E5653\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00',
     'EXIF_Model': 'E5700',
     'EXIF_Model': 'E5900',
     'EXIF_Model': 'E65',
     'EXIF_Model': 'E8700',
     'EXIF_Model': 'FinePix F50fd  ',
     'EXIF_Model': 'FinePix L30',
     'EXIF_Model': 'FinePix S5600  ',
     'EXIF_Model': 'FinePix S9100',
     'EXIF_Model': 'GT-C5510',
     'EXIF_Model': 'GT-I9100',
     'EXIF_Model': 'GT-I9100\x00',
     'EXIF_Model': 'GT-I9305T',
     'EXIF_Model': 'HERO4 Silver',
     'EXIF_Model': 'HERO7 Black\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00',
     'EXIF_Model': 'HP pstc6100',
     'EXIF_Model': 'HTC Desire',
     'EXIF_Model': 'iPad',
     'EXIF_Model': 'iPhone 3GS',
     'EXIF_Model': 'iPhone 4',
     'EXIF_Model': 'iPhone 5c',
     'EXIF_Model': 'iPhone 6',
     'EXIF_Model': 'iPhone 6 Plus',
     'EXIF_Model': 'iPhone 6s',
     'EXIF_Model': 'iPhone SE (2nd generation)',
     'EXIF_Model': 'iPhone X',
     'EXIF_Model': 'KODAK CX6330 ZOOM DIGITAL CAMERA',
     'EXIF_Model': 'KODAK EASYSHARE C1013 DIGITAL CAMERA',
     'EXIF_Model': 'KODAK EASYSHARE DX3700 Digital Camera',
     'EXIF_Model': 'KS360',
     'EXIF_Model': 'MG6200 series',
     'EXIF_Model': 'MP270 series',
     'EXIF_Model': 'my411X',
     'EXIF_Model': 'NIKON D200',
     'EXIF_Model': 'NIKON D50',
     'EXIF_Model': 'NIKON D70',
     'EXIF_Model': 'NIKON D70\x00',
     'EXIF_Model': 'NIKON D7000',
     'EXIF_Model': 'NIKON D70s',
     'EXIF_Model': 'NIKON D7200',
     'EXIF_Model': 'NIKON D80',
     'EXIF_Model': 'NIKON D800',
     'EXIF_Model': 'Omni-vision OV9655-SXGA',
     'EXIF_Model': 'OPPO A72',
     'EXIF_Model': 'PENTAX *ist DS     ',
     'EXIF_Model': 'PENTAX Optio 33LF',
     'EXIF_Model': 'PENTAX Optio WP',
     'EXIF_Model': 'Perfection V600',
     'EXIF_Model': 'Portable Scanner',
     'EXIF_Model': 'RS4110Z     ',
     'EXIF_Model': 'S8300',
     'EXIF_Model': 'SAMSUNG ES30/VLUU '
     'EXIF_Model': 'SLP1000SE',
     'EXIF_Model': 'SM-A520F',
     'EXIF_Model': 'SM-G900I',
     'EXIF_Model': 'SM-G925I',
     'EXIF_Model': 'SM-G935F',
     'EXIF_Model': 'SM-G955F',
     'EXIF_Model': 'SM-G973F',
     'EXIF_Model': 'SM-G975F',
     'EXIF_Model': 'SM-J100Y',
     'EXIF_Model': 'SM-J250G',
     'EXIF_Model': 'ST66 / ST68',
     'EXIF_Model': 'Sx500A',
     'EXIF_Model': 'T4_T04',
     'EXIF_Model': 'u20D,S400D,u400D',
     'EXIF_Model': 'u30D,S410D,u410D',
     'EXIF_Model': 'u790SW,S790SW   ',
     'EXIF_Model': 'uD800,S800      ',
     'EXIF_Model': 'Unknown',
     'EXIF_Model': 'VG140,D715      ',
     'EXIF_Model': 'Z610i',
    {'EXIF_Model': 'Canon EOS 300D DIGITAL',
    {'EXIF_Model': 'KODAK Z740 ZOOM DIGITAL CAMERA',
    {'EXIF_Model': 'Sony Visual Communication Camera', 'EXIF_Software': 'ArcSoft WebCam Companion 3', 'EXIF_Copyright': 'ArcSoft Inc.'}
    Quote Quote  
  9. The world is a funny place, I tested it as well and thought inter-video transitions looked really cool (audio aside).
    I thought it is a personal choice, so that class Transition handles that. You can just add different custom and call it like custom2 and call it:
    Code:
        t = Transition(clips, second_clip, duration=TRANSITION_DUR, transition=next(TRANSITION_GENERATOR))
        clips = t.custom2()
    or just simply instead of existing custom1:
    Code:
        CUSTOM1 = {
            'image_to_image': 'regular_transition',
            'image_to_video': 'fade_to_and_from_black',
            'video_to_image': 'fade_to_and_from_black',
            'video_to_video': 'no_transition'
            }
    you can fix it to:
    Code:
        CUSTOM1 = {
            'image_to_image': 'regular_transition',
            'image_to_video': 'fade_to_and_from_black',
            'video_to_image': 'fade_to_and_from_black',
            'video_to_video': 'regular_transition'
            }
    Or 'regular_transition' could be used for all of them

    Changing video fps must be done so length stays the same (so audio does not change), there is many inputs and possibly many outputs, except there might be a rule for example output would be fixed 1920x1080(60000/1001). And even that is a huge task.
    Mediainfo inquiry for interlace flag and fps (or can we rely on ffms2 or d2vSource FieldOrder flags in vapoursynth? Never tested if it can be relied on that, but basically using MediaInfo is not difficult at all to implement):
    --is interlace: deintelace
    --if different fps use converfps(blends) or changefps(duplicates)
    --changing resolution (not a problem it is done already)
    To output vfr seams unrealistic in terms how to do it and have good playback.
    But anyway, even to get cfr it is a lots of code.

    But to me as of now priority seams to pipe frames to encoder on the need bases, so it is performed much faster. Is it usable, I mean these scripts, if a hundreds of images take just some little frames per second to encode? If as you tested, speed drops with amount of items.
    Last edited by _Al_; 28th Mar 2023 at 12:14.
    Quote Quote  
  10. Member hydra3333's Avatar
    Join Date
    Oct 2009
    Location
    Australia
    Search Comp PM
    Sounds good.

    As an aside, I found some very large video clips amongst the archives, circa 800Mb avc each, so I suppose I'd need to trim video clips immediately upon open, taking into account vfr,
    perhaps using a "roughy estimate" for #frames along the lines of
    Code:
    num_frames_to_keep = max_slideshow_CFR_displayable_frames * clip_max_framerate * 1.1
    and them immediately "del" the original clip that arrived upon open,
    and worry about clipping it properly to the actual maximum number of frames I wish a video to display in the slideshow, a bit later after conversion to CFR and whatnot.

    Not sure if that'd do anything useful, it feels like it perhaps "should".

    edit:
    I see ffms2 Source https://github.com/FFMS/ffms2/blob/master/doc/ffms2-vapoursynth.md has
    int fpsnum = -1, int fpsden = 1
    Controls the framerate of the output; used for VFR to CFR conversions. If fpsnum is less than or equal to zero (the default), the output will contain the same frames that the input did, and the frame rate reported to VapourSynth will be set based on the input clip's average frame duration. If fpsnum is greater than zero, Source will force a constant frame rate, expressed as a rational number where fpsnum is the numerator and fpsden is the denominator. This may naturally cause Source to drop or duplicate frames to achieve the desired frame rate, and the output is not guaranteed to have the same number of frames that the input did.
    However I'm not quite convinced dropping and duplicating frames would yield an acceptable result, in PAL country 30fps and whatnot converted that way does not always look ok.
    mediainfo can tell whether it's VFR or CFR, but I don't know if vfr stuff is exposed and vfr to cfr is even doable in vapoursynth. will have to have a look.
    Last edited by hydra3333; 29th Mar 2023 at 00:05.
    Quote Quote  
  11. Member hydra3333's Avatar
    Join Date
    Oct 2009
    Location
    Australia
    Search Comp PM
    Just as an example of variety, one archive's characteristics;

    Code:
    Unique "Rotation"
     'Rotation': '0.000',
     'Rotation': '180.000',
     'Rotation': '270.000',
     'Rotation': '90.000',
     'Rotation': None,
    PIL Rotation Value=180 anti-clockwise
    PIL Rotation Value=270 anti-clockwise
    PIL Rotation Value=90 anti-clockwise
    
    Unique "FrameRate_Mode"
     'FrameRate_Mode': 'CFR',
     'FrameRate_Mode': 'VFR',
     'FrameRate_Mode': None,
    
    Unique "FrameRate_Minimum"
     'FrameRate_Minimum': 0.545,
     'FrameRate_Minimum': 1.0,
     'FrameRate_Minimum': 1.471,
     'FrameRate_Minimum': 1.665,
     'FrameRate_Minimum': 1.808,
     'FrameRate_Minimum': 1.934,
     'FrameRate_Minimum': 1.953,
     'FrameRate_Minimum': 1.976,
     'FrameRate_Minimum': 1.992,
     'FrameRate_Minimum': 10.0,
     'FrameRate_Minimum': 10.002,
     'FrameRate_Minimum': 10.011,
     'FrameRate_Minimum': 13.932,
     'FrameRate_Minimum': 14.634,
     'FrameRate_Minimum': 14.925,
     'FrameRate_Minimum': 14.95,
     'FrameRate_Minimum': 14.963,
     'FrameRate_Minimum': 14.97,
     'FrameRate_Minimum': 14.998,
     'FrameRate_Minimum': 15.0,
     'FrameRate_Minimum': 15.003,
     'FrameRate_Minimum': 15.008,
     'FrameRate_Minimum': 15.018,
     'FrameRate_Minimum': 15.02,
     'FrameRate_Minimum': 15.03,
     'FrameRate_Minimum': 15.043,
     'FrameRate_Minimum': 15.05,
     'FrameRate_Minimum': 15.149,
     'FrameRate_Minimum': 15.164,
     'FrameRate_Minimum': 15.291,
     'FrameRate_Minimum': 15.293,
     'FrameRate_Minimum': 15.324,
     'FrameRate_Minimum': 15.483,
     'FrameRate_Minimum': 15.587,
     'FrameRate_Minimum': 15.625,
     'FrameRate_Minimum': 15.628,
     'FrameRate_Minimum': 15.669,
     'FrameRate_Minimum': 15.715,
     'FrameRate_Minimum': 15.718,
     'FrameRate_Minimum': 15.723,
     'FrameRate_Minimum': 15.743,
     'FrameRate_Minimum': 15.754,
     'FrameRate_Minimum': 15.801,
     'FrameRate_Minimum': 15.814,
     'FrameRate_Minimum': 15.881,
     'FrameRate_Minimum': 15.938,
     'FrameRate_Minimum': 15.943,
     'FrameRate_Minimum': 15.972,
     'FrameRate_Minimum': 15.983,
     'FrameRate_Minimum': 16.006,
     'FrameRate_Minimum': 16.08,
     'FrameRate_Minimum': 16.089,
     'FrameRate_Minimum': 16.181,
     'FrameRate_Minimum': 16.202,
     'FrameRate_Minimum': 16.251,
     'FrameRate_Minimum': 16.257,
     'FrameRate_Minimum': 16.278,
     'FrameRate_Minimum': 16.287,
     'FrameRate_Minimum': 16.382,
     'FrameRate_Minimum': 16.387,
     'FrameRate_Minimum': 16.402,
     'FrameRate_Minimum': 16.426,
     'FrameRate_Minimum': 16.429,
     'FrameRate_Minimum': 16.447,
     'FrameRate_Minimum': 16.496,
     'FrameRate_Minimum': 16.511,
     'FrameRate_Minimum': 16.562,
     'FrameRate_Minimum': 16.593,
     'FrameRate_Minimum': 16.596,
     'FrameRate_Minimum': 16.62,
     'FrameRate_Minimum': 16.654,
     'FrameRate_Minimum': 16.664,
     'FrameRate_Minimum': 16.701,
     'FrameRate_Minimum': 16.707,
     'FrameRate_Minimum': 16.747,
     'FrameRate_Minimum': 16.763,
     'FrameRate_Minimum': 16.769,
     'FrameRate_Minimum': 16.832,
     'FrameRate_Minimum': 16.841,
     'FrameRate_Minimum': 16.873,
     'FrameRate_Minimum': 16.908,
     'FrameRate_Minimum': 16.94,
     'FrameRate_Minimum': 16.943,
     'FrameRate_Minimum': 16.994,
     'FrameRate_Minimum': 16.997,
     'FrameRate_Minimum': 17.02,
     'FrameRate_Minimum': 17.023,
     'FrameRate_Minimum': 17.033,
     'FrameRate_Minimum': 17.055,
     'FrameRate_Minimum': 17.075,
     'FrameRate_Minimum': 17.081,
     'FrameRate_Minimum': 17.091,
     'FrameRate_Minimum': 17.101,
     'FrameRate_Minimum': 17.114,
     'FrameRate_Minimum': 17.117,
     'FrameRate_Minimum': 17.13,
     'FrameRate_Minimum': 17.143,
     'FrameRate_Minimum': 17.169,
     'FrameRate_Minimum': 17.228,
     'FrameRate_Minimum': 17.248,
     'FrameRate_Minimum': 17.268,
     'FrameRate_Minimum': 17.331,
     'FrameRate_Minimum': 17.341,
     'FrameRate_Minimum': 17.381,
     'FrameRate_Minimum': 17.395,
     'FrameRate_Minimum': 17.401,
     'FrameRate_Minimum': 17.408,
     'FrameRate_Minimum': 17.411,
     'FrameRate_Minimum': 17.432,
     'FrameRate_Minimum': 17.459,
     'FrameRate_Minimum': 17.523,
     'FrameRate_Minimum': 17.527,
     'FrameRate_Minimum': 17.53,
     'FrameRate_Minimum': 17.571,
     'FrameRate_Minimum': 17.575,
     'FrameRate_Minimum': 17.585,
     'FrameRate_Minimum': 17.592,
     'FrameRate_Minimum': 17.595,
     'FrameRate_Minimum': 17.613,
     'FrameRate_Minimum': 17.63,
     'FrameRate_Minimum': 17.64,
     'FrameRate_Minimum': 17.651,
     'FrameRate_Minimum': 17.692,
     'FrameRate_Minimum': 17.696,
     'FrameRate_Minimum': 17.703,
     'FrameRate_Minimum': 17.713,
     'FrameRate_Minimum': 17.737,
     'FrameRate_Minimum': 17.741,
     'FrameRate_Minimum': 17.748,
     'FrameRate_Minimum': 17.751,
     'FrameRate_Minimum': 17.78,
     'FrameRate_Minimum': 17.783,
     'FrameRate_Minimum': 17.787,
     'FrameRate_Minimum': 17.794,
     'FrameRate_Minimum': 17.801,
     'FrameRate_Minimum': 17.811,
     'FrameRate_Minimum': 17.818,
     'FrameRate_Minimum': 17.871,
     'FrameRate_Minimum': 17.9,
     'FrameRate_Minimum': 17.903,
     'FrameRate_Minimum': 17.907,
     'FrameRate_Minimum': 17.914,
     'FrameRate_Minimum': 17.921,
     'FrameRate_Minimum': 17.946,
     'FrameRate_Minimum': 17.989,
     'FrameRate_Minimum': 18.025,
     'FrameRate_Minimum': 18.047,
     'FrameRate_Minimum': 18.065,
     'FrameRate_Minimum': 18.072,
     'FrameRate_Minimum': 18.076,
     'FrameRate_Minimum': 18.204,
     'FrameRate_Minimum': 18.259,
     'FrameRate_Minimum': 18.27,
     'FrameRate_Minimum': 18.278,
     'FrameRate_Minimum': 18.289,
     'FrameRate_Minimum': 18.304,
     'FrameRate_Minimum': 18.33,
     'FrameRate_Minimum': 18.379,
     'FrameRate_Minimum': 18.386,
     'FrameRate_Minimum': 18.416,
     'FrameRate_Minimum': 18.42,
     'FrameRate_Minimum': 18.424,
     'FrameRate_Minimum': 18.439,
     'FrameRate_Minimum': 18.465,
     'FrameRate_Minimum': 18.496,
     'FrameRate_Minimum': 18.511,
     'FrameRate_Minimum': 18.553,
     'FrameRate_Minimum': 18.568,
     'FrameRate_Minimum': 18.591,
     'FrameRate_Minimum': 18.614,
     'FrameRate_Minimum': 18.657,
     'FrameRate_Minimum': 18.688,
     'FrameRate_Minimum': 18.695,
     'FrameRate_Minimum': 18.715,
     'FrameRate_Minimum': 18.742,
     'FrameRate_Minimum': 18.762,
     'FrameRate_Minimum': 18.766,
     'FrameRate_Minimum': 18.789,
     'FrameRate_Minimum': 18.793,
     'FrameRate_Minimum': 18.809,
     'FrameRate_Minimum': 18.821,
     'FrameRate_Minimum': 18.832,
     'FrameRate_Minimum': 18.884,
     'FrameRate_Minimum': 18.892,
     'FrameRate_Minimum': 18.9,
     'FrameRate_Minimum': 18.904,
     'FrameRate_Minimum': 18.979,
     'FrameRate_Minimum': 19.023,
     'FrameRate_Minimum': 19.048,
     'FrameRate_Minimum': 19.052,
     'FrameRate_Minimum': 19.064,
     'FrameRate_Minimum': 19.072,
     'FrameRate_Minimum': 19.076,
     'FrameRate_Minimum': 19.08,
     'FrameRate_Minimum': 19.092,
     'FrameRate_Minimum': 19.125,
     'FrameRate_Minimum': 19.141,
     'FrameRate_Minimum': 19.145,
     'FrameRate_Minimum': 19.169,
     'FrameRate_Minimum': 19.202,
     'FrameRate_Minimum': 19.223,
     'FrameRate_Minimum': 19.239,
     'FrameRate_Minimum': 19.255,
     'FrameRate_Minimum': 19.297,
     'FrameRate_Minimum': 19.322,
     'FrameRate_Minimum': 19.367,
     'FrameRate_Minimum': 19.38,
     'FrameRate_Minimum': 19.388,
     'FrameRate_Minimum': 19.397,
     'FrameRate_Minimum': 19.401,
     'FrameRate_Minimum': 19.434,
     'FrameRate_Minimum': 19.464,
     'FrameRate_Minimum': 19.472,
     'FrameRate_Minimum': 19.481,
     'FrameRate_Minimum': 19.485,
     'FrameRate_Minimum': 19.489,
     'FrameRate_Minimum': 19.493,
     'FrameRate_Minimum': 19.552,
     'FrameRate_Minimum': 19.557,
     'FrameRate_Minimum': 19.595,
     'FrameRate_Minimum': 19.621,
     'FrameRate_Minimum': 19.625,
     'FrameRate_Minimum': 19.629,
     'FrameRate_Minimum': 19.634,
     'FrameRate_Minimum': 19.702,
     'FrameRate_Minimum': 19.707,
     'FrameRate_Minimum': 19.733,
     'FrameRate_Minimum': 19.759,
     'FrameRate_Minimum': 19.815,
     'FrameRate_Minimum': 19.841,
     'FrameRate_Minimum': 19.846,
     'FrameRate_Minimum': 19.854,
     'FrameRate_Minimum': 19.868,
     'FrameRate_Minimum': 19.885,
     'FrameRate_Minimum': 19.89,
     'FrameRate_Minimum': 19.898,
     'FrameRate_Minimum': 19.903,
     'FrameRate_Minimum': 19.925,
     'FrameRate_Minimum': 19.938,
     'FrameRate_Minimum': 19.956,
     'FrameRate_Minimum': 19.96,
     'FrameRate_Minimum': 19.969,
     'FrameRate_Minimum': 19.982,
     'FrameRate_Minimum': 19.996,
     'FrameRate_Minimum': 2.075,
     'FrameRate_Minimum': 2.174,
     'FrameRate_Minimum': 2.188,
     'FrameRate_Minimum': 2.193,
     'FrameRate_Minimum': 2.203,
     'FrameRate_Minimum': 2.208,
     'FrameRate_Minimum': 2.217,
     'FrameRate_Minimum': 2.222,
     'FrameRate_Minimum': 2.227,
     'FrameRate_Minimum': 2.232,
     'FrameRate_Minimum': 2.237,
     'FrameRate_Minimum': 2.247,
     'FrameRate_Minimum': 2.252,
     'FrameRate_Minimum': 2.257,
     'FrameRate_Minimum': 2.263,
     'FrameRate_Minimum': 2.268,
     'FrameRate_Minimum': 2.273,
     'FrameRate_Minimum': 2.278,
     'FrameRate_Minimum': 2.283,
     'FrameRate_Minimum': 2.294,
     'FrameRate_Minimum': 2.299,
     'FrameRate_Minimum': 2.304,
     'FrameRate_Minimum': 2.31,
     'FrameRate_Minimum': 2.32,
     'FrameRate_Minimum': 2.331,
     'FrameRate_Minimum': 2.336,
     'FrameRate_Minimum': 2.393,
     'FrameRate_Minimum': 2.41,
     'FrameRate_Minimum': 2.422,
     'FrameRate_Minimum': 2.427,
     'FrameRate_Minimum': 2.5,
     'FrameRate_Minimum': 2.506,
     'FrameRate_Minimum': 2.598,
     'FrameRate_Minimum': 2.755,
     'FrameRate_Minimum': 2.785,
     'FrameRate_Minimum': 2.968,
     'FrameRate_Minimum': 2.994,
     'FrameRate_Minimum': 20.004,
     'FrameRate_Minimum': 20.013,
     'FrameRate_Minimum': 20.022,
     'FrameRate_Minimum': 20.089,
     'FrameRate_Minimum': 20.094,
     'FrameRate_Minimum': 20.107,
     'FrameRate_Minimum': 20.139,
     'FrameRate_Minimum': 20.148,
     'FrameRate_Minimum': 20.161,
     'FrameRate_Minimum': 20.188,
     'FrameRate_Minimum': 20.211,
     'FrameRate_Minimum': 20.229,
     'FrameRate_Minimum': 20.275,
     'FrameRate_Minimum': 20.279,
     'FrameRate_Minimum': 20.289,
     'FrameRate_Minimum': 20.302,
     'FrameRate_Minimum': 20.311,
     'FrameRate_Minimum': 20.348,
     'FrameRate_Minimum': 20.357,
     'FrameRate_Minimum': 20.362,
     'FrameRate_Minimum': 20.371,
     'FrameRate_Minimum': 20.376,
     'FrameRate_Minimum': 20.385,
     'FrameRate_Minimum': 20.404,
     'FrameRate_Minimum': 20.417,
     'FrameRate_Minimum': 20.422,
     'FrameRate_Minimum': 20.464,
     'FrameRate_Minimum': 20.487,
     'FrameRate_Minimum': 20.501,
     'FrameRate_Minimum': 20.534,
     'FrameRate_Minimum': 20.543,
     'FrameRate_Minimum': 20.567,
     'FrameRate_Minimum': 20.581,
     'FrameRate_Minimum': 20.595,
     'FrameRate_Minimum': 20.6,
     'FrameRate_Minimum': 20.604,
     'FrameRate_Minimum': 20.619,
     'FrameRate_Minimum': 20.633,
     'FrameRate_Minimum': 20.666,
     'FrameRate_Minimum': 20.709,
     'FrameRate_Minimum': 20.713,
     'FrameRate_Minimum': 20.776,
     'FrameRate_Minimum': 20.785,
     'FrameRate_Minimum': 20.814,
     'FrameRate_Minimum': 20.857,
     'FrameRate_Minimum': 20.862,
     'FrameRate_Minimum': 20.887,
     'FrameRate_Minimum': 20.901,
     'FrameRate_Minimum': 20.906,
     'FrameRate_Minimum': 20.921,
     'FrameRate_Minimum': 20.93,
     'FrameRate_Minimum': 20.964,
     'FrameRate_Minimum': 20.994,
     'FrameRate_Minimum': 21.013,
     'FrameRate_Minimum': 21.033,
     'FrameRate_Minimum': 21.038,
     'FrameRate_Minimum': 21.048,
     'FrameRate_Minimum': 21.077,
     'FrameRate_Minimum': 21.087,
     'FrameRate_Minimum': 21.092,
     'FrameRate_Minimum': 21.107,
     'FrameRate_Minimum': 21.132,
     'FrameRate_Minimum': 21.142,
     'FrameRate_Minimum': 21.157,
     'FrameRate_Minimum': 21.206,
     'FrameRate_Minimum': 21.216,
     'FrameRate_Minimum': 21.256,
     'FrameRate_Minimum': 21.297,
     'FrameRate_Minimum': 21.322,
     'FrameRate_Minimum': 21.337,
     'FrameRate_Minimum': 21.347,
     'FrameRate_Minimum': 21.408,
     'FrameRate_Minimum': 21.413,
     'FrameRate_Minimum': 21.429,
     'FrameRate_Minimum': 21.454,
     'FrameRate_Minimum': 21.624,
     'FrameRate_Minimum': 21.64,
     'FrameRate_Minimum': 21.645,
     'FrameRate_Minimum': 21.671,
     'FrameRate_Minimum': 21.687,
     'FrameRate_Minimum': 21.718,
     'FrameRate_Minimum': 21.729,
     'FrameRate_Minimum': 21.755,
     'FrameRate_Minimum': 21.85,
     'FrameRate_Minimum': 21.898,
     'FrameRate_Minimum': 21.919,
     'FrameRate_Minimum': 21.93,
     'FrameRate_Minimum': 21.935,
     'FrameRate_Minimum': 21.994,
     'FrameRate_Minimum': 22.0,
     'FrameRate_Minimum': 22.005,
     'FrameRate_Minimum': 22.026,
     'FrameRate_Minimum': 22.037,
     'FrameRate_Minimum': 22.075,
     'FrameRate_Minimum': 22.2,
     'FrameRate_Minimum': 22.206,
     'FrameRate_Minimum': 22.217,
     'FrameRate_Minimum': 22.255,
     'FrameRate_Minimum': 22.283,
     'FrameRate_Minimum': 22.288,
     'FrameRate_Minimum': 22.316,
     'FrameRate_Minimum': 22.327,
     'FrameRate_Minimum': 22.36,
     'FrameRate_Minimum': 22.382,
     'FrameRate_Minimum': 22.41,
     'FrameRate_Minimum': 22.416,
     'FrameRate_Minimum': 22.433,
     'FrameRate_Minimum': 22.455,
     'FrameRate_Minimum': 22.461,
     'FrameRate_Minimum': 22.494,
     'FrameRate_Minimum': 22.5,
     'FrameRate_Minimum': 22.511,
     'FrameRate_Minimum': 22.517,
     'FrameRate_Minimum': 22.534,
     'FrameRate_Minimum': 22.551,
     'FrameRate_Minimum': 22.579,
     'FrameRate_Minimum': 22.59,
     'FrameRate_Minimum': 22.596,
     'FrameRate_Minimum': 22.607,
     'FrameRate_Minimum': 22.613,
     'FrameRate_Minimum': 22.676,
     'FrameRate_Minimum': 22.681,
     'FrameRate_Minimum': 22.699,
     'FrameRate_Minimum': 22.722,
     'FrameRate_Minimum': 22.727,
     'FrameRate_Minimum': 22.796,
     'FrameRate_Minimum': 22.848,
     'FrameRate_Minimum': 22.889,
     'FrameRate_Minimum': 22.912,
     'FrameRate_Minimum': 22.93,
     'FrameRate_Minimum': 22.936,
     'FrameRate_Minimum': 22.942,
     'FrameRate_Minimum': 22.977,
     'FrameRate_Minimum': 22.983,
     'FrameRate_Minimum': 23.018,
     'FrameRate_Minimum': 23.024,
     'FrameRate_Minimum': 23.065,
     'FrameRate_Minimum': 23.071,
     'FrameRate_Minimum': 23.077,
     'FrameRate_Minimum': 23.101,
     'FrameRate_Minimum': 23.107,
     'FrameRate_Minimum': 23.142,
     'FrameRate_Minimum': 23.16,
     'FrameRate_Minimum': 23.178,
     'FrameRate_Minimum': 23.238,
     'FrameRate_Minimum': 23.274,
     'FrameRate_Minimum': 23.28,
     'FrameRate_Minimum': 23.31,
     'FrameRate_Minimum': 23.352,
     'FrameRate_Minimum': 23.413,
     'FrameRate_Minimum': 23.419,
     'FrameRate_Minimum': 23.438,
     'FrameRate_Minimum': 23.444,
     'FrameRate_Minimum': 23.468,
     'FrameRate_Minimum': 23.48,
     'FrameRate_Minimum': 23.493,
     'FrameRate_Minimum': 23.517,
     'FrameRate_Minimum': 23.523,
     'FrameRate_Minimum': 23.554,
     'FrameRate_Minimum': 23.622,
     'FrameRate_Minimum': 23.684,
     'FrameRate_Minimum': 23.69,
     'FrameRate_Minimum': 23.709,
     'FrameRate_Minimum': 23.734,
     'FrameRate_Minimum': 23.753,
     'FrameRate_Minimum': 23.778,
     'FrameRate_Minimum': 23.791,
     'FrameRate_Minimum': 23.936,
     'FrameRate_Minimum': 23.949,
     'FrameRate_Minimum': 23.962,
     'FrameRate_Minimum': 24.0,
     'FrameRate_Minimum': 24.032,
     'FrameRate_Minimum': 24.045,
     'FrameRate_Minimum': 24.051,
     'FrameRate_Minimum': 24.083,
     'FrameRate_Minimum': 24.09,
     'FrameRate_Minimum': 24.174,
     'FrameRate_Minimum': 24.181,
     'FrameRate_Minimum': 24.2,
     'FrameRate_Minimum': 24.22,
     'FrameRate_Minimum': 24.246,
     'FrameRate_Minimum': 24.259,
     'FrameRate_Minimum': 24.298,
     'FrameRate_Minimum': 24.331,
     'FrameRate_Minimum': 24.351,
     'FrameRate_Minimum': 24.377,
     'FrameRate_Minimum': 24.47,
     'FrameRate_Minimum': 24.483,
     'FrameRate_Minimum': 24.503,
     'FrameRate_Minimum': 24.57,
     'FrameRate_Minimum': 24.59,
     'FrameRate_Minimum': 24.624,
     'FrameRate_Minimum': 24.637,
     'FrameRate_Minimum': 24.644,
     'FrameRate_Minimum': 24.658,
     'FrameRate_Minimum': 24.664,
     'FrameRate_Minimum': 24.732,
     'FrameRate_Minimum': 24.752,
     'FrameRate_Minimum': 24.8,
     'FrameRate_Minimum': 24.834,
     'FrameRate_Minimum': 24.869,
     'FrameRate_Minimum': 24.931,
     'FrameRate_Minimum': 24.951,
     'FrameRate_Minimum': 24.958,
     'FrameRate_Minimum': 25.0,
     'FrameRate_Minimum': 25.014,
     'FrameRate_Minimum': 25.049,
     'FrameRate_Minimum': 25.084,
     'FrameRate_Minimum': 25.091,
     'FrameRate_Minimum': 25.14,
     'FrameRate_Minimum': 25.175,
     'FrameRate_Minimum': 25.281,
     'FrameRate_Minimum': 25.359,
     'FrameRate_Minimum': 25.381,
     'FrameRate_Minimum': 25.409,
     'FrameRate_Minimum': 25.46,
     'FrameRate_Minimum': 25.467,
     'FrameRate_Minimum': 25.474,
     'FrameRate_Minimum': 25.481,
     'FrameRate_Minimum': 25.51,
     'FrameRate_Minimum': 25.525,
     'FrameRate_Minimum': 25.554,
     'FrameRate_Minimum': 25.575,
     'FrameRate_Minimum': 25.597,
     'FrameRate_Minimum': 25.648,
     'FrameRate_Minimum': 25.656,
     'FrameRate_Minimum': 25.67,
     'FrameRate_Minimum': 25.678,
     'FrameRate_Minimum': 25.707,
     'FrameRate_Minimum': 25.714,
     'FrameRate_Minimum': 25.722,
     'FrameRate_Minimum': 25.87,
     'FrameRate_Minimum': 25.899,
     'FrameRate_Minimum': 25.974,
     'FrameRate_Minimum': 25.982,
     'FrameRate_Minimum': 26.042,
     'FrameRate_Minimum': 26.072,
     'FrameRate_Minimum': 26.102,
     'FrameRate_Minimum': 26.117,
     'FrameRate_Minimum': 26.247,
     'FrameRate_Minimum': 26.254,
     'FrameRate_Minimum': 26.27,
     'FrameRate_Minimum': 26.285,
     'FrameRate_Minimum': 26.347,
     'FrameRate_Minimum': 26.393,
     'FrameRate_Minimum': 26.408,
     'FrameRate_Minimum': 26.478,
     'FrameRate_Minimum': 26.486,
     'FrameRate_Minimum': 26.51,
     'FrameRate_Minimum': 26.525,
     'FrameRate_Minimum': 26.533,
     'FrameRate_Minimum': 26.604,
     'FrameRate_Minimum': 26.667,
     'FrameRate_Minimum': 26.698,
     'FrameRate_Minimum': 26.706,
     'FrameRate_Minimum': 26.754,
     'FrameRate_Minimum': 26.762,
     'FrameRate_Minimum': 26.802,
     'FrameRate_Minimum': 26.818,
     'FrameRate_Minimum': 26.826,
     'FrameRate_Minimum': 27.003,
     'FrameRate_Minimum': 27.06,
     'FrameRate_Minimum': 27.125,
     'FrameRate_Minimum': 27.207,
     'FrameRate_Minimum': 27.24,
     'FrameRate_Minimum': 27.289,
     'FrameRate_Minimum': 27.322,
     'FrameRate_Minimum': 27.523,
     'FrameRate_Minimum': 27.54,
     'FrameRate_Minimum': 27.624,
     'FrameRate_Minimum': 27.684,
     'FrameRate_Minimum': 27.701,
     'FrameRate_Minimum': 27.744,
     'FrameRate_Minimum': 27.752,
     'FrameRate_Minimum': 27.761,
     'FrameRate_Minimum': 27.769,
     'FrameRate_Minimum': 27.812,
     'FrameRate_Minimum': 27.829,
     'FrameRate_Minimum': 27.968,
     'FrameRate_Minimum': 28.002,
     'FrameRate_Minimum': 28.011,
     'FrameRate_Minimum': 28.037,
     'FrameRate_Minimum': 28.064,
     'FrameRate_Minimum': 28.09,
     'FrameRate_Minimum': 28.16,
     'FrameRate_Minimum': 28.178,
     'FrameRate_Minimum': 28.187,
     'FrameRate_Minimum': 28.204,
     'FrameRate_Minimum': 28.293,
     'FrameRate_Minimum': 28.391,
     'FrameRate_Minimum': 28.409,
     'FrameRate_Minimum': 28.427,
     'FrameRate_Minimum': 28.481,
     'FrameRate_Minimum': 28.526,
     'FrameRate_Minimum': 28.553,
     'FrameRate_Minimum': 28.562,
     'FrameRate_Minimum': 28.571,
     'FrameRate_Minimum': 28.59,
     'FrameRate_Minimum': 28.599,
     'FrameRate_Minimum': 28.626,
     'FrameRate_Minimum': 28.681,
     'FrameRate_Minimum': 28.699,
     'FrameRate_Minimum': 28.717,
     'FrameRate_Minimum': 28.754,
     'FrameRate_Minimum': 28.818,
     'FrameRate_Minimum': 28.828,
     'FrameRate_Minimum': 28.846,
     'FrameRate_Minimum': 28.865,
     'FrameRate_Minimum': 28.958,
     'FrameRate_Minimum': 28.995,
     'FrameRate_Minimum': 29.004,
     'FrameRate_Minimum': 29.023,
     'FrameRate_Minimum': 29.07,
     'FrameRate_Minimum': 29.155,
     'FrameRate_Minimum': 29.211,
     'FrameRate_Minimum': 29.268,
     'FrameRate_Minimum': 29.297,
     'FrameRate_Minimum': 29.345,
     'FrameRate_Minimum': 29.412,
     'FrameRate_Minimum': 29.528,
     'FrameRate_Minimum': 29.547,
     'FrameRate_Minimum': 29.596,
     'FrameRate_Minimum': 29.625,
     'FrameRate_Minimum': 29.644,
     'FrameRate_Minimum': 29.664,
     'FrameRate_Minimum': 29.674,
     'FrameRate_Minimum': 29.693,
     'FrameRate_Minimum': 29.703,
     'FrameRate_Minimum': 29.713,
     'FrameRate_Minimum': 29.762,
     'FrameRate_Minimum': 29.772,
     'FrameRate_Minimum': 29.782,
     'FrameRate_Minimum': 29.791,
     'FrameRate_Minimum': 29.801,
     'FrameRate_Minimum': 29.811,
     'FrameRate_Minimum': 29.821,
     'FrameRate_Minimum': 29.831,
     'FrameRate_Minimum': 29.841,
     'FrameRate_Minimum': 29.851,
     'FrameRate_Minimum': 29.861,
     'FrameRate_Minimum': 29.871,
     'FrameRate_Minimum': 29.88,
     'FrameRate_Minimum': 29.89,
     'FrameRate_Minimum': 29.9,
     'FrameRate_Minimum': 29.91,
     'FrameRate_Minimum': 29.92,
     'FrameRate_Minimum': 29.93,
     'FrameRate_Minimum': 29.94,
     'FrameRate_Minimum': 29.95,
     'FrameRate_Minimum': 29.96,
     'FrameRate_Minimum': 29.97,
     'FrameRate_Minimum': 29.98,
     'FrameRate_Minimum': 29.99,
     'FrameRate_Minimum': 3.003,
     'FrameRate_Minimum': 3.03,
     'FrameRate_Minimum': 3.226,
     'FrameRate_Minimum': 3.65,
     'FrameRate_Minimum': 3.746,
     'FrameRate_Minimum': 3.802,
     'FrameRate_Minimum': 30.0,
     'FrameRate_Minimum': 4.0,
     'FrameRate_Minimum': 4.115,
     'FrameRate_Minimum': 4.167,
     'FrameRate_Minimum': 4.464,
     'FrameRate_Minimum': 4.566,
     'FrameRate_Minimum': 4.717,
     'FrameRate_Minimum': 4.785,
     'FrameRate_Minimum': 4.808,
     'FrameRate_Minimum': 4.854,
     'FrameRate_Minimum': 4.926,
     'FrameRate_Minimum': 4.95,
     'FrameRate_Minimum': 4.972,
     'FrameRate_Minimum': 4.982,
     'FrameRate_Minimum': 4.987,
     'FrameRate_Minimum': 4.996,
     'FrameRate_Minimum': 5.0,
     'FrameRate_Minimum': 5.128,
     'FrameRate_Minimum': 5.263,
     'FrameRate_Minimum': 5.291,
     'FrameRate_Minimum': 5.348,
     'FrameRate_Minimum': 5.435,
     'FrameRate_Minimum': 5.525,
     'FrameRate_Minimum': 5.556,
     'FrameRate_Minimum': 5.682,
     'FrameRate_Minimum': 5.714,
     'FrameRate_Minimum': 5.78,
     'FrameRate_Minimum': 5.882,
     'FrameRate_Minimum': 5.917,
     'FrameRate_Minimum': 5.988,
     'FrameRate_Minimum': 5.996,
     'FrameRate_Minimum': 59.92,
     'FrameRate_Minimum': 6.941,
     'FrameRate_Minimum': 7.461,
     'FrameRate_Minimum': 7.5,
     'FrameRate_Minimum': 7.521,
     'FrameRate_Minimum': 7.895,
     'FrameRate_Minimum': 8.0,
     'FrameRate_Minimum': 8.094,
     'FrameRate_Minimum': 8.196,
     'FrameRate_Minimum': 8.208,
     'FrameRate_Minimum': 8.286,
     'FrameRate_Minimum': 8.29,
     'FrameRate_Minimum': 8.449,
     'FrameRate_Minimum': 8.585,
     'FrameRate_Minimum': 8.64,
     'FrameRate_Minimum': 8.785,
     'FrameRate_Minimum': 8.798,
     'FrameRate_Minimum': 8.811,
     'FrameRate_Minimum': 8.881,
     'FrameRate_Minimum': 9.046,
     'FrameRate_Minimum': 9.088,
     'FrameRate_Minimum': 9.12,
     'FrameRate_Minimum': 9.187,
     'FrameRate_Minimum': 9.222,
     'FrameRate_Minimum': 9.263,
     'FrameRate_Minimum': 9.305,
     'FrameRate_Minimum': 9.338,
     'FrameRate_Minimum': 9.437,
     'FrameRate_Minimum': 9.503,
     'FrameRate_Minimum': 9.505,
     'FrameRate_Minimum': 9.532,
     'FrameRate_Minimum': 9.542,
     'FrameRate_Minimum': 9.6,
     'FrameRate_Minimum': 9.605,
     'FrameRate_Minimum': 9.617,
     'FrameRate_Minimum': 9.816,
     'FrameRate_Minimum': 9.822,
     'FrameRate_Minimum': 9.879,
     'FrameRate_Minimum': 9.891,
     'FrameRate_Minimum': 9.984,
     'FrameRate_Minimum': 9.986,
     'FrameRate_Minimum': 9.992,
     'FrameRate_Minimum': 9.993,
     'FrameRate_Minimum': 9.997,
     'FrameRate_Minimum': 9.998,
     'FrameRate_Minimum': 9.999,
     'FrameRate_Minimum': None,
    
    Unique "FrameRate_Nominal"
     'FrameRate_Nominal': None,
    
    Unique "FrameRate_Maximum"
     'FrameRate_Maximum': 10.0,
     'FrameRate_Maximum': 10.014,
     'FrameRate_Maximum': 10.022,
     'FrameRate_Maximum': 10.023,
     'FrameRate_Maximum': 10.025,
     'FrameRate_Maximum': 10.026,
     'FrameRate_Maximum': 10.027,
     'FrameRate_Maximum': 10.028,
     'FrameRate_Maximum': 10.029,
     'FrameRate_Maximum': 10.03,
     'FrameRate_Maximum': 10.037,
     'FrameRate_Maximum': 10.04,
     'FrameRate_Maximum': 10.417,
     'FrameRate_Maximum': 10000.0,
     'FrameRate_Maximum': 11.765,
     'FrameRate_Maximum': 12.346,
     'FrameRate_Maximum': 12.5,
     'FrameRate_Maximum': 12.658,
     'FrameRate_Maximum': 12.821,
     'FrameRate_Maximum': 1200.0,
     'FrameRate_Maximum': 13.514,
     'FrameRate_Maximum': 142.857,
     'FrameRate_Maximum': 15.0,
     'FrameRate_Maximum': 15.009,
     'FrameRate_Maximum': 15.385,
     'FrameRate_Maximum': 15.877,
     'FrameRate_Maximum': 15750.0,
     'FrameRate_Maximum': 16.389,
     'FrameRate_Maximum': 16.406,
     'FrameRate_Maximum': 16.935,
     'FrameRate_Maximum': 17.232,
     'FrameRate_Maximum': 17.251,
     'FrameRate_Maximum': 17.539,
     'FrameRate_Maximum': 17.559,
     'FrameRate_Maximum': 17.857,
     'FrameRate_Maximum': 18.166,
     'FrameRate_Maximum': 18.187,
     'FrameRate_Maximum': 18.529,
     'FrameRate_Maximum': 18.885,
     'FrameRate_Maximum': 19.231,
     'FrameRate_Maximum': 20.013,
     'FrameRate_Maximum': 20.402,
     'FrameRate_Maximum': 20.408,
     'FrameRate_Maximum': 20.833,
     'FrameRate_Maximum': 200.0,
     'FrameRate_Maximum': 22.222,
     'FrameRate_Maximum': 24.0,
     'FrameRate_Maximum': 24.617,
     'FrameRate_Maximum': 25.0,
     'FrameRate_Maximum': 25.417,
     'FrameRate_Maximum': 250.0,
     'FrameRate_Maximum': 26.178,
     'FrameRate_Maximum': 26.471,
     'FrameRate_Maximum': 28.116,
     'FrameRate_Maximum': 286.624,
     'FrameRate_Maximum': 29.412,
     'FrameRate_Maximum': 29.703,
     'FrameRate_Maximum': 29.772,
     'FrameRate_Maximum': 30.0,
     'FrameRate_Maximum': 30.01,
     'FrameRate_Maximum': 30.02,
     'FrameRate_Maximum': 30.03,
     'FrameRate_Maximum': 30.04,
     'FrameRate_Maximum': 30.05,
     'FrameRate_Maximum': 30.06,
     'FrameRate_Maximum': 30.07,
     'FrameRate_Maximum': 30.08,
     'FrameRate_Maximum': 30.09,
     'FrameRate_Maximum': 30.1,
     'FrameRate_Maximum': 30.11,
     'FrameRate_Maximum': 30.12,
     'FrameRate_Maximum': 30.131,
     'FrameRate_Maximum': 30.141,
     'FrameRate_Maximum': 30.151,
     'FrameRate_Maximum': 30.161,
     'FrameRate_Maximum': 30.171,
     'FrameRate_Maximum': 30.181,
     'FrameRate_Maximum': 30.191,
     'FrameRate_Maximum': 30.201,
     'FrameRate_Maximum': 30.211,
     'FrameRate_Maximum': 30.222,
     'FrameRate_Maximum': 30.232,
     'FrameRate_Maximum': 30.242,
     'FrameRate_Maximum': 30.252,
     'FrameRate_Maximum': 30.262,
     'FrameRate_Maximum': 30.272,
     'FrameRate_Maximum': 30.283,
     'FrameRate_Maximum': 30.293,
     'FrameRate_Maximum': 30.303,
     'FrameRate_Maximum': 30.313,
     'FrameRate_Maximum': 30.323,
     'FrameRate_Maximum': 30.334,
     'FrameRate_Maximum': 30.344,
     'FrameRate_Maximum': 30.354,
     'FrameRate_Maximum': 30.364,
     'FrameRate_Maximum': 30.375,
     'FrameRate_Maximum': 30.395,
     'FrameRate_Maximum': 30.405,
     'FrameRate_Maximum': 30.416,
     'FrameRate_Maximum': 30.426,
     'FrameRate_Maximum': 30.436,
     'FrameRate_Maximum': 30.447,
     'FrameRate_Maximum': 30.467,
     'FrameRate_Maximum': 30.488,
     'FrameRate_Maximum': 30.508,
     'FrameRate_Maximum': 30.519,
     'FrameRate_Maximum': 30.529,
     'FrameRate_Maximum': 30.56,
     'FrameRate_Maximum': 30.571,
     'FrameRate_Maximum': 30.581,
     'FrameRate_Maximum': 30.591,
     'FrameRate_Maximum': 30.623,
     'FrameRate_Maximum': 30.675,
     'FrameRate_Maximum': 30.685,
     'FrameRate_Maximum': 30.696,
     'FrameRate_Maximum': 30.717,
     'FrameRate_Maximum': 30.727,
     'FrameRate_Maximum': 30.738,
     'FrameRate_Maximum': 30.748,
     'FrameRate_Maximum': 30.769,
     'FrameRate_Maximum': 30.78,
     'FrameRate_Maximum': 30.801,
     'FrameRate_Maximum': 30.811,
     'FrameRate_Maximum': 30.822,
     'FrameRate_Maximum': 30.843,
     'FrameRate_Maximum': 30.854,
     'FrameRate_Maximum': 30.864,
     'FrameRate_Maximum': 30.875,
     'FrameRate_Maximum': 30.885,
     'FrameRate_Maximum': 30.938,
     'FrameRate_Maximum': 30.96,
     'FrameRate_Maximum': 30.981,
     'FrameRate_Maximum': 3000.0,
     'FrameRate_Maximum': 31.013,
     'FrameRate_Maximum': 31.034,
     'FrameRate_Maximum': 31.045,
     'FrameRate_Maximum': 31.056,
     'FrameRate_Maximum': 31.067,
     'FrameRate_Maximum': 31.099,
     'FrameRate_Maximum': 31.12,
     'FrameRate_Maximum': 31.131,
     'FrameRate_Maximum': 31.142,
     'FrameRate_Maximum': 31.153,
     'FrameRate_Maximum': 31.163,
     'FrameRate_Maximum': 31.174,
     'FrameRate_Maximum': 31.217,
     'FrameRate_Maximum': 31.272,
     'FrameRate_Maximum': 31.304,
     'FrameRate_Maximum': 31.315,
     'FrameRate_Maximum': 31.337,
     'FrameRate_Maximum': 31.348,
     'FrameRate_Maximum': 31.359,
     'FrameRate_Maximum': 31.37,
     'FrameRate_Maximum': 31.381,
     'FrameRate_Maximum': 31.392,
     'FrameRate_Maximum': 31.414,
     'FrameRate_Maximum': 31.458,
     'FrameRate_Maximum': 31.491,
     'FrameRate_Maximum': 31.524,
     'FrameRate_Maximum': 31.557,
     'FrameRate_Maximum': 31.579,
     'FrameRate_Maximum': 31.601,
     'FrameRate_Maximum': 31.69,
     'FrameRate_Maximum': 31.712,
     'FrameRate_Maximum': 31.724,
     'FrameRate_Maximum': 31.735,
     'FrameRate_Maximum': 31.746,
     'FrameRate_Maximum': 31.768,
     'FrameRate_Maximum': 31.791,
     'FrameRate_Maximum': 31.847,
     'FrameRate_Maximum': 31.938,
     'FrameRate_Maximum': 31.96,
     'FrameRate_Maximum': 32.006,
     'FrameRate_Maximum': 32.051,
     'FrameRate_Maximum': 32.063,
     'FrameRate_Maximum': 32.086,
     'FrameRate_Maximum': 32.108,
     'FrameRate_Maximum': 32.247,
     'FrameRate_Maximum': 32.258,
     'FrameRate_Maximum': 32.293,
     'FrameRate_Maximum': 32.304,
     'FrameRate_Maximum': 32.328,
     'FrameRate_Maximum': 32.351,
     'FrameRate_Maximum': 32.397,
     'FrameRate_Maximum': 32.421,
     'FrameRate_Maximum': 32.432,
     'FrameRate_Maximum': 32.456,
     'FrameRate_Maximum': 32.514,
     'FrameRate_Maximum': 32.55,
     'FrameRate_Maximum': 32.621,
     'FrameRate_Maximum': 32.703,
     'FrameRate_Maximum': 32.763,
     'FrameRate_Maximum': 32.943,
     'FrameRate_Maximum': 33.015,
     'FrameRate_Maximum': 33.333,
     'FrameRate_Maximum': 33.746,
     'FrameRate_Maximum': 33.847,
     'FrameRate_Maximum': 33.975,
     'FrameRate_Maximum': 34.117,
     'FrameRate_Maximum': 34.13,
     'FrameRate_Maximum': 34.143,
     'FrameRate_Maximum': 34.338,
     'FrameRate_Maximum': 34.464,
     'FrameRate_Maximum': 35.714,
     'FrameRate_Maximum': 36.482,
     'FrameRate_Maximum': 40.0,
     'FrameRate_Maximum': 43.478,
     'FrameRate_Maximum': 45.455,
     'FrameRate_Maximum': 45.778,
     'FrameRate_Maximum': 47.619,
     'FrameRate_Maximum': 50.0,
     'FrameRate_Maximum': 52.632,
     'FrameRate_Maximum': 54.545,
     'FrameRate_Maximum': 55.556,
     'FrameRate_Maximum': 58.824,
     'FrameRate_Maximum': 59.96,
     'FrameRate_Maximum': 62.5,
     'FrameRate_Maximum': 66.667,
     'FrameRate_Maximum': 66.716,
     'FrameRate_Maximum': 71.429,
     'FrameRate_Maximum': 825.688,
     'FrameRate_Maximum': 90000.0,
     'FrameRate_Maximum': None,
    
    Unique "FrameRate_Real"
     'FrameRate_Real': None,
    
    Unique "ScanType"
     'ScanType': 'Interlaced',
     'ScanType': 'Progressive',
     'ScanType': None,
    
    Unique "ScanOrder"
     'ScanOrder': 'TFF',
     'ScanOrder': None,
    
    Unique "ScanOrder_Stored"
     'ScanOrder_Stored': None,
     'ScanOrder_StoredDisplayedInverted': None,
    Quote Quote  
  12. I just tested one vfr phone video with frame rate running around 30fps and this solution worked, plugin was downloaded from here: Vapoursynth vfr to crf
    and using:
    Code:
    clip = core.ffms2.Source(source=r'D:\path_to_tests\test2\20160720_195261.mp4', timecodes='timecodes.txt')
    clip = core.vfrtocfr.VFRToCFR(clip , timecodes='timecodes.txt', fpsnum=60000, fpsden=1001, drop=True)
    so implementation seems to be no problem at all,
    In that case it made 60000/1001 video with duplicate frames, sure 30fps would be more proper but just a demonstration.
    The only thing comes to my mind is how to get those timecodes with different source plugins.
    Quote Quote  
  13. Member hydra3333's Avatar
    Join Date
    Oct 2009
    Location
    Australia
    Search Comp PM
    Thanks. I saw that and assumed perhaps incorrectly that ffms2's fpsnum=50, fpsden=1 did the same thing.

    I also saw this: https://forum.doom9.org/showthread.php?p=1921016#post1921016
    which perhaps suggests something along the lines (if not already interlaced, if so assume CFR regardless ?)

    Code:
    FFMS2 with fpsnum=50, fpsden=1 (for PAL target 25fps in the end) ... or do VFRtoCFR above if that works better
    assumefps(50)
    trim the incoming clip to a guessed (2 * max_vfr_framerate * num_seconds_of_desired_slideshow_video) # interim clipping to ensure working of a "small" clip not a 800Mb clip
    AssumeTFF()
    SeparateFields()
    SelectEvery(4,0,3)	# ending up TFF
    Weave() so it is interlaced
    AssumeTFF()
    assumefps(25)
    deinterlace with yadifmod2 or BWDIF
    Last edited by hydra3333; 29th Mar 2023 at 06:53.
    Quote Quote  
  14. I will come back to this later. I try some linear frame piping first.
    I saw your code, .. it only gets longer . The mind trick is to name scripts as .py and get stuff out of the way as other modules, that just always accompanying main module like that settings class etc. Then code gets shorter again.

    -Dealing with primaries, transfer and matrix. Is it not core of the problem to have a one in original and none in destination , than it would throw errors when using resize? So it could be really easy if just reading that value in original and only if there is a value than get it for destination as well. color_range is solved I think in viewfunc.py. YUV to RGB as well, it solves in viewfunc.py, matrix and color_range. Never tried RGB to YUV where primaries is set if toRGB gives problem without dealing with it.

    -logging module is really great to get logging data from script or as a matter of fact from across scripts in running code. You just make one switch and it logs DEBUG notes etc. I know it looks difficult to start doing that but at the end no one comes back I think.

    -I do not remember why I used rotation check in PIL and not from mediainfo at the end, Is one more reliable than the other?

    -also that vfr thing, you might be right that ffms2 setting fpsnum and fpsden might just do it, without that plugin, I did not test it truly. There is also havsfunc.ChangeFPS() , it just fills or drop frames, that might work as well.
    Quote Quote  
  15. Member hydra3333's Avatar
    Join Date
    Oct 2009
    Location
    Australia
    Search Comp PM
    Originally Posted by _Al_ View Post
    I will come back to this later. I try some linear frame piping first.
    I saw your code, .. it only gets longer . The mind trick is to name scripts as .py and get stuff out of the way as other modules, that just always accompanying main module like that settings class etc. Then code gets shorter again.
    I hope your eyes are OK after that I concur. Brute-forcing it for the moment

    Originally Posted by _Al_ View Post
    Dealing with primaries, transfer and matrix. Is it not core of the problem to have a one in original and none in destination , than it would throw errors when using resize?
    I had issues along lines of no path to destination when one or more of of those bits were missing having many variations of sources. So I threw the kitchen sink at it, as they say. Over here we have a very flamable aerosol spray product used to help start awkward hard-to-start lawnmowers, one sprays it into the carby and then pulls the starter cord and it does work a treat ... it's name on the label is "Start Ya Bastard" and I took my lead from that.

    Originally Posted by _Al_ View Post
    logging module is really great to get logging data from script or as a matter of fact from across scripts in running code. You just make one switch and it logs DEBUG notes etc. I know it looks difficult to start doing that but at the end no one comes back I think.
    Oh, yes, I've seen it but forgotten about it, nice idea.

    Originally Posted by _Al_ View Post
    I do not remember why I used rotation check in PIL and not from mediainfo at the end, Is one more reliable than the other?
    IIRC, mediainfo handled a variety videos and nicely. I stuck with PIL for images and mediainfo for videos; don't recall why, now, sorry about that.

    Originally Posted by _Al_ View Post
    also that vfr thing, you might be right that ffms2 setting fpsnum and fpsden might just do it, without that plugin, I did not test it truly. There is also havsfunc.ChangeFPS() , it just fills or drop frames, that might work as well.
    I wasn't comfortable with potential stuttering etc from dups/drops going from VFR to CFR, especially when some framerates were close to NTCS->PAL and some were circa 9 fps average VFR; I had an uninformed view that the quoted method may yield a slightly nicer result. It does work. "Better" or not would be in the eye of the beholder. I may perhaps have done better to dup/drop to the average or even maximum VFR framerate (not always yielded by mediainfo) and then use mvtools like below.

    The other issue is there are lots of videos with CFR framerates not like the target framerate.
    Pics still greatly outnumber videos in home archives though, at least for older people
    A lot of videos look close to NTSC->PAL (I'm in PAL country, so that's an issue around jerkyness when going to 25p).
    So I brute-force those with mvtools (from vsrepo) in a new function "make_fps_compatible" with core.mv.FlowBlur, core.mv.BlockFPS and core.mv.FlowFPS, as recycled from elsewhere. It seems OK.
    While thinking about mv, I also chose to denoise "small sized" videos which are likely to be noisy old ones, and mv.Degrain1 seems to work. I had a dumbed down version of LSFMod (as yet untested) but don't currently see a need for it; it'd make older noisy/blocky clips worse.

    Really looking forward to seeing your approach to linear frame piping ! Good luck.
    Last edited by hydra3333; 31st Mar 2023 at 12:26.
    Quote Quote  
  16. Member hydra3333's Avatar
    Join Date
    Oct 2009
    Location
    Australia
    Search Comp PM
    Interestingly, I found a few where mediainfo where does not report the fps etc, yet ffprobe does.
    Example attached.

    But, if one opens it with ffms2, it is set OK in clip properties. eg
    Code:
    2023-04-01.13:56:34.540293 DEBUG: get_clip: opened ffmps2 Video 2004-05-17-cleland-national-park-pic00146.asf.avi clip:
    VideoNode
    	Format: YUV420P8
    	Width: 320
    	Height: 240
    	Num Frames: 227
    	FPS: 200000/9991
    Image Attached Files
    Quote Quote  
  17. Thats a very old video, I don't know where ffms2 dug up that info , I could not find it in avi headers as well ("avih" and "vids"), using variation of this: https://forum.videohelp.com/threads/408673-Best-way-to-change-avi-frame-rate-as-batch-...ss#post2681772 , or it is marked differently,
    oh, wait, that's not avi ...
    Quote Quote  
  18. Member hydra3333's Avatar
    Join Date
    Oct 2009
    Location
    Australia
    Search Comp PM
    Thanks, and what a beaut link. Fun to peruse.
    I reckon I now begin to see my lowly place in the world, having seen that and a lot of your posts here and on doom9 where some really good stuff is done !

    At a guess, ffms2 based on ffmpeg as is ffprobe which finds some "stream" info which mediainfo doesn't.

    I assume people fiddle with clips, especially back in the day where some tools were better than others

    I came across security cam footage with almost no info; hevc in mp4 notionally 15fps and VFR even though not flagged by mediainfo.
    I'm unreasonably sure it'd be bt.709 even though ffms2 and mediainfo say "None" or "2" and not bt.2020, is there some other way to determine colour properties ?
    Code:
    'Width': 3840,
    'Height': 2144,
    'PixelAspectRatio': 1.0,
    'DisplayAspectRatio': 1.791,
    'FrameRate': Fraction(1000000, 67719),
    'FrameRate_Num': 1000000,
    'FrameRate_Den': 67719,
    'FrameCount': 1860,
    'FrameRate_Minimum': None,
    'FrameRate_Nominal': 14.769,
    'FrameRate_Maximum': None,
    'FrameRate_Real': None,
    'ColorSpace': 'YUV',
    'ChromaSubsampling': '4:2:0',
    'BitDepth': 8,
    'colour_description_present': None,
    'colour_range': None,
    'colour_primaries': None,
    'transfer_characteristics': None,
    'matrix_coefficients': None
    Quote Quote  
  19. Member hydra3333's Avatar
    Join Date
    Oct 2009
    Location
    Australia
    Search Comp PM
    Originally Posted by _Al_ View Post
    I try some linear frame piping first.
    Hello, any news on that front ?
    Thanks
    Quote Quote  
  20. Yes, sorry, quite busy, a spring is always a perfect storm to be busy , (as oppose southern hemisphere, where it all goes to winter sleep , but made a nice package for all of it, looks it is working for all files you provided, I just do not know how to create a package to be installed by pip, would have to look for some tutorial, maybe during a longer memorial weekend or I just post all of it on github as a whole structure.
    Quote Quote  
  21. Member hydra3333's Avatar
    Join Date
    Oct 2009
    Location
    Australia
    Search Comp PM
    Wow, linear frame piping is ... in the pipeline ... cool ! May I gently suggest the github option. Please ?

    sleep indeed, more time to play !

    Had a little tinker with OpenAI ChatGPT and asked it to code a python script to create a slideshow from images and it did a fair beginners job, although it ignored real-world complexities.
    Needed to ask supplementary questions to get it to extend new bits of functionality and clarify dependencies (which it ended up changing slightly as a result).
    For interest, here's the final part of the conversation:

    OpenCV's `cv2.VideoWriter` does not provide direct control over the Constant Quality (CQ) parameters for H.264 encoding. However, you can indirectly control the quality by adjusting the bitrate. The higher the bitrate, the better the quality (at the expense of a larger file size). Here's an updated version of the code that allows you to specify the bitrate for H.264 encoding:

    Code:
    python
    import os
    import glob
    from PIL import Image
    import cv2
    
    def create_slideshow(input_folder, output_file, width, height, fps, duration, bitrate):
        # Create a temporary folder for resized images
        temp_folder = 'temp_images'
        os.makedirs(temp_folder, exist_ok=True)
    
        # Find all JPG images in the folder tree
        image_files = glob.glob(os.path.join(input_folder, '**/*.jpg'), recursive=True)
    
        # Resize and save each image as a temporary file
        for i, image_file in enumerate(image_files):
            img = Image.open(image_file)
            img = img.resize((width, height), Image.ANTIALIAS)
    
            temp_file = os.path.join(temp_folder, f'resized_{i}.jpg')
            img.save(temp_file)
    
        # Calculate frame count based on duration and fps
        frame_count = int(fps * duration)
    
        # Create video writer object with AVC encoding and specified bitrate
        fourcc = cv2.VideoWriter_fourcc(*'avc1')
        video_writer = cv2.VideoWriter(output_file, fourcc, fps, (width, height), isColor=True)
        video_writer.set(cv2.VIDEOWRITER_PROP_QUALITY, bitrate)
    
        # Read each resized image and write it to the video for the specified duration
        for i, image_file in enumerate(image_files):
            temp_file = os.path.join(temp_folder, f'resized_{i}.jpg')
            img = cv2.imread(temp_file)
    
            # Write the same image multiple times to achieve the desired duration
            for _ in range(frame_count):
                video_writer.write(img)
    
        # Release the video writer and delete temporary files
        video_writer.release()
        cv2.destroyAllWindows()
        for temp_file in glob.glob(os.path.join(temp_folder, '*.jpg')):
            os.remove(temp_file)
        os.rmdir(temp_folder)
    
    # Example usage
    input_folder = 'path/to/input/folder'
    output_file = 'path/to/output/video.mp4'
    width = 640  # Desired width of each image in the slideshow
    height = 480  # Desired height of each image in the slideshow
    fps = 30  # Frames per second of the video
    duration = 3  # Duration (in seconds) to display each image
    bitrate = 800000  # Specify the desired bitrate (adjust as needed)
    
    create_slideshow(input_folder, output_file, width, height, fps, duration, bitrate)

    In this updated code, I added a `bitrate` parameter to the `create_slideshow` function. The `bitrate` parameter allows you to specify the desired bitrate for the video. Higher bitrates generally result in better quality (but larger file sizes), while lower bitrates reduce file size (but can result in lower quality).

    To adjust the quality, you can experiment with different bitrate values. Higher bitrates, such as `1000000` or `2000000`, generally result in better quality, while lower bitrates, such as `500000` or `100000`, reduce file size but may result in slightly lower quality.

    Remember to replace `'path/to/input/folder'` and `'path/to/output/video.mp4'` with the actual paths you're using for the input folder and output video file, respectively
    In this updated code, I added a `duration` parameter to the `create_slideshow` function. This parameter specifies the duration (in seconds) that each image should be displayed in the resulting video.

    To achieve the desired duration, the code calculates the `frame_count` by multiplying the desired duration by the frames per second (fps). Then, for each image, it writes the same image multiple times to the video to achieve the desired duration.

    Make sure to replace `'path/to/input/folder'` and `'path/to/output/video.mp4'` with the actual paths you're using for the input folder and output video file, respectively. Adjust the `width`, `height`, `fps`, and `duration` variables according to your preferences.

    Let me know if you have any further questions!
    Certainly! Here's a list of the dependencies required for the script:

    1. PIL (Pillow): It is used for image manipulation and resizing.
    - To install: Run `pip install pillow`.

    2. OpenCV (cv2): It is used for video encoding and writing frames to the video.
    - To install: Run `pip install opencv-python`.

    These are the main dependencies needed for the script. However, depending on your system configuration, you may also need additional dependencies such as FFmpeg for video encoding support. If you encounter any specific errors related to missing codecs or libraries during the execution of the script, you can install them as necessary.

    To install the dependencies assuming you're using standalone Python, follow these steps:

    1. Open a terminal or command prompt.

    2. Run the following command to install PIL (Pillow):

    Code:
       pip install pillow
    3. After that, run the following command to install OpenCV (cv2):

    Code:
       pip install opencv-python
    These commands will install the required dependencies for the script to run. If any additional dependencies are needed, you can install them as per the error messages you encounter.

    Remember to replace `'path/to/input/folder'` and `'path/to/output/video.mp4'` with the actual paths you're using for the input folder and output video file, respectively. Adjust the `width`, `height`, `fps`, and `duration` variables according to your preferences.

    Let me know if you have any further questions!
    When using OpenCV for H.264 MP4 video creation, you need to ensure that your OpenCV installation is compiled with FFmpeg support, which provides the necessary codecs. The `opencv-python` package alone does not include FFmpeg support by default. Here's what you can do to install the required dependencies:

    1. Install `opencv-python-headless` package: This package includes the main functionality of OpenCV without graphical user interface (GUI) support.

    Code:
       pip install opencv-python-headless

    2. Install FFmpeg: FFmpeg is a multimedia framework that provides codecs and tools for handling various multimedia formats, including H.264. Install FFmpeg by following the appropriate instructions for your operating system:

    - **Windows**: Download a static build of FFmpeg from the official website (https://www.ffmpeg.org/) and add the FFmpeg binary folder to your system's PATH environment variable.
    - **macOS**: Use Homebrew to install FFmpeg by running the following command:

    Code:
         brew install ffmpeg
    - **Linux**: Use your package manager to install FFmpeg. The package name may vary depending on your distribution. For example, on Ubuntu, you can use:

    Code:
         sudo apt-get install ffmpeg
    By installing `opencv-python-headless` and FFmpeg, you should have the necessary dependencies to create H.264 MP4 videos using OpenCV. Remember to use the updated code provided earlier in our conversation to ensure the correct codec is used.

    If you encounter any further issues or have additional questions, please let me know!
    Quote Quote  
  22. Member hydra3333's Avatar
    Join Date
    Oct 2009
    Location
    Australia
    Search Comp PM
    In exploring changes, it sometimes forgot to retain the code for auto-rotation and resizing. Re-prompting for it gave that back.
    I'm tempted to try to run the code and see if it works.

    A few questions later ...

    Certainly! Here's the extended code that handles different color spaces and color ranges for input images and videos, ensuring everything is converted to an output video with the PAL HD color space (bt.709) and full color range:

    Code:
    python
    import cv2
    import glob
    import os
    import numpy as np
    
    
    def convert_to_constant_framerate(video_path, output_path, framerate):
        # Open the input video
        video = cv2.VideoCapture(video_path)
    
        # Get the original video's properties
        frame_count = int(video.get(cv2.CAP_PROP_FRAME_COUNT))
        original_fps = video.get(cv2.CAP_PROP_FPS)
    
        # Calculate the frame interpolation factor
        interpolation_factor = original_fps / framerate
    
        # Create a VideoWriter with the desired constant framerate
        fourcc = cv2.VideoWriter_fourcc(*'mp4v')
        video_out = cv2.VideoWriter(output_path, fourcc, framerate, (int(video.get(cv2.CAP_PROP_FRAME_WIDTH)), int(video.get(cv2.CAP_PROP_FRAME_HEIGHT))))
    
        # Read and write each frame of the input video to the output video
        for i in range(frame_count):
            ret, frame = video.read()
            if not ret:
                break
    
            # Write each frame multiple times based on the interpolation factor
            for _ in range(int(interpolation_factor)):
                video_out.write(frame)
    
        video.release()
        video_out.release()
    
        return output_path, original_fps
    
    
    def convert_to_pal_hd_color_space(frame):
        # Convert frame to bt.709 color space
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        frame = cv2.cvtColor(frame, cv2.COLOR_RGB2YUV)
        frame[:, :, 0] = cv2.equalizeHist(frame[:, :, 0])
        frame = cv2.cvtColor(frame, cv2.COLOR_YUV2BGR)
    
        return frame
    
    
    def create_slideshow(images_folder, output_file, duration_per_image, video_quality, framerate):
        # Collect the paths of all JPG, GIF, and video clips in the folder tree
        image_paths = glob.glob(os.path.join(images_folder, '**/*.jpg'), recursive=True)
        image_paths.extend(glob.glob(os.path.join(images_folder, '**/*.gif'), recursive=True))
        image_paths.extend(glob.glob(os.path.join(images_folder, '**/*.mp4'), recursive=True))
        image_paths.extend(glob.glob(os.path.join(images_folder, '**/*.avi'), recursive=True))
        image_paths.extend(glob.glob(os.path.join(images_folder, '**/*.mpg'), recursive=True))
        image_paths.sort()
    
        # Initialize the VideoWriter with H.264 codec and the specified framerate
        first_image = cv2.imread(image_paths[0])
        width, height = 1920, 1080  # Desired output size
        fourcc = cv2.VideoWriter_fourcc(*'avc1')  # For H.264 codec
        video = cv2.VideoWriter(output_file, fourcc, framerate, (width, height))
    
        # Iterate over the images and videos and add them to the slideshow video
        for image_path in image_paths:
            # Check if the file is an image (JPG or GIF)
            if image_path.endswith('.jpg') or image_path.endswith('.gif'):
                if image_path.endswith('.gif'):
                    # Read GIF frames
                    gif_frames = cv2.VideoCapture(image_path)
                    frame_count = int(gif_frames.get(cv2.CAP_PROP_FRAME_COUNT))
    
                    # Read and write each frame of the GIF
                    for _ in range(frame_count):
                        _, frame = gif_frames.read()
    
                        # Rotate the frame based on the Exif rotation flag
                        rotation_flag = frame._getexif().get(0x112, 1) if hasattr(frame, '_getexif') else 1
                        if rotation_flag == 3:
                            frame = cv2.rotate(frame, cv2.ROTATE_180)
                        elif rotation_flag == 6:
                            frame = cv2.rotate(frame, cv2.ROTATE_90_COUNTERCLOCKWISE)
                        elif rotation_flag == 8:
                            frame = cv2.rotate(frame, cv2.ROTATE_90_CLOCKWISE)
    
                        # Resize the frame while maintaining the aspect ratio
                        aspect_ratio = frame.shape[1] / frame.shape[0]
                        if aspect_ratio > width / height:
                            new_width = width
                            new_height = int(width / aspect_ratio)
                        else:
                            new_height = height
                            new_width = int(height * aspect_ratio)
    
                        resized_frame = cv2.resize(frame, (new_width, new_height))
    
                        # Create a black canvas of the desired output size
                        canvas = np.zeros((height, width, 3), dtype=np.uint8)
    
                        # Calculate the position to place the resized frame in the center of the canvas
                        x = (width - new_width) // 2
                        y = (height - new_height) // 2
    
                        # Place the resized frame on the canvas
                        canvas[y:y + new_height, x:x + new_width] = resized_frame
    
                        # Convert the frame to PAL HD color space
                        canvas = convert_to_pal_hd_color_space(canvas)
    
                        # Write the frame to the video for the specified duration
                        for _ in range(int(duration_per_image * framerate)):
                            video.write(canvas)
    
                    gif_frames.release()
                else:
                    # Read JPG images
                    image = cv2.imread(image_path)
    
                    # Rotate the image based on the Exif rotation flag
                    rotation_flag = image._getexif().get(0x112, 1) if hasattr(image, '_getexif') else 1
                    if rotation_flag == 3:
                        image = cv2.rotate(image, cv2.ROTATE_180)
                    elif rotation_flag == 6:
                        image = cv2.rotate(image, cv2.ROTATE_90_COUNTERCLOCKWISE)
                    elif rotation_flag == 8:
                        image = cv2.rotate(image, cv2.ROTATE_90_CLOCKWISE)
    
                    # Resize the image while maintaining the aspect ratio
                    aspect_ratio = image.shape[1] / image.shape[0]
                    if aspect_ratio > width / height:
                        new_width = width
                        new_height = int(width / aspect_ratio)
                    else:
                        new_height = height
                        new_width = int(height * aspect_ratio)
    
                    resized_image = cv2.resize(image, (new_width, new_height))
    
                    # Create a black canvas of the desired output size
                    canvas = np.zeros((height, width, 3), dtype=np.uint8)
    
                    # Calculate the position to place the resized image in the center of the canvas
                    x = (width - new_width) // 2
                    y = (height - new_height) // 2
    
                    # Place the resized image on the canvas
                    canvas[y:y + new_height, x:x + new_width] = resized_image
    
                    # Convert the image to PAL HD color space
                    canvas = convert_to_pal_hd_color_space(canvas)
    
                    # Write the image to the video for the specified duration
                    for _ in range(int(duration_per_image * framerate)):
                        video.write(canvas)
    
            # Check if the file is a video clip (MP4, AVI, MPG)
            elif image_path.endswith('.mp4') or image_path.endswith('.avi') or image_path.endswith('.mpg'):
                # Convert the video clip to a constant framerate
                temp_output_file = 'temp_output.mp4'
                temp_video_path, original_fps = convert_to_constant_framerate(image_path, temp_output_file, framerate)
    
                # Open the converted video clip
                video_clip = cv2.VideoCapture(temp_video_path)
    
                # Read and write each frame of the video clip
                for _ in range(int(duration_per_image * framerate)):
                    _, frame = video_clip.read()
    
                    # Rotate the frame based on the Exif rotation flag
                    rotation_flag = frame._getexif().get(0x112, 1) if hasattr(frame, '_getexif') else 1
                    if rotation_flag == 3:
                        frame = cv2.rotate(frame, cv2.ROTATE_180)
                    elif rotation_flag == 6:
                        frame = cv2.rotate(frame, cv2.ROTATE_90_COUNTERCLOCKWISE)
                    elif rotation_flag == 8:
                        frame = cv2.rotate(frame, cv2.ROTATE_90_CLOCKWISE)
    
                    # Resize the frame to the desired output size
                    resized_frame = cv2.resize(frame, (width, height))
    
                    # Convert the frame to PAL HD color space
                    resized_frame = convert_to_pal_hd_color_space(resized_frame)
    
                    # Write the frame to the video
                    video.write(resized_frame)
    
                video_clip.release()
                os.remove(temp_output_file)
    
        # Release the VideoWriter and display a success message
        video.release()
        print(f"Video saved to {output_file}")
    
    
    # Usage example
    images_folder = 'path/to/images/folder'
    output_file = 'output.mp4'
    duration_per_image = 3  # Duration in seconds per image
    video_quality = (cv2.IMWRITE_JPEG_QUALITY, 100)  # Maximum quality (100) for JPEG encoding
    framerate = 30  # Output framerate
    
    create_slideshow(images_folder, output_file, duration_per_image, video_quality, framerate)
    This updated code now handles different color spaces and color ranges for both input images and videos. It converts everything to the PAL HD color space (bt.709) and full color range before adding them to the output video.
    Yes, you can use opencv-python-headless instead of opencv-python if you are running the code as a Windows 11 scheduled task or in a headless environment where a display is not available.

    opencv-python-headless is a variant of opencv-python that doesn't require an X server or a graphical display. It provides the same functionality as opencv-python, but without the dependencies on GUI-related packages.

    To install opencv-python-headless, you can use the following command:

    Copy code
    pip install opencv-python-headless
    Make sure to install opencv-python-headless along with the other mentioned dependencies (Pillow and FFmpeg) before running the code.

    Using opencv-python-headless should work fine for your batch processing needs and scheduled tasks in a headless environment.
    As you may notice, it forgot to retain the code for handling both CFR abd VFR from the immediately prior code it generated:

    Certainly! Here's an updated version of the code that can handle both constant framerate and variable framerate input videos while retaining the existing code for rotation and resizing:

    Code:
    python
    import cv2
    import glob
    import os
    import numpy as np
    
    
    def convert_to_constant_framerate(video_path, output_path, framerate):
        # Open the input video
        video = cv2.VideoCapture(video_path)
    
        # Get the original video's properties
        frame_count = int(video.get(cv2.CAP_PROP_FRAME_COUNT))
        original_fps = video.get(cv2.CAP_PROP_FPS)
    
        # Calculate the frame interpolation factor
        interpolation_factor = original_fps / framerate
    
        # Create a VideoWriter with the desired constant framerate
        fourcc = cv2.VideoWriter_fourcc(*'mp4v')
        video_out = cv2.VideoWriter(output_path, fourcc, framerate, (int(video.get(cv2.CAP_PROP_FRAME_WIDTH)), int(video.get(cv2.CAP_PROP_FRAME_HEIGHT))))
    
        # Read and write each frame of the input video to the output video
        for i in range(frame_count):
            ret, frame = video.read()
            if not ret:
                break
    
            # Write each frame multiple times based on the interpolation factor
            for _ in range(int(interpolation_factor)):
                video_out.write(frame)
    
        video.release()
        video_out.release()
    
        return output_path, original_fps
    
    
    def create_slideshow(images_folder, output_file, duration_per_image, video_quality, framerate):
        # Collect the paths of all JPG, GIF, and video clips in the folder tree
        image_paths = glob.glob(os.path.join(images_folder, '**/*.jpg'), recursive=True)
        image_paths.extend(glob.glob(os.path.join(images_folder, '**/*.gif'), recursive=True))
        image_paths.extend(glob.glob(os.path.join(images_folder, '**/*.mp4'), recursive=True))
        image_paths.extend(glob.glob(os.path.join(images_folder, '**/*.avi'), recursive=True))
        image_paths.extend(glob.glob(os.path.join(images_folder, '**/*.mpg'), recursive=True))
        image_paths.sort()
    
        # Initialize the VideoWriter with H.264 codec and the specified framerate
        first_image = cv2.imread(image_paths[0])
        width, height = 1920, 1080  # Desired output size
        fourcc = cv2.VideoWriter_fourcc(*'avc1')  # For H.264 codec
        video = cv2.VideoWriter(output_file, fourcc, framerate, (width, height))
    
        # Iterate over the images and videos and add them to the slideshow video
        for image_path in image_paths:
            # Check if the file is an image (JPG or GIF)
            if image_path.endswith('.jpg') or image_path.endswith('.gif'):
                if image_path.endswith('.gif'):
                    # Read GIF frames
                    gif_frames = cv2.VideoCapture(image_path)
                    frame_count = int(gif_frames.get(cv2.CAP_PROP_FRAME_COUNT))
    
                    # Read and write each frame of the GIF
                    for _ in range(frame_count):
                        _, frame = gif_frames.read()
    
                        # Rotate the frame based on the Exif rotation flag
                        rotation_flag = frame._getexif().get(0x112, 1) if hasattr(frame, '_getexif') else 1
                        if rotation_flag == 3:
                            frame = cv2.rotate(frame, cv2.ROTATE_180)
                        elif rotation_flag == 6:
                            frame = cv2.rotate(frame, cv2.ROTATE_90_COUNTERCLOCKWISE)
                        elif rotation_flag == 8:
                            frame = cv2.rotate(frame, cv2.ROTATE_90_CLOCKWISE)
    
    
    
                        # Resize the frame while maintaining the aspect ratio
                        aspect_ratio = frame.shape[1] / frame.shape[0]
                        if aspect_ratio > width / height:
                            new_width = width
                            new_height = int(width / aspect_ratio)
                        else:
                            new_height = height
                            new_width = int(height * aspect_ratio)
    
                        resized_frame = cv2.resize(frame, (new_width, new_height))
    
                        # Create a black canvas of the desired output size
                        canvas = np.zeros((height, width, 3), dtype=np.uint8)
    
                        # Calculate the position to place the resized frame in the center of the canvas
                        x = (width - new_width) // 2
                        y = (height - new_height) // 2
    
                        # Place the resized frame on the canvas
                        canvas[y:y + new_height, x:x + new_width] = resized_frame
    
                        # Write the frame to the video for the specified duration
                        for _ in range(int(duration_per_image * framerate)):
                            video.write(canvas)
    
                    gif_frames.release()
                else:
                    # Read JPG images
                    image = cv2.imread(image_path)
    
                    # Rotate the image based on the Exif rotation flag
                    rotation_flag = image._getexif().get(0x112, 1) if hasattr(image, '_getexif') else 1
                    if rotation_flag == 3:
                        image = cv2.rotate(image, cv2.ROTATE_180)
                    elif rotation_flag == 6:
                        image = cv2.rotate(image, cv2.ROTATE_90_COUNTERCLOCKWISE)
                    elif rotation_flag == 8:
                        image = cv2.rotate(image, cv2.ROTATE_90_CLOCKWISE)
    
                    # Resize the image while maintaining the aspect ratio
                    aspect_ratio = image.shape[1] / image.shape[0]
                    if aspect_ratio > width / height:
                        new_width = width
                        new_height = int(width / aspect_ratio)
                    else:
                        new_height = height
                        new_width = int(height * aspect_ratio)
    
                    resized_image = cv2.resize(image, (new_width, new_height))
    
                    # Create a black canvas of the desired output size
                    canvas = np.zeros((height, width, 3), dtype=np.uint8)
    
                    # Calculate the position to place the resized image in the center of the canvas
                    x = (width - new_width) // 2
                    y = (height - new_height) // 2
    
                    # Place the resized image on the canvas
                    canvas[y:y + new_height, x:x + new_width] = resized_image
    
                    # Write the image to the video for the specified duration
                    for _ in range(int(duration_per_image * framerate)):
                        video.write(canvas)
    
            # Check if the file is a video clip (MP4, AVI, MPG)
            elif image_path.endswith('.mp4') or image_path.endswith('.avi') or image_path.endswith('.mpg'):
                # Check if the video has variable framerate
                video_capture = cv2.VideoCapture(image_path)
                is_variable_framerate = video_capture.get(cv2.CAP_PROP_POS_FRAMES) != video_capture.get(cv2.CAP_PROP_POS_AVI_RATIO)
    
                if is_variable_framerate:
                    # Convert the variable framerate video to a temporary file with the desired constant framerate
                    temp_file_path = 'temp_video.mp4'
                    constant_framerate_path, original_fps = convert_to_constant_framerate(image_path, temp_file_path, framerate)
    
                    # Read the frames from the temporary file
                    video_clip = cv2.VideoCapture(constant_framerate_path)
                    frame_count = int(video_clip
    
    .get(cv2.CAP_PROP_FRAME_COUNT))
                else:
                    # Read the frames directly from the video clip
                    video_clip = cv2.VideoCapture(image_path)
                    frame_count = int(video_clip.get(cv2.CAP_PROP_FRAME_COUNT))
    
                # Read and write each frame of the video clip
                for _ in range(frame_count):
                    _, frame = video_clip.read()
    
                    # Rotate the frame based on the Exif rotation flag
                    rotation_flag = frame._getexif().get(0x112, 1) if hasattr(frame, '_getexif') else 1
                    if rotation_flag == 3:
                        frame = cv2.rotate(frame, cv2.ROTATE_180)
                    elif rotation_flag == 6:
                        frame = cv2.rotate(frame, cv2.ROTATE_90_COUNTERCLOCKWISE)
                    elif rotation_flag == 8:
                        frame = cv2.rotate(frame, cv2.ROTATE_90_CLOCKWISE)
    
                    # Resize the frame to the desired output size
                    resized_frame = cv2.resize(frame, (width, height))
    
                    # Write the frame to the video
                    for _ in range(int(duration_per_image * framerate)):
                        video.write(resized_frame)
    
                video_clip.release()
    
        # Release the VideoWriter and display a success message
        video.release()
        print(f"Video saved to {output_file}")
    
    
    # Usage example
    images_folder = 'path/to/images/folder'
    output_file = 'output.mp4'
    duration_per_image = 3  # Duration in seconds per image
    video_quality = (cv2.IMWRITE_JPEG_QUALITY, 100)  # Maximum quality (100) for JPEG encoding
    framerate = 30  # Output framerate
    
    create_slideshow(images_folder, output_file, duration_per_image, video_quality, framerate)
    This updated code now handles both constant framerate and variable framerate input videos. If the input video has a variable framerate, it will be converted to a temporary file with the desired constant framerate using the `convert_to_constant_framerate()` function. The frames are then read and processed similarly to the constant framerate videos, including rotation and resizing before being written to the output video.
    Quote Quote  
  23. Member hydra3333's Avatar
    Join Date
    Oct 2009
    Location
    Australia
    Search Comp PM
    Final post about ChatGPT.
    It had a pretty decent go, even if it does occasionally forget (omit) some functionally between code re-generation after a question.
    Not sure if it'll work as desired, tempted to try to run it.

    Apologies for the oversight. Here's an updated version of the script that handles VFR (Variable Frame Rate), CFR (Constant Frame Rate) of the desired output framerate, and CFR of a different framerate:

    Code:
    python
    import cv2
    import numpy as np
    import os
    import shutil
    
    
    def convert_to_constant_framerate(input_file, output_file, target_framerate):
        # Create a temporary directory to store the frames
        temp_dir = 'temp_frames'
        os.makedirs(temp_dir, exist_ok=True)
    
        # Extract frames from the input video
        vidcap = cv2.VideoCapture(input_file)
        success, frame = vidcap.read()
        count = 0
        while success:
            frame_path = os.path.join(temp_dir, f'{count:06d}.jpg')
            cv2.imwrite(frame_path, frame)  # Save frame as JPEG file
            success, frame = vidcap.read()
            count += 1
        vidcap.release()
    
        # Calculate the duration of each frame at the target framerate
        frame_duration = 1 / target_framerate
    
        # Create a new video using the constant framerate
        ffmpeg_cmd = f'ffmpeg -r {target_framerate} -i {temp_dir}/%06d.jpg -vf "setpts={frame_duration:.3f}*PTS" ' \
                     f'-c:v libx264 -crf 18 -pix_fmt yuv420p -y {output_file}'
        os.system(ffmpeg_cmd)
    
        # Remove the temporary directory
        shutil.rmtree(temp_dir)
    
        # Return the path of the converted video and original framerate
        return output_file, target_framerate
    
    
    def convert_to_variable_framerate(input_file, output_file, target_framerate):
        # Create a new video using the desired output framerate
        ffmpeg_cmd = f'ffmpeg -i {input_file} -vsync vfr -r {target_framerate} {output_file}'
        os.system(ffmpeg_cmd)
    
        # Return the path of the converted video and original framerate
        return output_file, target_framerate
    
    
    def convert_framerate(input_file, output_file, target_framerate):
        # Check if the input video has the desired output framerate
        vidcap = cv2.VideoCapture(input_file)
        input_framerate = vidcap.get(cv2.CAP_PROP_FPS)
        vidcap.release()
    
        if abs(input_framerate - target_framerate) < 1e-6:
            # Input video has the same framerate as the target framerate
            converted_path = input_file
            original_fps = input_framerate
        else:
            if input_framerate != target_framerate:
                # Input video has a different framerate, convert it to the desired output framerate
                converted_path, original_fps = convert_to_constant_framerate(input_file, 'temp.mp4', target_framerate)
            else:
                converted_path = input_file
                original_fps = input_framerate
    
        # Copy or convert the input video based on its framerate
        if input_framerate == 0:
            # Input video has a variable framerate
            converted_path, original_fps = convert_to_variable_framerate(converted_path, output_file, target_framerate)
        else:
            # Input video has a constant framerate
            shutil.copy(converted_path, output_file)
    
        # Delete the temporary converted video file
        if converted_path != input_file:
            os.remove(converted_path)
    
        # Return the path of the converted video and original framerate
        return output_file, original_fps
    
    
    def convert_to_pal_hd_color_space(frame):
        # Convert the frame to BT.709 color
    
     space
        return cv2.cvtColor(frame, cv2.COLOR_BGR2YUV)
    
    
    def convert_to_full_color_range(frame):
        # Convert the frame to full color range
        frame = frame.astype(np.float32)
        frame *= 255.0 / 219.0
        frame += (16.0 - (16.0 * 255.0 / 219.0))
        frame = np.clip(frame, 0, 255)
        return frame.astype(np.uint8)
    
    
    def create_slideshow(images_folder, output_file, duration_per_image, video_quality, framerate):
        # Get the dimensions of the first image in the folder
        image_path = os.path.join(images_folder, os.listdir(images_folder)[0])
        image = cv2.imread(image_path)
        height, width, _ = image.shape
    
        # Create the video writer
        fourcc = cv2.VideoWriter_fourcc(*'mp4v')
        video = cv2.VideoWriter(output_file, fourcc, framerate, (width, height))
    
        # Process each image in the folder
        for image_name in os.listdir(images_folder):
            image_path = os.path.join(images_folder, image_name)
            image = cv2.imread(image_path)
    
            # Rotate the image if necessary (based on EXIF orientation flag)
            exif = cv2.imread(image_path, -1)
            if exif is not None and exif.shape[0] >= 2 and exif.shape[1] >= 2:
                rotation_flag = exif[1, 0]
                if rotation_flag == 3:
                    image = cv2.rotate(image, cv2.ROTATE_180)
                elif rotation_flag == 6:
                    image = cv2.rotate(image, cv2.ROTATE_90_COUNTERCLOCKWISE)
                elif rotation_flag == 8:
                    image = cv2.rotate(image, cv2.ROTATE_90_CLOCKWISE)
    
            # Resize the image to fit the video frame
            image = cv2.resize(image, (width, height))
    
            # Convert the image to BT.709 color space
            image = convert_to_pal_hd_color_space(image)
    
            # Convert the image to full color range
            image = convert_to_full_color_range(image)
    
            # Write the image to the video for the specified duration
            for _ in range(int(duration_per_image * framerate)):
                video.write(image)
    
        # Check if the file is a video clip (MP4, AVI, MPG)
        elif file_path.endswith('.mp4') or file_path.endswith('.avi') or file_path.endswith('.mpg'):
            # Determine the input video's framerate
            vidcap = cv2.VideoCapture(file_path)
            input_framerate = vidcap.get(cv2.CAP_PROP_FPS)
            vidcap.release()
    
            # Convert the input video to the desired output framerate
            converted_path, original_fps = convert_framerate(file_path, 'temp.mp4', target_framerate)
    
            # Open the converted video
            converted_video = cv2.VideoCapture(converted_path)
    
            # Read and write each frame of the converted video
            while True:
                ret, frame = converted_video.read()
                if not ret:
                    break
    
                # Convert the frame to BT.709 color space
                frame = convert_to_pal_hd_color_space(frame)
    
                # Convert the frame to full color range
                frame = convert_to_full_color_range(frame)
    
                # Write the frame to the video
                video.write(frame)
    
            converted_video.release()
    
            # Delete the temporary converted video file
            if converted_path != file_path:
                os.remove(converted_path)
    
        video.release()
    
        return output_file
    
    
    # Example usage:
    images_folder
    
     = 'input_images'
    output_file = 'slideshow.mp4'
    duration_per_image = 3.0
    video_quality = 18
    framerate = 30
    
    create_slideshow(images_folder, output_file, duration_per_image, video_quality, framerate)
    This updated script should handle VFR (Variable Frame Rate), CFR (Constant Frame Rate) of the desired output framerate, and CFR of a different framerate. It converts the input videos to the desired output framerate and applies the necessary color space and color range conversions.
    Apparently adding something like this in the right place yields subtitling.
    Code:
            # Get the filename of the input image or video
            _, filename = os.path.split(image_path)
    
            # Write the filename as a subtitle on the frame
            text = f"Input: {filename}"
            cv2.putText(canvas, text, (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2, cv2.LINE_AA)
    Quote Quote  
  24. opencv was an idea, yes, but I wanted to include videos as well and mainly loading all kinds of weird videos, your videos was a challenge, those cheap or early camera videos , catch different video/audio track lengths (which would screw up a sync royally) , come up with all transfer, matrix and primaries etc.. , it was a reminder what a variety of videos there could be (at the beggining a was just imagining couple of video formats from a current videocamcorders). Also allow to mix videos and images and being able to create corresponding audio track as well concatenating audios (therefore creating a silent parts for images if needed) . So far I respect SAR, pillarboxing or letterboxing, rotating, changing fps, handling interlacing, having an unlimited amount of sources (frames are just requested for encoder or previewer). There are never loaded more than two images or videos (because of transitions) by vapoursynth plugins. Select a type of transitions (image to image, image to video, video to image, video to video). To be able to use use powerfull mediainfo and vapoursynth source plugin for video properties, whatever is available is used (priority has mediainfo) etc. Log everything it does for a feedback.
    But that video processing module is a standalone and there is no problem to add features.

    Main concern was to not vspipe start to slow down when progressing with time, so far it is ok, it just slightly drops per source (it depends) but it reaches a certain limit and not dropping fps anymore
    Last edited by _Al_; 27th May 2023 at 10:25.
    Quote Quote  
  25. Member hydra3333's Avatar
    Join Date
    Oct 2009
    Location
    Australia
    Search Comp PM
    yes I concur, and am very much looking forward to seeing your intelligent approaches !

    You will have made something pretty unique !

    I have 2 rather large (read: huge, to me anyway) media stores and can't wait to try it.

    My tinkering has an option to subtitle images/clips with their filename and part of the folder tree they reside in, which would be almost trivial to add to your script if it is not already done
    Last edited by hydra3333; 27th May 2023 at 12:46.
    Quote Quote  
  26. I did not force myself or had time to wrap that project into final gui yet, quite busy now. Just posting what I have in adirectory, but it should be ok, I post what to do, if someone wants to test it.

    1.Download the whole directory with everything, to prevent confusion, latest portable vapoursynth, python 3.11 and all files for that slideshow, unzip it, it is all there, all needed files. It is setup with all vapoursynth dlls and python modules, even for havsfunc, vapoursynth plugin source dll's(vapoursynth64 dir), deiterlace dlls, svp(not yet used in processing) etc. all python modules (some in dir some in slideshow/python_modules dir) are updated sometime in january 2023, so very latest. Also opencv included, which is not fully installed but fine working with my good old crude: view.py (also included) that could be used as a previewer to actually see what slideshow would look like before encoding.

    2.add ffmpeg.exe and ffprobe.exe into that unzip directory if using audio

    3.all important files and directories besides many other (which are portable python and vapoursynth) are listed below, this task number could be skipped,
    only if if you just have your own dir with vapoursynth and python just these would go there:
    Code:
    files:
    settings.py             edit: fpsnum, (wrong fpsnum, need fix), fpsden, log_path, directory_path, audio_data_path (and many others based on a preference)
    preview_video.py        for previewing video
    encode_video.py         for encoding video
    encode_audio.py         for encoding audio
    Load settings GUI.exe   run it before you run everything to set up load.py  what mediafile extension uses what vapoursynth source plugin
    view.py                 preview_video.py uses it
    MediaInfo.dll           downloaded from  https://mediaarea.net/en/MediaInfo/Download/Windows
    
    
    directories:
    slideshow               actual slideshow package also with many python modules dependencies
    vapoursynth64           vapoursynth dlls
    tools                   contains executables, d2vwitch, ffmsindex, x264, mp4box end dependencies
    cv2
    opencv_contrib_python-4.7.0.68.dist-info
    might select "Date modified" in windows browse explorer so those files are on the top, because they were created latest

    4.manually install PIL(needed to get rotation of images) and numpy (view.py needs it, if not using that previewer in preview_video.py , not needed)
    workflow how to do it was posted by hydra3333:
    Code:
    download portable pip which is called pip.pyz:    https://bootstrap.pypa.io/pip/pip.pyz    and put it in that portable directory
    how to use it: python pip.pyz --help
    installing PIL:
    python pip.pyz install --target .\ Pillow --force-reinstall --upgrade --upgrade-strategy eager  --verbose
    installing numpy (optional for view.py module)
    python pip.pyz install --target .\ numpy --force-reinstall --upgrade --upgrade-strategy eager  --verbose
    PIL and numpy directory are present in that portable directory, but anyway, it might not work, it needs to be re-installed by those commands

    5.usage:
    in that order preview slideshow, encode video, encode audio, muxing into mp4 could be:
    Code:
    python "preview_video.py"
    VSPipe.exe -c y4m  encode_video.py  -  | "...path to your portable dir\\tools\\x264.exe" --demuxer y4m --crf 18 --vbv-maxrate 30000 --vbv-bufsize 30000 --keyint 60 --tune film --colorprim bt709 --transfer bt709 --colormatrix bt709 --output  "output.264" -
    python "encode_audio.py"  "output.m4a"
    "...path to your portable dir\\tools\\MP4Box.exe"  -add  "output.264"  -add  "output.m4a"#audio  -new  "output.mp4"
    Last edited by _Al_; 5th Jun 2023 at 00:20.
    Quote Quote  
  27. ---
    before:
    python "encode_audio.py" "output.m4a"
    is run, before making audio, script has to run first thru with preview_video.py or encode_video.py to actually gather data for making audio,
    so if audio is only a target (joining audios from clips only), not video, perhaps fastest way to go thru slideshow is just requesting frames instead. It is commented out in preview_video.py. Then using python "encode_audio.py" "output.m4a" command line. Might make it easier later. But without going synchronously while making frames for video is not easy because of transitions.

    ---
    portable python does not include tkinter (gui creation module) anymore in portable download, in older python versions they did, they abandoned it now, why the heck people do that, especially if it is a nightmare or not at all posible to manually install it in that portable directory. So I compiled Load settings GUI.exe at least instead, to set up load.py module that loads media filepaths into clips in vapoursynth. If someone has python's tkinter module, gui could be launched by: from slideshow import load_settings; load_settings.gui() or double click load_settings.py (in slideshow/python_modules dir) instead (if scared of exe)

    ---
    if any problem with load.py module, loading mediafilepaths, most likely it is a conflict with previous json file version that store data, delete that file: C:\Users\user-name\AppData\Roaming\Load\load_ini.json, run things again

    ---
    processing of media to a desired format is crude so far, havsfunc.changefps is used to set correct fps, so far easy upscalers and downscalers. But that is all customizable in slideshow/process_clip.py , things can be added there

    ---
    main concern is:
    to watch if vspipe running, while encoding, if it does not drop processing fps considerably with time
    Last edited by _Al_; 4th Jun 2023 at 21:19.
    Quote Quote  
  28. Member hydra3333's Avatar
    Join Date
    Oct 2009
    Location
    Australia
    Search Comp PM
    Wow, thanks !!!!

    I will have a go. I do have my own dir with vapoursynth and python, so just those would go there.

    I will check, but I guess it will handle VFR video files from mobile phones (needs to be made CFR of course) which is nearly all videos in my archive at least.
    Cool with havsfunc.changefps ! I had mistakenly thought that it didn't do motion interpolated framerate conversions.
    svp is commercial, so off-limits for me

    Although, timing of testing may be a week or so away ...

    I had abandoned my gear, then had to have another go ... I am still finalizing a quick kludgy hack of my stuff (which didn't handle audio) for an old lady's recent holiday:
    join up all the image (eg jpg) and video (eg .mp4) files in sequence as usual ...
    and keep track of every video's frame start/end points
    and in a separate step use a pre-made background audio track and those start/end points to open every small video, extract a section of audio, and insert at the right place in the background audio with fade-in and fade-out and them mux the video and audio.

    I am really looking forward to reviewing your new stuff !
    Quote Quote  
  29. Originally Posted by hydra3333 View Post
    and in a separate step use a pre-made background audio track and those start/end points to open every small video, extract a section of audio, and insert at the right place in the background audio with fade-in and fade-out and them mux the video and audio.
    I ended up creating a txt audio data file while previewing video, encoding video or just requesting vapoursynth frames. That file keeps track of time in milliseconds for a silent audio parts, including transitions, if there are and video paths (mediafiles with audio). audio_encode.py then uses that stored file to create audio.

    Another issue was using frame lengths for transitions or referencing lengths of some sort in frames was not a good idea, because if fpsnum/fpsden changes, it is all wrong then. So for transition lengths, image lengths and keeping reference for audio lengths, there is time in milliseconds used as a reference. Also pydub works in milliseconds, so it is just all working.

    Another issue was with vapoursynth bas source, it could have a problem with some weird audios in some old camcordes etc., but ffmpeg would just load it. So using pydub for videopaths, if there is a problem it falls back to make wave audio using ffmpeg.

    Another thing to watch for was to make sure audio and video times for a segment are same exact length. It extends silent parts to some audio tracks (that yours video surveillance footage I guess) because audio was much shorter. This way it would not go out of sync (video and audio for the total length).

    Based on your AI recommendation code , I realized that just make wave files for each image segment, transition would work also, but what if there is 1000 files? So rather decided to make it on the need bases, gradually, like video, based on gathered audio_data_file.txt. While encoding for example, what if it crushes after two hours? What would end up is a good *.264 file. And that reference audio file has a references just for the same files, those encoded images and videos. So audio can be run at least after that, then muxed together. Also this way, if there are parts with images only, example 1000 images, it adds up silent parts for image lengths and transitions, so at the end there would be only one line in milliseconds for a reference in that audio reference file.

    Hopefully there is not many bugs and problems.

    Also I included DGIndexNV source plugin implementation in load.py but actually never tested it, here I do not have NVIDIA PC available atm., will test it later. Made auto indexing feature with it, like d2vwith or ffmsindex. So chances are it gives some error or something, ussually like it is if something is not tested.
    Last edited by _Al_; 5th Jun 2023 at 00:11.
    Quote Quote  



Similar Threads

Visit our sponsor! Try DVDFab and backup Blu-rays!