VideoHelp Forum

+ Reply to Thread
Page 3 of 3
FirstFirst 1 2 3
Results 61 to 68 of 68
Thread
  1. Originally Posted by hydra3333 View Post
    I must have assumed vs_transitions behaviour; I had naively thought transitions merged the clips at a transition "stealing" x frames from left and from right sides to overlap the clips, so adding the clips like that might not have worked (videos highlight such things).
    I had also decided to chop only the "necessary for transition" frames from left and right and only give those to vs_transitions.
    No, I think you have it right!
    It just eats up from given clips evenly from left and right.
    I did not elaborate about it much at all, because it does not matter if using images. Just modified couple of lines in old script and added vs_transitions

    So to let vs_transitions eat up whatever it wants by passing frames=length, which is CROSS_DUR, and therefore make everything work even for video would be like this (also adding fading from and to black at ends):
    Code:
    .
    .
    LOADER = load.Sources()
    CROSS_DUR = max(2,CROSS_DUR)
    paths = Path(DIRECTORY).glob("*.*")
    print('wait loading paths ...')
    path = get_path(paths)
    if path is None:
        raise ValueError(f'Extensions: {EXTENSIONS}, not found in {DIRECTORY}')
    clips = vs_transitions.fade_from_black(get_clip(path))
    while 1:
        path = get_path(paths)
        if path is None:
            break
        next_clip = get_clip(path)
        clips = clip_transition(clips, next_clip, CROSS_DUR, next(transition_generator))
    clips = vs_transitions.fade_to_black(clips)
    clips = clips.resize.Bicubic(format = vs.YUV420P8, matrix_in_s='709')
    print('done')
    
    clips.set_output()
    edit: but again, just realized, there needs to be audio with videos, so this modifications is again good for images only.
    There is a script on previous page for audio also, but no transitions, so it would need to be modified to use transition only for images and come up with correct quiet audio length. And if next clip is a video, then no transitions and use original audio.
    Last edited by _Al_; 24th Mar 2023 at 22:27.
    Quote Quote  
  2. There is a bug in vs_transitions, in "fade_to_black" and "fade_from_black" , where black clip is hardcoded to 24fps (using BlankClip), so if other clip has different fps, it will come out all right, but fps is "dynamic", so it could give a trouble later if fps is used in code, getting something like "ZeroDivisionError" or something else.

    So in __init__.py inside those two functions, just before return there should be added this line:
    Code:
    black_clip_resized = black_clip_resized.std.AssumeFPS(fpsnum=src_clip.fps.numerator, fpsden=src_clip.fps.denominator)
    https://github.com/OrangeChannel/vs-transitions/issues/1
    I got some response, there suppose to be a new package ready that is updated or currently being updated with a new link.
    Last edited by _Al_; 25th Mar 2023 at 17:42.
    Quote Quote  
  3. Images and videos using transitions and audio also.

    If image is loaded audio is silent, but to have audio when video is loaded.
    To define ATTRIBUTE_AUDIO_PATH is important, it could be any video that is going to be loaded. It is to create silent audio parts based on that clip audio attributes.
    Also FPSNUM and FPSDEN should match values in video clips loaded (if they are), set LENGTH or CROSS_DUR more properly, CROSS_DUR cannot be bigger that LENGTH. LENGTH is ignored if video is loaded. It uses video length.

    media_to_video.py
    Code:
    import vapoursynth as vs
    from vapoursynth import core
    from functools import partial
    from pathlib import Path
    import itertools
    import sys
    import os
    
    from PIL import Image, ExifTags, UnidentifiedImageError   #pip install Pillow
    #sys.path.append(str(Path(__file__).parent)) 
    
    import load
    import vs_transitions
    
    transitions = [
        "cover",
        "cube_rotate",
        "curtain_cover",
        "curtain_reveal",
        "fade",
    ##    "fade_from_black",
    ##    "fade_to_black",
    ##    "linear_boundary",
        "poly_fade",
        "push",
        "reveal",
        "slide_expand",
        "squeeze_expand",
        "squeeze_slide",
        "wipe",
    ]
    
    #neverending cycling from list
    TRANSITION_GENERATOR = itertools.cycle(transitions)
    
    DIRECTORY       = r'D:\path_to_tests\test2'
    EXTENSIONS      = [".jpg",".m2ts"]
    WIDTH           = 1920
    HEIGHT          = 1080
    LENGTH          = 56
    CROSS_DUR       = 26
    FPSNUM          = 60000,
    FPSDEN          = 1001
    UPSIZE_KERNEL   = 'Lanczos'
    DOWNSIZE_KERNEL = 'Spline36'
    BOX             = True    # True would initiate letterboxing or pillarboxing. False fills to WIDTH,HEIGTH
    MODX            = 2       # mods for letterboxing calculations, example, for 411 YUV as an extreme
    MODY            = 2       # mods would have to be MODX=4, MODY=1 as minimum
    SAVE_ROTATED_IMAGES = False   #saves image to disk with name suffix: "_rotated" using PIL module
    ATTRIBUTE_AUDIO_PATH = r'D:\path_to_tests\test2\20230131193501.m2ts'
    
    class Clip:
        def __init__(self, video = None, audio=None, attribute_audio_path=None):
            self.video = video
            self.audio = audio
            if self.video is None:
                self.video = core.std.BlankClip()
            if self.audio is None:
                if attribute_audio_path is None:
                    raise ValueError('arguent default_audio (could be very short) to get default audio for images')
                attr_audio = core.bas.Source(attribute_audio_path)
                length = int(attr_audio.sample_rate/self.video.fps*self.video.num_frames)
                self.audio = attr_audio.std.BlankAudio(length = length)
        def trim(self, first=0, last=None, length=None):
            afirst  = self.to_samples(first)    if first  is not None else None
            alast   = self.to_samples(last+1)-1 if last   is not None else None
            alength = self.to_samples(length)   if length is not None else None
            return Clip( self.video.std.Trim(first=first, last=last, length=length),
                         self.audio.std.AudioTrim(first=afirst,last=alast,length=alength)
                        )
        def to_samples(self, frame):
            return int((self.audio.sample_rate/self.video.fps)*frame)
    
        def __add__(self, other):
            return Clip(self.video + other.video, self.audio + other.audio)
    
        def __mul__(self, multiple):
            return Clip(self.video*multiple, self.audio*multiple)
    
        def __getitem__(self, val):
            if isinstance(val, slice):
                if val.step is not None:
                    raise ValueError('Using steps while slicing AudioNode together with VideoNode makes no sense')
                start = self.to_samples(val.start) if val.start is not None else None
                stop =  self.to_samples(val.stop)  if val.stop  is not None else None
                return Clip( self.video.__getitem__(val),
                             self.audio.__getitem__(slice(start,stop))
                             )
            elif isinstance(val, int):
                start = self.to_samples(val)
                stop = int(start + self.audio.sample_rate/self.video.fps)
                return Clip( self.video[val],
                             self.audio.__getitem__(slice(start,stop))
                             )        
        def __repr__(self):
            return '{}\n{}'.format(repr(self.video), repr(self.audio))
    
        def __str__(self):
            return '{}\n{}'.format(str(self.video), str(self.audio))
    
    
    def rotation_check(clip, path, save_rotated_image=False):
        #PIL module loads an image, checks if EXIF data, checks for 'Orientation'
        try:
            image = Image.open(str(path))
        except UnidentifiedImageError:
            return clip
        except PermissionError:
            print(f'PIL, Permission denied to load: {path}')
            return clip
        except Exception as e:
            print(f'PIL, {e}')
            return clip
        try:        
            for key in ExifTags.TAGS.keys():
                if ExifTags.TAGS[key] == 'Orientation':
                    break
            exif = dict(image.getexif().items())
            value = exif[key]
        except (AttributeError, KeyError, IndexError):
            # no getexif
            return clip
        else:
            if   value == 3: clip=clip.std.Turn180()
            elif value == 8: clip=clip.std.Transpose().std.FlipVertical()
            elif value == 6: clip=clip.std.Transpose().std.FlipHorizontal()
            if save_rotated_image and value in [3,8,6]:
                #rotation degrees are in counterclockwise direction!
                rotate = {3:Image.ROTATE_180, 6:Image.ROTATE_270, 8:Image.ROTATE_90}
                image = image.transpose(rotate[value])
                path = path.parent / f'{path.stem}_rotated{path.suffix}'
                image.save(str(path))
        image.close()    
        return clip
    
    def boxing(clip, W=WIDTH, H=HEIGHT):
        cw, ch = clip.width, clip.height
        if W/H > cw/ch:
            w = cw*H/ch
            x = int((W-w)/2)
            x = x - x%MODX
            x = max(0, x)
            clip = resize_clip(clip, W-2*x, H)
            if x: return clip.std.AddBorders(left=x, right=x, color=(16,128,128))  #RGB is out then (16,16,16)
            else: return clip
        else:
            h = ch*W/cw
            y = int((H-h)/2)
            y = y - y%MODY
            y = max(0, y)
            clip = resize_clip(clip, W, H-2*y)
            if y: return clip.std.AddBorders(top=y, bottom=y, color=(16,128,128))
            else: return clip
          
    def resize_clip(clip,w,h, W=WIDTH, H=HEIGHT):
        if w>W or h>H: resize = getattr(clip.resize, DOWNSIZE_KERNEL)
        else:          resize = getattr(clip.resize, UPSIZE_KERNEL)
        if clip.format.color_family==vs.RGB:
            #rgb to YUV, perhaps only for png images, figure out what matrix out is needed
            return resize(width=w, height=h, format=vs.YUV420P8, matrix_s='709')
        else:
            #YUV to YUV
            return resize(width=w, height=h, format=vs.YUV420P8)  
    
    def get_clip(path):
        #get video
        data = LOADER.get_data([str(path),])[0] 
        video = data.clip
        if data.load_isError:
            raise ValueError(f'{data.load_log}{data.load_log_error}\nload.py could not create vs.VideoNode from {path}')
        video = rotation_check(video, path, save_rotated_image=SAVE_ROTATED_IMAGES)
        video = video.std.AssumeFPS(fpsnum=FPSNUM, fpsden=FPSDEN)
        if BOX:  video = boxing(video, WIDTH, HEIGHT)
        else:    video = resize_clip(video, WIDTH, HEIGHT)
        if len(video)==1:    video = video[0]*LENGTH
        video = video.resize.Bicubic(format = vs.YUV444P8, matrix_in_s='709')
        
        #get audio  
        try:
            audio = core.bas.Source(str(path))
        except AttributeError:
            raise ImportError('Vapoursynth audio source plugin "BestAudioSource.dll" could not be loaded\n'
                              'download: https://github.com/vapoursynth/bestaudiosource/releases/tag/R1')
        except (vs.Error, Exception) as e:
            #quiet audio, could not load audio , either video is an image or some problem
            clip = Clip(video, attribute_audio_path=ATTRIBUTE_AUDIO_PATH) #will generate silent clip with desired parameters
        else:
            #audio loaded
            clip = Clip(video, audio)
        return clip
    
            
    def get_path(path_generator):
        #get path of desired extensions from generator
        while 1:
            try:
                path = next(path_generator)
            except StopIteration:
                return None
            if path.suffix.lower() in EXTENSIONS:
                print(f'{path}')
                return path
              
    def get_transition_clip(a, b, duration, transition='fade'):
        left_video  = a.video[-1] * duration
        right_video = b.video[0]  * duration
        transition_func = getattr(vs_transitions, transition)
        video_transition = transition_func(left_video, right_video, frames=duration)
        silent_transition_clip = Clip(video_transition,  attribute_audio_path=ATTRIBUTE_AUDIO_PATH)
        return silent_transition_clip
    
    LOADER = load.Sources()
    CROSS_DUR = max(2,CROSS_DUR + CROSS_DUR%2) #minimum 2 and mod2 to be sure
    paths = Path(DIRECTORY).glob("*.*")
    print('wait loading paths ...')
    path = get_path(paths)
    if path is None:
        raise ValueError(f'Extensions: {EXTENSIONS}, not found in {DIRECTORY}')
    clips = get_clip(path) #get_clip() loads video AND audio as well
    clips.video = vs_transitions.fade_from_black(clips.video, frames=CROSS_DUR)
    while 1:
        path = get_path(paths)
        if path is None:
            break
        next_clip = get_clip(path)
        silent_transition_clip = get_transition_clip( clips,
                                                      next_clip,
                                                      duration=CROSS_DUR,
                                                      transition=next(TRANSITION_GENERATOR) #or just put desired available transition:  transition="wipe"
                                                      )
        clips = clips + silent_transition_clip + next_clip
    clips.video = vs_transitions.fade_to_black(clips.video, frames=CROSS_DUR)
    clips.video = clips.video.resize.Bicubic(format = vs.YUV420P8, matrix_in_s='709')
    print('done')
    
    clips.video.set_output()
    clips.audio.set_output(1)
    command lines could be:
    Code:
    VSPipe.exe --outputindex 0 --container y4m  media_to_video.py - | x264.exe --demuxer y4m --crf 18 --vbv-maxrate 30000 --vbv-bufsize 30000 --keyint 60 --tune film --colorprim bt709 --transfer bt709 --colormatrix bt709 --output output.264 - 
    VSPipe.exe --outputindex 1 --container wav  media_to_video.py - | neroAacEnc.exe -ignorelength -lc -cbr 96000 -if - -of output.m4a
    Mp4box.exe   -add  output.264 -add  output.m4a#audio  -new output.mp4
    Last edited by _Al_; 25th Mar 2023 at 19:24.
    Quote Quote  
  4. Member hydra3333's Avatar
    Join Date
    Oct 2009
    Location
    Australia
    Search Comp PM
    Thank you I'll look at those.

    vs_transitions function within function "_squeeze_expand" has a bug too, referencing width instead of height.

    Partial fix to reference height may not work, not tested properly, so far "up" yields no direct errors
    Code:
    			elif direction in [Direction.UP, Direction.DOWN]:
    				h_inc = math.floor(scale * clipa.height)
    				h_dec = clipa.height - h_inc
    
    				if h_inc == 0:
    					return clipa_t_zone
    
    				if direction == Direction.UP:
    					return StackVertical_wrapper(ID, 
    						[clipa_t_zone.resize.Spline36(height=h_dec), clipb_t_zone.resize.Spline36(height=h_inc)]
    					)
    				elif direction == Direction.RIGHT:
    					return StackVertical_wrapper(ID, 
    						[clipb_t_zone.resize.Spline36(height=h_inc), clipa_t_zone.resize.Spline36(height=h_dec)]
    					)
    however "down" yields
    Code:
    2023-03-26.19:14:36.454997 DEBUG: vs_transitions: linear_boundary: Entered _squeeze_expand ID=14 clipa_movement=squeeze clipb_movement=expand direction=down
    2023-03-26.19:14:36.454997 DEBUG: vs_transitions: linear_boundary: Entered _squeeze_expand ID=14 clipa_movement=squeeze clipb_movement=expand direction=down
    2023-03-26.19:14:36.548749 Post set_output: Consistency Check: FAILED to get_frame at frame 101 (base 0) of 1649 (base 0): FrameEval: Function didn't return a clip
    2023-03-26.19:14:36.548749 Post set_output: Consistency Check: FAILED to get_frame at frame 102 (base 0) of 1649 (base 0): FrameEval: Function didn't return a clip
    2023-03-26.19:14:36.548749 Post set_output: Consistency Check: FAILED to get_frame at frame 103 (base 0) of 1649 (base 0): FrameEval: Function didn't return a clip
    2023-03-26.19:14:36.548749 Post set_output: Consistency Check: FAILED to get_frame at frame 104 (base 0) of 1649 (base 0): FrameEval: Function didn't return a clip

    edit: my bad, didn't notice "elif direction == Direction.RIGHT" needed to be changed to Direction.DOWN.
    Last edited by hydra3333; 27th Mar 2023 at 06:40.
    Quote Quote  
  5. Member hydra3333's Avatar
    Join Date
    Oct 2009
    Location
    Australia
    Search Comp PM
    he he, testing with a poor person's (non-audio) version of your scripts ... with an old amd 3900X (12 core) & 32gb mem, on a separate SSD,
    • 14 images/vids and random transitions yields ~184 fps (starting slow then increasing fps) from vspipe into ffmpeg libx264
    • 166 images/vids and random transitions yields ~2.5 fps from vspipe into ffmpeg libx264
    • 1,110 images/vids and random transitions yields ~1-2 fps (starting at 1 fps then slowly increasing)

    A tad on the slow-ish side Non-linearly.
    I suspect it's related to the number of files being opened/closed (mainly by ffms2) rather than to time processing each clip. But I don't know.
    I guess vspipe may be able to output some filter stats but I found them uncertain to interpret, last time I looked.

    Still, can't be unhappy with pointing it at folder trees and saying "go" with no intervention apart from 2 mins up-front config.

    I really must look at your audio processing, however admit to being afraid of what it may do to the fps
    Quote Quote  
  6. I see, so it drops down speed with number of images. So it becomes impractical with number of images.

    There might be a solution for that. I tested a while ago a dynamic piping of frames into a previewer, sequence of clips. I was afraid it would take a lots of RAM. It looks now vapoursynth is more set to not increase RAM but it could slow down considerably if loading lots of stuff. Which is basically very unusual. I could be wrong, it is just a guess.

    That dynamic loading solution, load only one clip at a time, would work only in linear fashion. Frame is loaded and then is gone, because one source plugin is working and opened at a time. No seeking available for that vapoursynth script. So if previewing it, it would go only linearly forward. For encoding that should not be a problem as it was not a problem for linear previewing.

    I have to find it in PC and set it up. I don't think it should be a problem. At most two source plugins would be open at a time because of transitions.
    Last edited by _Al_; 27th Mar 2023 at 09:47.
    Quote Quote  
  7. But for now I'd AUDIO test this solution:

    it has improved transitions, custom, depending what clip follows what. But that could be customized.
    There is a transition between images, but if there is a transition from an image to a video it is just fades outs, also from video to images. I was testing it and realized that transitions between two video clips are a nonsense. There is no transition between two videos. Script recognizes image and video clip. There is a new class called Transition that handles that.

    Also if there are videos with two DIFFERENT FPS, it would Error and tell fps discrepancy is present. All videos have to have same fps (different resolution is fine). That would be a whole different league to auto change fps.
    Code:
    import vapoursynth as vs
    from vapoursynth import core
    from functools import partial
    from pathlib import Path
    import itertools
    import sys
    import os
    from typing import Union, List
    from PIL import Image, ExifTags, UnidentifiedImageError   #pip install Pillow
    
    #sys.path.append(str(Path(__file__).parent))
    try:
        is_API4 = vs.__api_version__.api_major >= 4
    except AttributeError:
        is_API4 = False
    
    import load
    import vs_transitions
    
    TRANSITIONS = [
        "cover",
        "cube_rotate",
        "curtain_cover",
        "curtain_reveal",
        "fade",
    ##    "fade_from_black",
    ##    "fade_to_black",
    ##    "linear_boundary",
        "poly_fade",
        "push",
        "reveal",
        "slide_expand",
        "squeeze_expand",
        "squeeze_slide",
        "wipe",
    ]
    
    TRANSITION      = 'cycle'     #'cycle' will cycle list with transitions or put some concrete transition like 'fade'
    DIRECTORY       = r'D:\paths_to_tests\test2'
    EXTENSIONS      = ['.jpg','.m2ts']
    WIDTH           = 640
    HEIGHT          = 360
    LENGTH          = 100
    TRANSITION_DUR  = 35
    FPSNUM          = 60000,
    FPSDEN          = 1001
    UPSIZE_KERNEL   = 'Lanczos'
    DOWNSIZE_KERNEL = 'Spline36'
    BOX             = True    # True would initiate letterboxing or pillarboxing. False fills to WIDTH,HEIGTH
    MODX            = 2       # mods for letterboxing calculations, example, for 411 YUV as an extreme
    MODY            = 2       # mods would have to be MODX=4, MODY=1 as minimum
    SAVE_ROTATED_IMAGES = False   #saves image to disk with name suffix: "_rotated" using PIL module
    ATTRIBUTE_AUDIO_PATH = r'D:\paths_to_tests\test2\20190421085114.m2ts'
    
    
    
    class Clip:
        def __init__(self, video = None, audio=None, attribute_audio_path=None):
            self.video = video
            self.audio = audio
            if self.video is None:
                self.video = core.std.BlankClip()
            if self.audio is None:
                if attribute_audio_path is None:
                    raise ValueError('arguent default_audio (could be very short) to get default audio for images')
                attr_audio = core.bas.Source(attribute_audio_path)
                length = int(attr_audio.sample_rate/self.video.fps*self.video.num_frames)
                self.audio = attr_audio.std.BlankAudio(length = length)
    
        def trim(self, first=0, last=None, length=None):
            afirst  = self.to_samples(first)    if first  is not None else None
            alast   = self.to_samples(last+1)-1 if last   is not None else None
            alength = self.to_samples(length)   if length is not None else None
            return Clip( self.video.std.Trim(first=first, last=last, length=length),
                         self.audio.std.AudioTrim(first=afirst,last=alast,length=alength)
                        )
        def to_samples(self, frame):
            return int((self.audio.sample_rate/self.video.fps)*frame)
    
        def __add__(self, other):
            return Clip(self.video + other.video, self.audio + other.audio)
    
        def __mul__(self, multiple):
            return Clip(self.video*multiple, self.audio*multiple)
    
        def __getitem__(self, val):
            if isinstance(val, slice):
                if val.step is not None:
                    raise ValueError('Using steps while slicing AudioNode together with VideoNode makes no sense')
                start = self.to_samples(val.start) if val.start is not None else None
                stop =  self.to_samples(val.stop)  if val.stop  is not None else None
                return Clip( self.video.__getitem__(val),
                             self.audio.__getitem__(slice(start,stop))
                             )
            elif isinstance(val, int):
                start = self.to_samples(val)
                stop = int(start + self.audio.sample_rate/self.video.fps)
                return Clip( self.video[val],
                             self.audio.__getitem__(slice(start,stop))
                             )        
        def __repr__(self):
            return '{}\n{}'.format(repr(self.video), repr(self.audio))
    
        def __str__(self):
            return '{}\n{}'.format(str(self.video), str(self.audio))
    
    class Transition:
        '''
        -enveloping vs_transition module to create transitions for Clip class clips(video and audio)
        -clips a and b are extended using edge frames for transition duration needed
         because vapoursynth cannot merge audios
        -if passing vs.VideoNodes here (not class Clip) it also extends clips so better use vs_transitions directly with vs.VideoNode clips
         no need to use this class for it (unless you want to extend ends as well)
         '''
    
        CUSTOM1 = {
            'image_to_image': 'regular_transition',
            'image_to_video': 'fade_to_and_from_black',
            'video_to_image': 'fade_to_and_from_black',
            'video_to_video': 'no_transition'
            }
       
        def __init__(self,
                     a:           Union[Clip, vs.VideoNode],
                     b:           Union[Clip, vs.VideoNode],
                     duration:    int = 30,
                     transition:  str = 'fade',
                     **kwargs):
    
            self.a_orig = a
            self.b_orig = b
            if isinstance(a, Clip) and isinstance(b, Clip):
                self.clip_type = 'Clip'
                self.a = a.video
                self.b = b.video
            elif isinstance(a, vs.VideoNode) and isinstance(b, vs.VideoNode):
                self.clip_type = 'VideoNode'
                self.a = a
                self.b = b         
            else:
                raise ValueError('Transitions: both clips must the same Clip class or vs.VideoNode class')
            fps_a = round(self.a.fps.numerator/self.a.fps.denominator, 3) if self.a.fps.numerator else 'dynamic'
            fps_b = round(self.b.fps.numerator/self.b.fps.denominator, 3) if self.b.fps.numerator else 'dynamic'
            if 'dynamic' in [fps_a, fps_b] or  abs(fps_a-fps_b) > 0.01:
                raise ValueError(f'Transitions: both clips must have the same fps and cannot be "dynamic", got: {fps_a} fps and {fps_b} fps')
            self.duration = duration
            self.transition = transition
            self.kwargs = kwargs
            #extending ends for transition durations
            self.a = self.a[-1] * duration
            self.b = self.b[0] * duration
            
        def custom1(self):
            first  = 'image' if 'is_image' in self.a[-1].get_frame(0).props else 'video' #must be always last frame prop, 
            second = 'image' if 'is_image' in self.b.get_frame(0).props else 'video'
            return getattr(self, self.CUSTOM1[f'{first}_to_{second}'])()
            
        def regular_transition(self):
            transition_func = getattr(vs_transitions, self.transition)
            transition_videonode = transition_func(self.a, self.b, frames=self.duration)
            return self.out(transition_videonode)
            
        def fade_to_and_from_black(self):
            left  = vs_transitions.fade_to_black(self.a,   frames=self.duration)
            right = vs_transitions.fade_from_black(self.b, frames=self.duration)
            return self.out(left+right)
        
        def no_transition(self):
            return self.a_orig + self.b_orig
    
        def out(self, transition_videonode):
            if self.clip_type == 'Clip':
                self.attribute_audio_path = self.kwargs.pop('attribute_audio_path', ATTRIBUTE_AUDIO_PATH)
                silent_transition_clip = Clip(transition_videonode,  attribute_audio_path=self.attribute_audio_path)
                return self.a_orig + silent_transition_clip + self.b_orig
            elif self.clip_type == 'VideoNode':
                return self.a_orig + transition_videonode + self.b_orig
    
    
    def rotation_check(clip, path, save_rotated_image=False):
        #PIL module loads an image, checks if EXIF data, checks for 'Orientation'
        try:
            image = Image.open(str(path))
        except UnidentifiedImageError:
            return clip
        except PermissionError:
            print(f'PIL, Permission denied to load: {path}')
            return clip
        except Exception as e:
            print(f'PIL, {e}')
            return clip
        try:        
            for key in ExifTags.TAGS.keys():
                if ExifTags.TAGS[key] == 'Orientation':
                    break
            exif = dict(image.getexif().items())
            value = exif[key]
        except (AttributeError, KeyError, IndexError):
            # no getexif
            return clip
        else:
            if   value == 3: clip=clip.std.Turn180()
            elif value == 8: clip=clip.std.Transpose().std.FlipVertical()
            elif value == 6: clip=clip.std.Transpose().std.FlipHorizontal()
            if save_rotated_image and value in [3,8,6]:
                #rotation degrees are in counterclockwise direction!
                rotate = {3:Image.ROTATE_180, 6:Image.ROTATE_270, 8:Image.ROTATE_90}
                image = image.transpose(rotate[value])
                path = path.parent / f'{path.stem}_rotated{path.suffix}'
                image.save(str(path))
        image.close()    
        return clip
    
    def boxing(clip, W=WIDTH, H=HEIGHT):
        cw, ch = clip.width, clip.height
        if W/H > cw/ch:
            w = cw*H/ch
            x = int((W-w)/2)
            x = x - x%MODX
            x = max(0, x)
            clip = resize_clip(clip, W-2*x, H)
            if x: return clip.std.AddBorders(left=x, right=x, color=(16,128,128))  #RGB is out then (16,16,16)
            else: return clip
        else:
            h = ch*W/cw
            y = int((H-h)/2)
            y = y - y%MODY
            y = max(0, y)
            clip = resize_clip(clip, W, H-2*y)
            if y: return clip.std.AddBorders(top=y, bottom=y, color=(16,128,128))
            else: return clip
          
    def resize_clip(clip,w,h, W=WIDTH, H=HEIGHT):
        if w>W or h>H: resize = getattr(clip.resize, DOWNSIZE_KERNEL)
        else:          resize = getattr(clip.resize, UPSIZE_KERNEL)
        if clip.format.color_family==vs.RGB:
            #rgb to YUV, perhaps only for png images, figure out what matrix out is needed
            return resize(width=w, height=h, format=vs.YUV420P8, matrix_s='709')
        else:
            #YUV to YUV
            return resize(width=w, height=h, format=vs.YUV420P8)  
    
    def get_clip(path):
        #get video
        data = LOADER.get_data([str(path),])[0] 
        video = data.clip
        if data.load_isError:
            raise ValueError(f'{data.load_log}\nload.py could not create vs.VideoNode from {path}')
        video = rotation_check(video, path, save_rotated_image=SAVE_ROTATED_IMAGES)
        if BOX:  video = boxing(video, WIDTH, HEIGHT)
        else:    video = resize_clip(video, WIDTH, HEIGHT)
        if len(video)==1:
            if is_API4: video = video.std.SetFrameProps(is_image=1)
            else:       video = video.std.SetFrameProp(prop='is_image', intval=1)
            video = video[0]*LENGTH
            video = video.std.AssumeFPS(fpsnum=FPSNUM, fpsden=FPSDEN)
        video = video.resize.Bicubic(format = vs.YUV444P8, matrix_in_s='709')
       
        #get audio  
        try:
            audio = core.bas.Source(str(path))
        except AttributeError:
            raise ImportError('Vapoursynth audio source plugin "BestAudioSource.dll" could not be loaded\n'
                              'download: https://github.com/vapoursynth/bestaudiosource/releases/tag/R1')
        except (vs.Error, Exception) as e:
            #quiet audio, could not load audio , either video is an image or some problem
            clip = Clip(video, attribute_audio_path=ATTRIBUTE_AUDIO_PATH) #will generate silent clip with desired parameters
        else:
            #audio loaded
            clip = Clip(video, audio)
        return clip
    
            
    def get_path(path_generator):
        while 1:
            try:
                path = next(path_generator)
            except StopIteration:
                return None
            if path.suffix.lower() in EXTENSIONS:
                print(f'{path}')
                return path
    
                
    if TRANSITION == 'cycle':
        TRANSITION_GENERATOR = itertools.cycle(TRANSITIONS)
    else:
        TRANSITION_GENERATOR = itertools.cycle([TRANSITION])  #cycles always the same transition
    
    LOADER = load.Sources()
    TRANSITION_DUR = max(2,TRANSITION_DUR + TRANSITION_DUR%2) #minimum 2 and mod2 to be sure
    paths = Path(DIRECTORY).glob("*.*")
    print('wait loading paths ...')
    path = get_path(paths)
    if path is None:
        raise ValueError(f'Extensions: {EXTENSIONS}, not found in {DIRECTORY}')
    clips = get_clip(path) #get_clip() loads video AND audio as well
    
    while 1:
        path = get_path(paths)
        if path is None:
            break
        second_clip = get_clip(path)
        clips = Transition(clips, second_clip, duration=TRANSITION_DUR, transition=next(TRANSITION_GENERATOR)).custom1()
    
    clips.video = vs_transitions.fade_from_black(clips.video, frames=TRANSITION_DUR)
    clips.video = vs_transitions.fade_to_black(clips.video, frames=TRANSITION_DUR)
    clips.video = clips.video.resize.Bicubic(format = vs.YUV420P8, matrix_in_s='709')
    print('done')
    
    clips.video.set_output()
    clips.audio.set_output(1)
    Last edited by _Al_; 27th Mar 2023 at 09:53.
    Quote Quote  



Similar Threads