VideoHelp Forum

  1. Member hydra3333's Avatar
    Join Date
    Oct 2009
    Location
    Australia
    Search Comp PM
    Hello.

    I have a large number of old family .jpg pics with arbitrary filenames (although ordered) which I'd like to turn into a fast video slideshow at say 1080p (most TVs have this) with 1s/2s frame display duration ... or even dabble with 2160p.

    The thing is, the .jpgs have arbitrary dimensions (different cameras, settings) and so need to be resized (with some "quality" resizer) whilst maintaining aspect ratio during the process. Famous last words, subject to change: speed isn't necessarily an issue.

    Unfortunately, I am also clueless about what (if anything) to do about colourspace conversions for this case.
    All I know about the .jpg files is that they are jpegs: some old, some new, some "landscape", some "portrait".
    I guess the result would need to be Rec.709, but how to ensure it safely gets there prior to encoding is a question.

    So, could some kind souls please provide suggestions on how to use ffmpeg to
    1. do the "quality" resizing from arbitrarily dimensioned .jpg inputs whilst maintaining aspect ratios
    2. do any necessary colourspace conversions to ensure input to the h264_nvenc encoder is Rec.709 (or suggest a better alternative)

    Thanks !

    Context:
    have seen https://trac.ffmpeg.org/wiki/Slideshow
    have an nvidia "2060 Super" with an AMD 3900X cpu
    have an ffmpeg build which accepts vapoursynth (or even avisynth) input, if that helps
    prefer to use nvidia's h264_nvenc gpu encoding, probably with parameters including something like this (once I figure out how to force every frame to be an i-frame)
    Code:
    -c:v h264_nvenc -pix_fmt nv12 -preset p7 -multipass fullres -forced-idr 1 -g 1 -coder:v cabac -spatial-aq 1 -temporal-aq 1 -dpb_size 0 -bf:v 0 -b_ref_mode:v 0 -rc:v vbr -cq:v 0 -b:v %bitrate_target% -minrate:v %bitrate_min% -maxrate:v %bitrate_max% -bufsize %bitrate_target% -profile:v high -level 5.2 -movflags +faststart+write_colr
    Last edited by hydra3333; 14th Jan 2023 at 17:09.
    Quote Quote  
  2. Member
    Join Date
    Jul 2009
    Location
    United States
    Search Comp PM
    I can't help with the color space bit, but for the resizing part, read this section of the ffmpeg wiki: https://trac.ffmpeg.org/wiki/Scaling. I personally prefer Lanczos for upsizing and bicubic for downsizing.
    Quote Quote  
  3. Member hydra3333's Avatar
    Join Date
    Oct 2009
    Location
    Australia
    Search Comp PM
    Thanks !

    https://trac.ffmpeg.org/wiki/Scaling
    ( supersedes https://superuser.com/questions/547296/resizing-videos-with-ffmpeg-avconv-to-fit-into-...136305#1136305 )

    Of interest it says:
    When going from BGR (not RGB) to yuv420p the conversion is broken (off-by-one). Use -vf scale=flags=accurate_rnd to fix that.
    The default for matrix in untagged input and output is always limited BT.601
    So 2 options to test:

    Code:
    -sws_flags lanczos+accurate_rnd+full_chroma_int+full_chroma_inp -vf "scale=1920:1080:eval=frame:flags=lanczos+accurate_rnd+full_chroma_int+full_chroma_inp:force_original_aspect_ratio=decrease:out_color_matrix=bt709:out_range=full,pad=1920:1080:-1:-1:color=black,format=yuv420p"
    Although https://trac.ffmpeg.org/wiki/Scaling seems to point to this one:
    Code:
    -sws_flags lanczos+accurate_rnd+full_chroma_int+full_chroma_inp -vf "scale=1920:1080:eval=frame:flags=lanczos+accurate_rnd+full_chroma_int+full_chroma_inp:force_original_aspect_ratio=decrease:out_color_matrix=bt709:out_range=full,pad=1920:1080:(ow-iw)/2:(oh-ih)/2:color=black,format=yuv420p"
    Originally Posted by zing269 View Post
    https://trac.ffmpeg.org/wiki/Scaling. I personally prefer Lanczos for upsizing and bicubic for downsizing.
    OK. I'm not sure if I can specify one or the other depending on whether I'm upscaling or downscaling an image ...
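    Since ffmpeg's scale filter takes a single flags= value per filter instance, one option is to decide the kernel outside ffmpeg before building the command. A minimal sketch (pure Python; the helper name and the 1920x1080 defaults are illustrative, not from the thread):

    ```python
    def pick_kernel(src_w, src_h, dst_w=1920, dst_h=1080):
        """Choose a scaler for fitting src inside dst while keeping aspect:
        bicubic when the image shrinks, lanczos when it must be enlarged."""
        # factor < 1.0 means the source is larger than the target box
        factor = min(dst_w / src_w, dst_h / src_h)
        return "bicubic" if factor < 1.0 else "lanczos"

    # e.g. a 4000x3000 photo shrinks to fit 1920x1080 -> "bicubic"
    ```

    The returned name could then be substituted into -vf "scale=...:flags=..." when the command line is generated per image.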

    I guess I also need some advice on the effect for "modern" TVs of out_range=full rather than tv; I'm not convinced I'm understanding its use in the scale filter correctly.

    edit: notes
    https://ffmpeg.org/ffmpeg-scaler.html
    ‘accurate_rnd’ Enable accurate rounding.
    ‘full_chroma_int’ Enable full chroma interpolation.
    ‘full_chroma_inp’ Select full chroma input.
    and https://trac.ffmpeg.org/wiki/Slideshow
    and https://superuser.com/questions/1661735/pattern-type-glob-or-other-jpg-wildcard-input-...for-windows-10
    Last edited by hydra3333; 14th Jan 2023 at 19:44.
    Quote Quote  
  4. Member hydra3333's Avatar
    Join Date
    Oct 2009
    Location
    Australia
    Search Comp PM
    Oh dear.

    https://ffmpeg.org/ffmpeg-formats.html#image2-1
    3.12 image2 Image file demuxer.

    This demuxer reads from a list of image files specified by a pattern. The syntax and meaning of the pattern is specified by the option pattern_type.
    The pattern may contain a suffix which is used to automatically determine the format of the images contained in the files.
    The size, the pixel format, and the format of each image must be the same for all the files in the sequence.
    That's probably me stuffed, then.
    Guess I'll give it a try anyway
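    Given the image2 requirement quoted above, a pre-flight check over the whole set can tell you up front whether per-image resizing is unavoidable. A tiny sketch (pure Python; how the width/height/pixel-format tuples get collected, e.g. via ffprobe or Pillow, is left out):

    ```python
    def distinct_formats(props):
        """props: iterable of (width, height, pix_fmt) tuples, one per image.
        Returns the set of distinct combinations; exactly one entry means the
        sequence already satisfies the demuxer's same-size/same-format rule."""
        return set(props)

    # mixed cameras -> more than one entry -> images need normalizing first
    ```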
    Quote Quote  
  5. You can use something like the script below; the vapoursynth script just needs to be piped to ffmpeg.

    I have set it up for videos as well; it works, and images can even be mixed with video, but no audio is used. It would be interesting to set it up with audio as well; I might attempt that later.
    Images should work. No value checking, though.
    Code:
    import vapoursynth as vs
    from vapoursynth import core
    from functools import partial
    from pathlib import Path
    import itertools
    import sys
    import os
    ##sys.path.append(str(Path(__file__).parent / 'python_modules'))
    
    DIRECTORY       = r'F:\images'
    EXTENSIONS      = [".png", ".jpg"]     #always lower case
    WIDTH           = 1920                 #final width, watch for subsampling, it has to fit
    HEIGHT          = 1080                 #final height, watch for subsampling, it has to fit
    LENGTH          = 150                  #image frame length, not videos
    CROSS_DUR       = 25                   #crossfade duration in frames
    FPSNUM          = 60000
    FPSDEN          = 1001
    UPSIZE_KERNEL   = 'Lanczos'
    DOWNSIZE_KERNEL = 'Spline36'
    BOX             = True    # True would initiate letterboxing or pillarboxing. False fills to WIDTH,HEIGHT
    MODX            = 2       # mods for letterboxing calculations, example, for 411 YUV as an extreme
    MODY            = 2       # mods would have to be MODX=4, MODY=1 as minimum
    
    
    def boxing(clip, W=WIDTH, H=HEIGHT):
        cw, ch = clip.width, clip.height
        if W/H > cw/ch:
            w = cw*H/ch
            x = int((W-w)/2)
            x = x - x%MODX
            x = max(MODX, min(x,W))
            clip = resize_clip(clip, W-2*x, H)
            return clip.std.AddBorders(left=x, right=x, color=(16,128,128))  #RGB is out then (16,16,16)
        else:
            h = ch*W/cw
            y = int((H-h)/2)
            y = y - y%MODY
            y = max(MODY, min(y,H))
            clip = resize_clip(clip, W, H-2*y)
            return clip.std.AddBorders(top=y, bottom=y, color=(16,128,128))
          
    def resize_clip(clip,w,h, W=WIDTH, H=HEIGHT):
        if w>W or h>H: resize = getattr(clip.resize, DOWNSIZE_KERNEL)
        else:          resize = getattr(clip.resize, UPSIZE_KERNEL)
        if clip.format.color_family==vs.RGB:
            #rgb to YUV, perhaps only for png images, figure out what matrix out is needed
            return resize(width=w, height=h, format=vs.YUV420P8, matrix_s='709')
        else:
            #YUV to YUV
            return resize(width=w, height=h, format=vs.YUV420P8)  
    
    def get_clip(path):
        #actual vapoursynth script part so to speak...
        if path.suffix.lower() == ".mp4":    clip = core.lsmas.LibavSMASHSource(str(path))
        elif path.suffix.lower() == ".m2ts": clip = core.lsmas.LWLibavSource(str(path))
        else:                                clip = core.ffms2.Source(str(path))
        if BOX:  clip = boxing(clip, WIDTH, HEIGHT)
        else:    clip = resize_clip(clip, WIDTH, HEIGHT)
        clip = clip[0]*LENGTH if len(clip)<5 else clip
        return clip
    
    def get_path(path_generator):
        #get path of desired extensions from generator
        while 1:
            try:
                path = next(path_generator)
            except StopIteration:
                return None
            if path.suffix.lower() in EXTENSIONS:
                return path
              
    def crossfade(a, b, duration):
        #gets crossfade part from end of clip a and start of clip b
        def fade_image(n, a, b):
            return core.std.Merge(a, b, weight=n/duration)
        if a.format.id != b.format.id or a.height != b.height or a.width != b.width:
            raise ValueError('crossfade: Both clips must have the same dimensions and format.')
        return core.std.FrameEval(a[-duration:], partial(fade_image, a=a[-duration:], b=b[:duration]))
    
    CROSS_DUR = max(1,CROSS_DUR)                        #1 is minimum for crossfade -> no crossfades
    paths = Path(DIRECTORY).glob("*")                   #generator of all paths in a directory
    paths1, paths2 = itertools.tee(paths, 2)            #make a copy to have two generators
    clips = get_clip(get_path(paths1))[0:-CROSS_DUR]    #make a starter clip up to first crossfade
    print('wait ...')
    while 1:
        path = get_path(paths1)
        if path is None:
            clips = clips + left_clip
            break
        right_clip = get_clip(path)
        left_clip = get_clip(get_path(paths2))
        crossfade_clip = crossfade(left_clip, right_clip, CROSS_DUR)
        right  = right_clip[CROSS_DUR:-CROSS_DUR]
        clips = clips + crossfade_clip + right   
    clips = clips.std.AssumeFPS(fpsnum=FPSNUM, fpsden=FPSDEN)
    clips.set_output()
    print('done')
    Last edited by _Al_; 15th Jan 2023 at 12:37.
    Quote Quote  
  6. Member hydra3333's Avatar
    Join Date
    Oct 2009
    Location
    Australia
    Search Comp PM
    Thanks ! Great stuff !

    Fiddling with it now to work on my system...

    A couple of funny things :

    1. UPSIZE_KERNEL = 'Lanczsoz' causes it to crash, had to resort to Spline36 until I figure out why
    Code:
    File "src\cython\vapoursynth.pyx", line 2415, in vapoursynth.Plugin.__getattr__
    AttributeError: There is no function named Lanczsoz
    2. some of my images appear to have EXIF data like "Orientation - Right top" etc set,
    which is some sort of rotation required to display the image properly ...
    irfanview respects it and displays properly
    ffprobe -show_frames shows "rotation=-90"
    however I can't find how to set core.ffms2.Source to respect it.
    hmm, imagemagick doesn't rotate either

    Not complaints, just items of interest to look into:
    - number 1 I can live with until I discover what's happening
    - number 2 is a tad less liveable, so I'll keep fiddling
    Last edited by hydra3333; 15th Jan 2023 at 08:34.
    Quote Quote  
  7. Member hydra3333's Avatar
    Join Date
    Oct 2009
    Location
    Australia
    Search Comp PM
    funnily enough, this nearly works using a text file of filenames as input ...
    however, guess which file(s) it aborts on? You guessed it: the jpegs with rotation specified.
    Sometimes when something goes bung, it really does it properly.

    Code:
    "C:\SOFTWARE\Vapoursynth-x64\ffmpeg_OpenCL.exe"  -hide_banner -stats -v verbose -reinit_filter 0 -safe 0 -auto_convert 1 -f concat -i ".\ffmpeg_concat-2023.01.16.00.32.41.64-3900X-input.txt" -sws_flags lanczos+accurate_rnd+full_chroma_int+full_chroma_inp -vf "scale=1920:1080:eval=frame:flags=lanczos+accurate_rnd+full_chroma_int+full_chroma_inp:force_original_aspect_ratio=decrease:out_color_matrix=bt709:out_range=full,pad=1920:1080:(ow-iw)/2:(oh-ih)/2:eval=frame:color=black,settb=expr=1/25,setpts=1*N/TB,drawtext=box=0:fontsize=30:text='Frame %{frame_num}':x=(w-text_w)/2:y=(h-text_h)/2:fix_bounds=1:fontcolor=black,setdar=16/9,format=yuvj420p" -c:v h264_nvenc -pix_fmt nv12 -preset p7 -multipass fullres -forced-idr 1 -g 25 -coder:v cabac -spatial-aq 1 -temporal-aq 1 -dpb_size 0 -bf:v 0 -bf:v 0 -b_ref_mode:v 0 -rc:v vbr -cq:v 0 -b:v 8000000 -minrate:v 800000 -maxrate:v 10400000 -bufsize 20800000 -profile:v high -level 5.2 -movflags +faststart+write_colr -y ".\ffmpeg_concat-2023.01.16.00.32.41.64-3900X-input.txt.try1.mp4"
    Quote Quote  
  8. 1. UPSIZE_KERNEL = 'Lanczsoz' causes it to crash, had to resort to Spline36 until I figure out why
    Code:
    File "src\cython\vapoursynth.pyx", line 2415, in vapoursynth.Plugin.__getattr__
    AttributeError: There is no function named Lanczsoz
    there was a typo in the script, I already fixed it to:
    Code:
    UPSIZE_KERNEL   = 'Lanczos'
    Quote Quote  
  9. 2. some of my images appear to have EXIF data like "Orientation - Right top" etc set,
    which is some sort of rotation required to display the image properly ...
    irfanview respects it and displays properly
    ffprobe -show_frames shows "rotation=-90"
    however I can't find how to set core.ffms2.Source to respect it.
    hmm, imagemagick doesn't rotate either
    Can rotation be detected by mediainfo? If yes, it could be implemented too; I have things set up for mediainfo here. I've never parsed ffprobe output in Python; that would be a longer shot.

    Or, actually, could the imwri source detect a rotated image?
    Code:
    clip = core.imwri.Read([str(path)])
    https://github.com/vapoursynth/vs-imwri/blob/master/docs/imwri.rst
    download dll: https://github.com/vapoursynth/vs-imwri/releases/tag/R2
    Last edited by _Al_; 15th Jan 2023 at 13:00.
    Quote Quote  
  10. Can rotation be detected by mediainfo?
    at least on .mov- and .mp4-input: yes
    no clue for image input

    Cu
    Selur
    users currently on my ignore list: deadrats, Stears555
    Quote Quote  
  11. thanks, I will check things..

    in the meantime, some important fixes, so better use this:
    Code:
    import vapoursynth as vs
    from vapoursynth import core
    from functools import partial
    from pathlib import Path
    import itertools
    import sys
    import os
    
    DIRECTORY       = r'F:\images'
    EXTENSIONS      = [".jpg"]     #always lower case
    WIDTH           = 1920                 #final width, watch for subsampling, it has to fit
    HEIGHT          = 1080                 #final height, watch for subsampling, it has to fit
    LENGTH          = 150                  #image frame length, not videos
    CROSS_DUR       = 25                   #crossfade duration in frames
    FPSNUM          = 60000
    FPSDEN          = 1001
    UPSIZE_KERNEL   = 'Lanczos'
    DOWNSIZE_KERNEL = 'Spline36'
    BOX             = True    # True would initiate letterboxing or pillarboxing. False fills to WIDTH,HEIGHT
    MODX            = 2       # mods for letterboxing calculations, example, for 411 YUV as an extreme
    MODY            = 2       # mods would have to be MODX=4, MODY=1 as minimum
    
    
    def boxing(clip, W=WIDTH, H=HEIGHT):
        cw, ch = clip.width, clip.height
        if W/H > cw/ch:
            w = cw*H/ch
            x = int((W-w)/2)
            x = x - x%MODX
            x = max(0, min(x,W))
            clip = resize_clip(clip, W-2*x, H)
            if x: return clip.std.AddBorders(left=x, right=x, color=(16,128,128))  #RGB is out then (16,16,16)
            else: return clip
        else:
            h = ch*W/cw
            y = int((H-h)/2)
            y = y - y%MODY
            y = max(0, min(y,H))
            clip = resize_clip(clip, W, H-2*y)
            if y: return clip.std.AddBorders(top=y, bottom=y, color=(16,128,128))
            else: return clip
          
    def resize_clip(clip,w,h, W=WIDTH, H=HEIGHT):
        if w>W or h>H: resize = getattr(clip.resize, DOWNSIZE_KERNEL)
        else:          resize = getattr(clip.resize, UPSIZE_KERNEL)
        if clip.format.color_family==vs.RGB:
            #rgb to YUV, perhaps only for png images, figure out what matrix out is needed
            return resize(width=w, height=h, format=vs.YUV420P8, matrix_s='709')
        else:
            #YUV to YUV
            return resize(width=w, height=h, format=vs.YUV420P8)  
    
    def get_clip(path):
        #actual vapoursynth script part so to speak...
        if path.suffix.lower() == ".mp4":    clip = core.lsmas.LibavSMASHSource(str(path))
        elif path.suffix.lower() == ".m2ts": clip = core.lsmas.LWLibavSource(str(path))
        else:                                clip = core.ffms2.Source(str(path))
        if BOX:  clip = boxing(clip, WIDTH, HEIGHT)
        else:    clip = resize_clip(clip, WIDTH, HEIGHT)
        clip = clip[0]*LENGTH if len(clip)<5 else clip
        return clip
    
    def get_path(path_generator):
        #get path of desired extensions from generator
        while 1:
            try:
                path = next(path_generator)
            except StopIteration:
                return None
            if path.suffix.lower() in EXTENSIONS:
                return path
              
    def crossfade(a, b, duration):
        #gets crossfade part from end of clip a and start of clip b
        def fade_image(n, a, b):
            return core.std.Merge(a, b, weight=n/duration)
        if a.format.id != b.format.id or a.height != b.height or a.width != b.width:
            raise ValueError('crossfade: Both clips must have the same dimensions and format.')
        return core.std.FrameEval(a[-duration:], partial(fade_image, a=a[-duration:], b=b[:duration]))
    
    CROSS_DUR = max(1,CROSS_DUR)                        #1 is minimum for crossfade -> no crossfades 
    paths = Path(DIRECTORY).glob("*.*")                 #generator of all paths in a directory
    paths1, paths2 = itertools.tee(paths, 2)            #make a copy to have two generators
    path = get_path(paths1)                             #load first path
    if path is None:
        raise ValueError(f'Extensions: {EXTENSIONS}, not found in {DIRECTORY}')
    starter = get_clip(path)
    clips = starter[0:-CROSS_DUR]                       #starter clip goes into loop without crossfade at the end
    left_clip = None
    print('wait ...')
    while 1:                                            #generator paths1 returns always a clip that is ahead of paths2
        path = get_path(paths1)
        if path is None:
            if left_clip is None: clips = starter
            break
        right_clip = get_clip(path)
        left_clip = get_clip(get_path(paths2))
        crossfade_clip = crossfade(left_clip, right_clip, CROSS_DUR)
        right  = right_clip[CROSS_DUR:-CROSS_DUR]
        clips = clips + crossfade_clip + right   
    clips = clips.std.AssumeFPS(fpsnum=FPSNUM, fpsden=FPSDEN)
    clips.set_output()
    print('done')
    Last edited by _Al_; 16th Jan 2023 at 13:31.
    Quote Quote  
  12. You might give zscale a chance instead of the generic ffmpeg scaler; based on user feedback, it seems to be faster and to deliver higher quality.
    Quote Quote  
  13. Looks like zscale is the same zimg library that vapoursynth uses by default, or that Avisynth uses as z_ConvertFormat().
    There are no boxing options (boxing into a pillarbox or letterbox), so that is why I did it manually in the script above.
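    The pillarbox/letterbox decision plus the mod-safe border arithmetic can be isolated as plain math. A sketch mirroring the idea of the script's boxing() (pure Python; the function name and defaults are illustrative, and it returns geometry instead of building a clip):

    ```python
    def box_geometry(cw, ch, W=1920, H=1080, modx=2, mody=2):
        """Fit a cw x ch image inside W x H, keeping aspect ratio.
        Returns (new_w, new_h, pad_x, pad_y): per-side borders, rounded
        down to the given mods so 4:2:0 subsampling stays legal."""
        if W / H > cw / ch:                  # source narrower -> pillarbox
            x = int((W - cw * H / ch) / 2)
            x -= x % modx                    # keep border width mod-friendly
            return W - 2 * x, H, x, 0
        else:                                # source wider -> letterbox
            y = int((H - ch * W / cw) / 2)
            y -= y % mody
            return W, H - 2 * y, 0, y

    # portrait 3000x4000 -> (812, 1080, 554, 0): 554px borders left/right
    ```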
    Quote Quote  
  14. The Python PIL module can read EXIF data. It can be installed with pip install Pillow, and the lines below can be added to the script.
    It takes any path and rejects it if PIL cannot load it.
    Code:
    from PIL import Image, ExifTags, UnidentifiedImageError   #pip install Pillow
    SAVE_ROTATED_IMAGES = False
    .
    .
    .
    def rotation_check(clip, path, save_rotated_image=False):
        #PIL module loads an image, checks if EXIF data, checks for 'Orientation'
        try:
            image = Image.open(str(path))
        except UnidentifiedImageError:
            return clip
        except PermissionError:
            print(f'PIL, Permission denied to load: {path}')
            return clip
        except Exception as e:
            print(f'PIL, {e}')
            return clip
        try:        
            for key in ExifTags.TAGS.keys():
                if ExifTags.TAGS[key] == 'Orientation':
                    break
            exif = dict(image.getexif().items())
            value = exif[key]
        except (AttributeError, KeyError, IndexError):
            return clip
        else:
            if   value == 3: clip = clip.std.Turn180()
            elif value == 8: clip = clip.std.Transpose().std.FlipVertical()
            elif value == 6: clip = clip.std.Transpose().std.FlipHorizontal()
            if save_rotated_image and value in [3,8,6]:
                #rotation degrees are in counterclockwise direction!
                rotate = {3:Image.ROTATE_180, 6:Image.ROTATE_270, 8:Image.ROTATE_90}
                image = image.transpose(rotate[value])
                path = path.parent / f'{path.stem}_rotated{path.suffix}'
                image.save(str(path))
        image.close()    
        return clip
    call this function, after clip is loaded by source plugins like:
    Code:
    clip = rotation_check(clip, path, save_rotated_image=SAVE_ROTATED_IMAGES)
    Images should be saved only if it is clear they rotate the correct way; I only tested it with images from some kiddie camera
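    For reference, the Orientation values the snippet above branches on (3, 6, 8) are the pure-rotation cases of the EXIF tag; a minimal lookup (pure Python; degrees are the clockwise rotation needed to display the image upright, and the mirrored variants 2, 4, 5, 7 are left out):

    ```python
    # EXIF Orientation (tag 0x0112): how the stored pixels must be turned
    # to display upright. 1 = already upright.
    ORIENTATION_TO_CW_DEGREES = {1: 0, 3: 180, 6: 90, 8: 270}

    def upright_rotation(orientation):
        """Clockwise degrees needed to display upright, or None for the
        mirrored variants (2, 4, 5, 7) this sketch does not cover."""
        return ORIENTATION_TO_CW_DEGREES.get(orientation)
    ```

    Newer Pillow releases also provide ImageOps.exif_transpose(image), which applies the correction (including the mirrored cases) directly.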
    Last edited by _Al_; 16th Jan 2023 at 11:38.
    Quote Quote  
  15. Member hydra3333's Avatar
    Join Date
    Oct 2009
    Location
    Australia
    Search Comp PM
    Nice ! Thanks. I was looking into pymediainfo but this looks like just the thing.
    Quote Quote  
  16. mediainfo unfortunately cannot deal with EXIF data, so I guess pymediainfo cannot deal with it either, as it uses MediaInfo.dll. I had PIL already installed, so I gave it a shot and it worked.
    Also, to recognize rotation for mp4 and mov as Selur suggested, using mediainfo directly (bypassing pymediainfo and using the MediaInfoDLL3.py and MediaInfo.dll that both come with the mediainfo developer package), I came up with this so far:
    Code:
    #python3, loading mediainfo readings for a media file using MediaInfo.dll
    
    #https://mediaarea.net/en/MediaInfo/Download/Windows
    #download 64bit DLL without installer, unzip, find MediaInfo.dll and MediaInfoDLL3.py
    #put MediaInfoDLL3.py in your directory (portable setup) or site-packages directory
    #MediaInfo.dll is loaded by ctypes with full path
    #or put MediaInfo.dll in your directory (for portable setup) and load it: ctypes.CDLL(r'.\MediaInfo.dll')
    
    from pathlib import Path
    from ctypes import *
    from typing import Union
    import vapoursynth as vs
    from vapoursynth import core
    
    CDLL(r'.\MediaInfo.dll')   #use a raw string; r'.\MediaInfo.dll' if in directory or include path
    from MediaInfoDLL3 import MediaInfo, Stream, Info, InfoOption
    
    def mediainfo_value(stream:int, track:int, param:str, path: Union[Path,str]) -> Union[int,float,str]:
        if not stream in range(0,8):
            raise ValueError(f'stream must be a Stream attribute: General, Video, Audio, Text, Other, Image, Menu, Max')
        if not isinstance(track, int) or track<0:
            raise ValueError(f'track must be a positive integer')
        if not isinstance(param, str):
            raise ValueError(f'param must be a string for particular stream, print(MI.Option_Static("Info_Parameters")')
        if not isinstance(path, (Path, str)):
            raise ValueError(f'path must be Path or str class')    
        MI.Open(str(path))
        str_value = MI.Get(stream, track, param)
        info_option =  MI.Get(stream, track, param, InfoKind=Info.Options)
        MI.Close()
        if not str_value:
            return None
        if info_option:
            #returning a proper value type, int, float or str for particular parameter
            type_ = info_option[InfoOption.TypeOfValue] #type_=info_option[3] #_type will be 'I', 'F', 'T', 'D' or 'B'
            val = {'I':int, 'F':float, 'T':str, 'D':str, 'B':str}[type_](str_value)
            return val
        else:
            raise ValueError(f'wrong parameter: "{param}" for given stream: {stream}')
    
    DIRECTORY = r"D:\downloads"
    paths = Path(DIRECTORY).glob(f'*.mp4')
    
    for path in paths:
        MI = MediaInfo()
        param='Rotation'
        value = mediainfo_value(Stream.Video, 0, param, path)
        if param == 'Rotation':
            value = int(float(value)) # for some reason Rotation value type mediainfo carries as a string,  like: '180.00'
        print(f'{value}  {path}')
        clip = core.lsmas.LibavSMASHSource(str(path))
        if   value == 180: clip = clip.std.Turn180()
        elif value == 90:  clip = clip.std.Transpose().std.FlipVertical()
        elif value == 270: clip = clip.std.Transpose().std.FlipHorizontal()
        #work with clip
    I cannot verify it; the rotation values might be off, as I do not have any rotated images. For other mediainfo values it works though, for example:
    Code:
    value = mediainfo_value(Stream.Video, 0, 'Width', path)
    print(value)
    >>>1920
    which is a nice integer, because the value type for the searched parameter is also provided by mediainfo

    edit: the kiddie camera saved me again for testing this and getting rotation; I fixed that in the code, but again, test that those rotations are correct for your clips.
    And the types look predictable: they are marked as the fourth character (index 3) of those 5-character option values for a parameter, as listed in the documentation
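    The type-code coercion described above can be shown on its own; a small sketch (pure Python; the mapping mirrors the one in the script, and rotation_degrees handles Rotation arriving as text like '180.00'):

    ```python
    # MediaInfo marks a parameter's value type with a one-letter code:
    # 'I' integer, 'F' float, 'T' text, 'D' date, 'B' binary.
    CASTS = {'I': int, 'F': float, 'T': str, 'D': str, 'B': str}

    def coerce(str_value, type_code):
        """Cast a MediaInfo string result to a native Python value."""
        return CASTS[type_code](str_value)

    def rotation_degrees(str_value):
        """Rotation comes back as text like '180.00'; reduce to int degrees."""
        return int(float(str_value))
    ```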
    Last edited by _Al_; 16th Jan 2023 at 23:59.
    Quote Quote  
  17. Member hydra3333's Avatar
    Join Date
    Oct 2009
    Location
    Australia
    Search Comp PM
    Many Thanks ! Also nice.
    I'm guessing a combination of using Pillow for source images and MediaInfoDLL3.py for source videos is the way forward.

    A trap for new players: I installed vapoursynth into the same folder as the extracted portable Python x64, and it had no pip with which to install Pillow.
    Some googled instructions on how to install pip didn't work.
    I ended up using portable pip, run in the same folder as the extracted portable python, and it works:

    Code:
    REM https://packaging.python.org/en/latest/tutorials/installing-packages/#ensure-you-can-r...e-command-line
    REM https://pip.pypa.io/en/latest/installation/
    del "pip.pyz"
    c:\software\wget\wget.exe -v -t 1 --server-response --timeout=360 -nd -np -nH --no-cookies --output-document="pip.pyz" "https://bootstrap.pypa.io/pip/pip.pyz"
    python pip.pyz --help
    python pip.pyz install --target .\ Pillow --force-reinstall --upgrade --upgrade-strategy eager  --verbose
    Will give them a try soon.
    Quote Quote  
  18. Originally Posted by _Al_ View Post
    Looks like zscale is the same zimg library that vapoursynth uses as default or Avisynth uses as z_ConvertFormat ().
    No boxing options (boxing into a pillarbox or letterbox) , so that is why I did it manually for the script above.
    Trying to address the topic title: ffmpeg automatic padding is easy (my assumption is that "boxing into a pillarbox or letterbox" means automatic padding) - just create an area the same size as or bigger than your picture and overlay your picture on top
    Quote Quote  
  19. Originally Posted by pandy View Post
    Trying to address the topic title: ffmpeg automatic padding is easy (my assumption is that "boxing into a pillarbox or letterbox" means automatic padding) - just create an area the same size as or bigger than your picture and overlay your picture on top
    Yes, but figuring out that x or y offset (pillarbox or letterbox) to position the image is necessary anyway. Actually, the first thing to automate is figuring out whether I am doing a pillarbox or a letterbox, then getting that x or y offset. Can it be done in ffmpeg? That might really be of some interest to others.
    In a batch script it might be a problem to get lines like w = w - w%modw (this is python); maybe not, maybe a batch script could do it. Or can ffmpeg automatically pad a video into another video while at the same time following mods? In python/vapoursynth it seems just fine to do it (I know hydra3333 has no problem with vapoursynth, so I did not hesitate to bring it up; at the beginning I actually learned from his code).

    There might be ways, together with ffprobe (to get orientation), to use a batch script. But then, can ffmpeg rotate an image? Would batch code be doable for this task, etc. Also, as long as there is vapoursynth, one can use crossfades.
    Quote Quote  
  20. Member hydra3333's Avatar
    Join Date
    Oct 2009
    Location
    Australia
    Search Comp PM
    Originally Posted by _Al_ View Post
    Can it be done in ffmpeg?
    Sort of, if I understood you correctly. After a fair bit of googling on options, and seeing
    https://superuser.com/questions/1661735/pattern-type-glob-or-other-jpg-wildcard-input-...for-windows-10
    https://trac.ffmpeg.org/wiki/Slideshow
    This ffmpeg commandline works after generating the concat input file via a script https://forum.videohelp.com/threads/408230-ffmpeg-avc-from-jpgs-of-arbitrary-dimension...io#post2678193
    However ffmpeg stops immediately upon seeing the first rotated image.
    Given the script looks for media files first, one could do something with ffprobe, perhaps rotating and saving a new image in a scratch folder or something.
    Creating and sticking metadata in the concat file is one thing, however I can't see a way to access/use it in the filter chain.
    (Haven't yet tried looking for or processing rotated videos with the commandline)

    Acknowledging that a working single ffmpeg commandline would have been the cleanest "native" outcome, that isn't going to happen, by the looks of it.

    _Al_'s vapoursynth approach (very many thanks!) seems the cleanest working approach, as an all-in-one script not requiring scratch folders with an indeterminate number of new rotated images/videos etc.
    Not wanting to run installers ... yes, one has to set up an environment ...
    • portable python in a folder,
    • portable vapoursynth overlaid in the same folder,
    • an ffmpeg build with vapoursynth compatibility and relevant codecs and features (openCL/vulkan, anyone?) copied into the same folder,
    • a manual pip downloaded in the same folder,
    • pip install of Pillow
    • and somehow MediaInfoDLL3 stuff, t.b.a.

    Still tinkering with it.
    Last edited by hydra3333; 17th Jan 2023 at 18:58. Reason: tested in the portable environment and it works
    Quote Quote  
  21. It can be any ffmpeg if using it via vspipe; as long as you're using python, it might as well all be done there
    https://forum.videohelp.com/threads/407724-moving-image-effect#post2673975
    or
    https://forum.videohelp.com/threads/404931-x264_encoder_encode-failed#post2650039
    or
    https://forum.videohelp.com/threads/404931-x264_encoder_encode-failed#post2649873

    Not sure if I ever tested ffmpeg with vpy import. The thing is, it would have to be -i *.vpy, not -i *.py.
    I also started using ffmpeg via direct vapoursynth output (below), but maybe going through vspipe is the way to do it:
    https://forum.videohelp.com/threads/392447-encoding-with-vapoursynth#post2544566
    Quote Quote  
  22. Member hydra3333's Avatar
    Join Date
    Oct 2009
    Location
    Australia
    Search Comp PM
    True, piping is an option. Great examples in your links by the way !

    I use home-grown ffmpeg builds with extra dependencies including vapoursynth/avisynth/openCL/vulkan etc.
    I gravitated to that without piping on an assumption that no piping meant less overhead, rightly or wrongly.
    It works a treat even with the portable versions of vapoursynth and python.

    Yes, here's the hack script with the -i *.vpy
    Code:
    @ECHO ON
    @setlocal ENABLEDELAYEDEXPANSION
    @setlocal enableextensions
    
    "C:\SOFTWARE\MediaInfo\MediaInfo.exe" --full "G:\Family_Photos\2010.IMG_1190.JPG"
    "C:\SOFTWARE\Vapoursynth-x64\ffprobe.exe" "G:\Family_Photos\2010.IMG_1190.JPG"
    
    "C:\SOFTWARE\Vapoursynth-x64\ffmpeg_OpenCL.exe" -hide_banner -v verbose ^
    -f vapoursynth -i "G:\HDTV\TEST\_AI_\_AI_01_no_crossfade.vpy" -an ^
    -map 0:v:0 ^
    -vf "setdar=16/9" ^
    -fps_mode passthrough ^
    -sws_flags lanczos+accurate_rnd+full_chroma_int+full_chroma_inp ^
    -strict experimental ^
    -c:v h264_nvenc -pix_fmt nv12 -preset p7 -multipass fullres -forced-idr 1 -g 25 ^
    -coder:v cabac -spatial-aq 1 -temporal-aq 1 ^
    -dpb_size 0 -bf:v 3 -b_ref_mode:v 0 ^
    -rc:v vbr -cq:v 0 -b:v 7000000 -minrate:v 100000 -maxrate:v 9000000 -bufsize 9000000 ^
    -profile:v high -level 5.2 ^
    -movflags +faststart+write_colr ^
    -y "G:\HDTV\TEST\_AI_\_AI_01_no_crossfade.mp4"
    
    pause
    exit
    Here's the previous non-vapoursynth concat-based ffmpeg command-line hack (which didn't handle rotation), just for reference.
    Code:
    @ECHO on
    @setlocal ENABLEDELAYEDEXPANSION
    @setlocal enableextensions
    Set "slideshow_ffmpegexe64=C:\SOFTWARE\Vapoursynth-x64\ffmpeg_OpenCL.exe"
    Set "slideshow_mediainfoexe64=C:\SOFTWARE\MediaInfo\MediaInfo.exe"
    Set "slideshow_ffprobeexe64=C:\SOFTWARE\Vapoursynth-x64\ffprobe_OpenCL.exe"
    Set "slideshow_Insomniaexe64=C:\SOFTWARE\Insomnia\64-bit\Insomnia.exe"
    REM
    REM https://forum.videohelp.com/threads/408230-ffmpeg-avc-from-jpgs-of-arbitrary-dimension...io#post2678121
    G:
    cd G:\HDTV\TEST
    
    call :maketempheader
    ECHO after call --- !COMPUTERNAME! !DATE! !TIME! tempheader="!tempheader!"
    
    set "ffmpeg_concat_input_file=.\ffmpeg_concat-!tempheader!-input.txt"
    set "ffmpeg_concat_output_file=.\ffmpeg_concat-!tempheader!-output.mp4"
    set "ffmpeg_concat_log_file=.\ffmpeg_concat-!tempheader!-log.log"
    DEL /F "!ffmpeg_concat_input_file!"
    REM overwrite any existing ffmpeg_concat_input_file
    REM in the echo, no extra characters or even a space !
    echo ffconcat version 1.0> "!ffmpeg_concat_input_file!"
    REM dir /s /b /a:-d G:\Family_Photos\*.jp*g >> "!ffmpeg_concat_input_file!"
    REM for /R G:\Family_Photos %%parameter %%G IN (.) DO (echo %%G)
    
    for /f "tokens=*" %%G in ('dir /b /s /a:-d "G:\HDTV\TEST\Family_Photos\"') DO (
    	REM first, "escape" all backslashes in the full path name
    	set "x=%%G"
    	set "x=!x:\=\\!"
    	set "xe=!x::=\:!"
    	echo file !x!>> "!ffmpeg_concat_input_file!"
    	echo file_packet_meta img_source_unescaped "%%G">> "!ffmpeg_concat_input_file!"
    	echo file_packet_meta img_source_escaped "!xe!">> "!ffmpeg_concat_input_file!"
    )
    REM type "!ffmpeg_concat_input_file!"
    
    REM set the bitrates
    set /a "bitrate_target=8000000"
    set /a "bitrate_min=!bitrate_target! / 10"
    set /a "tmp=!bitrate_min! * 3"
    set /a "bitrate_max=!bitrate_target! + !tmp!"
    set /a "bitrate_bufsize=!bitrate_max! * 2"
    REM set the time base (PAL country)
    set /a "timebase_numerator=1" 
    set /a "timebase_denominator=25"
    REM set picture duration via the PTS as an integer multiple of the timebase_denominator, so 1*25 = 25 (1 second)
    set /a "picture_duration=1"
    set /a "gop_size=!timebase_denominator!"
    REM changed to format=nv12 from format=yuv420p
    REM assume -temporal-aq 1 helps, since we run a second or so of frames all containing same image in all those frames
    REM this version of the command is preferred in https://trac.ffmpeg.org/wiki/Slideshow
    REM 	-vf "scale=1920:1080:eval=frame:flags=lanczos+accurate_rnd+full_chroma_int+full_chroma_inp:force_original_aspect_ratio=decrease:out_color_matrix=bt709:out_range=full,pad=1920:1080:(ow-iw)/2:(oh-ih)/2:eval=frame:color=black,settb=expr=!timebase_numerator!/!timebase_denominator!,setpts=!picture_duration!*N/TB,drawtext=box=0:fontsize=30:text='Frame %%{frame_num}':x=(w-text_w)/2:y=(h-text_h)/2:fix_bounds=1:fontcolor=black,setdar=16/9,format=nv12" ^
    set "cmd2="
    set "cmd2=!cmd2!"!slideshow_ffmpegexe64!" "
    set "cmd2=!cmd2! -hide_banner"
    set "cmd2=!cmd2! -stats"
    REM set "cmd2=!cmd2! -v debug"
    set "cmd2=!cmd2! -reinit_filter 0 -safe 0 -auto_convert 1"
    set "cmd2=!cmd2! -f concat -i "!ffmpeg_concat_input_file!""
    set "cmd2=!cmd2! -sws_flags lanczos+accurate_rnd+full_chroma_int+full_chroma_inp"
    set "cmd2=!cmd2! -vf ""
    set "cmd2=!cmd2!scale=1920:1080:eval=frame:flags=lanczos+accurate_rnd+full_chroma_int+full_chroma_inp:force_original_aspect_ratio=decrease:out_color_matrix=bt709:out_range=full,"
    set "cmd2=!cmd2!pad=1920:1080:(ow-iw)/2:(oh-ih)/2:eval=frame:color=black,"
    set "cmd2=!cmd2!settb=expr=!timebase_numerator!/!timebase_denominator!,"
    set "cmd2=!cmd2!setpts=!picture_duration!*N/TB,"
    set "cmd2=!cmd2!drawtext=box=0:fontsize=30:text='Frame %%{frame_num}':x=(w-text_w)/2:y=(h-text_h)/2:fix_bounds=1:fontcolor=black,"
    set "cmd2=!cmd2!setdar=16/9,"
    set "cmd2=!cmd2!format=yuvj420p"
    set "cmd2=!cmd2!""
    set "cmd2=!cmd2! -c:v h264_nvenc -pix_fmt nv12 -preset p7 -multipass fullres"
    set "cmd2=!cmd2! -forced-idr 1 -g !gop_size!"
    set "cmd2=!cmd2! -coder:v cabac -spatial-aq 1 -temporal-aq 1 -dpb_size 0"
    set "cmd2=!cmd2! -bf:v 0 -b_ref_mode:v 0 -rc:v vbr -cq:v 0 -b:v %bitrate_target% -minrate:v %bitrate_min% -maxrate:v %bitrate_max% -bufsize %bitrate_bufsize%"
    set "cmd2=!cmd2! -profile:v high -level 5.2"
    set "cmd2=!cmd2! -movflags +faststart+write_colr"
    set "cmd2=!cmd2! -y "!ffmpeg_concat_input_file!.try1.mp4""
    echo !cmd2!
    echo !cmd2! >>"!ffmpeg_concat_log_file!" 2>&1
    !cmd2! >>"!ffmpeg_concat_log_file!" 2>&1
    
    pause
    exit
    
    REM +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    REM --- start set a temp header to date and time
    :maketempheader
    set "Datex=%DATE: =0%"
    set yyyy=!Datex:~10,4!
    set mm=!Datex:~7,2!
    set dd=!Datex:~4,2!
    set "Timex=%time: =0%"
    set hh=!Timex:~0,2!
    set min=!Timex:~3,2!
    set ss=!Timex:~6,2!
    set ms=!Timex:~9,2!
    ECHO !DATE! !TIME! As at !yyyy!.!mm!.!dd!_!hh!.!min!.!ss!.!ms!  COMPUTERNAME="!COMPUTERNAME!"
    set tempheader=!yyyy!.!mm!.!dd!.!hh!.!min!.!ss!.!ms!-!COMPUTERNAME!
    REM --- end set a temp header to date and time
    goto :eof
    REM +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
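For reference, the escaping that the batch loop above performs when writing the ffconcat file can be sketched in Python (hypothetical helper names): ffconcat treats backslash as an escape character, so every backslash in a Windows path is doubled, and colons are additionally escaped for the metadata value.

```python
def ffconcat_escape(path: str) -> str:
    # double every backslash, as in: set "x=!x:\=\\!"
    return path.replace("\\", "\\\\")

def ffconcat_escape_colons(path: str) -> str:
    # additionally escape colons, as in: set "xe=!x::=\:!"
    return ffconcat_escape(path).replace(":", "\\:")

# build the same three lines per file that the batch loop emits
p = r"G:\Family_Photos\2010.IMG_1190.JPG"
lines = [
    f"file {ffconcat_escape(p)}",
    f'file_packet_meta img_source_unescaped "{p}"',
    f'file_packet_meta img_source_escaped "{ffconcat_escape_colons(p)}"',
]
print("\n".join(lines))
```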
  23. thanks,
    but as you call it, batch scripts always look as if you are hacking something together, and it even feels that way.
  24. that pip.pyz is amazing, thanks for the links
    I installed that Pillow, also numpy and opencv, but just could not install tkinter; tkinter is not bundled in portable python for some reason
  25. Member hydra3333's Avatar
    Trying to display parts of the image name at the bottom right, closer to the bottom than core.text.Text puts it, and blended in like an opaque logo.

    Code:
    this_clip = core.text.Text(this_clip, subtitle_path, alignment=3, scale=1)
    That works, however

    Code:
    this_clip = core.sub.Subtitle(clip=this_clip, text=r'2222')
    aborts with
    Property read unsuccessful due to out of bounds index but no error output: clip
    no matter what I try.
    https://amusementclub.github.io/doc3/plugins/subtext.html

    Can't seem to get the margins in AssRender to do what I want; hmm, it always appears at top left regardless of left=, top=, etc.
    https://github.com/AmusementClub/assrender
    Code:
    this_clip = core.assrender.Subtitle(this_clip, subtitle_path + " with T=-500", colorspace="BT.709", top=-500)
    Suggestions welcomed.
    Last edited by hydra3333; 19th Jan 2023 at 06:59.
  26. Member hydra3333's Avatar
    Never mind.
    Tried aegisub-procles in an ubuntu VM to see what the various .ass parameters did.
    Discovered what some of the style= settings do, and that they can be used in the "assrender.Subtitle" call per https://github.com/AmusementClub/assrender

    Could try ass tags in the text=, however I will try a custom style= since that's more in your face.
    Also nice:
    start, end: Subtitle display time, start frame number and end frame’s next frame number, it will trim like [start:end], default is all frames of clip.
    which hopefully means I can use frame numbers in "assrender.Subtitle" instead of having to calculate timecodes.

    From the aegisub fiddling, the resulting style string looks like:
    Code:
    [V4+ Styles]
    Format: Name, Fontname, Fontsize, PrimaryColour, SecondaryColour, OutlineColour, BackColour, Bold, Italic, Underline, StrikeOut, ScaleX, ScaleY, Spacing, Angle, BorderStyle, Outline, Shadow, Alignment, MarginL, MarginR, MarginV, Encoding
    Style: h3333,Arial,15,&H00FFFFFF,&H000000FF,&H00000000,&H00000000,0,0,0,0,100,100,0,0,1,0.9,0.5,3,1,1,1,1
    which may, I hope, be recycled to do the trick:
    Code:
    style="Arial,15,&H00FFFFFF,&H000000FF,&H00000000,&H00000000,0,0,0,0,100,100,0,0,1,0.9,0.5,3,1,1,1,1"
    or perhaps with sans-serif per https://github.com/AmusementClub/assrender
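As a readability aid (a hypothetical helper, not part of assrender), the positional style string can be zipped against the [V4+ Styles] Format fields, minus Name, which assrender's style= omits:

```python
# [V4+ Styles] Format fields, minus Name which assrender's style= omits
ASS_FIELDS = ("Fontname,Fontsize,PrimaryColour,SecondaryColour,OutlineColour,"
              "BackColour,Bold,Italic,Underline,StrikeOut,ScaleX,ScaleY,"
              "Spacing,Angle,BorderStyle,Outline,Shadow,Alignment,"
              "MarginL,MarginR,MarginV,Encoding").split(",")

def parse_style(style: str) -> dict:
    # pair each positional value with its field name
    return dict(zip(ASS_FIELDS, style.split(",")))

s = parse_style("sans-serif,18,&H00FFFFFF,&H000000FF,&H00000000,&H00000000,"
                "0,0,0,0,100,100,0,0,1,0.9,0.5,3,1,1,1,1")
print(s["Alignment"])  # "3" = bottom right in the ASS numpad layout
```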

    Cheers.

    edit: yes this worked a treat:
    Code:
    this_clip = core.assrender.Subtitle(this_clip, text_subpath_for_subtitles, style="sans-serif,18,&H00FFFFFF,&H000000FF,&H00000000,&H00000000,0,0,0,0,100,100,0,0,1,0.9,0.5,3,1,1,1,1", frame_width=TARGET_WIDTH, frame_height=TARGET_HEIGHT, colorspace=TARGET_COLOURSPACE)
    Last edited by hydra3333; 20th Jan 2023 at 00:36.
  27. Member hydra3333's Avatar
    Originally Posted by _Al_ View Post
    mediainfo unfortunately cannot deal with EXIF data, so I guess pymediainfo cannot deal with it either, since it uses MediaInfo.dll. I had PIL already installed so I gave it a shot and it worked.
    Also, to recognize a rotation for mp4 and mov as Selur suggested, using mediainfo while bypassing pymediainfo (using the MediaInfoDLL3.py and MediaInfo.dll that both come with the mediainfo developer package), I came up with this so far:
    Thanks !
    Will look at this next.
    A pity mediainfo/MediaInfoDLL3 isn't in vsrepo, so thanks for the instructions.

    edit:
    Originally Posted by _Al_ View Post
    Also, to recognize a rotation for mp4 and mov as Selur suggested, using mediainfo
    A rookie mistake, I'd hoped mediainfo dealt with images too, but no. PIL finds image rotations whereas mediainfo doesn't.
    So I'll have to use PIL for images and mediainfo for videos.
    Cheers.
    Last edited by hydra3333; 20th Jan 2023 at 10:43.
  28. Member hydra3333's Avatar
    OK, here's a terribly fudged mangle of the good tidy work of _Al_ in this thread ...
    Much appreciation to _Al_.

    This .vpy depends on being a direct input to ffmpeg, and cannot be run standalone afaik.
    A lot of debug code is intentionally left in, but commented out.
    On a 3900X, an NVEnc encode using this .vpy as input crawls along at circa 1.1x to 2.0x speed on a combo of 1944 mixed images/videos.

    I guess since I currently have 95,293 home images/videos organised in 1,485 date-named subfolders,
    I may set it running in a variety of subfolder trees to create a set of video slideshows
    covering time periods.

    Damn digital cameras and then phones with cameras.
    A family tends to happy snap thinking "We'll enjoy seeing these one day".
    No you won't
    Perhaps other families cull them right back or stick near the storage limits on their phones.

    Given the number of files, and thus very large viewing run-times,
    an objective may be to set image display durations to say 0.75 s
    and call it "getting a flavour" of histories in time periods.
    Even at 0.75 s per image, it's circa 20 hours, ignoring the impact of home video clips.
    Encoding times seem likely to be around half to 2/3 of that.

    Don't really know how to approach it, otherwise.
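The "circa 20 hours" figure checks out as simple arithmetic (ignoring the home video clips, which run longer than stills):

```python
# 95,293 items shown for 0.75 s each, ignoring the longer video clips
n_items = 95_293
display_sec = 0.75
total_hours = n_items * display_sec / 3600
print(f"{total_hours:.1f} hours")  # about 19.9 hours
```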

    Code:
    # PYTHON3
    # Version:
    #	as at 2023.01.20
    #
    # Description:
    #	Attempt to create a HD video slideshow of images and hopefully video clips from a folder tree.
    #	Does 8-bit only, does not handle HDR conversions etc.
    #	This script is consumed directly by ffmpeg as a .vpy input file, eg '-f vapoursynth -i "G:\folder\filename.vpy"'
    #
    # Acknowledgements:
    #	With all due respect to _Al_
    #	Original per _Al_ as updated in this thread and below
    #		https://forum.videohelp.com/threads/408230-ffmpeg-avc-from-jpgs-of-arbitrary-dimensions-maintaining-aspect-ratio#post2678241
    #
    # Environment:
    #	Not wanting to run installers (well, I don't, anyhow) ... one has to setup a suitable environment ... eg
    #		portable python into a nominated folder
    #		portable vapoursynth overlaid in the same folder
    #		an ffmpeg build with options for vapoursynth and NVenc enabled, copied into the same folder
    #		portable pip downloaded into the same folder
    #		a pip install of Pillow (refer below)
    #		Donald Graft's DGDecNV extracted into the subfolder DGIndex (refer below)
    #		suitable filters (refer below)
    #	Thread for interest https://forum.videohelp.com/threads/408230-ffmpeg-avc-from-jpgs-of-arbitrary-dimensions-maintaining-aspect-ratio#post2678484
    #
    # Filters:
    #	Choose your own vapoursynth Filters, however here's the ones (actually or potentially) currently used in this script ...
    #	Filter Dependencies with example installs into portable vapoursynth x64:
    #		cd C:\SOFTWARE\Vapoursynth-x64\
    #		REM after vsrepo.py and vsrupdate.py from https://github.com/vapoursynth/vsrepo into .\
    #		.\python.exe .\vsrepo.py -p -t win64 -f -b vapoursynth64\plugins -s vapoursynth64\scripts install "FFTW3 Library"
    #		.\python.exe .\vsrepo.py -p -t win64 -f -b vapoursynth64\plugins -s vapoursynth64\scripts install AssRender
    #		.\python.exe .\vsrepo.py -p -t win64 -f -b vapoursynth64\plugins -s vapoursynth64\scripts install LSMASHSource
    #		.\python.exe .\vsrepo.py -p -t win64 -f -b vapoursynth64\plugins -s vapoursynth64\scripts install FFmpegSource2
    #		.\python.exe .\vsrepo.py -p -t win64 -f -b vapoursynth64\plugins -s vapoursynth64\scripts install imwri
    #	And then mediainfo readings for a media file using MediaInfo.dll
    #		REM per https://forum.videohelp.com/threads/408230-ffmpeg-avc-from-jpgs-of-arbitrary-dimensions-maintaining-aspect-ratio#post2678372
    #		cd C:\SOFTWARE\Vapoursynth-x64\
    #		pushd C:\TEMP
    #		del MediaInfo*.dll
    #		del MediaInfo*.py
    #		del MediaInfo*.zip
    #		REM check for latest version per REM https://mediaarea.net/en/MediaInfo/Download/Windows
    #		set f="MediaInfo_DLL_22.12_Windows_x64_WithoutInstaller"
    #		c:\software\wget\wget.exe -v -t 1 --server-response --timeout=360 -nd -np -nH --no-cookies --output-document="%f%.zip" "https://mediaarea.net/download/binary/libmediainfo0/22.12/%f%.zip" 
    #		"C:\Program Files\WinZip\WZUNZIP.EXE" -e -o -^^ "%f%.zip" "C:\SOFTWARE\Vapoursynth-x64\" MediaInfo.dll Developers\Source\MediaInfoDLL\MediaInfoDLL.py Developers\Source\MediaInfoDLL\MediaInfoDLL3.py
    #		popd
    #		copy /b /y /z ".\MediaInfo*.py" vapoursynth64\scripts
    #	And then DGDenoise as a part of DGDecodeNV in the DGDecNV package which is Donald Graft's very handy GPU-accelerated toolset 
    #		per https://www.rationalqm.us/dgdecnv/dgdecnv.html and https://www.rationalqm.us/board/viewforum.php?f=8
    #			which can be installed by extracting dgdecnv_???.zip into C:\SOFTWARE\Vapoursynth-x64\DGIndex\ per LoadPlugin usage below.
    #	Finally
    #		copy /Y /V vapoursynth64\scripts\*.py .\
    #		.\python.exe .\vsrepo.py -p -t win64 -f -b vapoursynth64\plugins -s vapoursynth64\scripts installed | SORT > .\run_vsrepo_installed.txt
    #		.\python.exe .\vsrepo.py -p -t win64 -f -b vapoursynth64\plugins -s vapoursynth64\scripts available | SORT > .\run_vsrepo_available.txt
    #
    # Usage:
    #	Example usage with ffmpeg built with vapoursynth and NVEnc options enabled and using this .vpy as an input ...
    #		set "f=_AI_03_no_crossfade"
    #		"C:\SOFTWARE\Vapoursynth-x64\ffmpeg_OpenCL.exe" -hide_banner -v verbose ^
    #			-f vapoursynth -i "G:\HDTV\TEST\_AI_\!f!.vpy" -an ^
    #			-map 0:v:0 ^
    #			-vf "setdar=16/9" ^
    #			-fps_mode passthrough ^
    #			-sws_flags lanczos+accurate_rnd+full_chroma_int+full_chroma_inp ^
    #			-strict experimental ^
    #			-c:v h264_nvenc -pix_fmt nv12 -preset p7 -multipass fullres -forced-idr 1 -g 25 ^
    #			-coder:v cabac -spatial-aq 1 -temporal-aq 1 ^
    #			-dpb_size 0 -bf:v 3 -b_ref_mode:v 0 ^
    #			-rc:v vbr -cq:v 0 -b:v 3500000 -minrate:v 100000 -maxrate:v 9000000 -bufsize 9000000 ^
    #			-profile:v high -level 5.2 ^
    #			-movflags +faststart+write_colr ^
    #			-y "G:\HDTV\TEST\_AI_\!f!.mp4"
    
    import vapoursynth as vs
    from vapoursynth import core
    from functools import partial
    from pathlib import Path, PureWindowsPath
    from ctypes import *		# for mediainfo ... load via ctypes.CDLL('.\MediaInfo.dll')
    from typing import Union	# for mediainfo
    import itertools
    import math
    import sys
    import os
    import glob
    # To install Pillow in portable Python using Portable pip
    # see https://pip.pypa.io/en/latest/installation/  for the portable version of pip, then eg
    # cd C:\SOFTWARE\Vapoursynth-x64
    # c:\software\wget\wget.exe -v -t 1 --server-response --timeout=360 -nd -np -nH --no-cookies --output-document="pip.pyz" "https://bootstrap.pypa.io/pip/pip.pyz"
    # python pip.pyz --help
    # python pip.pyz install --target .\ Pillow --force-reinstall --upgrade --upgrade-strategy eager  --verbose 
    from PIL import Image, ExifTags, UnidentifiedImageError
    from PIL.ExifTags import TAGS
    core.std.LoadPlugin(r'C:\SOFTWARE\Vapoursynth-x64\DGIndex\DGDecodeNV.dll') 	# note the hard-coded folder
    core.avs.LoadPlugin(r'C:\SOFTWARE\Vapoursynth-x64\DGIndex\DGDecodeNV.dll') 	# note the hard-coded folder
    CDLL(r'C:\SOFTWARE\Vapoursynth-x64\MediaInfo.dll')				# per https://forum.videohelp.com/threads/408230-ffmpeg-avc-from-jpgs-of-arbitrary-dimensions-maintaining-aspect-ratio#post2678372
    from MediaInfoDLL3 import MediaInfo, Stream, Info, InfoOption	# per https://forum.videohelp.com/threads/408230-ffmpeg-avc-from-jpgs-of-arbitrary-dimensions-maintaining-aspect-ratio#post2678372
    
    DIRECTORY = r'G:\HDTV\TEST\_AI_\test_images'	# note the hard-coded folder we are going to process
    ########################################################################################################################################################
    # We could read the folder name to process from a single line in file named ".\_AI_folder_to_process.txt"
    # instead of it being the fixed string DIRECTORY above.
    # Perhaps we should leave this un-commented since a missing file will mean just using the default DIRECTORY
    #
    DEFAULT_FILE_SPECIFYING_A_FOLDER_TO_PROCESS = r'.\_AI_folder_to_process.txt' # a raw string, so no need to escape the backslashes in the path
    try:
    	txtfile = open(DEFAULT_FILE_SPECIFYING_A_FOLDER_TO_PROCESS, 'r')
    except OSError as err:
    	#print(f'DEBUG: cannot open file {DEFAULT_FILE_SPECIFYING_A_FOLDER_TO_PROCESS}')
    	#print(f"DEBUG: Unexpected {err=}, {type(err)=}")
    	#raise
    	pass	# a missing file just means the default DIRECTORY above is used
    else:
    	file_lines = txtfile.readlines()
    	txtfile.close()	# close in every path, not just the success path
    	if len(file_lines) != 1:
    		#print(f'DEBUG: {DEFAULT_FILE_SPECIFYING_A_FOLDER_TO_PROCESS} has {len(file_lines)} lines; it should have exactly "1"')
    		pass
    	else:
    		DIRECTORY = file_lines[0].rstrip('\n')
    		print(f'INFO: As read from "{DEFAULT_FILE_SPECIFYING_A_FOLDER_TO_PROCESS}" the incoming folder to process="{DIRECTORY}"')
    ########################################################################################################################################################
    
    RECURSIVE			= True						# iterate all subfolders as well
    PIC_EXTENSIONS		= [".png", ".jpg", ".jpeg", ".gif"]								# always lower case
    VID_EXTENSIONS		= [".mp4", ".mpeg4", ".mpg", ".mpeg", ".avi", ".mjpeg", ".3gp"]	# always lower case
    EEK_EXTENSIONS		= [".m2ts"]														# always lower case
    EXTENSIONS			= PIC_EXTENSIONS + VID_EXTENSIONS + EEK_EXTENSIONS
    VID_EEK_EXTENSIONS	= VID_EXTENSIONS + EEK_EXTENSIONS
    
    TARGET_PIXEL_FORMAT	= vs.YUV420P8				# pixel format of the target video
    DG_PIXEL_FORMAT		= vs.YUV420P16				# pixel format of the video for use by DG tools
    TARGET_COLOURSPACE_MATRIX = r'709'				# HD, used by resize filter
    TARGET_COLOURSPACE	= r'BT.709'					# HD, used by assrender.Subtitle filter
    TARGET_WIDTH		= 1920						# target width,  watch for subsampling, it has to fit
    TARGET_HEIGHT		= 1080						# target height, watch for subsampling, it has to fit
    TARGET_FPSNUM		= 25						# for fps numerator		... PAL world bias
    TARGET_FPSDEN		= 1							# for fps denominator	... PAL world bias
    TARGET_FPS			= round(TARGET_FPSNUM / TARGET_FPSDEN,3)
    DURATION_SEC		= 1							# seconds duration for images not videos
    BLANK_CLIP_LENGTH	= int(math.ceil(0.1*TARGET_FPS))	# leading and trailing blank clip duration in frames with round-up. int(round(0.1*TARGET_FPS)) will round up/down
    MIN_DURATION_SEC	= 0.75											### duration of display of an image, in seconds
    MIN_DURATION_FRAMES	= int(math.ceil(MIN_DURATION_SEC * TARGET_FPS))	### duration of display of an image, in frames
    MAX_DURATION_SEC	= 5											### max duration of display of a video, in seconds
    MAX_DURATION_FRAMES	= int(math.ceil(MAX_DURATION_SEC * TARGET_FPS))	### max duration of display of a video, in frames
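    # Worked example of the seconds-to-frames rounding above: math.ceil rounds
    # a partial frame up so a still is never shown short of its duration,
    # e.g. 0.75 s * 25 fps = 18.75 -> 19 frames, and 5 s * 25 fps -> 125 frames.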
    #
    CROSS_DUR			= 0							# crossfade duration in frames, eg 5; 0 means no crossfade (0 looks good and doesn't chew extra space)
    #CROSS_DUR			= 5							# crossfade duration in frames, eg 5; 0 means no crossfade
    BOX					= True						# True would initiate letterboxing or pillarboxing. False fills to TARGET_WIDTH,TARGET_HEIGHT
    ADD_SUBTITLE		= True						# True adds a subtitle in the bottom right corner containing the last few parts of the path to the image/video
    DOT_FFINDEX			= ".ffindex".lower()		# for removing temporary *.ffindex files at the end
    #
    UPSIZE_KERNEL   = 'Lanczos'
    DOWNSIZE_KERNEL = 'Spline36'
    #
    MODX				= 2	   # mods for letterboxing calculations, example, for 411 YUV as an extreme
    MODY				= 2	   # mods would have to be MODX=4, MODY=1 as minimum
    #
    MI					= MediaInfo()	# initialize per https://forum.videohelp.com/threads/408230-ffmpeg-avc-from-jpgs-of-arbitrary-dimensions-maintaining-aspect-ratio#post2678372
    
    ###
    def mediainfo_value(stream:int, track:int, param:str, path: Union[Path,str]) -> Union[int,float,str]:
    	# NOTE: global MI is already setup as if a "constant" global variable
        if not stream in range(0,8):
            raise ValueError(f'ERROR: mediainfo_value: stream must be a Stream attribute: General, Video, Audio, Text, Other, Image, Menu, Max')
        if not isinstance(track, int) or track<0:
            raise ValueError(f'ERROR: mediainfo_value: track must be a positive integer')
        if not isinstance(param, str):
            raise ValueError(f'ERROR: mediainfo_value: param must be a string for particular stream, print(MI.Option_Static("Info_Parameters")')
        if not isinstance(path, (Path, str)):
            raise ValueError(f'ERROR: mediainfo_value: path must be Path or str class')    
        MI.Open(str(path))
        str_value = MI.Get(stream, track, param)
        info_option =  MI.Get(stream, track, param, InfoKind=Info.Options)
        MI.Close()
        if not str_value:
            return None
        if info_option:
            #returning a proper value type, int, float or str for particular parameter
            type_ = info_option[InfoOption.TypeOfValue] #type_=info_option[3] #_type will be 'I', 'F', 'T', 'D' or 'B'
            val = {'I':int, 'F':float, 'T':str, 'D':str, 'B':str}[type_](str_value)
            return val
        else:
            raise ValueError(f'ERROR: mediainfo_value: wrong parameter: "{param}" for given stream: {stream}')
    
    ###
    def boxing(clip, W=TARGET_WIDTH, H=TARGET_HEIGHT):
    	source_width, source_height = clip.width, clip.height
    	if W/H > source_width/source_height:
    		w = source_width*H/source_height
    		x = int((W-w)/2)
    		x = x - x%MODX
    		x = max(0, min(x,W))
    		clip = resize_clip(clip, W-2*x, H)
    		if x: return clip.std.AddBorders(left=x, right=x, color=(16,128,128))  #RGB is out then (16,16,16)
    		else: return clip
    	else:
    		h = source_height*W/source_width
    		y = int((H-h)/2)
    		y = y - y%MODY
    		y = max(0, min(y,H))
    		clip = resize_clip(clip, W, H-2*y)
    		if y: return clip.std.AddBorders(top=y, bottom=y, color=(16,128,128))
    		else: return clip
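    # Worked example of the pillarbox branch above: a portrait 1080x1920 source
    # into 1920x1080 gives w = 1080*1080/1920 = 607.5, x = int(656.25) = 656
    # (already a multiple of MODX), so the image is resized to 608x1080 and
    # 656-pixel black YUV borders (16,128,128) are added left and right.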
    
    ###
    def resize_clip(clip,w,h, W=TARGET_WIDTH, H=TARGET_HEIGHT):
    	if w>W or h>H:	resize = getattr(clip.resize, DOWNSIZE_KERNEL)	# get the resize function object ?handle? with the nominated kernel
    	else:			resize = getattr(clip.resize, UPSIZE_KERNEL)	# get the resize function object ?handle? with the nominated kernel
    	if clip.format.color_family==vs.RGB:
    		#rgb to YUV, perhaps only for png images, figure out what matrix out is needed ... use the HD one REC.709
    		#print("DEBUG: clip.format.color_family==vs.RGB")
    		c =  resize(width=w, height=h, format=TARGET_PIXEL_FORMAT, matrix_s='709')
    		return c
    	else:
    		#YUV to YUV
    		#print("DEBUG: clip.format.color_family==vs.YUV?")
    		c = resize(width=w, height=h, format=TARGET_PIXEL_FORMAT)
    		# AH !!! the next line with matrix_s='709' can cause this:
    		#		Error getting frame: Resize error: Resize error 3074: no path between colorspaces (2/2/2 => 1/2/2).
    		# it seems missing "Matrix coefficients" metadata in the source may be the culprit
    		#c = resize(width=w, height=h, format=TARGET_PIXEL_FORMAT, matrix_s='709')
    		return c
    
    ###
    def get_clip(path):
    	if path.suffix.lower()   in EEK_EXTENSIONS:
    		#print(f'DEBUG: get_clip: lsmas Video: {path.name}')
    		clip = core.lsmas.LWLibavSource(str(path))
    		#print(f'DEBUG: get_clip: Video info:\n{clip}')
    	elif path.suffix.lower() in VID_EXTENSIONS:
    		#print(f'DEBUG: get_clip: ffms2 Video: {path.name}')
    		clip = core.ffms2.Source(str(path))	#ffms2 leaves *.ffindex files everywhere in folders.
    		#clip = core.lsmas.LibavSMASHSource(str(path))
    		#print(f'DEBUG: get_clip: Video info:\n{clip}')
    	elif path.suffix.lower() in PIC_EXTENSIONS:
    		#print(f'DEBUG: get_clip: ffms2 Video: {path.name}')
    		clip = core.ffms2.Source(str(path))
    	#	clip = core.imwri.Read(str(path)) # ImageWriter, if installed into vapoursynth folder
    		#print(f'DEBUG: get_clip: Video info:\n{clip}')
    	else:
    		#print(f'DEBUG: get_clip: ffms2 Video: {path.name}')
    		clip = core.ffms2.Source(str(path))	# if file extension not recognised, use this reader
    		#print(f'DEBUG: get_clip: Video info:\n{clip}')
    
    	# check for any picture/video rotation specified (perhaps in EXIF) which is not auto-processed here by the file openers
    	if path.suffix.lower()   in VID_EEK_EXTENSIONS:
    		clip = rotation_check_MediaInfo(clip, path, save_rotated_image=False)	# per https://forum.videohelp.com/threads/408230-ffmpeg-avc-from-jpgs-of-arbitrary-dimensions-maintaining-aspect-ratio#post2678326
    	else: # source is not a video type, i.e. an image
    		clip = rotation_check_PIL(clip, path, save_rotated_image=False)	# per https://forum.videohelp.com/threads/408230-ffmpeg-avc-from-jpgs-of-arbitrary-dimensions-maintaining-aspect-ratio#post2678326
    
    	# do video-specific or picture-specific changes
    	if path.suffix.lower()   in VID_EEK_EXTENSIONS:	#if any sort of video, probably an old hand-held camera or phone, sometimes variable fps ...
    		#print(f'DEBUG: get_clip: opened Video: {path.name}')
    		#print(f'DEBUG: get_clip: opened Video: {path.name}\nVideo info:\n{clip}')
    		#clip = core.text.Text(clip, path.name, alignment=3, scale=1)
    		#clip = core.text.FrameNum(clip, alignment=5, scale=1)
    		#clip = core.text.ClipInfo(clip, alignment=8, scale=1)
    		#clip = core.text.FrameProps(clip, alignment=2, scale=1)
    		source_fpsnum = clip.fps.numerator		# eg 25	# numerator   is 0 when the clip has a variable framerate.
    		source_fpsden = clip.fps.denominator	# eg 1	# denominator is 1 when the clip has a variable framerate.
    		source_fps = round(source_fpsnum / source_fpsden,3)
    		source_duration_frames = clip.num_frames
    		source_duration_secs = round((source_duration_frames / source_fps),3)
    		source_width, source_height = clip.width, clip.height
    		#print(f'DEBUG: get_clip: {source_width}x{source_height}\nsource_fpsnum:{source_fpsnum} source_fpsden:{source_fpsden}\nsource_fps:{source_fps}\nsource_duration_frames:{source_duration_frames}\nsource_duration_secs:{source_duration_secs}')
    		# change framerate ? too hard for a small simple video sample, just "assume" target fps and ignore consequences of speed-up or slow-down or VFR
    		clip = clip.std.AssumeFPS(fpsnum=TARGET_FPSNUM, fpsden=TARGET_FPSDEN)
    		# if duration is greater than our review maximum, trim it
    		if source_duration_frames>(MAX_DURATION_FRAMES-1):
    			clip = core.std.Trim(clip, first=0, last=(MAX_DURATION_FRAMES-1))
    		# denoise ANY "small" dimension video clips, older videos likely to be noisy
    		if source_width<TARGET_WIDTH or source_height<TARGET_HEIGHT:	 
    			#print(f'INFO: applying DGDenoise to small {source_width}x{source_height} video {path.name}')
    			# clip must be YUV420P16 for DGDenoise etc
    			clip = clip.resize.Point(format=DG_PIXEL_FORMAT)		# convert to DG_PIXEL_FORMAT via resizer which does no resizing
    			#clip = core.avs.DGDenoise(clip, strength=0.06, cstrength=0.06)
    			clip = core.avs.DGDenoise(clip, strength=0.15, cstrength=0.15)
    			clip = clip.resize.Point(format=TARGET_PIXEL_FORMAT)	# convert to TARGET_PIXEL_FORMAT via resizer which does no resizing
    	else: # source is not a video type, i.e. an image
    		# extend duration of a clip of an image
    		clip = clip[0]*MIN_DURATION_FRAMES if len(clip)<MIN_DURATION_FRAMES else clip # make clip at least MIN_DURATION_FRAMES frames long if less than MIN_DURATION_FRAMES frames
    	
    	# either add borders to maintain aspect ratio (boxing), or just stretch to fit (yuk)
    	if BOX:
    		clip = boxing(clip, TARGET_WIDTH, TARGET_HEIGHT)
    	else:
    		clip = resize_clip(clip, TARGET_WIDTH, TARGET_HEIGHT)
    
    	# Add a subtitle being the trailing 3 parts of the path
    	if ADD_SUBTITLE:	# Add a subtitle being the trailing 3 parts of the path
    		pwp = PureWindowsPath(path)
    		n = len(pwp.parts)
    		text_subpath_for_subtitles = "/" + pwp.parts[n-3] + "/" + pwp.parts[n-2] + "/" +  pwp.parts[n-1]
    		# To tinker with .ass subs, see https://snapcraft.io/install/aegisub-procles/ubuntu
    		# Also note from an aegisub created .ass file
    		#	Format: Name, Fontname, Fontsize, PrimaryColour, SecondaryColour, OutlineColour, BackColour, Bold, Italic, Underline, StrikeOut, ScaleX, ScaleY, Spacing, Angle, BorderStyle, Outline, Shadow, Alignment, MarginL, MarginR, MarginV, Encoding
    		#	Style: h3333,Arial,18,&H00FFFFFF,&H000000FF,&H00000000,&H00000000,0,0,0,0,100,100,0,0,1,0.9,0.5,3,2,2,2,1
    		# whereas default .assrender.Subtitle style="sans-serif,18,&H00FFFFFF,&H000000FF,&H00000000,&H00000000,0,0,0,0,100,100,0,0,1,2,0,7,10,10,10,1"
    		clip = core.assrender.Subtitle(clip, text_subpath_for_subtitles, style="sans-serif,18,&H00FFFFFF,&H000000FF,&H00000000,&H00000000,0,0,0,0,100,100,0,0,1,0.9,0.5,3,2,2,2,1", colorspace=TARGET_COLOURSPACE) # frame_width=TARGET_WIDTH, frame_height=TARGET_HEIGHT, 
    
    	return clip
    
    ###
    def get_path(path_generator):
    	#get next path of desired extensions from generator, ignoring extensions we have not specified
    	while True:	# loop until we return; when the generator is exhausted, return None
    		try:
    			path = next(path_generator)
    			#print('DEBUG: get_path: get success, path.name=' + path.name)
    		except StopIteration:
    			return None
    		if path.suffix.lower() in EXTENSIONS:	# only return files which are in known extensions
    			#print('DEBUG: get_path: in EXTENSIONS success, path.name=' + path.name)
    			return path
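The filtering idea in get_path() can be sketched stand-alone; here EXTENSIONS is a hypothetical stand-in for the script's global of the same name, and the demonstration runs against a throwaway temporary folder:

```python
from pathlib import Path
import tempfile

# hypothetical stand-in for the script's EXTENSIONS global
EXTENSIONS = {'.jpg', '.jpeg', '.png', '.mp4'}

def filtered_paths(directory, recursive=False):
    """Yield only paths whose suffix (lower-cased) is in EXTENSIONS."""
    pattern = '**/*.*' if recursive else '*.*'
    for p in Path(directory).glob(pattern):
        if p.suffix.lower() in EXTENSIONS:
            yield p

# quick demonstration against a throwaway folder
with tempfile.TemporaryDirectory() as d:
    for name in ('a.JPG', 'b.txt', 'c.mp4'):
        (Path(d) / name).touch()
    print(sorted(p.name for p in filtered_paths(d)))  # ['a.JPG', 'c.mp4']
```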
    		  
    ###
    def crossfade(a, b, duration):
    	#gets the crossfade part from the end of clip a and the start of clip b
    	def fade_image(n, a, b):
    		return core.std.Merge(a, b, weight=n/duration)
    	if a.format.id != b.format.id or a.height != b.height or a.width != b.width:
    		raise ValueError('crossfade: Both clips must have the same dimensions and format.')
    	return core.std.FrameEval(a[-duration:], partial(fade_image, a=a[-duration:], b=b[:duration]))
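The Merge weight in crossfade() ramps linearly with the overlap frame number n; a minimal numeric sketch of that ramp (no VapourSynth required):

```python
def crossfade_weights(duration):
    """Weight given to clip b at each of the `duration` overlap frames,
    matching the weight=n/duration used in the FrameEval callback above."""
    return [n / duration for n in range(duration)]

# the first overlap frame is pure clip a (weight 0.0); the ramp never quite
# reaches 1.0 because clip b's remaining frames follow the overlap anyway
print(crossfade_weights(4))  # [0.0, 0.25, 0.5, 0.75]
```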
    
    ###
    def print_exif_data(exif_data):
    	for tag_id in exif_data:
    		tag = TAGS.get(tag_id, tag_id)
    		content = exif_data.get(tag_id)
    		print(f'DEBUG: {tag:25}: {content}')
    		
    ###
    def print_exif_data2(exif_data):
    	for tag_id in exif_data:
    		tag = TAGS.get(tag_id, tag_id)
    		content = exif_data.get(tag_id)
    		if isinstance(content, bytes):
    			content = content.decode()
    		print(f'DEBUG: {tag:25}: {content}')
    	print()
    
    ###
    def rotation_check_PIL(clip, path, save_rotated_image=False):
    	# from PIL import Image, ExifTags, UnidentifiedImageError   # pip install Pillow, or equivalent
    	# PIL Pillow module loads an image, checks if EXIF data, checks for 'Orientation'
    	# The Python Pillow library is a friendly fork of an older library called PIL.
    	# PIL stands for Python Imaging Library; it was the original library that enabled Python to work with images.
    	# PIL development ceased around 2011 and it only supports Python 2 ... so use Pillow instead.
    	# https://python-pillow.org/
    	#print('DEBUG: rotation_check_PIL entered')
    	try:
    		image = Image.open(str(path))
    	except UnidentifiedImageError:
    		#print(f'DEBUG: rotation_check_PIL except UnidentifiedImageError immediate return now')
    		return clip
    	except PermissionError:
    		#print(f'DEBUG: rotation_check_PIL except PermissionError Permission denied to load: {path} immediate return now')
    		return clip
    	except Exception as e:
    		#print(f'DEBUG: rotation_check_PIL except Exception {e} immediate return now')
    		return clip
    	#print('DEBUG: rotation_check_PIL try on Image.open succeeded',flush=True)
    	try:		
    		for key in ExifTags.TAGS.keys():
    			if ExifTags.TAGS[key] == 'Orientation':
    				break
    		exif = dict(image.getexif().items())
    		value = exif[key]
    	except (AttributeError, KeyError, IndexError):
    		#print('DEBUG: rotation_check_PIL except AttributeError during for key in ExifTags.TAGS.keys(), immediate return now')
    		return clip
    	else:
    		if   value == 3:
    			print(f'INFO: PIL says auto-Rotating by 180 degrees {path}')
    			clip = clip.std.Turn180()
    		elif value == 8:
    			print(f'INFO: PIL says auto-Rotating by  90 degrees {path}')
    			clip = clip.std.Transpose().std.FlipVertical()
    		elif value == 6:
    			print(f'INFO: PIL says auto-Rotating by 270 degrees {path}')
    			clip = clip.std.Transpose().std.FlipHorizontal()
    		if save_rotated_image and value in [3,8,6]:
    			#rotation degrees are in counterclockwise direction!
    			rotate = {3:Image.ROTATE_180, 6:Image.ROTATE_270, 8:Image.ROTATE_90}
    			image = image.transpose(rotate[value])
    			path2 = path.parent / f'{path.stem}_rotated{path.suffix}'
    			##image.save(str(path2))	# comment this out ... no writing new images, please
    			#print(f'INFO: Rotated image {path} was NOT saved as requested into {path2}')
    	#exif = image.getexif()
    	#print_exif_data(exif)
    	#print()
    	#print_exif_data(exif.get_ifd(0x8769))
    	#print()
    	#
    	#print_exif_data2(image.getexif())
    	#print()
    	#print_exif_data(image._getexif())
    	#print()
    	image.close()	
    	return clip
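The EXIF Orientation values handled above map onto fixed transform chains; a small sketch of that mapping, with the VapourSynth call chains written as strings purely for illustration:

```python
# EXIF Orientation values handled by rotation_check_PIL, mapped to the
# VapourSynth call chains it applies (shown as strings for illustration)
ORIENTATION_FIX = {
    3: 'Turn180()',                      # image stored rotated 180 degrees
    6: 'Transpose().FlipHorizontal()',   # needs 270-degree CCW rotation
    8: 'Transpose().FlipVertical()',     # needs 90-degree CCW rotation
}

def fix_for(orientation):
    # value 1 (normal) and anything unhandled mean "leave the clip alone"
    return ORIENTATION_FIX.get(orientation)

print(fix_for(6))  # Transpose().FlipHorizontal()
```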
    
    ###
    def rotation_check_MediaInfo(clip, path, save_rotated_image=False):
    	#print('DEBUG: rotation_check_MediaInfo entered')
    	param = 'Rotation'
    	value = mediainfo_value(Stream.Video, 0, param, path)
    	if param == 'Rotation':
    		if value is None:
    			value = 0
    		else:
    			value = int(float(value))	# MediaInfo returns the Rotation value as a string like '180.00'
    	#print(f'DEBUG: rotation_check_MediaInfo: value={value} for {path}')
    	if   value == 180:
    		print(f'INFO: MediaInfo says auto-Rotating by 180 degrees {path}')
    		clip = clip.std.Turn180()
    	elif value == 90:
    		print(f'INFO: MediaInfo says auto-Rotating by 90 degrees {path}')
    		clip = clip.std.Transpose().std.FlipVertical()
    	elif value == 270:
    		print(f'INFO: MediaInfo says auto-Rotating by 270 degrees {path}')
    		clip = clip.std.Transpose().std.FlipHorizontal()
    	return clip
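The string-to-degrees normalisation used above is easy to pull out into a tiny helper (a sketch, not part of the script):

```python
def parse_rotation(value):
    """MediaInfo reports Rotation as a string like '180.00' (or None when
    absent); normalise it to a whole number of degrees."""
    return 0 if value is None else int(float(value))

print(parse_rotation('180.00'), parse_rotation(None))  # 180 0
```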
    
    ###################################################################################################################################################
    
    ### MAIN 
    if RECURSIVE:
    	glob_var="**/*.*"			# recursive
    	ff_glob_var="**/*.ffindex"	# for .ffindex file deletion recursive
    else:
    	glob_var="*.*"				# non-recursive
    	ff_glob_var="*.ffindex"		# for .ffindex file deletion non-recursive
    print(f'INFO: Processing {DIRECTORY} with recursive={RECURSIVE} glob_var={glob_var} ...\nwith Extensions {EXTENSIONS}',flush=True)
    Count_of_files = 0
    paths = Path(DIRECTORY).glob(glob_var) #generator of all paths in a directory, files starting with . won't be matched by default
    #sys.exit(0)
    path = get_path(paths)	#pre-fetch first path
    if path is None:
    	raise ValueError(f'Extensions: {EXTENSIONS}, not found in {DIRECTORY}')
    blank_clip = core.std.BlankClip(format=TARGET_PIXEL_FORMAT, width=TARGET_WIDTH, height=TARGET_HEIGHT, length=BLANK_CLIP_LENGTH, color=(16,128,128))
    clips = blank_clip	# initialize the accumulated clips with a starting small blank clip
    if CROSS_DUR>0:	
    	crossfade_blank_clip = blank_clip[0]*MIN_DURATION_FRAMES if len(blank_clip)<MIN_DURATION_FRAMES else blank_clip
    	prior_clip_for_crossfade = crossfade_blank_clip
    #---
    while path is not None:	# first clip already pre-retrieved ready for this while loop
    	Count_of_files = Count_of_files + 1
    	print(f'INFO: processing {Count_of_files} {str(path)}')
    	#if (Count_of_files % 10)==0:
    	#	print(f'{Count_of_files},',end="",flush=True)
    	#	if (Count_of_files % (10*10*3))==0:
    	#		print("",flush=True)
    	this_clip = get_clip(path)
    	#this_clip = core.text.Text(this_clip, text_subpath_for_subtitles, alignment=9, scale=1)
    	#this_clip = core.text.FrameNum(this_clip, alignment=2, scale=1)
    	#this_clip = core.text.ClipInfo(this_clip, alignment=8, scale=1)
    	#this_clip = core.text.FrameProps(this_clip, alignment=2, scale=1)
    	if CROSS_DUR>0:	
    		#print(f'DEBUG: doing crossfade in while loop')
    		crossfade_clip = crossfade(prior_clip_for_crossfade, this_clip, CROSS_DUR)
    		# for now, don't do equivalent of this from _AI_ ... right  = right_clip[CROSS_DUR:-CROSS_DUR]
    		clips = clips + crossfade_clip + this_clip
    		prior_clip_for_crossfade = this_clip
    	else:
    		clips = clips + this_clip
    	path = get_path(paths)		# get next path to process in this while loop
    #---
    # perhaps a finishing crossfade to black ?
    if CROSS_DUR>0:	
    	#print(f'DEBUG: doing final crossfade after while loop')
    	crossfade_clip = crossfade(prior_clip_for_crossfade, crossfade_blank_clip, CROSS_DUR)
    	# for now, don't do equivalent of this from _AI_ ... right  = right_clip[CROSS_DUR:-CROSS_DUR]
    	clips = clips + crossfade_clip	# this_clip was already appended inside the loop
    clips = clips + blank_clip		# end the accumulated clips with a finishing small blank clip
    clips = clips.std.AssumeFPS(fpsnum=TARGET_FPSNUM, fpsden=TARGET_FPSDEN)
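The accumulation pattern of the main loop can be mimicked with plain lists, with short label strings standing in for VapourSynth frames; note that the fade-out after the loop contributes only the crossfade frames:

```python
# list-based sketch of the main-loop accumulation: blank lead-in/lead-out,
# each clip preceded by a crossfade built from the previous clip's tail
CROSS_DUR = 2

def fake_crossfade(a, b, duration):
    # stands in for the FrameEval/Merge crossfade; one label per overlap frame
    return [f'x({fa}->{fb})' for fa, fb in zip(a[-duration:], b[:duration])]

blank = ['blk'] * 3
clips = list(blank)                      # starting blank clip
prior = blank
for this_clip in (['a1', 'a2', 'a3'], ['b1', 'b2', 'b3']):
    clips += fake_crossfade(prior, this_clip, CROSS_DUR) + this_clip
    prior = this_clip
clips += fake_crossfade(prior, blank, CROSS_DUR)   # finishing fade to black
clips += blank                                     # finishing blank clip
print(len(clips))  # 3 + (2+3) + (2+3) + 2 + 3 = 18
```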
    print("")
    print(f'INFO: Finished processing {Count_of_files} image/video files.',flush=True)
    # Cleanup any temporary .ffindex files created by ffms2
    print(f'INFO: Removing temporary *.ffindex files from folder {DIRECTORY} with recursive={RECURSIVE} ...',flush=True)
    pp = DIRECTORY + "/" + ff_glob_var
    ffindex_files = glob.glob(pp, recursive=RECURSIVE)
    Count_of_files_removed = 0
    for ff in ffindex_files:
    	if ff.lower()[-len(DOT_FFINDEX):] == DOT_FFINDEX:	# double check the file really does have ext .ffindex
    		try:
    			Count_of_files_removed = Count_of_files_removed + 1
    			print(f'INFO: removing {Count_of_files_removed} {ff}',flush=True)
    			os.remove(ff)
    			#if (Count_of_files_removed % (10))==0:
    			#	print(f'{Count_of_files_removed},',end="",flush=True)
    			#	if (Count_of_files_removed % (10*10*3))==0:
    			#		print("",flush=True)
    		except OSError as ee:
    			print(f'Error: {ff} : {ee.strerror}',flush=True)
    print("")
    print(f'INFO: Finished removing {Count_of_files_removed} .ffindex files.',flush=True)
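The .ffindex cleanup pass above can be sketched as a stand-alone helper; Path.suffix replaces the manual trailing-slice comparison, and the demonstration uses a temporary folder:

```python
import glob
import os
import tempfile
from pathlib import Path

def remove_ffindex(directory, recursive=False):
    """Delete ffms2's *.ffindex cache files, mirroring the cleanup pass above."""
    pattern = os.path.join(directory, '**/*.ffindex' if recursive else '*.ffindex')
    removed = 0
    for f in glob.glob(pattern, recursive=recursive):
        if Path(f).suffix.lower() == '.ffindex':   # belt-and-braces re-check
            try:
                os.remove(f)
                removed += 1
            except OSError as e:
                print(f'Error: {f} : {e.strerror}')
    return removed

with tempfile.TemporaryDirectory() as d:
    (Path(d) / 'v.mp4.ffindex').touch()
    (Path(d) / 'v.mp4').touch()
    print(remove_ffindex(d))  # 1
```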
    # hmm, audio ? maybe later using source filter "bas" ?
    print(f'INFO: Done.',flush=True)
    clips.set_output()
    #------
    Code:
    @ECHO ON
    @setlocal ENABLEDELAYEDEXPANSION
    @setlocal enableextensions
    
    REM set "fol_images=G:\HDTV\TEST\_AI_\Family_Photos"
    set "fol_images=G:\HDTV\TEST\_AI_\test_images"
    set "fol=.\_AI_folder_to_process.txt"
    DEL "!fol!"
    echo !fol_images!>"!fol!"
    TYPE "!fol!"
    
    set "mp4_file=G:\HDTV\TEST\_AI_\_AI_04.mp4"
    set "script=G:\HDTV\TEST\_AI_\_AI_04.vpy"
    
    "C:\SOFTWARE\Vapoursynth-x64\ffmpeg_OpenCL.exe" -hide_banner -v verbose ^
    -f vapoursynth -i "!script!" -an ^
    -map 0:v:0 ^
    -vf "setdar=16/9" ^
    -fps_mode passthrough ^
    -sws_flags lanczos+accurate_rnd+full_chroma_int+full_chroma_inp ^
    -strict experimental ^
    -c:v h264_nvenc -pix_fmt nv12 -preset p7 -multipass fullres -forced-idr 1 -g 25 ^
    -coder:v cabac -spatial-aq 1 -temporal-aq 1 ^
    -dpb_size 0 -bf:v 3 -b_ref_mode:v 0 ^
    -rc:v vbr -cq:v 0 -b:v 3500000 -minrate:v 100000 -maxrate:v 9000000 -bufsize 9000000 ^
    -profile:v high -level 5.2 ^
    -movflags +faststart+write_colr ^
    -y "!mp4_file!"
    
    DEL "!fol!"
    pause
    exit
  29. I haven't read the thread from top to bottom, but some of the things you're wanting to achieve can be done with IrfanView's batch mode, and maybe the rest of it using VirtualDub2.

    For IrfanView:
    - File/Batch Conversion/Rename
    - I chose bitmap as the output type, but anything ffmpeg can open will do. Something lossless would be better though.
    - Enter "#" (without the quotes) in the batch rename dropdown box.
    - Check "Use advanced options" and click on the Advanced button.
    - Check "Resize" and enter your preferred dimensions. I chose 960x540 for my test.
    - Check "Preserve aspect ratio".
    - Check "Use resampler function" (it'll probably be the same as whatever is selected in the normal resize dialogue window, which is Lanczos for me).
    - Check "Canvas Size" in the middle column and click on "Settings". Select "Method 2" and set the same dimensions you used for resizing (it adds black borders).
    - Check "Add overlay text" and then click on "Settings". The settings I used are in the screenshot below. The text is based on the file name but you can use fields from EXIF or IPTC data if you want to.
    - Click Okay so you're back in the main batch conversion window. Add your images and batch convert them. They'll be given sequential numbers as their file names.

    [Attachment 68766]


    Below is the result of converting one image. The original was named "IMG_0276.jpg". The output was a bitmap but I resaved it as a jpg to upload it here.

    Before:

    [Attachment 68768]


    After:

    [Attachment 68770]


    Once all the images are batch converted, open the first one in VirtualDub2. If they're sequentially numbered (only numbers as the file names) it should open them all automatically. Under the Video menu change the frame rate to 0.5 FPS (for each picture to display for 2 seconds). Convert to whatever format you prefer. Maybe something lossless if you want to open the output video with a script and re-encode it with a different encoder etc.
    I encoded 9 images I'd batch converted with whatever the default VirtualDub2, 8 bit x264 settings are, and once that was selected as the encoder I used the Configure and Pixel Format buttons to set the output to limited range YUV420 using rec.601 (because it's standard definition).
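For anyone scripting this instead of using the GUI, the frame-rate arithmetic above is just the reciprocal of the per-picture display time (a trivial sketch):

```python
def fps_for(seconds_per_picture):
    """FPS to set in VirtualDub2 so each picture displays for the given time."""
    return 1.0 / seconds_per_picture

print(fps_for(2))  # 0.5 -> each picture shows for 2 seconds
```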

    The sample video is below (re-uploaded, this time with every frame as a keyframe and then remuxed with MKVToolNix to generate a chapter for each frame/picture).
    Image Attached Files
    Last edited by hello_hello; 20th Jan 2023 at 20:34.
  30. Member hydra3333
    Originally Posted by hello_hello
    some of the things you're wanting to achieve can be done with IrfanView's batch mode, and maybe the rest of it using VirtualDub2
    Thanks, I'll look into that !

    I have 95,293 home images/videos organised in 1,485 date-named subfolders.

    It's a mix of various formats (eg digital cameras and phones of various vintages) with both video clips (eg .avi, .mov, .3gp, .mp4, .mjpeg, etc) and images (eg .jpg, .gif, etc).

    I was hoping to include both images and say "the 1st 5 secs of a clip" as slideshow components.

    Cheers


