VideoHelp Forum

  1. Member (Japan, joined Oct 2008)
    Sony (?) Vegas Pro lets you set, in Preferences > Editing, the length of time that still images are displayed and the length of time that they overlap when fading from one to the other.

    I would like to add some dark space between still images automatically. Can this be achieved in any way? I am making educational materials for English tests, and I would like the test takers not to be able to see the content until they play the video. I will be making lots.

    Here are my video preferences (the image-upload "Select Files" button did not work in my Firefox, so the screenshot is on Flickr):


    [Screenshot: 2019-06-22 16-47-33, by Timothy Takemoto, on Flickr]
  2. Member (Japan, joined Oct 2008)
    One way of achieving black sections between still images:
    1) Copy and paste your images into the folder in which they already exist. Windows will create a set of copies named imagename_copy, which will be interspersed with your originals when sorted.
    2) Open the copies in an image editor that can do batch processing. I use Corel (Ulead) PhotoImpact.
    3) Batch process the copied images to black (Photo > Brightness & Contrast > Brightness -100, Contrast -100).
    4) Save the black images.
    5) Set Vegas Preferences > Editing > New still image length (above) to the length for which both the black images and the still images will be shown (alas, this cannot be different for the two).
    6) Drag all the images to the Vegas timeline.
    7) Set auto-ripple and cut and paste the last black image to the front so that a black section precedes every image.
    8) Arm a track and record your audio.
    9) Render to a video size that matches your image size.
    10) Chop the video into a lot of little videos using AVS4YOU Video ReMaker. I can't get Vegas's batch render of regions to work.
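    The interleave-and-rotate logic of steps 6 and 7 can be sketched in Python. This is only an illustration of the resulting timeline order, not a Vegas API; `vegas_timeline` is a hypothetical helper, and the `_copy` suffix follows the naming convention described in step 1:
    Code:
    ```python
    import os

    def vegas_timeline(originals):
        """Return the clip order from steps 6-7: sorted originals
        interleaved with their blackened *_copy files, then the last
        black clip moved to the front so black precedes every image."""
        timeline = []
        for name in sorted(originals):
            stem, ext = os.path.splitext(name)
            timeline.append(name)
            timeline.append(stem + "_copy" + ext)  # the blackened copy
        timeline.insert(0, timeline.pop())         # step 7: last black to front
        return timeline

    # e.g. vegas_timeline(["slide1.jpg", "slide2.jpg"]) gives
    # ["slide2_copy.jpg", "slide1.jpg", "slide1_copy.jpg", "slide2.jpg"]
    ```
    Because alphabetical sorting places each black copy directly after its original, moving only the final black clip to the front is enough to put a black section before every image.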
  3. This is actually THE job for Avisynth or Vapoursynth, so if you are into it, I made a script for Vapoursynth (Python):
    Code:
    import os
    import vapoursynth as vs
    
    directory    = r'F:\pics'
    jpg_lenght   = 300 #frames
    blank_lenght = 60  #frames
    video = []
    for file in os.listdir(directory):
        if file.endswith(".jpg"):
            path = os.path.join(directory, file)
            clip = vs.core.ffms2.Source(path, format=vs.YUV420P8, alpha=False, fpsnum=60000, fpsden=1001)
            blank = vs.core.std.BlankClip(clip, color=[16, 128,128])
            clip = clip * jpg_lenght   
            blank = blank * blank_lenght  
            video.append(clip + blank)
    clip = vs.core.std.Splice(video)
    clip.set_output()
    or if you want to cross-dissolve between the JPGs and those blacks:
    Code:
    import os
    import vapoursynth as vs
    from vapoursynth import core
    import vsutils                              #https://github.com/jeremypoulter/vsutils/blob/master/vsutils.py
    utils = vsutils.vsutils()
    
    core.max_cache_size= 600         #high resolutions might run memory high, watch out, default is 4GB(4096) if not set
    
    directory          = r'F:\pics'
    jpg_lenght         = 300 #frames
    blank_lenght       = 120  #frames
    crossfade_duration = 0.4   #seconds
    video = []
    
    for file in os.listdir(directory):
        if file.endswith(".jpg"):
            path = os.path.join(directory, file)
            clip = core.ffms2.Source(path, format=vs.YUV420P8, alpha=False, fpsnum=60000, fpsden=1001)
            #clip = core.resize.Spline36(clip, 1280, 720) #some resize if you want
            blank = core.std.BlankClip(clip, color=[16, 128,128])
            clip = clip * jpg_lenght
            blank = blank * blank_lenght
            pair = utils.CrossFade(clip, blank, crossfade_duration)
            video.append(pair)
    
    whole_clip = video[0]
    for index , pair in enumerate(video):
        if index:                                       #skipping index zero
           whole_clip = utils.CrossFade(whole_clip, pair, crossfade_duration)
    
    whole_clip.set_output()
    Scripts (Avisynth or Vapoursynth) can be loaded into VirtualDub2 and exported as lossless video.
    Last edited by _Al_; 22nd Jun 2019 at 23:49.
  4. length, not lenght, but as long as it is consistent it works; I am not able to remember that correctly in English
    (how come weight and height came out correct?)
  5. Bad RAM management above!
    I tried 200 images and it froze because it needed tons of RAM; too many clips are created beforehand.

    So this is a somewhat better solution.
    Still, for about 100 UHD images, creating the video (with crossfades or not) needs about 3-4 GB of RAM for 1500 frames; a bit less for full HD:
    Code:
    import os
    import vapoursynth as vs
    from vapoursynth import core
    import vsutils #https://github.com/jeremypoulter/vsutils/blob/master/vsutils.py
    utils = vsutils.vsutils()
    
    crossfades         = False  #True or False
    crossfade_duration = 0.5  #seconds
    ext                = (".jpg", ".tiff")
    directory          = r'G:\pics'
    image_length       = 200 #frames
    blank_length       = 80  #frames
    fps_num            = 60000
    fps_den            = 1001
    format             = vs.YUV420P8  #vs.YUV422P10 or vs.RGB48 etc ...
    width              = 1920
    height             = 1080
    resize_kernel      = 'Spline36'
    
    
    try:
        _resize  = getattr(vs.core.resize, resize_kernel)
    except AttributeError:
        raise ValueError('wrong resize kernel: ', resize_kernel)
        
    def load_image_clip(image_path, *new_dimension):
        c = core.ffms2.Source( image_path, format=format, alpha=False, fpsnum=fps_num, fpsden=fps_den)
        if new_dimension:
            c = _resize(c, width=new_dimension[0], height=new_dimension[1])
        return c
    
    image_path_list = []
    for file in os.listdir(directory):
        if file.lower().endswith(tuple(ext)):
            image_path = os.path.join(directory, file)
            image_path_list.append(image_path)
    
    s = load_image_clip(image_path_list[0])
    print('number of images: ', f'{len(image_path_list)}') 
    print('first image resolution: ', f'{s.width}x{s.height}') 
    
    sample = s.format.bits_per_sample
    if s.format.color_family == vs.YUV:
        color = [int(2**sample/16), int(2**sample/2),int(2**sample/2)]
    elif s.format.color_family == vs.RGB:
        color = [int(2**sample/16), int(2**sample/16),int(2**sample/16)]
    print('blank color: ', color)
    
    format_starter = load_image_clip(image_path_list[0], width, height)
    blank = core.std.BlankClip(format_starter, color=color)
    blank = blank * blank_length
    whole_clip = blank
    
    for i, image_path in enumerate(image_path_list):
        #if i == 100: break #limit number of images if needed, 100 UHD images need about 3-4GB RAM, fullHD a  bit less
        image_clip = load_image_clip(image_path, width, height)
        image_clip = image_clip * image_length
        if crossfades:
            whole_clip = utils.CrossFade(whole_clip, image_clip, crossfade_duration)
            whole_clip = utils.CrossFade(whole_clip, blank, crossfade_duration)
        else:
            whole_clip = whole_clip + image_clip + blank
    
    whole_clip.set_output()
    Last edited by _Al_; 23rd Jun 2019 at 14:41.
  6. if you want a self-contained script, without downloading that vsutils.py:
    Code:
    import os
    import vapoursynth as vs
    from vapoursynth import core
    import math
    import functools
    
    crossfades         = True  #True or False
    crossfade_duration = 0.5  #seconds
    ext                = (".jpg", ".tiff")
    directory          = r'G:\pics'
    image_length       = 200 #frames
    blank_length       = 80  #frames
    fps_num            = 60000
    fps_den            = 1001
    format             = vs.YUV420P8  #vs.YUV422P10 , vs.RGB24 (8bit) or vs.RGB48 (16bit) etc ...
    width              = 1920
    height             = 1080
    resize_kernel      = 'Spline36'
    
    
    try:
        _resize  = getattr(vs.core.resize, resize_kernel)
    except AttributeError:
        raise ValueError('wrong resize kernel: ', resize_kernel)
    
    def FadeEachFrame(clipa, clipb, n, number_frames):  #https://github.com/jeremypoulter/vsutils/blob/master/vsutils.py
        weight = (n+1)/(number_frames+1)
        return core.std.Merge(clipa, clipb, weight=[weight, weight])
        
    def CrossFade(clip1, clip2, duration):  #https://github.com/jeremypoulter/vsutils/blob/master/vsutils.py
        fps = clip1.fps_num/clip1.fps_den
        number_frames = math.floor(duration * fps) - 2
        clip1_start_frame = clip1.num_frames - (number_frames + 1)
        clip1_end_frame = clip1.num_frames - 1
        clip2_start_frame = 1
        clip2_end_frame = number_frames + 1
        a=clip1[0:clip1_start_frame]
        b1=clip1[clip1_start_frame:clip1_end_frame]
        b2=clip2[clip2_start_frame:clip2_end_frame]
        b=core.std.FrameEval(b1, functools.partial(FadeEachFrame, clipa=b1, clipb=b2, number_frames=number_frames))
        c=clip2[clip2_end_frame:clip2.num_frames]
        return a+b+c
        
    def load_image_clip(image_path, *new_dimension):
        c = core.ffms2.Source( image_path, format=format, alpha=False, fpsnum=fps_num, fpsden=fps_den)
        if new_dimension:
            c = _resize(c, width=new_dimension[0], height=new_dimension[1])
        return c
    
    image_path_list = []
    for file in os.listdir(directory):
        if file.lower().endswith(tuple(ext)):
            image_path = os.path.join(directory, file)
            image_path_list.append(image_path)
    
    s = load_image_clip(image_path_list[0])
    print('number of images: ', f'{len(image_path_list)}') 
    print('first image resolution: ', f'{s.width}x{s.height}') 
    
    color = None
    sample = s.format.bits_per_sample
    if s.format.color_family == vs.YUV:
        color = [int(2**sample/16), int(2**sample/2),int(2**sample/2)]
    elif s.format.color_family == vs.RGB:
        color = [int(2**sample/16), int(2**sample/16),int(2**sample/16)]
    print('blank color: ', color)
    if color is None:
        raise ValueError('unsupported format')
    
    format_starter = load_image_clip(image_path_list[0], width, height)
    blank = core.std.BlankClip(format_starter, color=color)
    blank = blank * blank_length
    whole_clip = blank
    
    for i, image_path in enumerate(image_path_list):
        #if i == 100: break #limit number of images if needed, 100 UHD images need about 3-4GB RAM, fullHD a  bit less
        image_clip = load_image_clip(image_path, width, height)
        image_clip = image_clip * image_length
        if crossfades:
            whole_clip = CrossFade(whole_clip, image_clip, crossfade_duration)
            whole_clip = CrossFade(whole_clip, blank, crossfade_duration)
        else:
            whole_clip = whole_clip + image_clip + blank
    
    whole_clip.set_output()
    #if running just python to request frames to debug or see printing values
    #for frame in range(0,len(whole_clip)):
    #    whole_clip.get_frame(frame)
    Last edited by _Al_; 23rd Jun 2019 at 15:07.
  7. Member (Japan, joined Oct 2008)
    Amazing!

    Thank you very much indeed, Al.

    The only trouble is I found VirtualDub pretty difficult when I last used it, and I have forgotten whether I have ever used Avisynth, though it rings a bell, and I have not used Vapoursynth.

    I also found that, using the method above, my videos do not play in Moodle's embedded video player. I am not sure why; perhaps the player needs movement, or a keyframe, to start. So instead of just stills and my voice I may have to go back to a video of me speaking.

    But it could be something to do with the particular format of video that Vegas produced. Since you have gone to all this trouble, I feel I should give it a go. May I please clarify that I will first need to install VirtualDub and Vapoursynth to use your wonderful script?

    Thanks again,
  8. Originally Posted by timtak View Post
    Amazing!
    I also found that, using the method above, my videos do not play in Moodle's embedded video player. I am not sure why; perhaps the player needs movement, or a keyframe, to start. So instead of just stills and my voice I may have to go back to a video of me speaking.
    AVS4YOU might be the reason; it is the final delivery step in that chain.
    I used to have Vegas's batch render working. Select a loop area and press "R", then give it a name/title; select regions like that. Then in Tools > Scripting there is a "Batch Render" option (an included script). Check "Render Region" (at the bottom). But that default script cannot name files by region titles; it only generates names from the render template name (plus an index). I used to use some "Batch Render 8" script, or some v.2, not sure where I downloaded it from. That one had a name box; I wrote [RTITLE], no quotes, in it, and that rendered video files named after the region titles.

    The only trouble is I found VirtualDub pretty difficult
    Use VirtualDub2 now; it can export even MP4 with H.264, and besides Avisynth it loads Vapoursynth scripts as well. You can install some lossless codec, render your scripts in it, and import that lossless file into Vegas to add the audio track.

    About Vapoursynth: you are not obligated to try it just because I posted that script; I was going that way anyway, sort of. To try it, you'd need to install:
    Python 64-bit first; on this web site, https://www.python.org/downloads/release/python-373/ , you'd select the Windows x86-64 executable installer (at the bottom),
    then Vapoursynth 64-bit, from this site: http://www.vapoursynth.com/doc/installation.html , get the VapourSynth installer (from the top).

    This way you can have Vapoursynth scripts (*.vpy) working in VirtualDub2, but because you have Python and Vapoursynth 64-bit, use the 64-bit VirtualDub2 as well! 32-bit would not work.

    If you go this way, it is not easy and might be overwhelming; it is kind of like programming. But you say you tried Avisynth before, so there is some expectation. Vapoursynth scripting uses Python, so one advantage is that if you want to know something, someone has surely already asked it somewhere on the web.
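    A quick stdlib-only sanity check (an illustrative sketch; it makes no VapourSynth calls, so it is safe to run even before the install) confirms whether the Python you are running is 64-bit and whether the vapoursynth module is visible to it:
    Code:
    ```python
    import struct
    import importlib.util

    # Pointer size tells you whether this Python is 32-bit or 64-bit;
    # it must match the VapourSynth installer you picked (64-bit here).
    bits = struct.calcsize('P') * 8
    print('Python is %d-bit' % bits)

    # find_spec looks the module up without importing it, so this
    # works (and reports False) even before VapourSynth is installed.
    installed = importlib.util.find_spec('vapoursynth') is not None
    print('vapoursynth importable:', installed)
    ```
    If it prints 64-bit and True, the *.vpy scripts above should load in 64-bit VirtualDub2.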
    Last edited by _Al_; 23rd Jun 2019 at 23:23.


