VideoHelp Forum

+ Reply to Thread
Page 2 of 2
FirstFirst 1 2
Results 31 to 38 of 38
Thread
  1. hmm, audio ? maybe later using source filter "bas" ?
    if joining videos, preference would be no crossfades naturally, so joining seems to be no problem, even mixing videos and images (quiet audio would be added)

    I wanted to fade in or fade out audios from obvious reason, likeNLE's do (avoid abrupt sound changes, clicking), but that turn out to be not possible because audio in vapoursyth does not allow FrameEval() for audio samples. FrameEval() only allows vs.VideoNode to work with. So other modules are an option, I quickly check pydub, that might work, but did not go any further with it.

    So without crossfades, concatenating video and images and rotate video (looks like mp4 oor mov only) and images if needed .
    The approach would be to have an object called clip for example that caries both video and audio, and allowing certain actions: trim, slicing, adding, multiplying, printing, that would also be carried for both video and audio. That could provide class Clip() and could be a separate module, called pairs.py:
    Code:
    class Clip:
        '''
         Creates class that carries both vs.VideoNode and vs.AudioNode.
         Using the same syntax as for vs.VideoNode on this class object for: trim/slice, add, multiply or print,
         the same operation is also done to the audio at the same time
         examples:
         from vapoursynth import core
         video = core.lsmas.LibavSMASHSource('video.mp4')
         audio = core.bas.Source('video.mp4')
         clip = Clip(video, audio)
         clip = clip[0] + clip[20:1001] + clip[1500:2000] + clip[2500:]
         #clip = clip.trim(first=0, length=1) + clip.trim(firt=20, last=1000) + clip.trim(1500, length=500) + clip.trim(2500)
         clip = clip*2
         clip = clip + clip
         print(clip)
         clip.video.set_output(0)
         clip.audio.set_output(1)
         '''
    
        def __init__(self, video = None, audio=None, attribute_audio_path=None):
            self.video = video
            self.audio = audio
            if self.video is None:
                self.video = core.std.BlankClip()
            if self.audio is None:
                if attribute_audio_path is None:
                    raise ValueError('argument attribute_audio_path is needed to get default audio for images (could be a really short video')
                attr_audio = core.bas.Source(attribute_audio_path)
                length = int(attr_audio.sample_rate/self.video.fps*self.video.num_frames)
                self.audio = attr_audio.std.BlankAudio(length=length)
    
        def trim(self, first=0, last=None, length=None):
            afirst  = self.to_samples(first)    if first  is not None else None
            alast   = self.to_samples(last+1)-1 if last   is not None else None
            alength = self.to_samples(length)   if length is not None else None
            return Clip( self.video.std.Trim(first=first, last=last, length=length),
                         self.audio.std.AudioTrim(first=afirst,last=alast,length=alength)
                        )
        def to_samples(self, frame):
            return int((self.audio.sample_rate/self.video.fps)*frame)
    
        def __add__(self, other):
            return Clip(self.video + other.video, self.audio + other.audio)
    
        def __mul__(self, multiple):
            return Clip(self.video*multiple, self.audio*multiple)
    
        def __getitem__(self, val):
            if isinstance(val, slice):
                if val.step is not None:
                    raise ValueError('Using steps while slicing AudioNode together with VideoNode makes no sense')
                start = self.to_samples(val.start) if val.start is not None else None
                stop =  self.to_samples(val.stop)  if val.stop  is not None else None
                return Clip( self.video.__getitem__(val),
                             self.audio.__getitem__(slice(start,stop))
                             )
            elif isinstance(val, int):
                start = self.to_samples(val)
                stop = int(start + self.audio.sample_rate/self.video.fps)
                return Clip( self.video[val],
                             self.audio.__getitem__(slice(start,stop))
                             )        
        def __repr__(self):
            return '{}\n{}\n{}'.format('Clip():\n-------', repr(self.video), repr(self.audio))
    
        def __str__(self):
            return '{}\n{}\n{}'.format('Clip():\n-------', str(self.video), str(self.audio))
    So get_clip function might look like this:
    Code:
    import pairs
    ATTRIBUTE_AUDIO_PATH = r'D:\test2\some_very_short_video.m2ts'
    
    def get_clip(path):
        #get video
        data = LOADER.get_data([str(path),])[0] 
        video = data.clip
        if data.load_isError:
            raise ValueError(f'{data.load_log}\nload.py could not create vs.VideoNode from {path}')
        video = rotation_check(video, path, save_rotated_image=SAVE_ROTATED_IMAGES)
        video = video.std.AssumeFPS(fpsnum=FPSNUM, fpsden=FPSDEN)
        if BOX:  video = boxing(video, WIDTH, HEIGHT)
        else:    video = resize_clip(video, WIDTH, HEIGHT)
        video = video[0]*LENGTH if len(video)<5 else video
        
        #get_audio
        try:
            audio = core.bas.Source(str(path))
        except AttributeError:
            raise ImportError('Vapoursynth audio source plugin "BestAudioSource.dll" could not be loaded\n'
                              'download: https://github.com/vapoursynth/bestaudiosource/releases/tag/R1')
        except vs.Error as e:
            print(f'BestAudioSource could not load audio for: {path}\n{e}')
            clip = pairs.Clip(video, attribute_audio_path=ATTRIBUTE_AUDIO_PATH) #will generate quiet clip with desired parameters
        except Exception as e:
            print(f'Error loading audio for: {path}\n{e}')
            clip = pairs.Clip(video, attribute_audio_path=ATTRIBUTE_AUDIO_PATH)
        else:
            clip = pairs.Clip(video, audio)    
        return clip
    I used my load.py, that loads any filepath and uses proper plugin source, ffms2 indexes to temp folder, out of the way (could be deleted later). And also it remembers indexes, so it does not index again. I uploaded it here , you just need load.py and viewfunc.py, both can be downloaded. If you just run: import load; load.settings() it will launch GUI, where you can change preferences and it will always remember those settings. But you'd need to have tkinter installed, which comes with python, but not with portable python. Without tkinter UI settings would not work, load.py will work, but if you need to change plugin sources prereferences DEFAULT_PLUGIN_MAP within load.py would need to be manually changed.

    Then tha main loop code to work with clips, you had it a bit different, but just how it works with that class Clip():
    Code:
    import load
    
    LOADER = load.Sources()
    paths = Path(DIRECTORY).glob("*.*")
    path = get_path(paths)
    if path is None:
        raise ValueError(f'Extensions: {EXTENSIONS}, not found in {DIRECTORY}')
    clips = get_clip(path)    #loads obj class Clip() ! not vs.VideoNode
    print('wait ...')
    while 1:
        path = get_path(paths)
        if path is None:
            break
        clip = get_clip(path)   #again , it always loads obj class Clip(), not vs.VideoNode
        clips += clip  #class Clip() can add, where internally it adds video and audio
    
    clips.video.set_output(0)
    clips.audio.set_output(1)
    Code also needs to define attribute video filepath that vapoursynth would get audio parameters from for quiet audio (if image is loaded). For some reason I cannot change samplerate from integer to float and cannot set default BlankAudio with float.
    Code:
    ATTRIBUTE_AUDIO_PATH = r'D:\test2\some_very_short_video.m2ts'
    I hope I did not forgot anything, except maybe I had all scripts in portable directory/python_modules and use this script to encode output video, where it mixed images, mp4 and m2ts videos and rotated images, so this was encode.py:
    Code:
    import vapoursynth as vs
    from vapoursynth import core
    from subprocess import Popen, PIPE
    from pathlib import Path
    import sys
    import os
    
    def main(script, output_path):
    
        PATH        = Path(os.getcwd()) #Path(sys._MEIPASS) #if frozen
        script      = str(PATH / 'python_modules' / f'{script}')
        VSPipe      = str(PATH / 'VSPipe.exe')
        x264        = str(PATH / 'tools' / 'x264.exe')
        neroAacEnc  = str(PATH / 'tools' / 'neroAacEnc.exe')
        Mp4box      = str(PATH / 'tools' / 'MP4Box.exe')
        ##import loadDLL #if frozen
        ##VS_PLUGINS_DIR = PATH / 'vapoursynth64/plugins'
        ##isDLLS_LOADED, DLLS_LOG = loadDLL.vapoursynth_dlls(core, VS_PLUGINS_DIR)
        path        = Path(output_path)
        temp_video = str(path.parent / f'{path.stem}.264')
        temp_audio = str(path.parent / f'{path.stem}.m4a')
    
        vspipe_video = [VSPipe, '--container',  'y4m',
                                script,
                                '-']
        
        x264_cmd = [x264,       #'--frames',           f'{len(video)}',
                                '--demuxer',           'y4m',  
                                '--crf',               '18',
                                '--vbv-maxrate',       '30000',
                                '--vbv-bufsize',       '30000',
                                '--keyint',            '60',
                                '--tune',              'film',
                                '--colorprim',         'bt709',
                                '--transfer',          'bt709',
                                '--colormatrix',       'bt709',
                                '--output',             temp_video,
                                '-']
    
        vspipe_audio = [VSPipe, '--outputindex', '1',
                                '--container',  'wav',
                                script,
                                '-']
    
        aac_cmd = [neroAacEnc,  '-ignorelength',
                                '-lc',
                                '-cbr', '96000',
                                '-if', '-',
                                '-of', temp_audio]
    
    
        mp4box_cmd = [Mp4box,   '-add' , f'{temp_video}',
                                '-add',  f'{temp_audio}#audio',
                                '-new',  output_path]
    
        p1 = Popen(vspipe_video, stdout=PIPE, stderr=PIPE)
        p2 = Popen(x264_cmd, stdin=p1.stdout, stdout=PIPE, stderr=PIPE)
        p1.stdout.close()
        p2.communicate()
    
        p1 = Popen(vspipe_audio, stdout=PIPE, stderr=PIPE)
        p2 = Popen(aac_cmd, stdin=p1.stdout, stdout=PIPE, stderr=PIPE)
        p1.stdout.close()
        p2.communicate()
    
        p = Popen(mp4box_cmd)
        p.communicate()
    
    if __name__=='__main__':
        #scripts and outputs have full paths
        #python "encode.py" "media_to_show.py" "output.mp4"
        if len(sys.argv) > 2: 
            main(sys.argv[1], sys.argv[2])
    Last edited by _Al_; 21st Jan 2023 at 12:06.
    Quote Quote  
  2. Member hydra3333's Avatar
    Join Date
    Oct 2009
    Location
    Australia
    Search Comp PM
    Beaut !
    Some more code to look into, thanks, and now I'm happy playing with code as in back in my nerdy youth
    I may take some time out to also learn a bit of python, have been flying on first principles and shaggy memories of fortran77.
    Quote Quote  
  3. , I remember fortran too, actually not remembering it much, but those times I spend to make code then making punch cards and only then running that code using those cards
    Quote Quote  
  4. Edit: There's been quite a few posts I didn't see before I submitted this one, so it mightn't be too relevant now, but anyway....

    hydra3333,
    I had another thought.... MJPEG.
    Instead of converting to bitmaps and having to convert those to video, why not just convert to jpeg and stick them in a container?
    I guess it'll depend on a player's mjpeg support. My 11yo Samsung TV supports it, but only at a low resolution. If you want to test though, there's samples attached....

    I put them into an MKV using AnotherGUI. I added the command line myself, so it mightn't be especially clever, but it works. I saved the command line as a bat file, for reference.

    Code:
    "C:\Program Files\AnotherGUI\ffmpeg.exe" -start_number 1 -report -framerate 0.5 -y -i "D:\%%d.jpg" -threads 1 -vcodec copy "D:\Test JPEG 444.mkv"
    PAUSE
    After messing around to get it right for 1080p, I think I finally understand how IrfanView's text placement works. The MKVs below all contain jpegs saved at 80% quality.
    IrfanView does have an option (plugin) to save a slideshow as an MP4, but it doesn't work for me so I know nothing about it. That's probably an XP thing.

    Image
    [Attachment 68774 - Click to enlarge]


    I guess if you're wanting to add clips you're going to have to re-encode the pictures, and you'll probably need to duplicate them so they'll display for "X" seconds and match the video frame rate too. You could open an MKV like the ones below and do that in a script while appending video etc, but maybe there's a better way, or it's not going to be the easiest method if there's also video involved. It was just another idea that might come in handy, or not....
    Image Attached Files
    Last edited by hello_hello; 21st Jan 2023 at 01:13.
    Quote Quote  
  5. Member hydra3333's Avatar
    Join Date
    Oct 2009
    Location
    Australia
    Search Comp PM
    Originally Posted by hello_hello View Post
    I had another thought.... MJPEG.
    Instead of converting to bitmaps and having to convert those to video, why not just convert to jpeg and stick them in a container?
    I guess it'll depend on a player's mjpeg support. My 11yo Samsung TV supports it, but only at a low resolution. If you want to test though, there's samples attached....
    Hello, hello_hello, thank you for the proposal and the very handy information, what a great approach.
    It certainly has a place in dealing with the issue, and may turn out to be the "right" solution in many circumstances.
    Nice sample clips too, I smiled watching them.
    I'll post some stuff too, once I can find items absent family faces, since privacy is apparently a thing nowadays.

    For now, and also with a view to keeping the few remaining grey cells in use, I will first fiddle with _Al_'s gear.

    Looks like I'll have to bite the bullet and run the python installer and move away from direct vpy-into-ffmpeg as inability to pass parameters and run it standalone can be a tad disconcerting. Yes I could create and use a parameter file however "it's not as easy".
    Be interesting to see what if any the vspipe overheads may be vs vpy-into-ffmpeg.
    Quote Quote  
  6. Member hydra3333's Avatar
    Join Date
    Oct 2009
    Location
    Australia
    Search Comp PM
    edit: oops
    Quote Quote  
  7. if you use that load.py,
    I actually uploaded it with active custom imwri_Read function for reading images and that is a beta sort of, only good if loading all images into a sequence, part of them etc. If using that plugin, it loads images with the same pattern as a clip, which is not desirable if loading images from same camera in a directory. It would have a tendency to load them all in a clip.

    Use regular core.imwri.Read(), which I did not add there, so either upload that load.py again, I replace it and fixed it and set as a default,
    or
    launch load settings UI (double click load.py) and then fill box "Add plugin" with: imwri.Read , then press "Add" button. Then add extensions and kwargs to that plugin (copy those from that custom imwri_Read plugin). Then delete those extensions for that previous imwri_Read. Save it.
    Last edited by _Al_; 22nd Jan 2023 at 23:44.
    Quote Quote  
  8. Member hydra3333's Avatar
    Join Date
    Oct 2009
    Location
    Australia
    Search Comp PM
    Thank you. Have been away for some days. Surgery early next week. My progress may be a tad slow
    Quote Quote  



Similar Threads