VideoHelp Forum
  1. I have footage from two cameras filming the same show. Because of the EU restrictions and tariffs on recording length in video equipment, the footage is split into multiple clips: four from one DSLR and five from a camcorder. I would like to produce an hour-long video of that show, switching between sources. I need the correspondence between the first frame of each clip from the DSLR and the number of the synchronous frame from the camcorder, and vice versa. If I find nothing, I'll code something in Python that looks at the envelope of the sound wave and finds the best alignment by brute force.

    I doubt that AviSynth+ does such pre-processing but would gladly be proven wrong. Has anyone faced a similar problem, or used some piece of software that does it?
  2. You can do auto-syncing by waveform, and your entire multi-camera edit, in tools like Resolve (free), Premiere, or Avid.

    Any reason you're enamored of Avisynth for this?
  3. Thanks both! The demo on the PluralEyes website does indeed do what I want, and so does this blog post on Resolve.

    I like AviSynth+ for several reasons, the main one being the Edit Decision List in text format. The media is separate from the edits, and the size of the project is the size of the original media plus a few KB. I can version-control the edit decision list and easily make changes or minute adjustments. And I can copy-paste edits across different projects. I also like its frameserving because it gives a high-quality preview without needing to compile the film.
  4. I wrote a mini essay here explaining how I stack videos to match frames. The idea there was to edit the sound from one source to match the video from another, but it may give you some ideas. You can use Trim() in a script to combine frame ranges from multiple videos into a single output in a similar manner.
    https://forum.doom9.org/showthread.php?p=1900025#post1900025

    You can use ShowTime() or ShowSMPTE() in AviSynth, and I'm pretty sure they both have an offset argument. In other words, you can specify which frame from each recording you want to call the first frame, and all else being equal, the correct offset for each should allow the frame numbers from identical videos to match when using ShowSMPTE().
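
    As a rough sketch of that idea (offset_f is the frame-offset argument as I understand ShowSMPTE()'s documentation, and frames 50 and 163 are the sync frames used in the examples further down; verify against your AviSynth version):

    # Frame 50 of Video1 syncs with frame 163 of Video2. Giving each clip the
    # other clip's sync frame as its offset makes synchronous frames display
    # the same timecode, so you can step through and confirm the alignment.
    Video1 = Video1.ShowSMPTE(offset_f=163)
    Video2 = Video2.ShowSMPTE(offset_f=50)
    StackVertical(Video1, Video2)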

    There's also the Position() function in my signature. It doesn't include ShowSMPTE or an offset argument; however, it does display the current frame number and position in Seconds.ms and/or HH:MM:SS.ms. To apply an offset, you'd need to do something like this:

    Video1 = Video1.Trim(50,0).Position()
    Video2 = Video2.Trim(163,0).Position()

    For Video1 the first frame becomes frame 50 and Position() starts counting from there. For Video2 it's frame 163. Then you could stack them to check they're aligned, and you can stack more than two videos, i.e.:

    A = StackVertical(Video1, Video2)
    B = StackVertical(Video3, Video4)
    StackHorizontal(A, B)

    When you have the videos lined up, you can choose frame ranges to take from each (there's no reason why you have to frame-align them first, but it might make it easier). Each video could use its own audio (if there is audio), or after editing you can add a single audio track:

    Video1 = Video1.Trim(50,0)
    Video2 = Video2.Trim(163,0)

    Video1.Trim(0,999) ++ \
    Video2.Trim(1000,11653) ++ \
    Video1.Trim(11654,13258)

    A = last
    AudioDub(A, SomeAudioTrack)

    The AudioMeter function can help to align waveforms in Avisynth if need be. It's really just a function to make the waveform plugin a bit more versatile, and it can display audio meters too. To visually compare their waveforms, you'd need to stack two videos after applying the function to each.
    https://forum.videohelp.com/threads/384967-AudioSpeed-AudioMeter-AudioWave-scripts
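
    Something like this rough sketch should do it (I'm assuming AudioMeter() runs with its default arguments and that both clips end up the same width; check the linked thread for the actual parameters):

    # Apply the waveform/meter display to each source, then stack them so the
    # two waveforms can be compared visually while stepping through frames.
    V1 = Video1.AudioMeter()
    V2 = Video2.AudioMeter()
    StackVertical(V1, V2)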

    StainlessS created a similar function using a filmstrip layout and the waveform plugin here.
    https://forum.doom9.org/showthread.php?p=1876933#post1876933

    The two functions combined.
    https://forum.doom9.org/showthread.php?p=1899452#post1899452

    None of it is automatic, but it's some thoughts in case any of it helps.


    Edit: Do DSLRs record at a constant frame rate? If not, you'll probably have to decode at a constant frame rate. To decode a source using FFMS2 at a constant frame rate you'd do this (decoding at 23.976fps in this case); FFMS2 will add or drop frames as required.

    Video1 = FFMS2("D:\SomeVideo.mkv", FPSNum=24000, FPSDen=1001)

    If the frame rates are constant but not quite the same, you can use AssumeFPS() to adjust them if need be. It only changes the frame rate, not the frame count, i.e.:

    Video1 = Video1.AssumeFPS(24000,1001).Trim(50,0).Position()
  5. In AviSynth you can use an audio waveform display filter to visualize the audio. Then stack two videos vertically to figure out the alignment. Waveform(width=5, height=120) of one video:

    [Attachment 52162: screenshot of the Waveform() display for one video]


    Here the audio of the current frame is in the center and 5 frames on either side are displayed.
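
    To compare two sources, a rough sketch along the same lines (reusing the Waveform() call above, with Video1 and Video2 standing in for the two sources; the parameters may need adjusting):

    # Overlay the audio waveform on each source, then stack them vertically so
    # the peaks can be compared frame by frame while scrubbing.
    V1 = Video1.Waveform(width=5, height=120)
    V2 = Video2.Waveform(width=5, height=120)
    StackVertical(V1, V2)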


