VideoHelp Forum
Results 1 to 18 of 18
  1. Member (United States, joined Nov 2005)
    If I have something that lasts 2.77 seconds say, when it gets converted into frames and if I then loop that segment, the result is off by some amount in time.

    Should I keep a running total of the dropped fractional parts and add or subtract them periodically, or is there a simpler approach that people usually use?

    In other words

    2.77 seconds at 60fps is 166.2 frames, which gets truncated to 166 frames. Should I keep track of leftover parts like the ".2" and add an extra frame whenever they add up to more than 1?
  2. Member (Australia, joined Sep 2012)
    How the hell are you getting .2 of a frame? Is this audio delay or some kind of weird frame rate conversion?
  3. Member Cornucopia (Deep in the Heart of Texas, joined Oct 2001)
    First, what is this thing that lasts for 2.77 seconds that is video-related? Why is it not an exact number of frames? Are you doing some scientific visualization in MATLAB or something?

    Scott
  4. Member (United States, joined Nov 2005)
    I am matching video with sound. I guess maybe I can change the sound to match the framerate. Is that what people usually do?

    I get fractional frames because the audio doesn't fit perfectly into the number of frames.
  5. Member (Australia, joined Sep 2012)
    https://forum.videohelp.com/threads/368301-AviSynth-Animate-syntax

    Append the things into an MKV or MP4 (UT Codec if you must) then let FFVideoSource or DirectShowSource (or whatever) convert the frame-rate to constant.
  6. Member (United States, joined Nov 2005)
    So, for example, if something is off by .2 frames within less than 3 seconds, and that happened over and over, it would be off by about 4 frames within a minute (or in any case by an unacceptable amount). If there are many files appended, that doesn't solve the problem, right?

    I guess either someone painstakingly goes through and corrects things by duplicating individual frames, or you keep track of the fractional parts that are lost and duplicate a frame every time the fractions add up to a whole frame.
  7. What is the audio content?

    Can you time-compress the audio to 2.75 seconds in an audio editor? (Or with AviSynth's TimeStretch(), but you might get better results in an audio editor.)

    60fps * 2.75s = 165 frames

    2.75 / 2.77 ≈ 99.278%, i.e. the audio would be sped up by about 0.72%.
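A quick check of the arithmetic above (Python, illustrative):

```python
fps = 60
target_seconds = 165 / fps       # 165 whole frames at 60 fps -> 2.75 s
stretch = target_seconds / 2.77  # factor to time-compress the 2.77 s audio
print(f"target = {target_seconds} s, stretch ratio = {stretch:.5%}")
```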



    What is the video content? What relationship does it have with the audio? How is sync involved? (Is this speech, music, something else?)
  8. Member Cornucopia (Deep in the Heart of Texas, joined Oct 2001)
    It is easy to use audio TC/E (time compression/expansion), or loop, or add silence, or clip out a section, or a combination of all of those, in order to get the audio and video to line up EXACTLY. And speaking as an audio engineer: yes, that is what people usually do, adjust the audio to match the video. But as pdr mentioned, it depends on the sound.

    Regardless, if you are working with video, the smallest indivisible unit of time is the frame/field. If one is using audio WITH video, one should be prepared to do what is necessary for the audio to fit the video, seeing as the audio has a much finer timebase and so is more "malleable".

    Scott
  9. Member (Australia, joined Sep 2012)
    Originally Posted by cheyrn View Post
    So, for example, if something is off by .2 frames within less than 3 seconds, and that happened over and over, it would be off by about 4 frames within a minute (or in any case by an unacceptable amount). If there are many files appended, that doesn't solve the problem, right?

    I guess either someone painstakingly goes through and corrects things by duplicating individual frames, or you keep track of the fractional parts that are lost and duplicate a frame every time the fractions add up to a whole frame.
    If you convert the videos to variable frame rate UT and the audio to PCM, then (assuming the audio and video were the same length in the first place, which is unlikely, I know) once converted to constant frame rate the video and audio will remain in sync permanently and no audio will be lost. As for the case of the video and audio being different lengths, that's not what the question you're asking is about.

    I THINK if I save a video using AviSynth and VirtualDub into a UT/PCM AVI, what I get on the other end is a video which has audio of the same length. Do that to all your videos, then append them to a VFR MKV, load them into AviSynth using DirectShowSource while converting to CFR and, unless I'm sorely mistaken, that's the problem solved.

    -Edit- DirectShowSource("f:\Video.mkv", fps = 25.000, convertfps = true)
    Last edited by ndjamena; 12th Nov 2014 at 07:58.
  10. Originally Posted by cheyrn View Post
    I am matching video with sound. I guess maybe I can change the sound to match the framerate. Is that what people usually do?

    I get fractional frames because the audio doesn't fit perfectly into the number of frames.
    Provide the sample rate and number of samples for the audio, and the frame rate and number of frames for the video.

    Usually the audio is resampled to match the video, as humans are less sensitive to changes in absolute frequency.
  11. Originally Posted by cheyrn View Post
    If I have something that lasts 2.77 seconds say, when it gets converted into frames and if I then loop that segment, the result is off by some amount in time.

    Should I keep a running total of the dropped fractional parts and add or subtract them periodically, or is there a simpler approach that people usually use?

    In other words

    2.77 seconds at 60fps is 166.2 frames, which gets truncated to 166 frames. Should I keep track of leftover parts like the ".2" and add an extra frame whenever they add up to more than 1?
    Why can't you adjust the frame rate a little by adding the following to your AviSynth script?

    AssumeFPS(16600,277)

    That'll adjust the frame rate to 59.9277978fps and 2.77 seconds should work out to 166 frames exactly.
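That suggestion checks out with exact rational arithmetic (Python sketch):

```python
from fractions import Fraction

fps = Fraction(16600, 277)      # the rate set by AssumeFPS(16600, 277)
duration = Fraction(277, 100)   # 2.77 seconds, held exactly
frames = duration * fps         # (277/100) * (16600/277) = 16600/100 = 166
print(float(fps), frames)       # ~59.9277978 fps, exactly 166 frames
```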
  12. Member (United States, joined Nov 2005)
    I don't think the DirectShowSource variable-rate idea would address the problem, because there isn't a consistent difference in length that a program would know about. I mean, the program won't know that I intended the (166 × n)th frame to be 1 frame later.

    I think the AssumeFPS ratio idea makes sense. The video will get converted to a standard framerate but I think that means that the fractional amounts will automatically get distributed across the video. Is that right?

    In other words, ChangeFPS(<entire video at 59.9277978fps>, 30) would automatically drop or duplicate frames across the entire video, right?

    And the audio compression idea seems like it would fix things in a similar way.

    So, one of those seems like the solution. Thanks.

    What this is about is I am making a music video (it's embarrassing to write those words for some reason). I am taking clips of characters from a video game (fraps) and changing their speed to resemble human movement. That's why I have been mentioning tiny numbers of frames.
  13. Originally Posted by cheyrn View Post


    I think the AssumeFPS ratio idea makes sense. The video will get converted to a standard framerate but I think that means that the fractional amounts will automatically get distributed across the video. Is that right?

    Yes. It will be converted to the framerate you specify. It doesn't add or drop frames. It plays the video back faster or slower. So if it was in sync before with the "0.2" frame overhang, it will become out of sync if you use AssumeFPS alone.


    In other words, ChangeFPS(<entire video at 59.9277978fps>, 30) would automatically drop or duplicate frames across the entire video, right?
    If the source FPS is 59.9277978 (e.g. you used AssumeFPS on it beforehand) and you use ChangeFPS(30), then yes, it will add or delete frames across the entire video to reach the new specified FPS.
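Conceptually, that add/drop behavior can be modeled like this (a Python sketch of the idea, not AviSynth's exact rounding rules):

```python
def change_fps_map(src_frames, src_fps, dst_fps):
    """For each output frame, pick the source frame that is showing at
    that output frame's start time. Source frames get duplicated when
    dst_fps > src_fps and dropped when dst_fps < src_fps."""
    duration = src_frames / src_fps
    dst_frames = round(duration * dst_fps)
    return [min(int(i * src_fps / dst_fps), src_frames - 1)
            for i in range(dst_frames)]

# A 166-frame segment at 16600/277 fps (2.77 s) mapped down to 30 fps:
# roughly every other source frame is dropped.
mapping = change_fps_map(166, 16600 / 277, 30)
print(len(mapping), mapping[:6])
```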




    What this is about is I am making a music video (it's embarrassing to write those words for some reason). I am taking clips of characters from a video game (fraps) and changing their speed to resemble human movement. That's why I have been mentioning tiny numbers of frames.



    Why would it be embarrassing to make a music video? Is MTV not "cool" anymore?

    Something like this is typically done in an NLE. You have many more options using a video editor: you can edit the audio, the video, or both, and get realtime feedback. Also, there are many different ways you could "edit" just to achieve sync, and you can make various types of "creative edits" to cover up problems. This is not something you can do in AviSynth very easily or quickly.

    The typical music video scenario is a bit different. I would hesitate to alter the audio unless there were no other options, because the "music" is supposed to be the focus of attention, but your project might be intended for other reasons.
  14. Member Cornucopia (Deep in the Heart of Texas, joined Oct 2001)
    Finally the whole answer to the Q I posed in #3!
    This tells me that A) you can adjust the framerate to fall within a range, because "human movement" is not just one thing, so whole frame counts could be accommodated; B) your original audio is not used unadjusted anyway, so you could adjust the audio without too much worry; and C) you are making this harder than it really needs to be.

    Scott
  15. Member (Australia, joined Sep 2012)
    Originally Posted by cheyrn View Post
    I don't think the DirectShowSource variable-rate idea would address the problem, because there isn't a consistent difference in length that a program would know about. I mean, the program won't know that I intended the (166 × n)th frame to be 1 frame later.
    I can't tell if I'm not getting it or you're not getting it.

    If you need the video length to match the audio length EXACTLY, then set the video frame-rate of each segment to whatever it takes to achieve that end. Once you append the videos and convert the complete file to constant frame rate, the worst that can happen is that a frame will be off by LESS than a frame, but there's nothing you can do about that, and since anyone watching would need to be superman to notice, there's no point in worrying about it.

    Convert the audio into PCM, cut it into whatever sections you need, convert the video frame rates to match lengths with the audio, mux the video with the audio, append all the files with all the differing frame rates, then reload it with AVISynth while converting it to constant.

    What you're doing is going to be as fiddly as hell, and attempting to accomplish it within AVISynth alone will be a nightmare.

    -Edit- If your intended destination is NTSC, then how the hell do you expect to get .2 of a frame from anything? You must be rounding.
    Last edited by ndjamena; 12th Nov 2014 at 14:33.
  16. Originally Posted by ndjamena View Post

    If you need the video length to match the audio length EXACTLY, then set the video frame-rate of each segment to whatever it takes to achieve that end. Once you append the videos and convert the complete file to constant frame rate, the worst that can happen is that a frame will be off by LESS than a frame, but there's nothing you can do about that, and since anyone watching would need to be superman to notice, there's no point in worrying about it.
    I think he's worried about looping and their cumulative effects. I'm guessing he's looping some video game animations for parts of the video.

    I'm going to take a wild guess that is what his animate() thread was about as well for remapping the frame ranges



    What you're doing is going to be as fiddly as hell, and attempting to accomplish it within AVISynth alone will be a nightmare.

    That would be my "take home" message.

    It's one thing to learn AviSynth for fun (and AviSynth is awesome for some tasks) but it's pure pain trying to do this type of work in AviSynth. It will easily be 20-50x faster in an NLE, and that's being conservative.
  17. Member (United States, joined Nov 2005)
    Actually, I started doing this in VirtualDub, but it's so much work. Being able to do things with a program is very much easier for me anyway. So, essentially, I find where things are in Reaper and in VirtualDub, then instead of doing very, very slow manipulation by hand, some of it is done with AviSynth. So, it's using a bit of both.

    A very important issue for me is that the slowness of doing it manually makes it certain that I will not do certain things that I would like to, because it makes me feel tired thinking about the work. I end up convincing myself that I don't really want to do it after all.

    The Animate thing was because I want something to interpolate ranges, so that changes happen non-linearly.

    So, I am working on using spline, since I don't see how to get the interpolation out of animate.

    Using the variable rate directshow thing might be something I need to look at, but the hard part is matching up the audio and video which are independent of each other and at different speeds. It's not going to help with that, right? It's sort of like the joke, how can you live a fulfilling life as a millionaire? First, get a million dollars, then....

    Originally Posted by poisondeathray View Post
    Yes. It will be converted to the framerate you specify. It doesn't add or drop frames. It plays the video back faster or slower. So if it was in sync before with the "0.2" frame overhang, it will become out of sync if you use AssumeFPS alone.
    It becomes out of sync with the overhang. Assuming for the moment that the sound is at a constant rate, would this adjustment of the video's speed be constant? So, if the sound is consistently chunked at 2.77 seconds, then using 16600/277 fps would keep them in sync?
  18. Originally Posted by cheyrn View Post

    A very important issue for me is that the slowness of doing it manually makes it certain that I will not do certain things that I would like to, because it makes me feel tired thinking about the work. I end up convincing myself that I don't really want to do it after all.
    On the other hand, the menial repetitive tasks of syncing take away from time that could be used doing a better creative job on this project, or on vacation (or at least on other things). Trust me when I say this type of work is much, much easier in an NLE. What you're trying to do with this music video is like using a pen to hammer in nails: wrong tool.


    The Animate thing was because I want something to interpolate ranges, so that changes happen non-linearly.
    I assumed it was just a thought experiment, but a solution was given with RemapFrames, and jagabo wrote a function as well. Either way, these types of changes are easier in an NLE; you even have a choice of different types of interpolation.

    So, I am working on using spline, since I don't see how to get the interpolation out of animate.
    Spline? Not sure what you're referring to.



    Using the variable rate directshow thing might be something I need to look at, but the hard part is matching up the audio and video which are independent of each other and at different speeds. It's not going to help with that, right? It's sort of like the joke, how can you live a fulfilling life as a millionaire? First, get a million dollars, then....
    VFR, the way it's commonly used, is different. The audio is the same, but timecodes control the display time of frames. Some might display longer or shorter than others. It's more commonly used as a bandwidth-saving measure; e.g. a static scene might have a very low frame rate, 1 fps or even fractional. It saves bandwidth to encode 1 frame vs., say, 300 frames (maybe a 10-second 30 fps shot for a CFR stream).
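That timecode mechanism can be illustrated with a small sketch that generates timestamps in the common "timecodes v2" style (one display time in milliseconds per frame; the segment layout here is made up for illustration):

```python
def vfr_timecodes(segments):
    """segments: list of (frame_count, fps) pairs. Returns one display
    timestamp in ms per frame, as a timecodes-v2 file would list them."""
    t_ms, stamps = 0.0, []
    for count, fps in segments:
        for _ in range(count):
            stamps.append(round(t_ms, 3))  # this frame's display time
            t_ms += 1000.0 / fps           # next frame starts 1/fps later
    return stamps

# A 2-second static shot held at 1 fps, then 1 second of motion at 30 fps:
stamps = vfr_timecodes([(2, 1), (30, 30)])
print(len(stamps), stamps[:4])
```

Only 32 frames are stored for 3 seconds of video, and the audio track is untouched; that is why VFR saves bandwidth rather than fixing sync.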


    Originally Posted by poisondeathray View Post
    Yes. It will be converted to the framerate you specify. It doesn't add or drop frames. It plays the video back faster or slower. So if it was in sync before with the "0.2" frame overhang, it will become out of sync if you use AssumeFPS alone.
    It becomes out of sync with the overhang. Assuming for the moment that the sound is at a constant rate, would this adjustment of the video's speed be constant? So, if the sound is consistently chunked at 2.77 seconds, then using 16600/277 fps would keep them in sync?
    Be more specific: was the single 2.77 s section already out of sync, or out of sync due to looping (appending segments end to end)?

    But think about what you're asking. It's impossible to answer your question without more information. Best guess is no.

    Just because the audio and video lengths match up perfectly does not mean they will sync up.

    You need to describe what is being synced. E.g. let's say they are drum beats. It would be sheer luck if, having matched the lengths of the video and audio, a random animation synced up with some random piece of music. You probably have a higher chance of winning the lottery.