If I have something that lasts, say, 2.77 seconds, when it gets converted into frames and I then loop that segment, the result is off by some amount in time.
Should I try to keep a running total of the dropped fractional parts and add or subtract them periodically, or is there a simpler approach that people usually take?
In other words:
2.77 seconds at 60fps is 166.2 frames, which gets truncated to 166 frames. Do I keep track of the leftover ".2" and add a frame whenever the fractions sum to more than 1?
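The "keep track of the leftover .2" idea can be sketched without accumulating floating-point error by rounding the cumulative frame position instead of each segment independently (a minimal sketch; `segment_frame_counts` is a made-up helper name, not from any library):

```python
# Sketch of the "carry the .2" idea, done via cumulative rounding so the
# running total can never drift by more than half a frame.
def segment_frame_counts(seconds, fps, n_segments):
    counts = []
    for k in range(1, n_segments + 1):
        total_now = round(k * seconds * fps)        # ideal frames after k loops
        total_before = round((k - 1) * seconds * fps)
        counts.append(total_now - total_before)     # frames given to loop k
    return counts

# 2.77 s at 60 fps is 166.2 frames; over five loops the ideal total is
# 831 frames, so one loop gets 167 frames and the other four get 166.
print(segment_frame_counts(2.77, 60, 5))
```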
How the hell are you getting .2 of a frame? Is this audio delay or some kind of weird frame rate conversion?
-
First, what is this thing that lasts for 2.77 seconds that is video-related? Why is it not an exact # of frames? - are you doing some scientific visualization in Matlab or something?
Scott -
I am matching video with sound. I guess maybe I can change the sound to match the framerate. Is that what people usually do?
I get fractional frames because the audio doesn't fit perfectly into the number of frames. -
https://forum.videohelp.com/threads/368301-AviSynth-Animate-syntax
Append the things into an MKV or MP4 (UT Codec if you must) then let FFVideoSource or DirectShowSource (or whatever) convert the frame-rate to constant. -
So, for example, if something is off by 0.2 frames within less than 3 seconds, and that happened over and over, it would be off by about 4 frames within a minute, and eventually by an unacceptable amount. If there are many files appended, it doesn't solve the problem, right?
I guess either someone painstakingly goes through and corrects things by duplicating individual frames, or you keep track of the fractional parts that are lost and duplicate a frame every time the fractions add up to a whole frame. -
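To put numbers on the cumulative drift, a quick sketch using exact rational arithmetic (the 100-loop count is an arbitrary assumption for illustration):

```python
from fractions import Fraction

fps = 60
seg = Fraction(277, 100)            # 2.77 s as an exact rational
loops = 100
ideal_frames = seg * fps * loops    # exactly 16620 frames

# Truncating every loop independently throws away 0.2 frames each time:
per_loop = int(seg * fps)                        # 166 frames per loop
drift_frames = ideal_frames - per_loop * loops   # 20 frames after 100 loops
drift_seconds = drift_frames / fps               # a third of a second

# Carrying the fractions (round the cumulative total, emit the difference)
# telescopes back to the ideal total, so nothing is lost overall:
emitted = sum(round(k * seg * fps) - round((k - 1) * seg * fps)
              for k in range(1, loops + 1))
```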
What is the audio content?
Can you time compress the audio to 2.75 seconds in an audio editor ? (or avisynth timestretch() , but you might get better results in audio editor )
60fps * 2.75s = 165 frames
2.75/2.77 = 99.277978%
What is the video content? What relationship does it have with the audio? How is sync involved? (Is this speech, music, something else?) -
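The calculation above, generalized: for any candidate whole-frame length, the required audio stretch ratio is just frames/fps divided by the original audio length. A small check (165 frames is the post's pick; 166 is the nearest whole-frame target and needs an even smaller stretch):

```python
fps, audio = 60, 2.77   # numbers from the thread

for frames in (165, 166, 167):
    video = frames / fps          # segment length if it runs exactly `frames`
    ratio = video / audio         # time-stretch factor to apply to the audio
    print(f"{frames} frames -> {video:.4f} s, stretch audio to {ratio:.4%}")
```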
It is easy to use audio TC/E, or loop or add silence or clip out a section, or a combination of all those in order to get EXACT audio and video. And speaking as an audio engineer, yes, that is what people usually do, adjust the audio to match video. But as pdr mentioned, it depends on the sound.
Regardless, if you are working with video, the smallest indivisible unit of time is the frame/field. If one is using audio WITH video, one should be prepared to do what is necessary for the audio to fit with the video, seeing as how the audio has a much finer timebase and so is more "malleable".
Scott -
If you convert the videos to a variable frame rate UT and convert the audio to PCM, assuming the audio and video were the same length in the first place (which is unlikely, I know) when converted to constant frame rate the video and audio will remain in sync permanently and no audio will be lost. As for the case of the video and audio being of different length, that's not what the question you're asking is about.
I THINK if I save a video using AVISynth and VirtualDub into a UT/PCM AVI, what I get on the other end is a video which has audio of the same length. Do that to all your videos, then append them to a VFR MKV, load them into AVISynth using DirectShowSource while converting to CFR and, unless I'm sorely mistaken, that's the problem solved.
-Edit- DirectShowSource("f:\Video.mkv", fps = 25.000, convertfps = true)
Last edited by ndjamena; 12th Nov 2014 at 07:58.
-
I don't think the DirectShowSource variable rate idea would address the problem, because there isn't a consistent difference in length that a program would know about. I mean, the program won't know that I intended for the 166 x nth frame to be 1 frame later.
I think the AssumeFPS ratio idea makes sense. The video will get converted to a standard framerate but I think that means that the fractional amounts will automatically get distributed across the video. Is that right?
In other words, ChangeFPS(<entire video at 59.9277978fps>, 30) would automatically drop or duplicate frames across the entire video, right?
And the audio compression idea seems like it would fix things in a similar way.
So, one of those seems like the solution. Thanks.
What this is about is I am making a music video (it's embarrassing to write those words for some reason). I am taking clips of characters from a video game (fraps) and changing their speed to resemble human movement. That's why I have been mentioning tiny numbers of frames. -
Yes. It will be converted to the framerate you specify. It doesn't add or drop frames. It plays the video back faster or slower. So if it was in sync before with the "0.2" frame overhang, it will become out of sync if you use AssumeFPS alone.
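A toy model of the difference between the two calls (an approximation of what the AviSynth filters do, not their exact implementation: `assume_fps` only relabels the clock, `change_fps` resamples by source-frame time):

```python
def assume_fps(n_frames, new_fps):
    """AssumeFPS-style: same frames, new clock -> the duration changes."""
    return n_frames, n_frames / new_fps          # (frame count, new length s)

def change_fps(n_frames, old_fps, new_fps):
    """ChangeFPS-style: for each output time, take the source frame active
    then, duplicating or dropping frames -> the duration is preserved."""
    duration = n_frames / old_fps
    out_n = round(duration * new_fps)
    mapping = [min(int(i * old_fps / new_fps), n_frames - 1)
               for i in range(out_n)]            # source index per output frame
    return mapping, out_n / new_fps

# 166 frames labelled 59.9277978 fps last exactly 2.77 s; converting to a
# constant 30 fps keeps the length and spreads the drops across the clip.
mapping, length = change_fps(166, 166 / 2.77, 30)
```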
> In other words, ChangeFPS(<entire video at 59.9277978fps>, 30) would automatically drop or duplicate frames across the entire video, right?
> What this is about is I am making a music video (it's embarrassing to write those words for some reason). I am taking clips of characters from a video game (fraps) and changing their speed to resemble human movement. That's why I have been mentioning tiny numbers of frames.
Why would it be embarrassing to make a music video? Is MTV not "cool" anymore?
Something like this is typically done in an NLE. You have many more options using a video editor - you can edit either audio, video, or both and get realtime feedback. Also, there are many different ways/methods you could "edit" just for achieving sync, and you have the ability to do various types of "creative edits" to cover up problems. This is not something that you can do in avisynth very easily or quickly.
The typical music video scenario is a bit different. I would hesitate to alter the audio, unless there were no other options, because the "music" is supposed to be the focus of attention, but your project might be intended for other reasons -
Finally the whole answer to the Q I posed in #3!
This tells me that you can A) adjust the framerate to fall within a range, because "human movement" is not just one thing so whole frame #s could be accommodated, B) your original audio is not used unadjusted anyway so you could adjust the audio without too much worry, and C) you are making this harder than it really needs to be.
Scott -
I can't tell if I'm not getting it or you're not getting it.
If you need the video length to match the audio length EXACTLY, then set the video frame rate of each segment to whatever it takes to achieve that end. Once you append the videos and convert the complete file to constant frame rate, the worst that can happen is that sync will be off by LESS than a frame, but there's nothing you can do about that, and since anyone watching would need to be a superman to notice, there's no point in worrying about it.
Convert the audio into PCM, cut it into whatever sections you need, convert the video frame rates to match lengths with the audio, mux the video with the audio, append all the files with all the differing frame rates, then reload it with AVISynth while converting it to constant.
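The per-segment arithmetic in that recipe can be sketched as follows (the segment list is made up for illustration; in AviSynth terms each segment would get an AssumeFPS call with the computed rate):

```python
# Each (frames, audio_seconds) pair below is a made-up example segment.
# Giving segment i the rate frames_i / audio_len_i makes its video length
# exactly equal its audio length, so appended segments stay in step.
segments = [(166, 2.77), (90, 1.50), (245, 4.05)]

rates = []
for frames, audio_len in segments:
    fps = frames / audio_len        # e.g. 166 / 2.77 = 59.9277978... fps
    rates.append(fps)
    assert abs(frames / fps - audio_len) < 1e-9   # video length == audio length
```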
What you're doing is going to be as fiddly as hell, and attempting to accomplish it within AVISynth alone will be a nightmare.
-Edit- If your intended destination is NTSC, then how the hell do you expect to get .2 of a frame from anything? You must be rounding.
Last edited by ndjamena; 12th Nov 2014 at 14:33.
-
I think he's worried about looping and their cumulative effects. I'm guessing he's looping some video game animations for parts of the video.
I'm going to take a wild guess that that's also what his Animate() thread was about: remapping the frame ranges.
> What you're doing is going to be as fiddly as hell, and attempting to accomplish it within AVISynth alone will be a nightmare.
That would be my "take home" message.
It's one thing to learn about avisynth for fun - and avisynth is awesome for some tasks - but it's pure pain trying to do this type of work in avisynth. It will be easily 20-50x faster in an NLE (and that's being conservative) -
Actually, I started doing this in virtualdub, but it's so much work. Being able to do things programmatically is much easier for me anyway. So, essentially, I find where things are in Reaper and in Virtualdub, then instead of doing very, very slow manipulation by hand, some of it is done with avisynth. So, it's using a bit of both.
A very important issue for me is that the slowness of doing it manually makes it certain that I will not do certain things that I would like to, because it makes me feel tired thinking about the work. I end up convincing myself that I don't really want to do it after all.
The Animate thing was because I want something to interpolate ranges, so that changes happen non-linearly.
So, I am working on using spline, since I don't see how to get the interpolation out of animate.
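One common way to get that non-linear interpolation can be sketched like this (a hedged sketch, not AviSynth's own mechanism: `smoothstep` is the standard cubic ease, standing in for whatever spline you settle on, and `animate_value` is a made-up helper; Animate() itself only interpolates linearly between its two argument sets):

```python
def smoothstep(t):
    """Cubic ease-in/ease-out: 0 -> 0, 1 -> 1, zero slope at both ends."""
    return t * t * (3.0 - 2.0 * t)

def animate_value(frame, first, last, start_val, end_val, ease=smoothstep):
    """Linear progress through [first, last], bent through an easing curve
    before interpolating the parameter -- a non-linear Animate()."""
    t = (frame - first) / (last - first)
    t = min(max(t, 0.0), 1.0)          # clamp outside the range
    return start_val + (end_val - start_val) * ease(t)
```

A per-frame value like this could then be baked into a generated script, or applied frame by frame from a runtime filter.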
Using the variable rate directshow thing might be something I need to look at, but the hard part is matching up the audio and video which are independent of each other and at different speeds. It's not going to help with that, right? It's sort of like the joke, how can you live a fulfilling life as a millionaire? First, get a million dollars, then....
It becomes out of sync with the overhang. Assuming for the moment that the sound is at a constant rate, this adjusting of the speed of the video would be constant? So, if the sound is consistently chunked at 2.77 seconds, then using 16600/277 fps would make them be in sync? -
On the other hand, the menial, repetitive tasks of syncing take away from time that could be used doing a better creative job on this project, or on vacation (or at least on other things). Trust me when I say this type of work is much, much easier in an NLE. What you're trying to do with this music video is like using a pen to hammer in nails - wrong tool.
> The Animate thing was because I want something to interpolate ranges, so that changes happen non-linearly.
> So, I am working on using spline, since I don't see how to get the interpolation out of animate.
> Using the variable rate directshow thing might be something I need to look at, but the hard part is matching up the audio and video which are independent of each other and at different speeds. It's not going to help with that, right? It's sort of like the joke, how can you live a fulfilling life as a millionaire? First, get a million dollars, then....
> It becomes out of sync with the overhang. Assuming for the moment that the sound is at a constant rate, this adjusting of the speed of the video would be constant? So, if the sound is consistently chunked at 2.77 seconds, then using 16600/277 fps would make them be in sync?
But think about what you're asking - it's impossible to answer your question without more information. Best guess is no.
Just because the audio and video lengths match up perfectly does not mean they will sync up.
You need to describe what is being synced, e.g. let's say they are drum beats. It would be sheer luck if you matched the length of the video and audio and a random animation synced up to some random music piece. You probably have a higher chance of winning the lottery.