I was wrong, you can use recursive functions in AviSynth. Here's my latest using Dissolve():
It needs the fourth argument at the end to use as a working variable. It should always be 2 (basically, it acts as a recursion depth counter).

Code:
function NonlinearDissolveRecursive(clip v1, clip v2, int count, int width) {
    Dissolve(v1, v2, width)
    count = count - 2
    width = width + 2
    count <= 0 ? last : NonlinearDissolveRecursive(Trim(last, 0, v1.FrameCount-1), Trim(last, v1.FrameCount-width+2, 0), count, width)
}

ColorBars().Info()
v1 = Trim(0, 29)
l_ch = GetChannel(v1, 1)
r_ch = GetChannel(v1, 2)
stereo = MergeChannels(r_ch, l_ch)
v2 = AudioDub(v1, stereo)  # i.e., swap audio channels in v2
v2 = FlipVertical(v2)
NonlinearDissolveRecursive(v1, v2, 16, 2)  # equivalent to Dissolve(v1, v2, 16) in terms of frame count

To get a normal video dissolve with the steeper audio dissolve you can use:

Code:
AudioDub(Dissolve(v1, v2, 16), NonlinearDissolveRecursive(v1, v2, 16, 2))
-
Anonymous344 (Guest)
Yeah, I see what you mean. That would be a problem. I'm not sure that anything can be done about it with Avisynth as it currently is.
The effect of either of your two alternatives on the video is irrelevant to me, though, as I customarily have the video in one "track" and the same video dubbed to the audio stacked above it with StackVertical(). This way I can use the dissolve function to work on the audio and see where the dissolve is by its effect on the video with which it is dubbed. Then I perform straight cuts on the video-only track below, at suitable points, and check that it is in sync at the next shot change. Ultimately, I render out the audio in VirtualDub, return the video-only stream to the encoder, and then mux the output.
-
Anonymous344 (Guest)
In response to my questions on the subject, IanB sent me this information, which I'll post here in case it is useful.
Originally Posted by IanB
Code:
lap = 25  # number of frames overlap

Clip1 = ...Source(...)
Clip2 = ...Source(...)
Fc1 = FrameCount(Clip1)

P1 = Clip1.Trim(0, Fc1-lap-1)  # Start piece of clip 1
P2 = Clip1.Trim(Fc1-lap, 0)    # End piece of clip 1 to overlap with clip 2
P3 = Clip2.Trim(0, lap-1)      # Start piece of clip 2 to overlap with clip 1
P4 = Clip2.Trim(lap, 0)        # End piece of clip 2

V = P1 + Dissolve(P2, P3, lap) + P4

P2e = P2.DoTrailingEnvelope(lap)  # Fade-out processing
P3e = P3.DoLeadingEnvelope(lap)   # Fade-in processing
A = P1 + MixAudio(P2e, P3e, 1.0, 1.0) + P4

AudioDub(V, A)
-
Unfortunately, I no longer have time to visit this forum regularly, but I came across this thread following a reference on Doom9 by the OP (Jeff B), and have a few comments.
The real problem isn't MixAudio() - it's that Animate() does not support audio processing filters at all.
The audio you actually get at each point is (for practical purposes) unpredictable, hence the strange behaviour on seeking.
Yes, Avisynth functions have always supported recursion. The most common use (as here) is to perform some kind of iteration.
A user-friendly alternative (for those who find recursion difficult to grasp) is to use the 'for' or 'while' loop extensions provided by the GScript plugin.
In Avisynth 2.60, you can use AudioTrim() to trim by time rather than frames.
In principle, this would allow you to cut the track into small enough sections such that the steps between them after processing would not be noticeable. The cutting and re-splicing could be done iteratively using a recursive function or a GScript loop.
However, in practice, you might need a very large number of sections to get decent results, possibly hitting memory limits (since each instance of MixAudio has its own audio buffer).
I can't see what you were trying to do here. Perhaps your idea was sound, but the execution of it is definitely flawed as each output frame is a blend (in both video and audio) of several frames from each clip, rather than a blend of the single corresponding frames from each. You can see this if you add ShowFrameNumber(scroll=true) after the ColorBars() source filter (a useful tip for testing this sort of thing).
Possibly more attractive is a parabolic fade (the same as provided by Sox, see below), which can be done by
Code:
MixAudio(FadeIn0(n), FadeIn0(n).FadeIn0(n), 2, -1)     # parabolic fade-in over n frames
MixAudio(FadeOut0(n), FadeOut0(n).FadeOut0(n), 2, -1)  # parabolic fade-out over n frames
Code:
MixAudio(Dissolve(a,b,n), Dissolve(a.FadeOut0(n), b.FadeIn0(n), n), 2, -1)  # parabolic cross-fade from a to b over n frames
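To see where the parabola comes from, the arithmetic behind that MixAudio() combination can be checked outside AviSynth. Here is a minimal Python sketch (not AviSynth script), under the assumption that Dissolve() and FadeOut0()/FadeIn0() apply linear gain ramps across the overlap:

```python
# Model the overlap as a position t in [0, 1], where a linear fade-out
# has gain (1 - t) and a linear fade-in has gain t. Then:
#   Dissolve(a, b, n)                         -> a*(1-t)   + b*t
#   Dissolve(a.FadeOut0(n), b.FadeIn0(n), n)  -> a*(1-t)^2 + b*t^2
# MixAudio(d, df, 2, -1) computes 2*d - df, so the effective gains are:
#   on a: 2*(1-t) - (1-t)^2 = 1 - t^2      (parabolic fade-out)
#   on b: 2*t     - t^2     = 1 - (1-t)^2  (parabolic fade-in)

def parabolic_gains(t):
    """Effective per-clip gains produced by MixAudio(d, df, 2, -1)."""
    d = (1.0 - t, t)               # gains from the plain Dissolve
    df = ((1.0 - t) ** 2, t ** 2)  # gains from the pre-faded Dissolve
    return (2 * d[0] - df[0], 2 * d[1] - df[1])

# The endpoints pass each clip through untouched...
assert parabolic_gains(0.0) == (1.0, 0.0)
assert parabolic_gains(1.0) == (0.0, 1.0)
# ...and at the midpoint each clip sits at gain 0.75, not 0.5.
assert parabolic_gains(0.5) == (0.75, 0.75)
```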
There is a SoxFilter plugin which allows you to run Sox effects within AviSynth. Among the effects supported are various types of fades (linear, logarithmic, quarter of a sinewave, half a sinewave, and inverted parabola). For example, this will produce a parabolic fade-in over 1 second:
Code:
SoxFilter("fade p 1.0")
-
Anonymous344 (Guest)
Thanks for your interest and comments, Gavino.
I noticed this new feature, but would actually prefer to input values by frames rather than by time, as I find frames easier to envisage. Moreover, as I mentioned above, I have to have the audio dubbed to the video to reassure me that the two are still in sync and so that crossfades manifest themselves visually. This is why I have used Dissolve() until now, despite its limitations.
Originally Posted by Gavino
There is a SoxFilter plugin which allows you to run Sox effects within AviSynth. Among the effects supported are various types of fades (linear, logarithmic, quarter of a sinewave, half a sinewave, and inverted parabola). For example, this will produce a parabolic fade-in over 1 second:
Code:
SoxFilter("fade p 1.0")
-
Not directly. Time can be specified in seconds, including decimal fractions of a second (+ hours and minutes if necessary), or in audio samples. However, it is simple to convert a frame number into seconds by dividing by the frame rate (a clip property). The example I gave could be changed to a fade of n frames by:
Code:
secs = n / Framerate()
SoxFilter("fade p " + String(secs))
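As a sanity check of the frames-to-seconds arithmetic, here is the same conversion as a small Python sketch (the 25 fps frame rate and 40-frame fade are made-up example values, and sox_fade_arg is a hypothetical helper, not part of SoxFilter):

```python
def sox_fade_arg(frames, fps):
    """Build a Sox 'fade p <seconds>' argument from a length in frames."""
    secs = frames / fps  # hypothetical example; AviSynth's Framerate() plays the role of fps
    return "fade p " + str(secs)

print(sox_fade_arg(40, 25.0))  # -> fade p 1.6
```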
Last edited by Gavino; 4th Nov 2013 at 17:23. Reason: I meant MixAudio(), not MixVideo()
-
Last edited by jagabo; 4th Nov 2013 at 17:32.
-
It only appears to work because you used a static image (ColorBars()).
If you look closely at the overlaid Info() text, specifically at the frame numbers and time stamps, you will see that each frame in the overlap region is a blend of several frames from both clips. For example, frame 21 is a blend of frames 21, 23, 25, 27 and 29 from v1 and frames 1, 3, 5 and 7 from v2. You can see the effect more clearly if you use ShowFrameNumber(scroll=true) instead of Info().
Since the audio is processed in exactly the same way, each sample must similarly contain a mixture from several different points in time from each clip, which is clearly not what you want in a cross-fade.
So there appears to be something wrong in the way you have implemented the function.
Since I don't yet fully understand what you were trying to do, I can't say where the error is.
I'm not sure why the audio (unlike the video) actually does give a smooth fade (as shown by the display). Perhaps it is because you have a fixed-frequency tone being mixed with different phases of itself, producing some kind of interference pattern(?).
-
Yes, I know that, but I guess you missed this part of my post:
Since both video and audio go through similar processes in parallel, the video provides a demonstration of what is being mixed with what at each point in the transition. Since the video is wrong in this respect, the audio must be too.
To put it another way, when you said "the basic idea was to mix several crossfades of different lengths to get a non-linear crossfade", the mixes need to be in sync with each other, and the video shows they are not.
So although the audio display looks 'right', the audio at each point of the overlap region contains contributions from several different points of each clip, which is not what is wanted.
-
Yes. No doubt there are some cases where it will be noticeable. Note, I didn't mean this as an actual solution. Just an attempt to generate some kind of non-linear crossfade with built in AviSynth filters.
I came up with a demonstration of the problem. I created clips with short tone bursts, then ran them through the function.
Instead of the bursts simply decreasing or increasing in amplitude, you can see the overlap. With a proper linear audio crossfade you get the expected result.
Last edited by jagabo; 5th Nov 2013 at 10:55.
-
Interesting test, and quite revealing.
By contrast, I would expect your other version from post #30 to work properly:
Dissolve(FadeOut(v1,16).FadeOut(16).FadeOut(16), FadeIn(v2,16).FadeIn(16).FadeIn(16), 22)
or perhaps better:
Dissolve(FadeOut0(v1,16).FadeOut0(16).FadeOut0(16), FadeIn0(v2,16).FadeIn0(16).FadeIn0(16), 16)
And similarly, my 'parabolic' crossfade from post #34:
MixAudio(Dissolve(v1,v2,16), Dissolve(v1.FadeOut0(16), v2.FadeIn0(16), 16), 2, -1)
This has the property, often wanted in an audio cross-fade, of a slower fall-off at the start of the fade-out and a faster rise at the start of the fade-in, maintaining the overall volume (output power) approximately constant (unlike a linear crossfade, where overall volume is 3 dB lower at the midpoint).
Last edited by Gavino; 5th Nov 2013 at 13:52.
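The 3 dB figure is easy to verify numerically. Assuming the two clips are uncorrelated and at equal level, output power is the sum of the squared gains; a small Python sketch (not AviSynth):

```python
import math

def midpoint_db(g1, g2):
    """Output power in dB (relative to one clip at full level) for two
    uncorrelated sources mixed with gains g1 and g2: powers add as squares."""
    return 10 * math.log10(g1 ** 2 + g2 ** 2)

print(round(midpoint_db(0.5, 0.5), 2))    # linear midpoint: -3.01 dB (the dip)
print(round(midpoint_db(0.75, 0.75), 2))  # parabolic midpoint: +0.51 dB, roughly constant power
```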
-
Anonymous344 (Guest)
Gavino, would it be possible for someone to write a function for your parabolic crossfade that is called like Dissolve() and has a similar effect on the video?
-
Code:
function ParabolicCrossFade(clip c1, clip c2, int n) {
    d = Dissolve(c1, c2, n)
    df = Dissolve(c1.FadeOut0(n), c2.FadeIn0(n), n)
    video = mt_lutxy(d, df, expr="x 2 * y -", U=3, V=3)
    audio = MixAudio(d, df, 2, -1)
    AudioDub(video, audio)
}
EDIT: The above does a parabolic crossfade on both video and audio, which is what I thought you meant by "called like Dissolve() and has a similar effect on the video". Thinking again, you probably in fact wanted the video to behave like Dissolve(). In that case it's even simpler and does not require MaskTools.
Code:
function ParabolicCrossFade(clip c1, clip c2, int n) {
    # alternative version
    d = Dissolve(c1, c2, n)
    df = Dissolve(c1.FadeOut0(n), c2.FadeIn0(n), n)
    audio = MixAudio(d, df, 2, -1)
    AudioDub(d, audio)
}
Last edited by Gavino; 5th Nov 2013 at 17:10. Reason: version with normal dissolve for video
-
Last edited by jagabo; 5th Nov 2013 at 18:14.
-
Anonymous344 (Guest)
Yes. That is what I meant; however, I am not fussy about exactly how the function affects the video, as long as the points at which the audio crossfade is taking place are shown by a visual effect that can be used as a guide. Thank you very much for writing both versions of the function.
Jagabo, thank you for illustrating the parabolic crossfade.
-
I would think the parabolic crossfade could lead to clipping if both tracks were very loud.
-
Yes, that is a danger with this type of crossfade.
At the mid-point, the gain on each component is 0.75 (compared to 0.5 in a linear crossfade), so the overall amplitude could potentially reach 1.5.
Whether you actually pass 1.0 (and hence clip) at any point depends on the overall amplitude of the components, and their degree of correlation.
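That worst case of 1.5 can be confirmed by scanning the summed parabolic gain curve; a small Python sketch (not AviSynth, assuming the linear-ramp fade model used earlier):

```python
# Summed gain of the two parabolic ramps at position t in [0, 1]:
#   (1 - t^2) + (1 - (1-t)^2)  =  1 + 2*t*(1 - t)
def total_gain(t):
    return (1 - t ** 2) + (1 - (1 - t) ** 2)

# Scan the overlap for the worst case.
peak = max(total_gain(i / 1000) for i in range(1001))

# The maximum is 1.5, at the midpoint; so two fully correlated full-scale
# inputs would need pre-attenuation by 1/1.5 (e.g. via Amplify) to be
# guaranteed free of clipping.
assert abs(peak - 1.5) < 1e-9
```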
In cases where this problem occurs, you could reduce the input levels (using Amplify()) before mixing them.
-
Anonymous344 (Guest)
In that case, for the function that Gavino wrote, a way would have to be found to apply Amplify() to the parts undergoing the crossfade, rather than to the whole track.
-
You're right - in general, you wouldn't want to reduce the volume outside the crossfade.
However, applying a fixed reduction over the crossfade will lead to a discontinuity at its ends.
Better would be to adjust the mixing factors inside the function via an extra parameter.
Code:
function ParabolicCrossFade(clip c1, clip c2, int n, float "factor") {
    # third version
    factor = Default(factor, 2.0)
    d = Dissolve(c1, c2, n)
    df = Dissolve(c1.FadeOut0(n), c2.FadeIn0(n), n)
    audio = MixAudio(d, df, factor)
    AudioDub(d, audio)
}
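The trade-off the extra parameter makes can be sketched numerically. Note that MixAudio() with a single factor f uses (1 - f) for the second clip, so the mix is f*d + (1-f)*df; assuming linear ramps as before, a small Python model (not AviSynth):

```python
def gains(t, factor):
    """Per-clip gains of factor*d + (1-factor)*df at overlap position t."""
    d = (1.0 - t, t)               # plain linear Dissolve
    df = ((1.0 - t) ** 2, t ** 2)  # Dissolve of the pre-faded clips
    return tuple(factor * a + (1 - factor) * b for a, b in zip(d, df))

def midpoint_peak(factor):
    """Summed gain at t = 0.5, the worst case whenever factor >= 1."""
    return sum(gains(0.5, factor))

assert midpoint_peak(2.0) == 1.5   # default: full parabolic, may clip
assert midpoint_peak(1.0) == 1.0   # fully linear, never exceeds 1.0
assert midpoint_peak(1.5) == 1.25  # a compromise needing less headroom
```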
To avoid clipping, 'factor' can be reduced to somewhere between 1.0 and 2.0, with 1.0 giving a completely linear crossfade.
-
Anonymous344 (Guest)
Originally Posted by Gavino
However, applying a fixed reduction over the crossfade will lead to a discontinuity at its ends. Better would be to adjust the mixing factors inside the function via an extra parameter.
-
I'm sure you could do this yourself, but here's a callable version of
Code:
Dissolve(FadeOut0(v1,16).FadeOut0(16).FadeOut0(16), FadeIn0(v2,16).FadeIn0(16).FadeIn0(16), 16)

Code:
function DissolveFast(clip c1, clip c2, int "frames") {
    frames = Default(frames, 24)  # default to a 24-frame crossfade if not specified
    Dissolve(FadeOut0(c1,frames).FadeOut0(frames).FadeOut0(frames), FadeIn0(c2,frames).FadeIn0(frames).FadeIn0(frames), frames)
}
-
I'm just trying jagabo's Dissolve function above, and I get a delay in the audio when the second clip starts, and it stays delayed.
Gavino's does the same thing. I'm using VirtualDub to test.
It seems to happen when clip 1 ends and clip 2 starts. Is that correct?
-
OK, too many words.
Who's the best here? Who has enough skill to carry out a practical exercise, i.e. to solve a real problem?
I'll give them a video clip, say about 500 MB, and three songs of different lengths that I want to attach as a background audio track.
Each of those songs needs to be crossfaded and joined to the next so that together they cover the length of the video track.
Well?
Or a simpler problem:
To a 500 MB video clip I need to add an audio track that is slightly longer, by about 20 seconds. It needs to be crossfaded. How can I do that so that it matches the length of the video track?