VideoHelp Forum




  1. I was wrong, you can use recursive functions in AviSynth. Here's my latest using Dissolve():

    Code:
    function NonlinearDissolveRecursive(clip v1, clip v2, int count, int width)
    {
       Dissolve(v1, v2, width)   # crossfade the two clips at the current width
       count = count - 2
       width = width + 2
       # split the result back into two overlapping pieces and recurse with a wider dissolve
       count <= 0 ? last : NonlinearDissolveRecursive(Trim(last, 0, v1.FrameCount-1), Trim(last, v1.FrameCount-width+2, 0), count, width)
    }
    
    
    ColorBars().Info()
    v1=Trim(0, 29)
    l_ch = GetChannel(v1, 1) 
    r_ch = GetChannel(v1, 2) 
    stereo = MergeChannels(r_ch, l_ch)
    v2=AudioDub(v1,stereo) # i.e. swap audio channels in v2
    v2=FlipVertical(v2)
    
    NonlinearDissolveRecursive(v1, v2, 16, 2) # equivalent to Dissolve(v1, v2, 16) in terms of frame count
    It needs the fourth argument as a working variable holding the current dissolve width; when calling the function it should always be 2 (the third argument, count, is what actually limits the recursion depth). To get a normal video dissolve combined with the steeper audio dissolve you can use:

    Code:
    AudioDub(Dissolve(v1, v2, 16), NonlinearDissolveRecursive(v1, v2, 16, 2))
  2. Anonymous344 (Guest)
    Originally Posted by jagabo View Post
    What I don't like is the first clip's audio fades to nearly inaudible before the second clip's audio starts to be heard. So the audio is closer to a fade out of the first clip followed by a fade in of the second clip rather than a concurrent fade out and fade in.
    Yeah, I see what you mean. That would be a problem. I'm not sure that anything can be done about it with Avisynth as it currently is.

    The effect of either of your two alternatives on the video is irrelevant to me though, as I customarily have the video in one "track" and the same video dubbed to the audio stacked above it with StackVertical(). This way I can use the dissolve function to work on the audio and see where the dissolve is by its effect on the video with which it is dubbed. Then I perform straight cuts on the video-only track below, at suitable points, and check that it's in sync at the next shot change. Ultimately, I render out the audio in VirtualDub, return the video-only stream to the encoder, and then mux the output.
  3. Anonymous344 (Guest)
    In response to my questions on the subject, IanB sent me this information, which I'll post here in case it is useful.

    Originally Posted by IanB
    Dissolve() is the only two-clip crossfade function in the Avisynth core. As you have noticed, the transition is simply linear. Because the human ear's response to sound level is logarithmic, this linear transition is not ideal: e.g. at the crossfade mid point each clip is at 0.5 of its input value, and a level of 0.5 is -6 dB (20*log10 0.5 ≈ -6), which does not sound like a half.

    MixAudio() simply adds the audio of two clips in a fixed ratio; the default multiplier is 50% (again -6 dB) for both clips, but the multiplier can be any value for each clip.

    Functions like FrameCount() are properties of the clip, in this case the number of video frames in the clip. Divide this by the FrameRate() to calculate the duration in seconds.
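
    For example (a quick sketch; clip1 is just a placeholder name):
    Code:
    dur = FrameCount(clip1) / FrameRate(clip1) # duration of clip1 in seconds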

    Third party plugins like Sox and Bass offer many additional audio processing features. I am not current on what features are available but if you open a new thread posing this question I am sure someone will have the required information.

    Assuming you can find something that processes a suitable leading and trailing envelope over a clip you could use MixAudio(clip1, clip2, 1.0, 1.0) to merge the envelope shaped clips.
    Code:
    lap = 25 # number of frames overlap
    Clip1=...Source(...)
    Clip2=...Source(...)
    Fc1=FrameCount(Clip1)
    P1=Clip1.Trim(0, Fc1-lap-1) # Start piece of clip1
    P2=Clip1.Trim(Fc1-lap, 0) # End piece of clip 1 to overlap with clip2
    P3=Clip2.Trim(0, lap-1) # Start piece of clip 2 to overlap with clip 1
    P4=Clip2.Trim(lap, 0) # End piece of clip 2
    V=P1 + Dissolve(P2, P3, lap) + P4
    P2e=P2.DoTrailingEnvelope(lap) # Fade out processing
    P3e=P3.DoLeadingEnvelope(lap) # Fade in processing
    A=P1 + MixAudio(P2e, P3e, 1.0, 1.0) + P4
    AudioDub(V, A)
    Sox was mentioned earlier in the thread, but I can't see how to make it work with Avisynth.
  4. Member (Gavino)
    Unfortunately, I no longer have time to visit this forum regularly, but I came across this thread following a reference on Doom9 by the OP (Jeff B), and have a few comments.

    Originally Posted by jagabo View Post
    Second, audio appears to be processed by MixAudio() in discrete chunks separate from the associated frames. So the steps don't line up with frames, a step can last for several frames, and you get random behavior with random seeks.
    The real problem isn't MixAudio() - it's that Animate() does not support audio processing filters at all.
    The audio you actually get at each point is (for practical purposes) unpredictable, hence the strange behaviour on seeking.

    Originally Posted by jagabo View Post
    I was wrong, you can use recursive functions in AviSynth.
    Yes, Avisynth functions have always supported recursion. The most common use (as here) is to perform some kind of iteration.
    A user-friendly alternative (for those who find recursion difficult to grasp) is to use the 'for' or 'while' loop extensions provided by the GScript plugin.
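
    For illustration, here is a rough sketch of the iterative form (it assumes the GScript plugin is loaded; the clip names, loop bounds and widths are placeholders borrowed from the example above, not a tested drop-in replacement):
    Code:
    # Sketch only: GScript 'for' loop instead of recursion, assuming GScript is loaded.
    # v1, v2 and the loop bounds are placeholders.
    GScript("""
        mix = Dissolve(v1, v2, 2)
        w = 4
        for (i = 1, 7) {
            mix = Dissolve(mix.Trim(0, v1.FrameCount-1), mix.Trim(v1.FrameCount-w+2, 0), w)
            w = w + 2
        }
    """)
    return mix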

    Originally Posted by jagabo View Post
    Another approach would be to use Trim to cut the crossfade section into single frames and adjust the mix frame by frame using the same POW() function. That would still leave you with stepwise adjustments. You might be able to work around that by temporarily increasing the frame rate with ChangeFPS(), then reducing it back.
    In Avisynth 2.60, you can use AudioTrim() to trim by time rather than frames.
    In principle, this would allow you to cut the track into small enough sections such that the steps between them after processing would not be noticeable. The cutting and re-splicing could be done iteratively using a recursive function or a GScript loop.
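
    As a rough sketch of the idea (assuming AudioTrim() follows the same start/end conventions as Trim() but in seconds; the clip name, times and gains below are placeholders), a loop would generate many small slices like these and splice them back together:
    Code:
    # two adjacent 10 ms slices with slightly different gains - one 'step' each
    s1 = AudioTrim(clip1, 1.00, 1.01).Amplify(0.90)
    s2 = AudioTrim(clip1, 1.01, 1.02).Amplify(0.88)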

    However, in practice, you might need a very large number of sections to get decent results, possibly hitting memory limits (since each instance of MixAudio has its own audio buffer).

    Originally Posted by jagabo View Post
    I came up with this:
    Code:
    function NonlinearDissolve16f(clip v1, clip v2) # 16 frames
      ...
    Not great.
    I can't see what you were trying to do here. Perhaps your idea was sound, but the execution of it is definitely flawed as each output frame is a blend (in both video and audio) of several frames from each clip, rather than a blend of the single corresponding frames from each. You can see this if you add ShowFrameNumber(scroll=true) after the ColorBars() source filter (a useful tip for testing this sort of thing).

    Originally Posted by jagabo View Post
    What I don't like is the first clip's audio fades to nearly inaudible before the second clip's audio starts to be heard. So the audio is closer to a fade out of the first clip followed by a fade in of the second clip rather than a concurrent fade out and fade in.
    ...
    You can come pretty close to the same audio output with a simple:
    Dissolve(FadeOut(v1,16).FadeOut(16).FadeOut(16), FadeIn(v2,16).FadeIn(16).FadeIn(16), 22)
    Possibly more attractive is a parabolic fade (the same as provided by Sox, see below), which can be done by
    Code:
    MixAudio(FadeIn0(n), FadeIn0(n).FadeIn0(n), 2, -1) # parabolic fade-in over n frames
    MixAudio(FadeOut0(n), FadeOut0(n).FadeOut0(n), 2, -1) # parabolic fade-out over n frames
    Similarly, a parabolic cross-fade can be done by
    Code:
    MixAudio(Dissolve(a,b,n), Dissolve(a.FadeOut0(n), b.FadeIn0(n), n), 2, -1) # parabolic cross-fade from a to b over n frames
    Note the use of FadeIn/Out0 to avoid adding extra frames.
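
    A quick way to see why this gives a parabola (assuming the FadeIn0/FadeOut0 audio ramps are linear in amplitude): MixAudio(a, b, 2, -1) outputs 2*a - b. If the linear fade-in gain at a given point is t (running from 0 to 1), applying the fade twice gives a gain of t*t, so the mix has gain 2t - t*t = 1 - (1-t)^2, an inverted parabola that rises steeply at first and flattens out as it reaches full level. The fade-out case is the mirror image.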

    Originally Posted by Jeff B View Post
    Sox was mentioned earlier in the thread, but I can't see how to make it work with Avisynth.
    There is a SoxFilter plugin which allows you to run Sox effects within AviSynth. Among the effects supported are various types of fades (linear, logarithmic, quarter of a sinewave, half a sinewave, and inverted parabola). For example, this will produce a parabolic fade-in over 1 second:
    Code:
    SoxFilter("fade p 1.0")
  5. Anonymous344 (Guest)
    Thanks for your interest and comments, Gavino.

    Originally Posted by Gavino View Post
    In Avisynth 2.60, you can use AudioTrim() to trim by time rather than frames.
    I noticed this new feature, but would actually prefer to input values by frames rather than by time, as I find frames easier to envisage. Moreover, as I mentioned above, I have to have the audio dubbed to the video to reassure me that the two are still in sync and so that crossfades manifest themselves visually. This is why I have used Dissolve() until now, despite its limitations.

    Originally Posted by Gavino View Post
    There is a SoxFilter plugin which allows you to run Sox effects within AviSynth. Among the effects supported are various types of fades (linear, logarithmic, quarter of a sinewave, half a sinewave, and inverted parabola). For example, this will produce a parabolic fade-in over 1 second:
    Code:
    SoxFilter("fade p 1.0")
    Thank you! I am not familiar with all of the types that you mentioned, but assume that they correspond to different waveform shapes. Is it possible to script SoxFilter crossfades in frames rather than seconds, like Dissolve(), and to make them have a similar, visible effect on the video?
  6. Member (Gavino)
    Originally Posted by Jeff B View Post
    Is it possible to script SoxFilter crossfades in frames rather than seconds, like Dissolve(), and to make them have a similar, visible effect on the video?
    Not directly. Time can be specified in seconds, including decimal fractions of a second (+ hours and minutes if necessary), or in audio samples. However, it is simple to convert a frame number into seconds by dividing by the frame rate (a clip property). The example I gave could be changed to a fade of n frames by:
    Code:
    secs = n/Framerate()
    SoxFilter("fade p "+string(secs))
    SoxFilter only affects the audio. Given a formula for how it varies over time, you could in principle separately program a similar effect for the video by using Animate() with Merge(), in a similar way to jagabo's use with MixAudio() in post #16.
    Last edited by Gavino; 4th Nov 2013 at 17:23. Reason: I meant MixAudio(), not MixVideo()
  7. Originally Posted by Gavino View Post
    Originally Posted by jagabo View Post
    I came up with this:
    Code:
    function NonlinearDissolve16f(clip v1, clip v2) # 16 frames
      ...
    Not great.
    I can't see what you were trying to do here. Perhaps your idea was sound, but the execution of it is definitely flawed as each output frame is a blend (in both video and audio) of several frames from each clip, rather than a blend of the single corresponding frames from each.
    It's been a while but if I remember correctly the basic idea was to mix several crossfades of different lengths to get a non-linear crossfade. It basically worked:

    [Attachment: nlcf.jpg]
    Last edited by jagabo; 4th Nov 2013 at 17:32.
  8. Member (Gavino)
    Originally Posted by jagabo View Post
    It basically worked
    It only appears to work because you used a static image (ColorBars()).
    If you look closely at the overlaid Info() text, specifically at the frame numbers and time stamps, you will see that each frame in the overlap region is a blend of several frames from both clips. For example, frame 21 is a blend of frames 21, 23, 25, 27 and 29 from v1 and frames 1, 3, 5 and 7 from v2. You can see the effect more clearly if you use ShowFrameNumber(scroll=true) instead of Info().

    Since the audio is processed in exactly the same way, each sample must similarly contain a mixture from several different points in time from each clip, which is clearly not what you want in a cross-fade.

    So there appears to be something wrong in the way you have implemented the function.
    Since I don't yet fully understand what you were trying to do, I can't say where the error is.

    I'm not sure why the audio (unlike the video) actually does give a smooth fade (as shown by the display). Perhaps it is because you have a fixed frequency tone being mixed with different phases of itself and producing some kind of interference pattern(?).
  9. Originally Posted by Gavino View Post
    Originally Posted by jagabo View Post
    It basically worked
    It only appears to work because you used a static image (ColorBars()).
    If you look closely at the overlaid Info() text...
    We were only interested in the audio. Didn't care what happened to the video.
  10. Member (Gavino)
    Originally Posted by jagabo View Post
    We were only interested in the audio. Didn't care what happened to the video.
    Yes, I know that, but I guess you missed this part of my post:
    Originally Posted by Gavino View Post
    Since the audio is processed in exactly the same way [as the video], each sample must similarly contain a mixture from several different points in time from each clip, which is clearly not what you want in a cross-fade.
    Since both video and audio go through similar processes in parallel, the video provides a demonstration of what is being mixed with what at each point in the transition. Since the video is wrong in this respect, the audio must be too.

    To put it another way, when you said "the basic idea was to mix several crossfades of different lengths to get a non-linear crossfade", the mixes need to be in sync with each other, and the video shows they are not.

    So although the audio display looks 'right', the audio at each point of the overlap region contains contributions from several different points of each clip, which is not what is wanted.
  11. Originally Posted by Gavino View Post
    Since both video and audio go through similar processes in parallel, the video provides a demonstration of what is being mixed with what at each point in the transition. Since the video is wrong in this respect, the audio must be too.
    Yes. No doubt there are some cases where it will be noticeable. Note, I didn't mean this as an actual solution, just an attempt to generate some kind of non-linear crossfade with built-in AviSynth filters.

    I came up with a demonstration of the problem. I created clips with short tone bursts, then ran them through the function:

    [Attachment: bad.jpg]

    Instead of the bursts simply decreasing or increasing in amplitude, you can see the overlap. With a proper linear audio crossfade you get the expected:

    [Attachment: good.jpg]
    Last edited by jagabo; 5th Nov 2013 at 10:55.
  12. Member (Gavino)
    Interesting test, and quite revealing.

    By contrast, I would expect your other version from post #30 to work properly:
    Dissolve(FadeOut(v1,16).FadeOut(16).FadeOut(16), FadeIn(v2,16).FadeIn(16).FadeIn(16), 22)

    or perhaps better:
    Dissolve(FadeOut0(v1,16).FadeOut0(16).FadeOut0(16), FadeIn0(v2,16).FadeIn0(16).FadeIn0(16), 16)

    And similarly, my 'parabolic' crossfade from post #34:
    MixAudio(Dissolve(v1,v2,16), Dissolve(v1.FadeOut0(16), v2.FadeIn0(16), 16), 2, -1)

    This has the property, often wanted in an audio cross-fade, of a slower fall-off at the start of the fade-out and a faster rise at the start of the fade-in, keeping the overall volume (output power) approximately constant (unlike a linear crossfade, where the overall volume is 3 dB lower at the midpoint).
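
    As a quick check of the figures (assuming uncorrelated material, so powers add): at the midpoint of a linear crossfade each gain is 0.5, giving a summed power of 0.25 + 0.25 = 0.5, i.e. about 3 dB down; with the parabolic curves each midpoint gain is 0.75, giving 0.5625 + 0.5625 = 1.125, only about 0.5 dB above unity.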
    Last edited by Gavino; 5th Nov 2013 at 13:52.
  13. Anonymous344 (Guest)
    Gavino, would it be possible for someone to write a function for your parabolic crossfade that is called like Dissolve() and has a similar effect on the video?
  14. Member (Gavino)
    Code:
    function ParabolicCrossFade(clip c1, clip c2, int n) {
      d = Dissolve(c1, c2, n)
      df = Dissolve(c1.FadeOut0(n), c2.FadeIn0(n), n)
      video = mt_lutxy(d, df, expr="x 2 * y -", U=3, V=3)
      audio = MixAudio(d, df, 2, -1)
      AudioDub(video, audio)
    }
    It requires MaskTools to do the video combination bit. I previously thought this could be done with Merge(), but I discovered it doesn't allow combination with arbitrary weights.

    EDIT: The above does a parabolic crossfade on both video and audio, which is what I thought you meant by "called like Dissolve() and has a similar effect on the video". Thinking again, you probably in fact wanted the video to behave like Dissolve(). In that case it's even simpler and does not require MaskTools.
    Code:
    function ParabolicCrossFade(clip c1, clip c2, int n) { # alternative version
      d = Dissolve(c1, c2, n)
      df = Dissolve(c1.FadeOut0(n), c2.FadeIn0(n), n)
      audio = MixAudio(d, df, 2, -1)
      AudioDub(d, audio)
    }
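
    Either version is called with the same arguments as Dissolve(); a minimal usage sketch (v1 and v2 are placeholder source clips):
    Code:
    ParabolicCrossFade(v1, v2, 16) # same call shape as Dissolve(v1, v2, 16)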
    Last edited by Gavino; 5th Nov 2013 at 17:10. Reason: version with normal dissolve for video
  15. Gavino's second ParabolicCrossFade():
    [Attachment: parabolic.png]

    Dissolve(FadeOut0(p1,16).FadeOut0(16).FadeOut0(16), FadeIn0(p2,16).FadeIn0(16).FadeIn0(16), 16):
    [Attachment: fast.png]
    Last edited by jagabo; 5th Nov 2013 at 18:14.
  16. Anonymous344 (Guest)
    Originally Posted by Gavino View Post
    The above does a parabolic crossfade on both video and audio, which is what I thought you meant by "called like Dissolve() and has a similar effect on the video". Thinking again, you probably in fact wanted the video to behave like Dissolve()
    Yes. That is what I meant; however, I am not fussy about exactly how the function affects the video, as long as the points at which the audio crossfade is taking place are shown by a visual effect that can be used as a guide. Thank you very much for writing both versions of the function.

    Jagabo, thank you for illustrating the parabolic crossfade.

  17. I would think the parabolic crossfade could lead to clipping if both tracks were very loud.
  18. Member (Gavino)
    Originally Posted by jagabo View Post
    I would think the parabolic crossfade could lead to clipping if both tracks were very loud.
    Yes, that is a danger with this type of crossfade.
    At the mid-point, the gain on each component is 0.75 (compared to 0.5 in a linear crossfade), so the overall amplitude could potentially reach 1.5.
    Whether you actually pass 1.0 (and hence clip) at any point depends on the overall amplitude of the components, and their degree of correlation.
    In cases where this problem occurs, you could reduce the input levels (using Amplify()) before mixing them.
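
    As a rough sketch of that (the attenuation value and clip names are placeholders, and note this reduces the level of the whole of each input clip):
    Code:
    v1 = Amplify(v1, 0.8) # leave some headroom before mixing
    v2 = Amplify(v2, 0.8)
    ParabolicCrossFade(v1, v2, 16)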
  19. Anonymous344 (Guest)
    In that case, for the function that Gavino wrote, a way would have to be found to apply Amplify() to the parts undergoing the crossfade, rather than to the whole track.
  20. Member (Gavino)
    You're right - in general, you wouldn't want to reduce the volume outside the crossfade.
    However, applying a fixed reduction over the crossfade will lead to a discontinuity at its ends.
    Better would be to adjust the mixing factors inside the function via an extra parameter.
    Code:
    function ParabolicCrossFade(clip c1, clip c2, int n, float "factor") { # third version
      factor = Default(factor, 2.0)
      d = Dissolve(c1, c2, n)
      df = Dissolve(c1.FadeOut0(n), c2.FadeIn0(n), n)
      audio = MixAudio(d, df, factor)
      AudioDub(d, audio)
    }
    The default factor (2.0) gives the same effect as before, with a 'flat' parabola at either end.
    To avoid clipping, 'factor' can be reduced to somewhere between 1.0 and 2.0, with 1.0 giving a completely linear crossfade.
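
    For instance (placeholder clips, with a mid-range factor):
    Code:
    ParabolicCrossFade(v1, v2, 16, factor=1.5) # gentler curve, less risk of clipping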
  21. Anonymous344 (Guest)
    Originally Posted by Gavino View Post
    However, applying a fixed reduction over the crossfade will lead to a discontinuity at its ends. Better would be to adjust the mixing factors inside the function via an extra parameter.
    That makes sense. Thank you for adding a parameter to the function. Being able to adjust the mixing will also add some useful flexibility.
  22. I'm sure you could do this yourself, but here's a callable version of

    Code:
     Dissolve(FadeOut0(v1,16).FadeOut0(16).FadeOut0(16), FadeIn0(v2,16).FadeIn0(16).FadeIn0(16), 16)
    with a variable number of frames:

    Code:
    function DissolveFast(clip c1, clip c2, int "frames")
    {
      frames = Default(frames, 24) # default to 24 frame crossfade if not specified
      Dissolve(FadeOut0(c1,frames).FadeOut0(frames).FadeOut0(frames),  FadeIn0(c2,frames).FadeIn0(frames).FadeIn0(frames), frames)
    }
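
    A usage sketch (placeholder clips):
    Code:
    DissolveFast(v1, v2)     # 24-frame crossfade (the default)
    DissolveFast(v1, v2, 16) # 16-frame crossfade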
  23. Anonymous344 (Guest)
    Originally Posted by jagabo View Post
    I'm sure you could do this yourself
    I am not so sure. There is much that I still don't understand about how Avisynth works. Thank you for the function.
  24. I'm just trying jagabo's dissolve function above, and I get a delay in the audio when the second clip starts, and it stays delayed.
    Gavino's does the same thing. I'm using VirtualDub to test.
    It seems the function is applied where clip 1 ends and clip 2 starts - is that correct?
  25. Member (Benjy)
    OK, too many words.

    Who's the best here? Who has enough skill to tackle a practical case, i.e. to solve a real problem?
    I'll give him a video clip, let's say about 500 MB, and 3 songs of different lengths that I want to attach as the background audio track.
    Each of those songs needs to be crossfaded and joined to the next so that together they cover the length of the video track.

    Well?

    Or a simpler problem:

    On a 500 MB video clip I need to add an audio track that is slightly longer, by about 20 seconds. It needs to be crossfaded. How could I do that so that it matches the length of the video track?