  1. Code:
    input_1=AVISource("1.avi")
    input_2=AVISource("2.avi")
    input_3=AVISource("3.avi")
    input_4=AVISource("4.avi")
    input_5=AVISource("5.avi")
    input_6=AVISource("6.avi")
    input_7=AVISource("7.avi")
    input_8=AVISource("8.avi")
    input_9=AVISource("9.avi")
    interleave(input_1,input_2,input_3,input_4,input_5,input_6,input_7,input_8,input_9)
    pointresize(width,height*2)
    medianblurt(0,0,0,4,false,false)
    selectevery(9,4)
    u = UToY().ConvertToYUY2()
    v = VToY().ConvertToYUY2()
    YToUV(u, v, ConvertToYUY2().PointResize(width,height/2))
    http://avisynth.org/mediawiki/Sampling

    AviSynth uses MPEG-2 chroma placement for YV12: chroma samples sit vertically halfway between luma lines.
    You can PointResize losslessly, but converting back to YUY2 causes a loss.

    I did it right the first time. Are you impressed, Gavino?
  2. Member (Russia)
    Thanks, I'll give it a try
  3. Member (Russia)
    Unfortunately, there is some bug as VirtualDub returned:
    "An out-of-bounds memory access (access violation) occurred in module 'medianblur'...
    ...reading address 3D746F6B."
  4. Member (Spain)
    Originally Posted by jmac698
    Code:
    ...
    interleave(input_1,input_2,input_3,input_4,input_5,input_6,input_7,input_8,input_9)
    pointresize(width,height*2)
    medianblurt(0,0,0,4,false,false)
    selectevery(9,4)
    u = UToY().ConvertToYUY2()
    v = VToY().ConvertToYUY2()
    YToUV(u, v, ConvertToYUY2().PointResize(width,height/2))
    http://avisynth.org/mediawiki/Sampling

    AviSynth uses MPEG-2 chroma placement for YV12: chroma samples sit vertically halfway between luma lines.
    You can PointResize losslessly, but converting back to YUY2 causes a loss.
    Yes, that's right - that's why jagabo found differences in chroma with his script (post #119).

    I did it right the first time. Are you impressed, Gavino?
    Yes, well done.
    But haven't you forgotten the ConvertToYV12() after the first pointresize in that script?
    I wonder if that explains Zabar12's problem with VirtualDub?
  5. Ya, I'm bad with the little details. One more time:
    Code:
    input_1=AVISource("1.avi")
    input_2=AVISource("2.avi")
    input_3=AVISource("3.avi")
    input_4=AVISource("4.avi")
    input_5=AVISource("5.avi")
    input_6=AVISource("6.avi")
    input_7=AVISource("7.avi")
    input_8=AVISource("8.avi")
    input_9=AVISource("9.avi")
    interleave(input_1,input_2,input_3,input_4,input_5,input_6,input_7,input_8,input_9)
    pointresize(width,height*2)
    converttoyv12
    medianblurt(0,0,0,4,false,false)
    selectevery(9,4)
    u = UToY().ConvertToYUY2()
    v = VToY().ConvertToYUY2()
    YToUV(u, v, ConvertToYUY2().PointResize(width,height/2))
    Sorry about that, but as you learn more about scripting, you'll know what you're doing and be able to correct these little errors more easily, without waiting for someone to do it for you.

    So let's be clear about what we're doing here.
    Obviously we're opening the 9 videos and giving each a name. Also make sure that they all start at exactly the same point. You could edit them with an external program, or use Trim commands like this:
    input_1=AVISource("1.avi").trim(9,0)
    which would cut off the first 10 frames (I believe).
    If you had to edit manually in AviSynth, you would make a temporary script like:
    input_1=AVISource("1.avi").trim(9,0)
    input_2=AVISource("2.avi").trim(7,0)
    stackhorizontal(input_1,input_2)
    This would show the first two side by side, and you would adjust the first number in each Trim until they appear to show the same frame. In practice, it can be a lot of work to find the first spot that all the videos have in common. I usually go by a scene change, or the point at the very beginning of the tape where it starts from black.

    Next, the resizing. As you know, this is only there to work around a limitation of the plugin: if you double the height of something, you have to put it back once you're done with that plugin. The part that puts it back is complicated, but you can just reuse our little snippets. You should also know that if a plugin says it only works with YV12 and you are starting with YUY2, it can crash if you don't convert first, which is where our trick comes in.

    As for the MedianBlurT command, I just went by the manual and don't know if it works. I am simply telling it to take the median of the 4 frames before and the 4 after, including the current one, which is 9 frames. To make use of this, I had to interleave the 9 videos, which makes the same frame from each video appear in a row: frame 0 from input_1, frame 0 from input_2, ... frame 0 from input_9, frame 1 from input_1, and so on.

    Now the first output frame from MedianBlurT will be median(0-1, 0-2, 0-3, 0-4, 0-5) (I'm using shorthand here to mean frame 0 from input 1, frame 0 from input 2, etc.), because there are no 4 previous frames at the start. You have to wait until the 5th frame before you have a valid answer (so that there are 4 frames before it).
    Also, I want only this answer and not all the other combinations, so I use SelectEvery to give me the 5th frame of every group of 9 (counting from 0, that's frame #4).
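    To make the frame ordering concrete, here is that selection step again with comments (just a sketch restating the script above; the frame labels are made up):
    Code:
    # After Interleave(), the clip runs 0a,0b,...,0i, 1a,1b,...,1i, 2a,... (letters = captures)
    # With spatial radii of 0 and a temporal radius of 4, MedianBlurT replaces each frame with
    # the temporal median of itself plus the 4 frames before and after, so only position 4 of
    # each group of 9 sees all nine copies of the same source frame.
    medianblurt(0,0,0,4,false,false)
    selectevery(9,4)   # output frame k = filtered frame 9*k+4, i.e. the fully valid median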

    I dunno if that makes any sense, but it could help you learn scripting in the future.
    Last edited by jmac698; 26th Jan 2012 at 14:22.
  6. What about this:

    Code:
    input_1=AVISource("1.avi")
    input_2=AVISource("2.avi")
    input_3=AVISource("3.avi")
    input_4=AVISource("4.avi")
    input_5=AVISource("5.avi")
    input_6=AVISource("6.avi")
    input_7=AVISource("7.avi")
    input_8=AVISource("8.avi")
    input_9=AVISource("9.avi")
    interleave(input_1,input_2,input_3,input_4,input_5,input_6,input_7,input_8,input_9)
    StackHorizontal(last,UtoY(),VtoY())
    ConvertToYV12()
    medianblurt(0,0,0,4,false,false)
    selectevery(9,4)
    ConvertToYUY2()
    YtoUV(Crop(width/2,0,width/4,height), Crop(width*3/4,0,width/4,height), Crop(0,0,width/2,height))
    I.e., move the chroma channels into the luma channel for MedianBlurT(), then put them back. I don't know the arguments to MedianBlurT() -- if one of them skips the chroma channels, you can enable that instead.
  7. That's a great idea. In fact, I've used the same trick to record component video onto VHS, so I should have thought of it ... eventually.
  8. Member (Spain)
    Originally Posted by jmac698
    use trim commands like this:
    input_1=AVISource("1.avi").trim(9,0)
    which would cut off the first 10 frames (I believe).
    The first 9 frames (frames 0 to 8) are cut.
    Those pesky details again...

    Originally Posted by jagabo View Post
    What about this:
    ...
    I.e., move the chroma channels into the luma channel for MedianBlurT(), then put them back.
    Yes, excellent idea, jagabo.
  9. Member (Russia)
    jmac698
    Last night I finally took heart and ran Median(9). The script seems to work fine, but the speed is only 2.5 fps vs. 16 fps for Median2. The good news is that I couldn't see any change in chroma when comparing Median2 to Median(9), though I'm not sharp-eyed.
  10. What result did you get with median 9, zabar? I was not too impressed compared to median 2.
  11. Member (Russia)
    To be honest, I didn't find any improvement over median2 either. However, I ran median(9) on a recording of reasonably good quality. Possibly, median9 can demonstrate its real power on bad tapes.

    Though when I see strong distortions, the first thing I try is running the tape on another VCR. Sometimes that helps a lot.
  12. I'm trying the Median1 script (see here) for the first time today and I get an error: unable to load RemoveGrainTSSE3.dll, error=0x7e

    Any tips?

    I have loaded the needed DLL, I believe: RemoveGrainTSSE3.dll (for Clense). What could go wrong?

    Edit:
    Well, it seems to work without RemoveGrainTSSE3.dll, so never mind.
    Last edited by themaster1; 8th Mar 2012 at 06:25.
  13. Try RemoveGrainSSE2; I'm sure I read there was an issue with SSE3 and the Clense function.
  14. Try a different version of RemoveGrainT (all the versions and plugins). Here is my AviSynth plugins folder: http://www.mediafire.com/?6qizxuuqcm7b6dx
  15. I thought I'd contribute a little something here:
    Like jmac, I heavily rely on performing medians of multiple captures in order to get "exactly what's on the tape, no more, no less...assuming the VCR has enough bandwidth and doesn't use DNR or sharpening, etc." However, I've recently discovered that there's a way to "stack the deck" against noise and cheat a little bit, and I've gotten excellent results from it.

    Code:
    # Load captures
    try1 = Avisource("1.avi")
    try2 = Avisource("2.avi")
    try3 = Avisource("3.avi")
    
    # Align captures temporally
    try1 = try1#.Trim2(whatever)  # See http://avisynth.org/stickboy/jdl-util.avsi
    try2 = try2#.Trim2(whatever)
    try3 = try3#.Trim2(whatever)
    
    # Test alignment:  Scan through interleaved or stacked output to see if captures stay aligned, and fix
    # them here if they get out of sync.  No matter what TBC or capture settings you use, this is likely to
    # happen if you have long stretches of the tape between scenes with unrecorded/garbage material.
    # return Interleave(try1, try2, try3)
    
    # Create spatial medians, but beware artifacts from interlacing (or from coarsely spaced lines with separated fields):
    try1spatialmedian = try1.QTGMC(preset = "super fast", lossless = 1).RemoveGrain(mode = 4).Reinterlace()
    try2spatialmedian = try2.QTGMC(preset = "super fast", lossless = 1).RemoveGrain(mode = 4).Reinterlace()
    try3spatialmedian = try3.QTGMC(preset = "super fast", lossless = 1).RemoveGrain(mode = 4).Reinterlace()
    
    # Create three medians, each using two real captures and one spatial median.
    # Personally I renamed Median1 to MedianOf3 for clarity, but I'll use Median1 here
    # to emphasize that it's the same function.
    median12 = Median1(try1, try2, try3spatialmedian)
    median13 = Median1(try1, try2spatialmedian, try3)
    median23 = Median1(try1spatialmedian, try2, try3)
    
    # Create a final median.
    median = Median1(median12, median13, median23)
    
    # Return!
    return median
    
    # Helper function:
    function Reinterlace(clip c)
    {
        # If you have a BFF source, you will have to replace some or all
        # of the AssumeTFF calls with AssumeBFF.  I haven't really thought
        # the differences through though.  A well-formatted function
        # would take the field parity from the original interlaced clip,
        # but that may make usage more complicated.
        evens = c.SelectEven.AssumeTFF.SeparateFields.SelectEven()
        odds = c.SelectOdd.AssumeTFF.SeparateFields.SelectOdd()
        return Interleave(evens, odds).AssumeTFF.Weave()
    }
    The idea is as follows:
    Each of the initial medians uses two real captures and a spatial median of another capture. The spatial median serves as a baseline for what the underlying image would look like without detail or noise. It's a quick and dirty way of saying, "Look, pretend for a moment that all high-frequency content is just noise. If it is, what should the underlying image probably look like?" If the other two captures are noisy enough to have pixels on opposite sides of the spatial median, the spatial median is used. (Basically, the initial medians use the spatial median of one capture, clamped within the actual values of the other two captures.) This greatly clamps down on high-frequency noise in flat areas. If the real captures "agree" on the detail or noise, then the softer version of each particular detail pixel (the one closer to the median) will be used.
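    In case it helps to see that clamping written out explicitly, here is a hypothetical equivalent of one intermediate median using MaskTools2 (assuming that plugin is loaded; the script above gets the same effect from Median1 itself, so this is only an illustration):
    Code:
    # Per-pixel envelope of the two real captures, then clamp the spatial median of the third to it.
    upper = mt_logic(try1, try2, "max", chroma="process")
    lower = mt_logic(try1, try2, "min", chroma="process")
    median12_alt = try3spatialmedian.mt_clamp(upper, lower, 0, 0, chroma="process")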

    You generally do NOT want to use the initial medians directly: Not only does the process create a slight bias toward softening of detail, but a median of try1, try2, and try3's spatial median will completely throw out any fine detail shared by try1 and try3 but not try2 (for example). It will also give strange results in cases where one or more captures "glitch" on a particular frame with tons of dropouts or a huge vertical shift or something similar, and it can overuse the spatial median when the other two captures have different levels.

    Instead, the trick here is that doing this process three times helps eliminate the corner cases, and it helps recover the detail that you would get with a naive median of three captures. The detail will be slightly softer, but generally speaking it should all be there. At the same time, this reduces more noise and converges more quickly toward what's "actually on the tape" than naive medians of multiple captures. The basic algorithm works something like this: "Use the spatial median closest to the actual median of three captures, but clamp it to the pixel range defined by the three captures themselves." I kind of expected this to soften detail considerably, but in practice, it looks great...and this isn't from a casual viewing perspective but from the standpoint of interleaving the final frames with the original captures, zooming in, and scrutinizing them closely.

    This method is very slow, but it should often be faster and more manageable than capturing a tape five, seven or nine times and taking a median of all of them. I can also sometimes get better results using three captures with this method than I can get using nine captures with a naive median! I should note that I'm a detail fanatic, by the way: I'm creating "digital intermediates" and archiving them, and I wouldn't dare DREAM of directly using a "real denoiser" like Neatvideo on a single clip until I'm creating distribution copies from those.

    I used a spatial median above, but you can apply the above principle to any form of denoising, however harsh or sophisticated. However, using Neatvideo or something similar three times is going to get extremely slow...and simply denoising a single capture and limiting the result by the other two is risky and can produce artifacts if your captures have vertical jitter, different luma levels, etc.
    Last edited by Mini-Me; 9th Mar 2012 at 15:34.
  16. Nice idea, Mini-Me. I will try it this weekend when I have the time.

    I can also sometimes get better results using three captures with this method than I can get using nine captures with a naive median!
    I sometimes can't tell naive 3 and 9 apart.
  17. Member (Spain)
    Originally Posted by Mini-Me
    Code:
    function Reinterlace(clip c)
    {
        # If you have a BFF source, you will have to replace some or all
        # of the AssumeTFF calls with AssumeBFF.  I haven't really thought
        # the differences through though.  A well-formatted function
        # would take the field parity from the original interlaced clip,
        # but that may make usage more complicated.
        evens = c.SelectEven.AssumeTFF.SeparateFields.SelectEven()
        odds = c.SelectOdd.AssumeTFF.SeparateFields.SelectOdd()
        return Interleave(evens, odds).AssumeTFF.Weave()
    }
    A much simpler way is:
    Code:
    function Reinterlace(clip c) {
      c.SeparateFields().SelectEvery(4,0,3).Weave()
    }
    (And this takes the field parity from the input clip 'c'.)
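    Roughly why this recovers the original fields after a bob like QTGMC, written out with comments (my reading; it relies on each bobbed frame keeping one original field on the correct lines, which lossless=1 ensures):
    Code:
    SeparateFields()     # fields 0,1 come from bobbed frame 0, fields 2,3 from bobbed frame 1, ...
    SelectEvery(4,0,3)   # positions 0 and 3 are the two original fields; 1 and 2 are the interpolated ones
    Weave()              # recombine them into the original interlaced frame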
  18. Originally Posted by Gavino
    Originally Posted by Mini-Me
    Code:
    function Reinterlace(clip c)
    {
        # If you have a BFF source, you will have to replace some or all
        # of the AssumeTFF calls with AssumeBFF.  I haven't really thought
        # the differences through though.  A well-formatted function
        # would take the field parity from the original interlaced clip,
        # but that may make usage more complicated.
        evens = c.SelectEven.AssumeTFF.SeparateFields.SelectEven()
        odds = c.SelectOdd.AssumeTFF.SeparateFields.SelectOdd()
        return Interleave(evens, odds).AssumeTFF.Weave()
    }
    A much simpler way is:
    Code:
    function Reinterlace(clip c) {
      c.SeparateFields().SelectEvery(4,0,3).Weave()
    }
    (And this takes the field parity from the input clip 'c'.)
    That's much cleaner, and it looks like it should work to retrieve the original (non-generated) fields after QTGMC, no matter what the original parity was. Thank you.
  19. Mini-Me, today I tried your variant median script:

    # Load captures
    try1=AVISource("D:\test11a.avi").ConverttoYV12(int erlaced=true).AssumeTFF
    try2=AVISource("D:\test22a.avi").ConverttoYV12(int erlaced=true).AssumeTFF
    try3=AVISource("D:\test33a.avi").ConverttoYV12(int erlaced=true).AssumeTFF

    # median of 3 clips from Helpers.avs by G-force
    Function Median1(clip input_1, clip input_2, clip input_3, string "chroma")
    {
    chroma = Default(chroma,"process") #default is "process". Alternates: "copy first" or "copy second"

    Interleave(input_1,input_2,input_3)
    chroma == "process" ? Clense(reduceflicker=false) : Clense(reduceflicker=false,grey=true)
    SelectEvery(3,1)

    chroma == "copy first" ? last.MergeChroma(input_1) : chroma == "copy second" ? last.MergeChroma(input_2) : last

    Return(last)
    }
    # Align captures temporally
    #try1 = try1#.Trim2(whatever) # See http://avisynth.org/stickboy/jdl-util.avsi
    #try2 = try2#.Trim2(whatever)
    #try3 = try3#.Trim2(whatever)

    # Test alignment: Scan through interleaved or stacked output to see if captures stay aligned, and fix
    # them here if they get out of sync. No matter what TBC or capture settings you use, this is likely to
    # happen if you have long stretches of the tape between scenes with unrecorded/garbage material.
    # return Interleave(try1, try2, try3)

    # Create spatial medians, but beware artifacts from interlacing (or from coarsely spaced lines with separated fields):
    try1spatialmedian = try1.QTGMC(preset = "super fast", lossless = 1).RemoveGrain(mode = 4).Reinterlace()
    try2spatialmedian = try2.QTGMC(preset = "super fast", lossless = 1).RemoveGrain(mode = 4).Reinterlace()
    try3spatialmedian = try3.QTGMC(preset = "super fast", lossless = 1).RemoveGrain(mode = 4).Reinterlace()

    # Create three medians, each using two real captures and one spatial median.
    # Personally I renamed Median1 to MedianOf3 for clarity, but I'll use Median1 here
    # to emphasize that it's the same function.
    median12 = Median1(try1, try2, try3spatialmedian)
    median13 = Median1(try1, try2spatialmedian, try3)
    median23 = Median1(try1spatialmedian, try2, try3)

    # Create a final median.
    median = Median1(median12, median13, median23)

    # Return!
    return median

    # Helper function:
    function Reinterlace(clip c)
    {
    # If you have a BFF source, you will have to replace some or all
    # of the AssumeTFF calls with AssumeBFF. I haven't really thought
    # the differences through though. A well-formatted function
    # would take the field parity from the original interlaced clip,
    # but that may make usage more complicated.
    evens = c.SelectEven.AssumeTFF.SeparateFields.SelectEven()
    odds = c.SelectOdd.AssumeTFF.SeparateFields.SelectOdd()
    return Interleave(evens, odds).AssumeTFF.Weave()
    }

    I also used Gavino's suggestion:
    function Reinterlace(clip c) {
    c.SeparateFields().SelectEvery(4,0,3).Weave()
    }

    I didn't find any difference between the naive median and this. The reason I tried it is that I had a tape with a ton of dropouts. So is this comment about this method, or about the naive median method? I am confused:
    You generally do NOT want to use the initial medians directly: Not only does the process create a slight bias toward softening of detail, but a median of try1, try2, and try3's spatial median will completely throw out any fine detail shared by try1 and try3 but not try2 (for example). It will also give strange results in cases where one or more captures "glitch" on a particular frame with tons of dropouts or a huge vertical shift or something similar, and it can overuse the spatial median when the other two captures have different levels.
    It should not produce any weird results, but in my case it is the same as the naive option, and as you said it is 5-6 times slower than the naive median.

    I have a couple of questions regarding this method.

    1. Is this process "lossless"? I ask because I had to use YV12 (the naive version can accept YUY2 files). I would like an example with PointResize for this case, meaning multiple captures in one script (I often use that for a single video).
    2. Why did you use RemoveGrain (and can any other filter, like MCTemporalDenoise for instance, be used instead)?
    3. You say that it softens the details, and at the same time you say that it might throw out fine details (yet it is also sharper than the naive median). So which one softens more?
    4. Is using

    # median of 3 clips from Helpers.avs by G-force
    Function Median1(clip input_1, clip input_2, clip input_3, string "chroma")
    {
    chroma = Default(chroma,"process") #default is "process". Alternates: "copy first" or "copy second"

    Interleave(input_1,input_2,input_3)
    chroma == "process" ? Clense(reduceflicker=false) : Clense(reduceflicker=false,grey=true)
    SelectEvery(3,1)

    chroma == "copy first" ? last.MergeChroma(input_1) : chroma == "copy second" ? last.MergeChroma(input_2) : last

    Return(last)
    }


    # replace these with three separate caps
    # use Trim() to sync them.
    v1=AVISource("d:\kosarka1.avi")
    v2=AVISource("d:\kosarka2.avi")
    v3=AVISource("d:\kosarka3.avi")

    Median1(v1,v2,v3)

    OK for interlaced material, and do I have to put interlaced=true somewhere, or is it OK like this?
    5. All 3 videos have to have the exact same number of frames and size (in MB), right?
    Thanks
    Last edited by mammo1789; 25th Oct 2012 at 02:00.
  20. Originally Posted by mammo1789
    I didn't find any difference between the naive median and this. The reason I tried it is that I had a tape with a ton of dropouts. So is this comment about this method, or about the naive median method? I am confused:
    You generally do NOT want to use the initial medians directly: Not only does the process create a slight bias toward softening of detail, but a median of try1, try2, and try3's spatial median will completely throw out any fine detail shared by try1 and try3 but not try2 (for example). It will also give strange results in cases where one or more captures "glitch" on a particular frame with tons of dropouts or a huge vertical shift or something similar, and it can overuse the spatial median when the other two captures have different levels.

    By that particular comment, I meant that you do not want to use the intermediate median12, median23, and median13 clips directly for your final output, because doing so will emphasize the spatial medians too greatly.

    However, using the actual output of the script is perfectly fine, and so is using a simple median of your original captures, like Median1(try1, try2, try3).

    Originally Posted by mammo1789
    It should not produce any weird results, but in my case it is the same as the naive option, and as you said it is 5-6 times slower than the naive median.
    That doesn't surprise me: Depending on the thickness of your dropouts, the difference between the above script and a naive median can often be too subtle to notice. For me, it usually worked noticeably better when I was dealing with a lot of thin (1 line high) dropouts spaced throughout a frame, but it won't improve upon a naive median all the time, especially for thicker dropouts. If it's not helping you enough to notice, it may be better just to take more captures and take a median of 5, 7, or 9. Keep in mind though, if the dropouts are in the same spot of the same frames for every capture, you're not going to see much improvement from that either.


    Originally Posted by mammo1789
    I have a couple of questions regarding this method.

    1. Is this process "lossless"? I ask because I had to use YV12 (the naive version can accept YUY2 files). I would like an example with PointResize for this case, meaning multiple captures in one script (I often use that for a single video).
    By definition, no filter that changes the image is lossless.

    However, if you're talking about colorspace conversions, converting to YV12 from YUY2 in the first few lines above does lose chroma definition. You can work around this though and merge the chroma from a naive median of three YUY2 clips if you want:
    Code:
    # Load captures
    try1yuy2=AVISource("D:\test11a.avi", pixel_type = "YUY2").AssumeTFF() # Side note:  Don't AssumeTFF if you have BFF files
    try2yuy2=AVISource("D:\test22a.avi", pixel_type = "YUY2")
    try3yuy2=AVISource("D:\test33a.avi", pixel_type = "YUY2")
    
    try1 = try1yuy2.ConvertToYV12(interlaced = True)
    try2 = try2yuy2.ConvertToYV12(interlaced = True)
    try3 = try3yuy2.ConvertToYV12(interlaced = True)
    
    
    # Align captures temporally
    #try1 = try1#.Trim2(whatever) # See http://avisynth.org/stickboy/jdl-util.avsi
    #try2 = try2#.Trim2(whatever)
    #try3 = try3#.Trim2(whatever)
    
    # Test alignment: Scan through interleaved or stacked output to see if captures stay aligned, and fix
    # them here if they get out of sync. No matter what TBC or capture settings you use, this is likely to
    # happen if you have long stretches of the tape between scenes with unrecorded/garbage material.
    # return Interleave(try1, try2, try3)
    
    # Create spatial medians, but beware artifacts from interlacing (or from coarsely spaced lines with separated fields):
    try1spatialmedian = try1.QTGMC(preset = "super fast", lossless = 1).RemoveGrain(mode = 4).Reinterlace()
    try2spatialmedian = try2.QTGMC(preset = "super fast", lossless = 1).RemoveGrain(mode = 4).Reinterlace()
    try3spatialmedian = try3.QTGMC(preset = "super fast", lossless = 1).RemoveGrain(mode = 4).Reinterlace()
    
    # Create three medians, each using two real captures and one spatial median.
    # Personally I renamed Median1 to MedianOf3 for clarity, but I'll use Median1 here
    # to emphasize that it's the same function.
    median12 = Median1(try1, try2, try3spatialmedian)
    median13 = Median1(try1, try2spatialmedian, try3)
    median23 = Median1(try1spatialmedian, try2, try3)
    
    # Create a semi-final median and convert back to YUY2
    median = Median1(median12, median13, median23).ConvertToYUY2(interlaced = True)
    
    # Also create a naive median with YUY2 chroma...to make this script even slowwwwwwwwwwwweerrrrrrrrrr. ;)
    naivemedian = Median1(try1yuy2, try2yuy2, try3yuy2)
    
    # Finalize!
    final = median.MergeChroma(naivemedian)
    
    # Return!
    return final
    
    
    
    # Helper functions:
    
    function Reinterlace(clip c)
    {
    c.SeparateFields().SelectEvery(4,0,3).Weave()
    } 
    
    # median of 3 clips from Helpers.avs by G-force
    Function Median1(clip input_1, clip input_2, clip input_3, string "chroma")
    {
    chroma = Default(chroma,"process") #default is "process". Alternates: "copy first" or "copy second"
    
    Interleave(input_1,input_2,input_3)
    chroma == "process" ? Clense(reduceflicker=false) : Clense(reduceflicker=false,grey=true)
    SelectEvery(3,1)
    
    chroma == "copy first" ? last.MergeChroma(input_1) : chroma == "copy second" ? last.MergeChroma(input_2) : last
    
    Return(last)
    }
    As a side note, I used the built-in ConvertToYV12 and ConvertToYUY2 functions for their recognizability. However, if you're concerned about quality conversions, you should probably use Gavino's conversion routines here: http://forum.doom9.org/showthread.php?p=1471623#post1471623

    It doesn't actually matter for the script I just posted, since the final result uses YUY2 chroma that never went through a YV12 conversion anyway...but it's worth keeping that in mind for other scripts.

    Originally Posted by mammo1789
    2. Why did you use RemoveGrain (and can any other filter, like MCTemporalDenoise for instance, be used instead)?
    Yes, it can, and the results will be a lot better with MCTemporalDenoise in cases where you have thick dropouts in one frame but clean neighboring frames. (Actually though, RemoveNoiseMC should be even more effective in this case, because it's made for almost totally eliminating features that show up in only individual frames: http://forum.doom9.org/showthread.php?t=110078) It'll be slow though. I used RemoveGrain(mode = 4), because it's a quick and dirty function that takes a spatial median of the current pixel and surrounding eight pixels. By doing so, it eliminates any features that take up four or less of those nine pixels...such as a one-line horizontal dropout that takes up only three. However, those one-line dropouts will still end up in the final clip if they're present in all three original clips, due to the medians of multiple captures limiting the influence of the RemoveGrain function. (That limiting is important though, because if the script let you eliminate features that are present in all three captures, it would also let you eliminate a lot of detail...not good. If you ever used RemoveGrain(mode = 4) directly for your final output, you'd end up with a bit of an oil painting effect.)
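    For example, swapping the denoiser into the per-capture baseline step of the script would look something like this (just a sketch; the MCTemporalDenoise settings value is a guess, and RemoveNoiseMC could be substituted the same way):
    Code:
    # Hypothetical variant of one baseline line: a stronger, motion-compensated
    # denoiser instead of the quick spatial median (much slower).
    try1spatialmedian = try1.QTGMC(preset = "super fast", lossless = 1).MCTemporalDenoise(settings = "medium").Reinterlace()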

    Originally Posted by mammo1789
    3. You say that it softens the details, and at the same time you say that it might throw out fine details (yet it is also sharper than the naive median). So which one softens more?
    I think you misunderstood what I was saying in my last post: All I meant is that you don't want to use the intermediate clips median12, median23, or median13 as your final result. A lot of people might be tempted to modify the script to return one of those three clips, because it would be a lot faster than the full script, so I felt a need to justify why the full script takes yet another median of all three intermediate medians instead: The extra work takes more time, but it keeps the softening to a minimum.

    You can still use Median1(try1, try2, try3) though without any problems at all. It will still soften even less than the full script, but the difference is more subtle...as you said, you couldn't tell the difference between Median1 and my script! (By the way, you can compare the results using Subtract(clip1, clip2).Levels(112, 1.0, 144, 0, 255) if you want to highlight the differences.)
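    For instance (a minimal comparison sketch; the two clips are whatever results you want to compare):
    Code:
    # Amplify the differences between two results; identical areas come out as flat grey.
    a = Median1(try1, try2, try3)   # naive median
    b = median                      # result of the full script above
    Subtract(a, b).Levels(112, 1.0, 144, 0, 255)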

    Originally Posted by mammo1789
    4. Is using

    # median of 3 clips from Helpers.avs by G-force
    Function Median1(clip input_1, clip input_2, clip input_3, string "chroma")
    {
    chroma = Default(chroma,"process") #default is "process". Alternates: "copy first" or "copy second"

    Interleave(input_1,input_2,input_3)
    chroma == "process" ? Clense(reduceflicker=false) : Clense(reduceflicker=false,grey=true)
    SelectEvery(3,1)

    chroma == "copy first" ? last.MergeChroma(input_1) : chroma == "copy second" ? last.MergeChroma(input_2) : last

    Return(last)
    }


    # replace these with three separate caps
    # use Trim() to sync them.
    v1=AVISource("d:\kosarka1.avi")
    v2=AVISource("d:\kosarka2.avi")
    v3=AVISource("d:\kosarka3.avi")

    Median1(v1,v2,v3)

    OK for interlaced material, and do I have to put interlaced=true somewhere, or is it OK like this?
    5. All 3 videos have to have the exact same number of frames and size (in MB), right?
    Thanks

    Anyway, just using Median1 is perfectly fine for interlaced material. The only reason I temporarily deinterlaced with QTGMC in the above script is so that functions like RemoveGrain(mode = 4), RemoveNoiseMC, or MCTemporalDenoise would work correctly.

    Also, you typically only need to use "interlaced = True" for colorspace conversion functions, like converting between YUY2 and YV12, because those functions need to know how to resample the chroma. Median1 is just fine for YUY2 material though, so if you're going to use that all by itself, you don't need to change colorspaces at all. (That said, if you want to keep your YUY2 chroma, make sure you specify it in your source filter, like Avisource("clip.avi", pixel_type = "YUY2"). For some reason, Avisource likes to automatically convert my clips to YV12 - incorrectly I might add - if I don't specify I want YUY2.)
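    To spell that out (a sketch with placeholder file names):
    Code:
    # Keep YUY2 end to end: no colorspace conversion, so no interlaced=true is needed anywhere.
    v1 = AviSource("cap1.avi", pixel_type = "YUY2")
    v2 = AviSource("cap2.avi", pixel_type = "YUY2")
    v3 = AviSource("cap3.avi", pixel_type = "YUY2")
    Median1(v1, v2, v3)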
    Last edited by Mini-Me; 25th Oct 2012 at 16:05.
  21. Thanks, Mini-Me, for the explanations. I will experiment with MCTemporalDenoise to see how it goes.
    I usually use the median function when capturing 8mm tapes from the camera's composite output (it doesn't have S-Video) > DVD recorder ES15 (or another Pany of mine), and from there via S-Video to an AVerMedia Trinity capture card. My camera has worn-out heads, so it produces a lot of thin dropouts and some glitches; the TBC in the DVD recorder keeps frames from being dropped and stabilizes the picture (no waviness).
    Yesterday I had a tape and recorded it 3 times to use the median (and your script). I had a very difficult time aligning the 3 captures: places that looked like the same frame in all captures ended up at different frame counts (so something must have gone wrong).

    Could it be that I have these issues because I am trying to align longer videos, around 90 min?
    After I made them the same frame count and aligned them at the same spot, I got some weird artifacts (with Median1 and with your script).
    Could it be from the longer videos (because some scenes don't have artifacts, only certain frames do), or from not aligning them well?
    The first 2 pictures are from your script and the naive median, and the third shows the artifacts on the same videos.
    [Attached images: median mini me.png, naive median.png, morphing.png]
  22. Originally Posted by mammo1789
    Thanks, Mini-Me, for the explanations. I will experiment with MCTemporalDenoise to see how it goes.
    I usually use the median function when capturing 8mm tapes from the camera's composite output (it doesn't have S-Video) > DVD recorder ES15 (or another Pany of mine), and from there via S-Video to an AVerMedia Trinity capture card. My camera has worn-out heads, so it produces a lot of thin dropouts and some glitches; the TBC in the DVD recorder keeps frames from being dropped and stabilizes the picture (no waviness).
    Yesterday I had a tape and recorded it 3 times to use the median (and your script). I had a very difficult time aligning the 3 captures: places that looked like the same frame in all captures ended up at different frame counts (so something must have gone wrong).

    Could it be that I have these issues because I am trying to align longer videos, around 90 min?
    After I made them the same frame count and aligned them at the same spot, I got some weird artifacts (with Median1 and with your script).
    Could it be from the longer videos (because some scenes don't have artifacts, only certain frames do), or from not aligning them well?
    The first 2 pictures are from your script and the naive median, and the third shows the artifacts on the same videos.
    Ugh...not good. You might be in for some real misery here. Using median filters across multiple clips only works when every single frame is perfectly aligned for each capture...but it looks like your captures are dropping or inserting frames in places. Even if your videos line up in one area, dropped or inserted frames later on will make them lose synchronization with each other, and they'll usually get worse and worse as the videos go on.

    If you align the beginning of your videos, how many frames are they usually off by toward the end? If you only have to deal with a few videos, and they're only off by a few frames, you could just interleave your captures in Avisynth, preview with Virtualdub, and search through to find the first frame where one of your captures differs. (I've done this a LOT throughout the past year, and I roughly approach it from the standpoint of a binary search instead of stepping through by just 1/50/etc. frames at a time.) As soon as you find out which capture dropped or inserted a frame and where, you can insert or remove a frame in that same spot to compensate...then preview again and find the next spot where a capture dropped or inserted a frame, then fix it again. Keep doing this until you've reached the end of your videos, and you'll eventually have all three aligned all the way through.
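    As a hypothetical sketch of what that compensation looks like in a script (the frame numbers are made up):
    Code:
    # Suppose previewing Interleave(try1, try2, try3) shows that try2 inserted a duplicate
    # frame at 31234 and try3 dropped a frame around 52100.
    try2 = try2.DeleteFrame(31234)       # remove the inserted duplicate
    try3 = try3.Loop(2, 52100, 52100)    # repeat a neighbouring frame to fill the drop
    return Interleave(try1, try2, try3)  # preview again and repeat until they stay in sync to the end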

    I did things manually like that with my EP VHS tapes, because the captures were occasionally inserting a frame at random every hour or so. (For the record, these inserts did not show up in Virtualdub's statistics, because it wasn't Virtualdub inserting the frames. They were being inserted earlier in the chain by my TBC.) It created a lot of work for me, but it was manageable. The typical advice here is to get a TBC to fix this kind of thing, and this will help a HUGE amount if your VCR is just putting out a weak sync signal that your capture card isn't locking onto correctly. However, in your case, your ES-15 should probably be doing a good enough job in that area, so lack of a TBC is probably not your problem. In my case no TBC could have helped (and in fact, I was using a pretty good one already), because I think the VCR I was using was running ever-so-slightly slow. Full-frame TBC's and frame synchronizers can ensure your capture card receives frames in sync with a consistent clock, but they have no effect over the playback speed of your VCR, so if it's running slow, your TBC will eventually have to insert a duplicated frame while it's waiting for input...and if your VCR is running fast, your TBC will eventually have to drop a frame once its memory has been exceeded. Some TBC's are better and more reliable than others due to having more memory or having more graceful behavior when things go wrong, but none of them can fully correct for a deck that's playing at the wrong speed.

    Recently, I've been dealing with a problem that's even worse: I've been trying to match the video from one capture (taken with one VCR) to the audio of another capture (taken with another VCR), and it's hell. My video deck for SP tapes is a Sony SVO-5800, and its built-in TBC is dead-on frame accurate. As long as I set my Virtualdub settings correctly (more on that later), capturing with the SVO-5800 never inserts frames, and it will only occasionally drop a frame when playing back static (unrecorded tape) between scenes. I use it due to its excellent TBC and video quality for SP tapes...but its audio is extremely noisy and has unreliable levels for my old home movies (pro decks weren't meant for playing those back). I could play back my tapes with any old deck for better audio, except many are recorded in linear stereo, and there are VERY few VCR's with such a rare feature. As a result, I have to capture my linear stereo audio with an old Sears SR-2000 from the 1980's. Unfortunately, that deck will never EVER give me reliably frame-accurate output, no matter what TBC I use, and I've tried a bunch.

    Virtualdub resamples the Sears audio to sync up with the Sears video while it's capturing, so all I have to do is align the video from both VCR's to have the audio aligned too...but there are just SO many frame drops that manually aligning the videos is unacceptably time-consuming. I get the least frame drops when I use my Philips DVDR3475 as a passthrough TBC, but they come somewhat sporadically, and sometimes they come in "sprees" where the capture will drop several in a row. That's not particularly good for audio quality given the resampling Virtualdub performs to maintain synchronization, so I've actually taken to using my AVT-8710 as my TBC instead (which I never found useful for anything else before). It results in more frame drops, but they're usually more consistently spaced throughout the video (like one every 10000-15000 frames), so I can just align the beginning of my Sears capture to my Sony capture, count how many frames they're off by at the very end (e.g. 20), then stretch the Sears audio appropriately with SoX resampling to match the Sony SVO-5800 video. It sounds like a pain, and it is, but it's much less stressful than manually aligning multiple captures by finding the exact frame where they get out of sync and inserting or removing frames to compensate.

    So...what lessons can you learn from this? Sometimes you have to manually align captures and keep them aligned by removing duplicated frames and inserting frames to compensate for drops during capture...but it's a huge pain in the butt, so it's probably worthwhile to find a better way. If you have the "right" VCR and the right Virtualdub settings, you will get consistent captures every time, so let's start with setting up your Virtualdub settings right (I'm assuming you're using Virtualdub for capture):

    Go into Capture mode and open up the "Capture" menu, then "Timing..." The ONLY box that should be checked should be "Correct video timing for fewer frame drops/inserts," and the resync mode should be "Sync audio to video by resampling the audio to a faster or slower rate," and the audio latency determination should be "Automatic." All the other boxes should be unchecked. These recommendations differ from Virtualdub's defaults.

    Using those recommendations, a lot of VCR's will give you perfectly frame-accurate captures, especially if you use a full-frame TBC or a DVD recorder with a frame synchronizer (I believe your ES-15 probably has a decent one, though someone can correct me if I'm wrong here; I've never used it). If your VCR plays at the right speed and your TBC can lock onto its signal, you should only have to align the beginning of your captures for every frame to be aligned for a median.

    My problem comes from the fact that I'm trying to capture linear stereo audio, and it's a very rare feature that's only present on pro decks (which have terrible audio for my home movies) and old 1980's premium consumer decks (which do not play back tapes at precisely the correct speed). You shouldn't have this problem: In your case, hopefully your current VCR will produce consistently frame-accurate results with the above Virtualdub settings. If it doesn't, it might be worth experimenting with other VCR's, unless the one you're using has much better quality in some other area that's important to you. If that's the case, you might be stuck with the life-consuming drudgery of manually aligning your captures and compensating for dropped/inserted frames.

    I really hope you can get this working without too many problems, because I know exactly how frustrating frame drop problems can be when you're trying to align multiple captures. Once you get all this solved, I'd definitely encourage you to play with MCTemporalDenoise or RemoveNoiseMC instead of RemoveGrain(mode = 4) in the above script, because the first two images you showed are way too similar to justify using my time-consuming filter over a naive median.
    Last edited by Mini-Me; 26th Oct 2012 at 00:12.
  23. Thanks, Mini-Me. I will try later with the 8mm setup (which produces a lot of thin-line dropouts) to see how it goes. I also wondered whether the progressive setting on the DVD recorder (in my case it was on; I have since turned it off) might do some deinterlacing in passthrough mode and mess up the field order I was getting, or maybe I am wrong.
    This time I tried my oldest VCR, a JVC HR-D211EM (a regular mono VCR), through the ES15 (by the way, I aligned the captures perfectly this time, immediately) to see how it would go with MCTemporalDenoise instead. Here is the result: notice how much more the noise is suppressed with this script (which justifies the work), but it is slow as hell: 0.08 fps; a 3 min video took almost 2 hours.
    The result looks much cleaner, and I couldn't see any loss of detail, but I might be wrong.

    It seems that this could be the ultimate noise reduction.

    The last picture is the naive median; it seems that it only helps with dropouts, while this script does additional noise reduction thanks to MCTemporalDenoise (or RemoveGrain in your case). I will try with RemoveDirtMC.
    [Attached images: edno do edno mctemp.png, naive median.png, remove grain mod.png, removedirtmc.png]
    Last edited by mammo1789; 27th Oct 2012 at 09:33.
  24. I tried Mini-Me's median script again with RemoveDirtMC, but on a tape that had some nastier dropouts.
    The script runs at around 3 fps (faster than the MCTemporalDenoise version and slower than the RemoveGrain version).
    I didn't notice too much softening (though there is some). What do you think? It seems better than the naive median.
    Example:
    [Attached image: uuuu.png]
  25. There are many cases in which the filter damages the video.
    For example:

    [Attachment 17141]

    Is there a way to solve it?
    I use the Median1 function (3 caps).
    If I use five caps instead, would this happen less often?

    I synced the videos right.
    I tested it with Subtract(cap1, cap2) and Subtract(cap1, cap3).

    Everything matches.

    I have to go through the whole video and replace frames with ReplaceFramesSimple.

    It's a nightmare.

    Is there a way to make a log file containing all the frames that this function changed?
    That way I could check the video much faster by jumping to those frames.
  26. Originally Posted by gil900
    I synced the videos right.
    I tested it with Subtract(cap1, cap2) and Subtract(cap1, cap3).
    What source filter are you using? DirectShowSource(), for example, is not always frame accurate.
  27. Originally Posted by jagabo
    Originally Posted by gil900
    I synced the videos right.
    I tested it with Subtract(cap1, cap2) and Subtract(cap1, cap3).
    What source filter are you using? DirectShowSource(), for example, is not always frame accurate.
    This is my script:

    Code:
    cap1 = AVISource("s_cap1.avi")
    cap2 = AVISource("s_cap2.avi")
    cap3 = AVISource("s_cap3.avi")#.trim(2,0)
    
    #test = Subtract(cap1,cap3)
    
    MedVid1 = Median1(cap1,cap2,cap3)
    #MedVid2 = Median1(cap3,cap2,cap1)
    #MedVid3 = Median1(cap2,cap3,cap1)
    
    
    
    #last = ReplaceFramesSimple(MedVid1,MedVid3,Mappings="[5430 6169] [5430 6169]")
    
    
    
    last = MedVid1
    
    
    ConvertToRGB(matrix="Rec601", interlaced=true)
    Crop(6,0,-8,-8)
    
     Function Median1(clip input_1, clip input_2, clip input_3, string "chroma")
    {# median of 3 clips from Helpers.avs by G-force
    
    chroma = Default(chroma,"process") #default is "process". Alternates: "copy first" or "copy second"
    
    Interleave(input_1,input_2,input_3)
    chroma == "process" ? Clense(reduceflicker=false) : Clense(reduceflicker=false,grey=true)
    SelectEvery(3,1)
    
    chroma == "copy first" ? last.MergeChroma(input_1) : chroma == "copy second" ? last.MergeChroma(input_2) : last
    
    Return(last)
    }


