VideoHelp Forum
  1. Great plugin; glad to see you updated it with sync, since syncing is a real pain. Scripts for sync have existed for years, and the most recent effort is a plugin by stainlesss.

    @renard
    I came to the same conclusion; I too have years of recordings with dropped frames due to VirtualDub silently failing by default, and I found (probably) the same settings to fix it.
    Some discussion here.
    http://forum.doom9.org/showthread.php?t=163958
    There's an inherent problem in the Windows API for this, which makes you set the capture rate before capturing. But I do blame VirtualDub for having bad settings by default. With the proper settings the problem is a lot less frequent, probably as good as possible given the Windows limitations. Still, there's really no way to truly capture a VHS without at least two passes.

    In my thread (referenced below) I explained some drop/insert patterns I've found. I've also seen other patterns come from a TBC; I can tell, because the *capture card* recorded all the frames while the *TBC* made the dupe frame. Dupe frames from a capture card are digital, 100% exact copies. Dupes from a TBC are also exact, but once that output gets digitized a small amount of analog noise is added; so when you see a dupe with a very small difference, it was digitized from a dupe made by another device.

    I think the cause is that a fixed capture rate is set, but the VCR runs at its own slightly random rate, so the two drift out of sync. A good driver/capture program will adjust the capture rate occasionally to stay in sync: say the capture rate is 29.90 and two frames get dropped over the measured interval; it would then raise the capture rate to 31.9, the apparent rate of the source.

    How it should actually work is to capture every frame as it happens and not try to follow the 29.97 standard at all. When viewing the stream, yes, frames will be duped by necessity, but now with G-Sync etc. you can even have your monitor show the true VCR frame rate. Meanwhile you'd have to go back and subtly resample the audio to match the random timing of each video frame.

    @johnmeyer
    I came to the same conclusion as you about "reference frames"; I happened to call it "fuzzy matching". I made a pretty good writeup about it here:
    http://forum.doom9.org/showthread.php?t=165462

    So while you are getting good results with your sync, that kind of simple comparison will really break down on noisy clips. I suggest correlation because of its mathematical properties: it is immune to differences in brightness, contrast, and (though this may be of no use) the order of pixels.
    The problem with a snowstorm of comets is that not very many pixels match. There is also the problem (within the clip) of low-motion scenes, where you can't tell whether a frame was dropped or not; that's the real killer.


    Even with this more powerful matching function, I had to resort to fuzzy matching. I'll have to work out the algorithm and maybe update your plugin someday.

    I could also mention that there are other ways besides the median, like sigma clipping or Artificial Skepticism (Stetson, 1989):
    http://deepskystacker.free.fr/english/technical.htm#stackingmethods
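
    For illustration, the clipping idea maps onto this same plugin family, assuming the Median package's MedianBlend function and its low/high parameters work the way its docs describe (I haven't verified that here): per pixel, discard the extremes and average what's left.
    Code:
    # crude min/max-clipped mean - same spirit as sigma clipping;
    # MedianBlend and its low/high parameters are assumed from the docs,
    # and the capture file names are hypothetical
    a = AviSource("cap1.avi")
    b = AviSource("cap2.avi")
    c = AviSource("cap3.avi")
    d = AviSource("cap4.avi")
    e = AviSource("cap5.avi")
    MedianBlend(a, b, c, d, e, low=1, high=1)  # drop lowest+highest, average the middle 3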

    My thread explains many of my points. Just so you know, there can be a long run of frames where you don't know for sure whether you're in sync or not, so you'd either need two passes or a lookahead of something like 30 frames.

    The last idea I would add to make this ideal is matching the sources geometrically (scale and translation), in colour (histogram matching), and for jitter (per-line scale/translation adjustment per clip).

    That would be really nice; I'll have to write it someday.

    For now, back to researching dark frame subtraction (which is why I'm thinking about median again).
    Last edited by jmac698; 30th Aug 2016 at 16:32.
  2. This filter isn't better than Merge()

    https://postimg.org/gallery/1zyby0r1e
  3. Originally Posted by Aludin:
    This filter isn't better than Merge()

    https://postimg.org/gallery/1zyby0r1e

    Maybe you didn't have a suitable "candidate"?

    Were your sources aligned (temporally and spatially), was it "random noise", and how many sources were there?
  4. @Aludin

    What are your sources like? The median process is appropriate for dropouts and streaks such as these in one of my earlier posts, or any other artifacts that vary wildly between captures.

    If you've got just slight noise on an otherwise clean image, the difference between average and median is naturally going to be very minimal.
  5. Originally Posted by poisondeathray:
    Maybe you didn't have a suitable "candidate"?
    That's the key issue: how do you decide which candidate is good and which isn't? It's a lot simpler to just take all of them into the equation and average them out. The minute you start picking and choosing, you gamble with the quality.

    Originally Posted by poisondeathray:
    Were your sources aligned (temporally and spatially), was it "random noise", and how many sources were there?
    Yes, they were all aligned. There were really only compression artifacts, and 27 sources in total. I had to do 9 at a time because I didn't have enough memory, so once I had the 3 processed outputs I ran them through the tool and got the final result. I used sync=30 so nobody here would complain about me not doing enough, and this made it take many hours to process, only for it to be worse than Merge, which took a few minutes.

    What are your sources like? The median process is appropriate for dropouts and streaks such as these in one of my earlier posts, or any other artifacts that vary wildly between captures.
    That makes sense. But with a sufficient number of sources, even wild changes won't matter.
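
    For reference, the 9-at-a-time batching I described would look roughly like this in script form (file names are hypothetical):
    Code:
    # load the 27 captures (first two shown; s03..s27 follow the same pattern)
    s01 = AviSource("take01.avi")
    s02 = AviSource("take02.avi")
    # pass 1: median of each group of nine, with the sync search enabled
    m1 = Median(s01, s02, s03, s04, s05, s06, s07, s08, s09, sync=30)
    m2 = Median(s10, s11, s12, s13, s14, s15, s16, s17, s18, sync=30)
    m3 = Median(s19, s20, s21, s22, s23, s24, s25, s26, s27, sync=30)
    # pass 2: median of the three intermediate results
    Median(m1, m2, m3)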
  6. Originally Posted by Aludin:
    That's the key issue: how do you decide which candidate is good and which isn't? It's a lot simpler to just take all of them into the equation and average them out. The minute you start picking and choosing, you gamble with the quality.


    Simpler perhaps, but not necessarily "better". Use the right tool for the job. I don't use a screwdriver to hammer in nails (you can, but it's better to use a hammer).

    A simple mean isn't necessarily a good approach either; you can place higher weight on takes that are "better", for example.

    The minimum requirements for any median approach are alignment and "randomness".

    "Compression artifacts" aren't necessarily "random" . For example, if you have macroblocks in the same spatial location, median isn't going to be ideal approach - that's not a "random" distribution. For example, if all your "takes" had used the same compressor, type of compression, there is a high chance that the artifacts aren't random at all. If you had a MPEG2 , AVC from another cap etc... there is a higher chance that the mb distribution and artifact distribution will be more "random"

    If you look at some VHS dropouts, they usually do not occur in the same place (their distribution is effectively "random").

    Did you have a chance to look at the image-processing link in the other thread? Have a look at the object-removal example. You can draw a parallel to "random" dropouts on the basis that their distribution isn't in the same spatial location on aligned frames. Typical sensor noise, such as from low-light shooting, is also roughly "randomly" distributed.

    A mean average might be better suited to some situations, but it's definitely worse in cases where "random" defects occur across multiple takes. If you tried a mean average on that object-removal or a dropout example, you would get "ghosting" or residual echoes from the other frames, because they are all included in the mean. A median will exclude them and result in a higher SNR.
  7. @Aludin

    poisondeathray has already explained the processing aspects well. It is true that with a large enough number of captures the difference diminishes, but getting close to a median of five with an averaging process would take many more samples. And capturing one hour of tape takes one hour, as I'm sure you know. You can see the difference in this post: at five samples, averaging leaves all the noise visible (although slightly diminished), while the median has eliminated it almost completely.
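
    To make the comparison concrete, here is a sketch of both approaches on five captures (core AviSynth plus this plugin; file names are hypothetical). The chained Merge weights fold each clip into a running average so all five end up weighted equally:
    Code:
    a = AviSource("cap1.avi")
    b = AviSource("cap2.avi")
    c = AviSource("cap3.avi")
    d = AviSource("cap4.avi")
    e = AviSource("cap5.avi")
    # equal-weight mean: (a+b)/2, then (a+b+c)/3, and so on
    avg = a.Merge(b, 1.0/2).Merge(c, 1.0/3).Merge(d, 1.0/4).Merge(e, 1.0/5)
    # per-pixel median of the same five captures
    med = Median(a, b, c, d, e)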

    I'd like to additionally clarify what the sync option does, as I may have done a poor job explaining it earlier in the thread. It should really only be used with large values to determine gross misalignment in the input streams (using the debug output as a guide), after which it can be turned off or left on with a small radius (to catch the occasional frame drop). Running at sync = 30 with inputs that are already aligned does nothing except vastly increase processing time; there is zero effect on image quality. Also, the inputs need to be aligned just the same for an averaging process - this is not an additional requirement of using a median.
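
    In script form, the intended workflow is something like this sketch (file names and trim offsets are made-up examples; read the real offsets from the debug overlay):
    Code:
    c1 = AviSource("cap1.avi")
    c2 = AviSource("cap2.avi")
    c3 = AviSource("cap3.avi")
    # pass 1: diagnose gross misalignment with a large search radius
    # Median(c1, c2, c3, sync=400, debug=true)
    # pass 2: trim the inputs into rough alignment (hypothetical offsets)
    c2 = c2.Trim(14, 0)
    c3 = c3.Trim(3, 0)
    # pass 3: process with sync off, or a small radius for stray drops
    Median(c1, c2, c3, sync=2)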
  8. I get the purpose now. It's to fix glitches among a limited number of source clips. But if you had 20 or 30 clips with those white dropouts, then I guarantee they would also disappear. This Median filter looks suitable for glitches rather than noise, because I still see noise on the fingers of that guy. Look at the girl's hair in the screenshots I posted: the median has more noise there than Merge. Macroblocks are typically in the same spot in flatter areas, but they're not quantized exactly the same because the sources are all slightly different. So all that remains is the general shape of the macroblock, with all the other artifacts gone.
  9. Originally Posted by Aludin:
    I get the purpose now. It's to fix glitches among a limited number of source clips. But if you had 20 or 30 clips with those white dropouts, then I guarantee they would also disappear. This Median filter looks suitable for glitches rather than noise, because I still see noise on the fingers of that guy. Look at the girl's hair in the screenshots I posted: the median has more noise there than Merge. Macroblocks are typically in the same spot in flatter areas, but they're not quantized exactly the same because the sources are all slightly different. So all that remains is the general shape of the macroblock, with all the other artifacts gone.
    Fixing glitches isn't necessarily the sole purpose. Median can be used for object removal, clean plates, and certain types of noise removal (again, it's critical that it's "random").

    It depends on the type of "noise", and how you define "noise". For example, some people might define film grain as "noise" (one man's grain is another man's noise). The simplest definition of "noise" is just unwanted signal, but what is "wanted" vs. "unwanted" is debatable. The pattern and type of "noise" is important to consider: it has to be "random" - those are the candidates suitable for a median. If it's random, median will almost always outperform mean; this is easy to demonstrate with objective metrics, easy to see with the eye, and easy to understand with the math too.

    Grain is almost a perfect example of "random noise". This is an example of 5 frames of a Kodak Vision grain plate. Am I going to make a blanket statement that median is better than merge? No - it's only better most of the time, on suitable candidates.

    kodak merge psnr_avg:30.76 psnr_r:34.53 psnr_g:32.30 psnr_b:28.02
    [Image: kodak_merge.png]

    kodak median psnr_avg:34.06 psnr_r:38.10 psnr_g:35.81 psnr_b:31.18
    [Image: kodak_median.png]
  10. Originally Posted by Aludin:
    I get the purpose now. It's to fix glitches among a limited number of source clips. But if you had 20 or 30 clips with those white dropouts, then I guarantee they would also disappear. This Median filter looks suitable for glitches rather than noise, because I still see noise on the fingers of that guy.
    Yes, basically a median means "pick out the most likely correct sample and discard the others", whereas averaging means "take all samples into account equally" (regardless of how wrong some of them might be). If one were to introduce a completely incorrect input (say, an entirely different video) into the mix, a median process would just throw it out, while an average would have a steady 20% of it mixed in (assuming 5 samples). It would take a lot of samples to completely nullify the effect of one "bad apple" in the mix.

    Neither average nor median can ever remove all noise from e.g. camcorder footage, since some (or a lot) of it is caused by the original CCD - it is burned into the image. This kind of multi-sampling will only remove noise and glitches caused by the transfer path, i.e. the playback mechanism, cabling, capture device and so on, which is random and different with each capture.

    So, this isn't really a denoising process, but a statistical method to get the best possible representation of the analog signal as originally recorded. Denoising algorithms attempt to tell noise apart from image data by entirely different criteria, and can follow as required after a good transfer has been achieved.
  11. First of all... THANKS AJK!
    Your plugin is one of the most useful plugins ever for improving the quality of VHS captures.
    The results are AWESOME!

    Well....
    I've known about your plugin for a long time, but I NEVER REALIZED the existence of the "sync" functionality until today.
    This changes everything!!

    I struggled a lot writing a script that aligns the clips automatically in much the same way you do with "sync": http://www.digitalfaq.com/forum/video-restore/10437-extract-information-vhs.html#post67328
    I didn't know I was trying to reinvent the wheel!

    I compared the result of my script with yours: they are almost the same, but yours is much faster!
    This makes me feel even a little dumb, because I put a lot of effort into something that already existed, and was done better, but at least now I feel that my struggle has come to an end!

    There are some things in your sync that could be improved though, if you are willing:
    1) You could add a function alignClips(A, B) that does exactly what sync does, but just returns clip A aligned with the base clip B.
    This can be helpful if you want to pre-process the aligned clips before calculating their median (useful for further global luma matching before combining them).

    Edit: I just realized this can be achieved by using the function in this "unorthodox" way: Median(Clip1, Clip2, Clip2), repeating the clip to be aligned two times, but it's not very efficient. (A sketch of this trick as a wrapper function follows after point 3.)

    2) You could add the possibility to search for the alignment not only temporally but also spatially, by combining temporal and spatial shifting.
    This can be helpful if some frames of a capture "jump" up and down due to a tape transport error.

    3) If the first clip has a duplicate frame, or a seriously damaged frame that does not match any frame of the other captures, the "corresponding frame" chosen from the other captures is fairly "random" if you use just a luma comparison on that single frame.
    To find the best matching frame you could use, as the matching statistic, not just the luma match of the two frames but the SUM of the luma match of the two frames and the luma matches of n adjacent frames. This gives a better result with totally messed-up, damaged tapes that have not only comets but severe crumpling!
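
    As promised under point 1, here is the trick wrapped into a function (just a sketch; the default radius is made up):
    Code:
    function alignClips(clip A, clip B, int "sync") {
        # the median of {B, A, A} always takes A's pixel values, but the
        # sync search still matches frames against the base clip B, so
        # the output is A re-timed onto B's timeline
        sync = Default(sync, 20)  # hypothetical default search radius
        return Median(B, A, A, sync=sync)
    }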


    Originally Posted by ajk:
    Originally Posted by renard:
    I don't see in the readme or the wiki whether there is a limitation to progressive and/or interlaced streams?
    There is not; the nature of the processing is such that whether a clip is interlaced or not does not matter. Basically, pixels are not moved, so fields can't get mixed.
    Are you sure..?
    Sometimes it happened to me that NO frame of capture 1 could match any frame of capture 2, because the fields were shifted by one place.

    Example:

    First Capture
    <Top Field 1, Bottom Field 1, Top Field 2, Bottom Field 2, Top Field 3, Bottom Field 3, ...>
    The capture card gives:
    [Top Field 1, Bottom Field 1][Top Field 2, Bottom Field 2][Top Field 3, Bottom Field 3]... (TFF)

    Second Capture
    <Top Field 0, Bottom Field 0, Top Field 1, Bottom Field 1, Top Field 2, Bottom Field 2, Top Field 3, Bottom Field 3, ...>
    The capture card misses Top Field 0 for some reason and gives:
    [Bottom Field 0, Top Field 1][Bottom Field 1, Top Field 2][Bottom Field 2, Top Field 3]... (BFF)

    In that case the correct way to look for a match is by field, not by frame.



    THANKS Again for your fantastic plugin!
    Last edited by benzio; 17th Mar 2020 at 07:58.
  12. Brad (formerly 'vaporeon800'):
    If your captures are offset by one field like that, I think it would be better to manually align the start by trimming or adding one field. Then the field order will match.

    SeparateFields().Trim(1,0).Weave()
  13. Originally Posted by vaporeon800:
    If your captures are offset by one field like that, I think it would be better to manually align the start by trimming or adding one field. Then the field order will match.

    SeparateFields().Trim(1,0).Weave()
    It's also important which field is the TOP one and which is the BOTTOM one, not just the order.
    Your method shifts the fields to different frames, but it also swaps the top and bottom fields, making the lines jagged.

    I made the same mistake in this function that I used some time ago, and the result was bad:
    Code:
    function shiftFieldsKeepingFieldOrder(clip source) {
        # the mistake: hard-coding TFF - for a capture that actually starts
        # with a bottom field, this mislabels the fields, so the one-field
        # shift swaps top and bottom and the lines come out jagged
        source = source.AssumeTFF()
        source = source.SeparateFields()
        source = source.Trim(1,0)
        source = source.Weave()
        return source
    }
    Last edited by benzio; 17th Mar 2020 at 09:31.
  14. Originally Posted by benzio:
    I've known about your plugin for a long time, but I NEVER REALIZED the existence of the "sync" functionality until today.
    This changes everything!!

    I didn't know I was trying to reinvent the wheel!

    I compared the result of my script with yours: they are almost the same, but yours is much faster!
    This makes me feel even a little dumb, because I put a lot of effort into something that already existed, and was done better, but at least now I feel that my struggle has come to an end!
    Thanks for the feedback! Don't feel bad about that; I think time is never wasted when learning new things.

    1) You could add a function alignClips(A, B) that does exactly what sync does, but just returns clip A aligned with the base clip B.
    This can be helpful if you want to pre-process the aligned clips before calculating their median (useful for further global luma matching before combining them).

    Edit: I just realized this can be achieved by using the function in this "unorthodox" way: Median(Clip1, Clip2, Clip2), repeating the clip to be aligned two times, but it's not very efficient.
    Hmm... I suppose that would be possible, but can you not just process the clips anyway before merging them? What kind of filters do you have in mind where the alignment would matter?

    2) You could add the possibility to search for the alignment not only temporally but also spatially, by combining temporal and spatial shifting.
    This can be helpful if some frames of a capture "jump" up and down due to a tape transport error.
    I did think about this, but it would make the process a lot slower if it had to look even 8 pixels up and down. Maybe it could be an option to only try that IF there is no reasonable candidate available temporally.

    3) If the first clip has a duplicate frame, or a seriously damaged frame that does not match any frame of the other captures, the "corresponding frame" chosen from the other captures is fairly "random" if you use just a luma comparison on that single frame.
    To find the best matching frame you could use, as the matching statistic, not just the luma match of the two frames but the SUM of the luma match of the two frames and the luma matches of n adjacent frames. This gives a better result with totally messed-up, damaged tapes that have not only comets but severe crumpling!
    I also toyed with ideas for how to not have a "master" clip but to recognize drops by examining all the clips... I couldn't really find a reasonable solution that would be easy to implement in code. In practice it seemed to be enough to just get as good captures as possible, and fix any remaining issues manually if they were too bothersome.


    Originally Posted by ajk:
    There is not; the nature of the processing is such that whether a clip is interlaced or not does not matter. Basically, pixels are not moved, so fields can't get mixed.
    Are you sure..?
    Sometimes it happened to me that NO frame of capture 1 could match any frame of capture 2, because the fields were shifted by one place.
    Well, as long as there are no glitches in the geometry of the frames, I'm sure - obviously jumps or any misalignment will cause problems, but that's not strictly to do with interlacing. Again, I didn't find this to be an issue in practice; usually I captured tapes three times, but ones that were problematic I captured five or even seven times (usually not the whole tape, just a section), and this ensured that good frames outnumbered the bad.

    I'll consider your suggestions, but I have to confess that during the past year I have captured all my remaining Hi8 tapes, so I don't have that much motivation to continue work on the plugin. Perhaps one day - and of course the source code is available for anyone else to tinker with!
  15. Brad (formerly 'vaporeon800'):
    It's difficult for me to keep field order issues straight in my head without playing around with clips, but:

    Originally Posted by benzio:
    It's also important which field is the TOP one and which is the BOTTOM one, not just the order.
    Your method shifts the fields to different frames, but it also swaps the top and bottom fields, making the lines jagged.
    Your Capture 2 already has the fields in a different spatial position within the frame compared to Capture 1; the Avisynth code should make them match.

    I made the same mistake in this function that I used some time ago, and the result was bad
    Your function starts with AssumeTFF, which would be wrong for Capture 2.
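
    In script form, a corrected version of that function would be something like this sketch:
    Code:
    function shiftFieldsBFF(clip source) {
        # flag the capture as BFF, since it really starts with a bottom
        # field; SeparateFields then reads the fields in true temporal order
        source = source.AssumeBFF()
        # drop the first (bottom) field and re-pair: the result starts with
        # Top Field 1 and weaves back as TFF, matching the first capture
        return source.SeparateFields().Trim(1,0).Weave()
    }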


