VideoHelp Forum
  1. PuzZLeR (Member since Oct 2006, Toronto, Canada)
    Originally Posted by ajk
    @PuzZLeR

    Yeah, clip1 is of course just used for convenience here, and the master clip can be any of the captures. If one appears to have fewer dropped frames than the others, that's the best one to use.

    You probably are the biggest fan of this plugin, so how about some examples from past work?
    The cool thing is that they're all gone. Once I've captured 3-5 times, lined them up in a script, and processed with Median(), I delete the captures, since I have no further use for them and there is no side effect in the final processed version. It's also always a relief to delete five RGB-sized captures afterwards.
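    For anyone following along, the basic script really is this simple (a minimal sketch, with hypothetical filenames):

    Code:
    # Load the repeated captures and take the per-pixel median.
    clip1 = AviSource("capture1.avi")
    clip2 = AviSource("capture2.avi")
    clip3 = AviSource("capture3.avi")
    Median(clip1, clip2, clip3)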

    So I have no "before", which makes posting the "after" not very demonstrative. And I'm pretty much done with most of my previous work, so even to test this new update I'd have to generate 3-5 new captures. And they'd have to be long ones too, to really get a lot of drops. (Yes, as you know, captured in real time.)

    I'd be glad to upload some stuff soon however now that you mention it.

    Originally Posted by ajk
    Now, as it stands, if clip1 is missing a frame which exists in all the other clips, all those others will simply be discarded. But if we were to compare each clip to every other clip, it would be easy to see that a frame in fact exists in four out of five (or however many) clips, and should be included in the output.
    I did think of this shortcoming earlier, and had a workable, partially manual solution in mind to accompany the updated plugin. It would still be much quicker than a totally manual solution, and the final result wouldn't also inherit the drops of clip1.

    But I'm sure you're on your way too in this.

    I do get that fabricating some abstract "totally fixed" clip, instead of using clip1 as the benchmark, would require a much more complex, almost recursive algorithm. Maybe in the future; for now, I'm sure there's still a good solution.

    Let me also do some thinking/testing.
    I hate VHS. I always did.
  2. Originally Posted by ajk
    Then, for calculating the output, it will use whichever frames have been deemed to be the best matching.
    As I'm sure you know, since you've actually written the plugin, frame matching can be really, really difficult. Perhaps you have already run into the problem I'm about to describe.

    First, to state the obvious: each capture is slightly different because of the noise you are trying to remove. That, of course, is the whole point of this exercise. Since the captures differ, you don't end up with true, 100% identical duplicates, like you get when a capture card drops a frame and then later duplicates a frame so audio sync is not lost. By contrast, when capturing the same tape multiple times, comparing the same frame across the different captures will always give you non-zero matching metrics.

    I've done a lot of work with various frame difference metrics, and they often spike unexpectedly. As a result of this experience, I expect you will have a lot of false positives. Perhaps this doesn't matter much because if you replace a frame that doesn't need to be replaced, the only downside will be slightly less noise reduction for that frame, or perhaps a little softness. The mistake may not call attention to itself. This may still be true even if the frame you replace isn't from the same moment in time as the frame you use for the replacement.

    Perhaps the following will help; it may not be relevant, but here goes.

    Ten years ago I tried to develop a system for recovering good frames from a video-camera capture of a 24 fps 16mm projector from which the shutter had been removed. If you go through the math, a 30 fps interlaced camcorder with the shutter set to 1/1000 second will always end up with two adjacent fields that perfectly capture each frame of the original film. The trick is to then figure out how to combine the correct fields (sometimes you want to combine the bottom field of the current frame with the top field of the next frame), and also how to decimate the redundant fields, as well as the fields that capture the pulldown, which even at a 1/1000 second shutter speed are completely blurred.

    The reason I bring this up is that for that problem, I was able to use the blurred frames as reference points, and then work backwards and forwards, up to ten frames, from those reference points. In other words, the pulldown fields were so obvious and the metrics so huge that I could be 100% certain not only that they would be bad, but also that the next two fields would match and belonged together. Because neither the projector nor the camcorder actually operates at a mathematically perfect 24 or 29.97 fps, you can't use pure math to determine matches going forward from a sync point, and eventually the match will come one field earlier or later than the math would predict. However, working backwards from the next reference frame helps judge whether a field matches this field or the next. So, for my system, I use the field-matching metrics to decide which fields to combine, but when the metrics are indeterminate, I count backwards and forwards from my reference points to make the final determination on which fields to match.

    The same thing might be useful for your project, but in your case the reference points would be scene changes. These (usually) provide huge frame-to-frame difference metrics, and you could use them as reference points for each capture. Once you detect that one capture's scene change no longer matches the frame number of the scene changes in the other captures, you know that the problem area lies between that point and the previous scene change. Doing it this way does two things. First, you can scan each capture using scene-detection logic that is really fast; my current scene-detection script operates at several hundred fps. Thus, as with any search algorithm, if you can quickly eliminate a large portion of the search area with an initial scan, you get to the end result faster. Second, you minimize false positives, because you only have to hunt for the bad sync points over a relatively small section (or sections) of each capture. What's more, if you have more than two captures, and two or more show no change at the scene-change point, you only have to inspect the one capture that doesn't agree with the others.
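    In AviSynth terms, a quick sketch of that kind of fast scan (the threshold of 30 and the filenames are just placeholders):

    Code:
    # Write a log entry (via the runtime variable current_frame) for every
    # frame whose average luma difference from the previous frame exceeds
    # the threshold; run once per capture, then compare the logs to
    # bracket the bad sync points.
    clip = AviSource("capture1.avi")
    WriteFileIf(clip, "scenes1.txt", "YDifferenceFromPrevious() > 30", "current_frame")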

    This is probably not useful, since it sounds like you've already got it working, but if you are looking for ideas to tweak it or speed it up, this might help.
  3. ajk (Member since Dec 2005, Finland)
    @johnmeyer

    You are absolutely correct in what you say; frame matching can be difficult to get right. However, we can make some assumptions in this particular scenario (tapes): to even consider merging several captures, we have to start with a reasonably stable source.

    As long as the capture chain is decent - proper player, some kind of TBC, identical settings for all captures - we should end up with pretty similar results each time. In particular, all frames should match quite closely vertically, unless there is a significant glitch on the tape.

    Horizontally there will be some jitter depending on the TBC situation, but since all of these formats are relatively low-resolution in that direction, slight movement will not show up as a huge difference when comparing frames. It can be thought of as a mild horizontal blur.

    What follows is a real-world example of that troublesome tape I was talking about earlier. I have captured the same tape five times, and all captures have these white comets in random locations. For demonstration purposes I have intentionally misaligned the 5 captures by one frame each. This is the same result you would get from having a dropped frame or two somewhere earlier in the clip.

    Now, by running

    Code:
    Median(clip1, clip2, clip3, clip4, clip5, sync = 2)
    I get the following internal results (captured by DebugView)

    Code:
    median: frame 4	
    median: syncing clip 2 with clip 1	
    median: offset: -2, difference: 5.041217	
    median: offset: -1, difference: 2.174288
    median: offset: +0, difference: 5.499196	
    median: offset: +1, difference: 5.500536	
    median: offset: +2, difference: 7.353133	
    median: best match with: -1
    median: syncing clip 3 with clip 1		
    median: offset: -2, difference: 2.178357
    median: offset: -1, difference: 5.557119	
    median: offset: +0, difference: 6.497922	
    median: offset: +1, difference: 7.316655	
    median: offset: +2, difference: 7.316655		
    median: best match with: -2	
    median: syncing clip 4 with clip 1	
    median: offset: -2, difference: 7.829398	
    median: offset: -1, difference: 6.944317	
    median: offset: +0, difference: 6.004423	
    median: offset: +1, difference: 5.035089	
    median: offset: +2, difference: 2.118087
    median: best match with: +2
    median: syncing clip 5 with clip 1		
    median: offset: -2, difference: 6.968156	
    median: offset: -1, difference: 6.027880	
    median: offset: +0, difference: 5.029058	
    median: offset: +1, difference: 2.135991
    median: offset: +2, difference: 5.534620		
    median: best match with: +1
    We can visualize the process by lining up all the streams. I have emphasized the chosen frames by setting all other frames to greyscale. It's a large image, so rather than inlining it in the post I have attached it separately below. We can see that the correct frames have been found.

    And the end result of the Median() itself is this frame:

    [Attached image: median.png]

    The amount of noise has clearly been reduced, and all comets have been eliminated. And even though the captures did not line up, we didn't end up with a garbled combination of mismatching frames, as would normally happen.

    So while matching frames is difficult, in this particular case the existence of close matches is guaranteed; if it weren't, there would be no point in trying to merge the captures in the first place. So we can get away with a reasonably simple matching algorithm.
    [Attached image: sync.png]
  4. Thanks for the amazingly complete explanation.

    Just out of curiosity, what AVISynth command did you use to generate your frame difference metrics? Also, does your software keep track of how many times you have to shift captures, and if so, do you think you are getting many false positives where a capture is realigned when it doesn't need to be?
  5. ajk (Member since Dec 2005, Finland)
    I did not use an existing AviSynth command; this is something I built into the Median() plugin itself. It's probably the simplest possible method: images are compared pixel-by-pixel and the total difference is scaled to a range of 0-100. A result of 0 means the images are identical, and 100 means they are entirely different (e.g. full black compared with full white).
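    For comparison, AviSynth's own runtime functions can produce a roughly similar number in script form. A rough sketch (luma only, so merely indicative; the 2.55 divisor rescales the 0-255 result to roughly 0-100):

    Code:
    # Show the average per-pixel luma difference between two captures,
    # rescaled to roughly the 0-100 range described above.
    global a = AviSource("capture1.avi")
    global b = AviSource("capture2.avi")
    ScriptClip(a, """Subtitle(String(LumaDifference(a, b) / 2.55, "%.2f"))""")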

    At the moment nothing is being kept track of - when advancing to the next frame all the difference metrics are recalculated. That is something that could probably be optimized, but doing so might of course lead to a larger possibility of incorrect decisions, if not done with care.
  6. Originally Posted by ajk
    ... images are compared pixel-by-pixel and the total difference is scaled to a range of 0-100. A result of 0 means the images are identical, and 100 means they are entirely different (e.g. full black compared with full white).
    Thanks. Actually, that sounds exactly like what the AVISynth YDifference functions do (from the doc: "return the absolute difference of pixel value between the current and previous frame of clip"). If so, I am quite familiar with how this works, and where it can fail. Despite some problems, it is a pretty good way to sense similarities and differences.
  7. ajk (Member since Dec 2005, Finland)
    Using DebugView is a bit cumbersome, so I moved the debug information onto the image itself. I also turned the metric around, so 100% means an exact match. Here is once again an example frame from my troublesome tape:

    [Attached image: original.png]

    As we can see, there is a lot of noise and comets on the frame. Let's use the other captures and run Median():

    Code:
    Median(clip1, clip2, clip3, clip4, clip5, sync = 25)
    [Attached image: metrics.png]

    Not too bad, but looking at the data in the corner we see that near matches have been found for clips 2, 3 and 4, each at over 98% similarity (the other 2% being noise). However, for clip 5 we get a match only with a significantly larger offset, and even then only at about 92% similarity. This tells us that clip 5 most likely is not lined up even close, and should be fixed. Alternatively, the sync value could be increased enough for the search to locate the proper frame, but that would slow down the processing further.

    In this case I looked at clip 5 and determined that it is 40 frames off, leaving it outside the search radius. Let's fix this and try again:

    Code:
    Median(clip1, clip2, clip3, clip4, clip5.Trim(40,0), sync = 25)
    [Attached image: fixed.png]

    That's better. Now clip 5 lines up exactly, with an offset of 0, and its similarity metric matches the other clips. Image-quality-wise, the end result is slightly cleaner, with even less noise and fewer comets.


    Somewhat unrelated, but just out of interest, this is what the result is if an average is calculated rather than the median:

    Code:
    MedianBlend(clip1, clip2, clip3, clip4, clip5.Trim(40,0), sync = 25, low = 0, high = 0)
    [Attached image: average.png]
    Last edited by ajk; 14th Nov 2015 at 04:37. Reason: Added the "average" test
  8. ajk (Member since Dec 2005, Finland)
    I bumped the version number up to 0.6. This version now includes the sync functionality to ease the burden of manually aligning clips that may have a dropped frame every now and then.

    Using the feature works as follows:

    Code:
    Median(clip1,clip2,clip3,clip4,clip5, sync=10, debug=true)
    The sync parameter determines how far the filter will look for a good match for each frame. A high radius slows down processing, so it's good to align at least the starts of the clips manually with Trim(). Unless you then have a great many dropped frames, a radius of 5-10 should be plenty.
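    For example, a hypothetical setup where the second and third captures have 7 and 12 extra frames at the start:

    Code:
    # Align the starts by hand with Trim, then let sync absorb the
    # occasional dropped frame within a +/-10 frame radius.
    c1 = AviSource("capture1.avi")
    c2 = AviSource("capture2.avi").Trim(7, 0)
    c3 = AviSource("capture3.avi").Trim(12, 0)
    Median(c1, c2, c3, sync = 10, debug = true)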

    The debug data printed on the image will be useful for determining whether the clips are in sync all the way. Simply jump through different points in your video and look at the numbers. Your debug output might be something like this:

    Code:
    FRAME: 1425
    CLIPS: 5
    SYNC RADIUS: 5
    SYNC METRICS:
    2  +0  97.384919
    3  +1  97.152746
    4  -2  97.163277
    5  +3  97.213446
    As long as the value in the second column (offset) is always less than the radius, and the third number (similarity) is roughly the same for all clips, things are good. The absolute value of the similarity depends on how much difference, or noise, there is between the clips. Once you are satisfied that the clips line up all the way through the video, turn off the debug printout and continue with further processing.

    There is also another parameter, samples, which allows you to choose how many pixels are compared to determine the similarity between images. The default of 4096 samples seems to do a good job without being slow, but you can play around with it if needed.
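    For example, to compare four times as many pixels per frame at some cost in speed (a hypothetical value):

    Code:
    Median(clip1, clip2, clip3, sync = 10, samples = 16384)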

    I have tested this with a few samples now with good results, and intend to continue on with the rest of my tapes, but any feedback is again welcome!
  9. renArD (Member since Feb 2016, Lyon, France, Europe, Earth, Sun microsystem, Oracle, in hands of God)
    Hello AJK,

    I'm new to this forum and quite new to video as well, but I'm about to compete for the "top fan" status of your plugin, even though I haven't tried it yet!

    I hope the following won't reveal too clearly that I'm a newbie.

    I've read some forums (here and doom9) and tried to follow several threads, including those from jmac and, of course, this thread.

    I'm daring to join this thread to encourage your development, even though you haven't had much feedback! I'd also like to ask some questions and offer some remarks.

    1/ I don't see in the readme or the wiki whether there is any limitation regarding progressive and/or interlaced streams?
    2/ The readme says the plugin isn't built for AviSynth 2.6, but the wiki says "2.5.8 or greater". Which is true?
    3/ In the wiki you specify the need for an odd number of videos, but I think it's not mentioned in the readme file.
    4/ Out of curiosity, I'd be happy to know why you developed a plugin rather than a script, like the ones we can find here for example (much less evolved!).
    5/ Jmac mentioned a great idea about not removing extreme values, but only values that are too far from the standard deviation. What do you think about giving it a try?
    6/ In this thread it was discussed that clip1 is the reference for the sync option, therefore risking throwing away good frames from the other 4 of 5 clips if the wrong one is the reference. I'm not sure I understood well, but if so, is this inconvenience still present?
    7/ In the thread I understood that the sync function compared whole frames, whereas at the end of the thread (and in the readme and the wiki) 4096 pixels are mentioned. So I fear I misunderstood something!

    Thanks for your plugin and for your reply !

    renArD
  10. ajk (Member since Dec 2005, Finland)
    Originally Posted by renard
    I'm daring to join this thread to encourage your development, even though you haven't had much feedback! I'd also like to ask some questions and offer some remarks.
    Questions and comments are always welcome! I realize the process of combining multiple captures is quite time-consuming, and therefore may not be particularly common practice, so it's great to hear feedback from whoever is willing to go through the trouble.

    I don't see in the readme or the wiki whether there is any limitation regarding progressive and/or interlaced streams?
    There is not; the nature of the processing is such that it doesn't matter whether a clip is interlaced or not. Basically, pixels are not moved, so fields can't get mixed.

    The readme says the plugin isn't built for AviSynth 2.6, but the wiki says "2.5.8 or greater". Which is true?
    They are both true; I use 2.6 myself. The plugin does not at the moment support the new features of 2.6, but since 2.6 is backwards-compatible, it works just fine. You just can't use, for example, the YV24 or YV16 colour spaces.

    In the wiki you specify the need for an odd number of videos, but I think it's not mentioned in the readme file.
    Well, the plugin will warn you if you don't input a suitable number of clips, but I'll try to remember to update the readme. I suppose I have assumed that most people will remember from their mathematics classes how the median works.

    Out of curiosity, I'd be happy to know why you developed a plugin rather than a script, like the ones we can find here for example (much less evolved!).
    I did use those scripts originally, but when I wanted to try 7 clips, I didn't find one. Also adding the more advanced features was easier (for me) to do as a separate plugin since I know my way around C better than I do Avisynth. Additionally I just like that the plugin works by itself without dependencies to other plugins.

    Jmac mentioned a great idea about not removing extreme values, but only values that are too far from the standard deviation. What do you think about giving it a try?
    In my testing I found that a straight Median(), or the already implemented MedianBlend(), works well for my videos. But if someone has an example where another type of calculation clearly brings better results, I'm more than happy to add support. I just don't want to spend a lot of time testing purely theoretical options - not enough hours in the day.

    In this thread it was discussed that clip1 is the reference for the sync option, therefore risking throwing away good frames from the other 4 of 5 clips if the wrong one is the reference. I'm not sure I understood well, but if so, is this inconvenience still present?
    Yes, if there is a missing frame in "clip1", the plugin won't be able to use it from the other clips. I haven't so far found a reasonable way to get around this.

    With a good capture you shouldn't have many missing frames so I don't think it's a big limitation. It's never any worse than having just one capture with dropped frames.

    In the thread I understood that the sync function compared whole frames, whereas at the end of the thread (and in the readme and the wiki) 4096 pixels are mentioned. So I fear I misunderstood something!
    At first I compared the frames pixel by pixel. But that was pretty slow, and the results weren't any better than just comparing a smaller number of pixels. Therefore I set the number of pixels to compare to 4096 and added the "samples" parameter for tweaking, if needed. It should be pretty easy to see from the debug numbers whether the matching is working or not.


    Please post some samples and results if you try the plugin out with your tapes!
  11. renArD (Member since Feb 2016, Lyon, France)
    Thanks a lot for your prompt reply! All your answers are very clear.

    I did use those scripts originally, but when I wanted to try 7 clips, I didn't find one. Also adding the more advanced features was easier (for me) to do as a separate plugin since I know my way around C better than I do Avisynth. Additionally I just like that the plugin works by itself without dependencies to other plugins.
    Still out of curiosity, do you plan to release your sources? It's not that I would modify or copy them, but I'm wondering how to manipulate video in C (I would do it in Matlab, I guess). By the way, I have the same question for audio, but all this is about programming, so off-topic.

    Jmac mentioned a great idea about not removing extreme values, but only values that are too far from the standard deviation. What do you think about giving it a try?
    In my testing I found that a straight Median(), or the already implemented MedianBlend(), works well for my videos. But if someone has an example where another type of calculation clearly brings better results, I'm more than happy to add support. I just don't want to spend a lot of time testing purely theoretical options - not enough hours in the day.
    It's true that the standard deviation is a good clue only if you have enough samples to average the deviation, and enough samples to be able to reject pixels farther than the mean +/- standard deviation/2. In our case, that would require too many acquisitions.


    In this thread it was discussed that clip1 is the reference for the sync option, therefore risking throwing away good frames from the other 4 of 5 clips if the wrong one is the reference. I'm not sure I understood well, but if so, is this inconvenience still present?
    Yes, if there is a missing frame in "clip1", the plugin won't be able to use it from the other clips. I haven't so far found a reasonable way to get around this.

    With a good capture you shouldn't have many missing frames so I don't think it's a big limitation. It's never any worse than having just one capture with dropped frames.
    Yes, it's no big deal. However, I think it might not be very time-consuming to test whether the missing frame is in the first clip (all the similarity rates would be very low) and, in that particular case (which is rare, and therefore not costly), to compare the other clips among themselves, for example by temporarily rotating the clips: clip1 becomes clipN (the last one), so the new order is clip2, clip3, ..., clipN, clip1. The new clipN (the former clip1) will be rejected by the median process, but if the new reference (clip2) is good, you can recover the dropped frame. The test would be somewhat recursive: rotate again if the new reference doesn't match either, and so on, until N-1 rotations have been done or two clips match (the temporary reference and at least one other).

    Please post some samples and results if you try the plugin out with your tapes!
    OK, I will do it ASAP.
  12. ajk (Member since Dec 2005, Finland)
    Originally Posted by renard
    Still out of curiosity, do you plan to release your sources? It's not that I would modify or copy them, but I'm wondering how to manipulate video in C (I would do it in Matlab, I guess).
    The sources are already available at http://ajk.pp.fi/avisynth/, except for the latest version because that code is still a bit messy.

    It's not necessarily the best example for a plugin, but I think it should be clear enough so you can see what is going on.

    Yes, it's no big deal. However, I think it might not be very time-consuming to test whether the missing frame is in the first clip (all the similarity rates would be very low) and, in that particular case (which is rare, and therefore not costly), to compare the other clips among themselves, for example by temporarily rotating the clips: clip1 becomes clipN (the last one), so the new order is clip2, clip3, ..., clipN, clip1. The new clipN (the former clip1) will be rejected by the median process, but if the new reference (clip2) is good, you can recover the dropped frame.
    Yes, I have thought about doing something like this. But it is not a high priority right now, since it will inevitably be a lot more complex than what the plugin is doing now. Also, I don't think it can be done just when the problem is noticed, since it is going to add a frame to the stream.

    Let us consider a "clip1" that has e.g. 1000 frames. Let's further assume that there are two dropped frames, so the actual count should be 1002 frames. Now let's assume a user jumps to frame 500 in VirtualDub. If we haven't somehow analyzed where the dropped frames are, which frame should the plugin return?

    Maybe there is something simple I have not considered, but it does not seem like a trivial thing to implement. Perhaps I will get there one day.
  13. renArD (Member since Feb 2016, Lyon, France)
    Thanks for your reply!

    Regarding the dropped-frames issue: if I've understood VirtualDub correctly, it doesn't change the number of frames but just duplicates the previous frame, doesn't it? If so, there wouldn't be a problem when accumulating dropped frames, even if the user jumps to any frame. Where am I wrong?
  14. ajk (Member since Dec 2005, Finland)
    @renard

    Do you mean when capturing? In my experience VirtualDub keeps the average frame rate correct, but that doesn't mean a dropped frame is instantly replaced by a duplicate. A duplicate might be inserted somewhere further down the line to compensate for the drift. And since we might only be dealing with a short section of the video when taking the median, from the plugin's perspective a frame can be missing entirely (or there may be a duplicate).
  15. renArD (Member since Feb 2016, Lyon, France)
    Sorry for disturbing the thread.
    I've just discovered the "timing" options in VirtualDub's capture settings. I used to have the first two boxes checked (drop and insert null frames).

    I'm now trying to find the best values. The best seems to be:
    - no drop
    - no insert
    - no resync
    - correct video timing
    - no auto disable
    - audio latency auto 30 blocks (default)
    - directshow all unchecked.

    As for the acquisition frame rate, I set it to 25 fps (SECAM); in any case my VCR never manages to go that fast.

    Then I:
    - acquire (no drops/inserts are reported, and the frame rate stays below 25 fps),
    - close the capture mode,
    - open the acquired file,
    - select the part I'm interested in (across multiple acquisitions, the number of frames changes a lot! I don't understand why, since I checked "correct video timing" to avoid drops/inserts),
    - change the video frame rate control to "source rate adjustment" so that audio and video match (usually I get less than the expected 25 fps). Maybe I shouldn't, and should instead synchronise the audio just once at the end, on the result of Median()?
    - change the video frame rate control to frame rate conversion, "convert to 25 fps". I hoped this would equalise the frame counts, but it doesn't.

    And the goal is then to median the clips...
  16. renArD (Member since Feb 2016, Lyon, France)
    Well, it seems that not reconverting to 25 fps after syncing the audio is better, because the differences in frame count between captures of the same clip decrease.
    So I recorded the same (part of the) tape 10 times: 5 with one VCR, 5 with another.
    Some durations are very different from the majority.
    Obviously, the tape itself was bad.

    The reference:
    [Attached image: Reference.png]

    MedianBlend() of all recordings but one (the one whose duration differs most from the rest). Settings: low=2, high=2, sync=100, samples=414720 (just to try...).
    You can see the logo is blurred:
    [Attached image: MedianBlend.png]

    Median() only, with all recordings but one (the one whose duration differs most). Settings: sync=100, samples=414720 (just to try...).
    Not that good either; the logo is also blurred (though less than before):
    [Attached image: Median.png]

    If I keep only the recordings within one frame of each other in duration:
    MedianBlend() of 7 recordings. Settings: low=1, high=1, sync=100, samples=414720 (just to try...).
    I can't see any difference from the previous MedianBlend():
    [Attached image: MedianBlendselect.png]

    Median() only, 7 recordings. Settings: sync=100, samples=414720 (just to try...).
    I can't see any difference either:
    [Attached image: Median_select.png]

    I'm surprised that the results are not better. Have you got any clue? I know I set sync and samples to big values, but I don't see why that could hurt the results (quite the reverse).

    EDIT: forgot to say that I use composite. If I use S-Video the result is very bad for every capture. But we could give it a try, just for the pleasure of seeing the spikes removed by the plugin.
  17. ajk (Member since Dec 2005, Finland)
    Thanks for the samples! You can add "debug=true" to see whether the sync is working or not, but I think it is, since the result is usually total garbage if the clips don't match.

    I have attached a few screenshots. In the first one you can see that some noise is certainly being removed, but this tape does not seem to suffer from many dropouts or other such non-repeating issues. If the signal is the same in each capture, there are no errors to throw out.

    The chroma channels have some odd striping going on, but since they aren't diminished much even after 7 captures, they are probably part of the signal on the tape, not a playback issue. A strong denoise algorithm on the chroma channels could help reduce the rainbow effect.
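    As a sketch of that idea, assuming the separate FFT3DFilter plugin is available (the sigma value is a guess and would need tuning):

    Code:
    # Denoise only the chroma planes to attack the rainbow striping;
    # in FFT3DFilter, plane = 3 means both U and V.
    FFT3DFilter(sigma = 4, plane = 3)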

    Is the colour issue still there with s-video?
    [Attached images: subtract.png, channels.png]
  18. renArD (Member since Feb 2016, Lyon, France)
    Thanks for your quick reply, and for taking the time to examine my case.

    I've added the debug info and, for the frame concerned, all clips were aligned in the 7-clip version, and at -5 or -8 in the 9-clip version. Thanks for the tip!

    Here is an example of what I get with S-Video:
    [Attached image: svideo noise.png]
    You can easily understand why I switched to composite. I think it's my S-Video cable that is at fault, but I haven't got another one.

    I'm not troubled about the noise, because it's a test tape (one I don't mind playing over and over in various old VCRs). Once I'm sure of my settings, I have a few important tapes to convert.

    Have you got any advice about acquisition parameters and synchronising the sound (my previous post)?

    PS: in MedianBlend() there is no warning for an even number of clips.
  19. ajk (Member since Dec 2005, Finland)
    Okay... it shouldn't look like that.

    I'm not sure about your audio question; the best settings tend to depend on the particular capture device. But it's easy to fix the sync with DelayAudio() as long as the offset stays constant over the entire capture.
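    For example, if the audio leads by a constant 200 ms (a hypothetical value):

    Code:
    # Delay the audio by 0.2 seconds to restore sync.
    AviSource("capture1.avi").DelayAudio(0.2)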

    MedianBlend() can work with any number of clips. It will throw away as many dark and bright pixels as you have chosen, and then average the rest. Depending on your chosen parameters, it can become a median, an average, or something in between.
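    For example, with five clips (a sketch):

    Code:
    # Throw away the single darkest and brightest value at each pixel
    # position and average the remaining three.
    MedianBlend(clip1, clip2, clip3, clip4, clip5, low = 1, high = 1)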
  20. renArD (Member since Feb 2016, Lyon, France)
    Does your player provide a constant frame rate? If not, how do you manage to make it coincide with the fixed frame rate of the acquisition device?

    Why do you think the logo is blurred, given that the frame is supposed to be the same across recordings?
  21. ajk (Member since Dec 2005, Finland)
    I normally capture through a Panasonic DMR-ES15, which seems to keep the signal pretty stable. There are also some actual time-base correctors which aren't super expensive.

    I haven't seen your captures in motion, but I think the blurring comes from the scan lines jittering horizontally a bit. If the lines don't, well, line up exactly in all of the captures, the edges won't be in the same place, and therefore don't stay so crisp. A TBC might help with that a bit too, actually.
  22. renArD (Member since Feb 2016, Lyon, France)
    Thanks!
  23. Brad, formerly 'vaporeon800' (Member since Apr 2001, Vancouver, Canada)
    Some comparisons. The first capture -vs- Median of 3 -vs- Median of 5. Overall, the medians are preferable, but in some cases they add back anomalies that didn't happen to show up in one given capture. The tape used is a relatively good "retail" SP source, but with frequent dropouts.

    Playback chain was: JVC HR-J693U <CVBS> Philips DVDR3575H <HDMI> I forget which HDMI capture card. The choice of using a plain VHS VCR was deliberate, to see the degree to which the Median function can offset the added Y/C crosstalk, etc.

    Here I outlined in green a dropout that was removed only by the median of 5. The head switching area is also cleanest in that one. The color there is almost 100% intact, and it has the least dot crawl.
    [Attached images: Green-dropouts-V0.png, Green-dropouts-Median3.png, Green-dropouts-Median5.png]

    Here both medians remove some dropouts, but the median of 5 actually looks the worst around the "O" in the "POWER". The first playback happened to perform better than the rest of them in that specific area, so the median function assumed that the more numerous captures were correct.
    [Attached images: PowerMusicVids-worsedropouts-V0.png, PowerMusicVids-worsedropouts-Median3.png, PowerMusicVids-worsedropouts-Median5.png]

    Outlined in red is Y/C crosstalk. Diminished with the medians. But both medians contain a dropout that doesn't show itself in the first capture. The bottom-right edge is cleanest with 5.
    [Attached images: Newspapers-crosstalk+worsedropout-V0.png, Newspapers-crosstalk+worsedropout-Median3.png, Newspapers-crosstalk+worsedropout-Median5.png]

    Weird color issue on this frame in the first capture (chroma AGC?). This very quick flash is supposed to be cyan, not green.
    [Attached images: CyanFlash-lolwut-V0.png, CyanFlash-lolwut-Median3.png, CyanFlash-lolwut-Median5.png]

    There were some brightness inconsistencies between the captures. Here you can see that the first capture is brighter than either median on this particular scene.
    [Attached images: Bloom-brightness-V0.png, Bloom-brightness-Median3.png, Bloom-brightness-Median5.png]
    Last edited by Brad; 11th Apr 2016 at 11:42.
  24. ajk (Member since Dec 2005, Finland)
    Cool, thanks for the samples!

    In cases where a glitch is actually more likely to happen than not, statistical methods like median are of course not the most helpful. If a problem appears with a 51% or higher probability, it's not going to go away no matter how many captures one does. Fortunately, at least in my experience, such issues are not that common and mostly the effect is positive.
  25. PuzZLeR (Member since Oct 2006, Toronto, Canada)
    Yes, the >50% mark is indeed correct.

    If, say, 3/5 or even 8/15 captures have a glitch, that's over 50%, and the glitch will remain. Statistically speaking, that's fine, because median methods retain the norm, not the outliers. Also, in that case, speaking purely theoretically, it's not really a glitch at all.

    If a glitch lands in fewer than half the captures, so that the median removes it, that's pure luck; such cases rarely arise in my captures either. There were a couple of cases where a tape got damaged during capture, and median methods kept the glitches, since they remained in subsequent captures and became the majority; in those cases I just manually spliced in the good frames from one of the captures.

    Again, it's rather rare that I need to do this. And again, the "glitch" was actually part of the tape at that point, and not just a playback error or random quirk.

    As for crosstalk removal, this is the one thing I do first and foremost, before anything else. Even if I'm capturing 5 times, I still apply reduction methods to all 5 captures before merging them with median methods. Crosstalk is far too variable to leave to theoretical statistics, IMO.

    My method (sketched in script form below):
    Capture 3 or 5 times. (More becomes asymptotically less beneficial IMO, especially for good tapes.)
    Remove crosstalk by resizing to half the vertical resolution.
    Apply median.
    (Then resize upwards later along with other processing. For VHS video, you lose very little sharpness when resizing as such.)
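    In script form, roughly (the filenames and the PAL frame size are assumptions):

    Code:
    # Halve the vertical resolution to suppress Y/C crosstalk, take the
    # median, then resize back up during later processing.
    c1 = AviSource("capture1.avi").BilinearResize(720, 288)
    c2 = AviSource("capture2.avi").BilinearResize(720, 288)
    c3 = AviSource("capture3.avi").BilinearResize(720, 288)
    Median(c1, c2, c3)
    Spline36Resize(720, 576)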

    BTW - What would be the proper verb when "merging" in median methods? Medianing?
    I hate VHS. I always did.
  26. PuzZLeR (Member since Oct 2006, Toronto, Canada)
    I came across some tapes to do for someone, so I had a chance to finally fully test this newer version (0.6). My feedback is 1) Great, 2) Not so Great, and 3) Amazing.

    1) Great:

    The plugin works smoothly and is very reliable. It reads drops/extras surprisingly accurately, even on bad captures, and even in still motion. The debugger works great, and sync works very well. No bugs or glitches to report AFAICS.

    2) Not so Great (or, rather, TOO Great):

    The fact that sync works so well is a problem: as expected, it extends all the flaws of "Capture1", the designated reference, to all the other captures. This is an issue when the other captures don't have a given blemish; they still inherit it from Capture1, which IMO somewhat defeats the purpose of using median methods in the first place.

    As well, even on a clean tape, I still have an issue with losing the frames that Capture1 lost, and losing them in the subsequent captures as a result. For interlaced or fully progressive video, or for a few random drops here and there, this isn't a big deal, but for interlaced telecined film I'm especially not fond of this behaviour, since it can break the IIPPP pattern.

    I know it’s too much to ask for a solution that corrects even Capture1 as this sounds much more complex, and even recursive, but still, I hate losing that frame at any rate, and have reverted back to a manual solution.

    However…

    3) Amazing:

    Thanks to the debugger, which has made a world of difference, the new "manual" solution feels 50X faster now! Sometimes setting up the Trim commands at the beginning takes longer than scanning the video.

    I no longer run through captures hunting for drops with massive trial and error and many eye squints to home in on a dropped frame. Now I just scrub through from beginning to end, minutes at a time, then back up a few seconds at a time when the debugger notices something. Zeroing in on the dropped frame is a breeze now, and oh so much quicker. I don't even look at the video any more; I just watch for one of the zeroes changing, or changing back, as I navigate.

    Recently, I captured a four-hour production 5 times. In the past, merging the captures into a median script, syncing, aligning, and finding the drops would have taken an afternoon at least, especially with a lot of drops. Today, even with over 100 drops across all 5 captures, it took roughly 10 minutes just following the debugger (plus a few convenient macros in AvsPmod via Python code).

    I still recommend a manual solution; however, the manual solution is much less "manual" now. I do thank you for this update.
    I hate VHS. I always did.
  27. ajk (Member since Dec 2005, Finland)
    @PuzZLeR

    Thanks once again for the feedback! You have ended up with largely the same workflow I have been using, a "semi-automatic" solution if you will. By using the debug values to first manually fix any drops, the plugin can then do the rest of the work automatically.

    If I can devise a way to use not just one clip as the reference, but all of them, I will certainly implement it. It just seems complicated to achieve with the way Avisynth video streams work internally, although I am by no means an expert, so perhaps I am missing something obvious.

    One possibility might be to run the similarity scan as a separate step, which would produce offset information (e.g. a log file) available for use when doing the actual median. Several plugins use something like that for other purposes. Maybe lining up several captures would have uses outside of just median calculation, too. Maybe there already is such a plugin or script; I haven't actually looked.
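    Purely as an illustration of the idea: AviSynth's ConditionalReader can already feed per-frame values from a text file into a script, so a precomputed offset log (the file and whatever produces it are hypothetical) could in principle drive a per-frame shift:

    Code:
    # Shift each frame of capture2 by a non-negative offset read from a log.
    # ConditionalReader must come after the ScriptClip that uses the
    # variable; Trim(ofs, 0) makes frame n return source frame n + ofs.
    c2 = AviSource("capture2.avi")
    c2 = ScriptClip(c2, "Trim(ofs, 0)")
    c2 = ConditionalReader(c2, "offsets2.txt", "ofs", false)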
  28. Excellent! Thank you very much for your work, ajk.
  29. I'm currently trying to deal with different sources that don't align pixel-perfectly.

    I ripped a DVD, but the quality of the video is terrible. I managed to get some other sources, and to my surprise some of them even had more picture on top or at the sides.

    Right now I manually resize and shift the images to match, but because of the extra picture that leaves me with a bit of letterboxing or pillarboxing on most sources.

    I could use the "low" parameter - it works for the black bars - but that just messes up the rest of the video.

    Is there some way to make Median ignore certain colours? Instead of consistently throwing away the lower values, just have it ignore #000000, especially if it's near the edges?
  30. ajk (Member since Dec 2005, Finland)
    Hmm. There is no such functionality built in; the plugin is designed on the premise that you are doing the captures yourself and therefore they all match. But I'm sure your situation can be solved with some masking using other filters. Can you post representative frames from each of your streams, or short video clips?
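    For instance, one hypothetical masking approach with standard filters (the filenames and the threshold are placeholders, and it assumes the bars are pure black):

    Code:
    # Fill the black letterbox/pillarbox areas of a shifted source with
    # pixels from the reference clip, so every input is fully populated
    # before taking the median.
    ref = AviSource("dvd.avi")    # reference capture, no borders
    alt = AviSource("other.avi")  # resized/shifted source with black bars
    m   = alt.ConvertToY8().Levels(16, 1, 17, 0, 255, coring = false)  # white where there is picture (AviSynth 2.6)
    alt2 = Overlay(ref, alt, mask = m)  # alt where the mask is white, ref elsewhere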


