VideoHelp Forum
  1. Hiya!

    This has been bugging me for 48+ hours now. I can't stop trying to construct the math in my head, in class, anywhere. I need help to figure this one out, or I'll go nuts. It started with me reading the FAQ for the VirtualDub filter "SpotRemover" (http://spotremoverfilter.com/faq.html). This is the part that is bugging me:

    Because Resize is a 2D operation and SpotRemover smoothes in time, it doesn't matter for the resulting quality which one is done first.

    However, from my experience, shrinking should be done before applying SpotRemover because the spatial contrast (Intensity_difference / pixel_distance) becomes higher and this helps the spot detection.
    How is this? Afaik, A->Z is still A->Z even if the route is A->O->Z. What I mean is: the offsets (start and end) are still at the same levels before and after shrinking, so it shouldn't really matter? Gosh. Could someone try to sort out the math here in a simple manner? Btw, I never use VD filters; I stick with Avisynth's internal TemporalSoften. Therefore, I must ask you: is this quote something filter-specific, just for SpotRemover? Or does the "rule" that the FAQ states apply to temporal smoothers in general, i.e. also my TemporalSoften?
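
    For what the FAQ's formula is worth, here is the arithmetic in a toy Python snippet (the numbers are made up for illustration): shrinking doesn't change the intensity offsets at the endpoints, but it packs them into fewer pixels, so Intensity_difference / pixel_distance goes up.

    # The FAQ's "spatial contrast" in numbers: the same intensity step
    # spread over fewer pixels after a 2:1 shrink. The endpoints (A and Z)
    # don't change; the slope between them does.
    intensity_step = 60           # luma difference across an edge, A -> Z
    width_full = 4                # the edge spans 4 pixels at full size
    width_half = width_full / 2   # the same edge spans 2 pixels after a 2:1 shrink

    print(intensity_step / width_full)  # 15.0 levels per pixel
    print(intensity_step / width_half)  # 30.0 levels per pixel -- doubled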

    Please, help me sleep!
  2. edDV
    It seems to me spot removal is better done at high resolution where the edges are sharp. If you down size, you will filter the edges into a blur that may be more difficult to remove.

    I'm not following the "temporal" issue. Are you frame-rate converting? Or is this some form of frame averaging?
  3. Well, as for the term "spatial contrast": I've painted this example based on my interpretation(!).
    [attached illustration: the same gradient drawn at two widths, 360px and 180px]
    The "spatial contrast" on top is half the scale (width: 180px VS 360px).

    So they (the SpotRemover FAQ) say that shrinking your clip before applying temporal smoothing is preferable since [quoting again] "the spatial contrast (Intensity_difference / pixel_distance) becomes higher and this helps the spot detection", and I just can't find that logical. Given the fact (also stated) that a temporal smoother works across the frames of the timeline, and thus the "temporal contrast" after any shrinking/scaling would be exactly the same, I can't see how performing the temporal softening AFTER(!) the frame shrink would "help the spot detection".

    So, could anyone fill me in on this? And (if so) tell me whether this behaviour is (somehow) bound to the specific SpotRemover filter, and not temporal softening in general?

    I hope I've stated my question understandably; English isn't my native tongue. Ty in advance.
  4. Okay, I'm still lost in the blue.

    Please, could you experienced people try to sort out the pieces of temporal smoothing?

    I have two questions in total.

    #1) I know that to achieve (roughly) the same spatial(!) smoothing results when processing a clip shrunk 2:1 in frame size, you are supposed to scale the radius, not the threshold (i.e. avisource("704x576.avi").temporalsoften(4,4,8) ~ avisource("352x288.avi").temporalsoften(2,4,8)).

    Now, what's the deal with TEMPORAL smoothers (to achieve roughly similar results for a scaled image), taking the same clips above as simple examples? Are you still supposed to leave the threshold limit alone and change only the frame search radius? I've tried to "paint" the process in my head for about a week, but I'm a bit slow-minded, so I always freeze and forget where I started. A little help here would be much appreciated, especially since this feels like the last great piece in my denoising-fundamentals puzzle.


    #2) As described in my (original) post way up above, I'm getting confused by the following text in the SpotRemover (a VDMod filter) FAQ:

    "..from my experience, shrinking should be done before applying SpotRemover because the spatial contrast (Intensity_difference / pixel_distance) becomes higher and this helps the spot detection.."

    What do they mean? I hope I can find a link between the answers to my two questions, 'cause I feel the answers may be somehow connected, so that I will understand #1 if I understand #2, or the other way around. Also, I've drawn this little thing as an attempt to illustrate the difference in "spatial contrast". You can see that I have marked in turquoise the area which the outer shades share; I'm trying to figure out if this could somehow be a clue? That the FAQ author means that the "shared shades" area will have more contrast, so that temporal smoothing won't accidentally blur pixel areas, which are really just noise, together as one.

    [attached illustration: noise shades across neighbouring frames, with the area the outer shades share marked in turquoise]
    Am I just making simple things complicated? For a while there I thought I was actually beginning to see the logic; that was until I stumbled onto that ugly FAQ with the statement that I should shrink the frame befoooooore applying the smoother. Gah, I'm so lost here. Please, help me!
  5. I don't use SpotRemover, but in general it will depend on the size of the spots, the sharpness of the spots, the contrast of the spots, the resizing algorithm used, and the spot remover algorithm.

    Very small, low contrast spots are best removed at higher resolution because they will nearly disappear after the frame is reduced, especially if the resizing algorithm isn't a sharpening one (bilinear). Medium sized (a few pixels), low contrast spots will probably be removed better after size reduction with a sharpening resizing filter (bicubic, lanczos, etc.) because their contrast will increase. Medium sized, high contrast spots might be removed equally well before or after the resizing. Large spots may be removed better after resizing because a large spot may not be seen as a spot by the spot remover.
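
    To make the size dependence concrete, here is a minimal numpy sketch. Plain 2x2 box averaging stands in for a non-sharpening resizer (the real bilinear/bicubic kernels differ, and the numbers are made up): a 1-pixel spot loses most of its amplitude in a 2:1 downsize, while a 2x2 spot keeps its core value.

    import numpy as np

    def box_downsize(frame):
        """Average each 2x2 block into one pixel (a 2:1 reduction)."""
        h, w = frame.shape
        return frame.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

    background = np.zeros((8, 8))

    tiny = background.copy()
    tiny[3, 3] = 40            # 1-pixel spot, 40 levels above background

    medium = background.copy()
    medium[2:4, 2:4] = 40      # 2x2-pixel spot, same amplitude

    print(box_downsize(tiny).max())    # 10.0 -- spot amplitude quartered
    print(box_downsize(medium).max())  # 40.0 -- spot core survives intact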
  6. Gavino
    Originally Posted by Gew
    Now, what's the deal with TEMPORAL smoothers (to achieve roughly similar results for a scaled image), taking the same clips above as simple examples? Are you still supposed to leave the threshold limit alone and change only the frame search radius?
    For temporal smoothing, the parameters should not change with frame size, since the filter is working in a different 'dimension' (looking at changes at a given position over time).

    jagabo has expertly answered your 2nd question. I'll just add that the FAQ you quoted also says
    if you aim at removing very small spots (about 1-pixel size) then, of course, SpotRemover should go before the shrinking. Otherwise, the tiny spots can get smeared after the shrinking and may pass through the spot detection routines.
  7. Thank you both for your expert answers. I'm delighted. Thanks, Gavino, for giving me the push toward a final closure on that question; just about every topic on the question pointed at the same (your) answer, so I was probably just getting confused by that FAQ, starting to doubt what I'd already filed as fact in the back of my head. Well, that's behind me now!

    Now, back to jagabo's answer. Very well drafted indeed; it makes sense. The only thing I'm still curious about is the actual modus operandi in:

    " Medium sized (a few pixels), low contrast spots will will probably be removed better after size reduction with a sharpening resizing filter (bicubic, lanczos, etc.) because their contrast will increase. "
    I once again seek answers in my illustration.

    [same turquoise-marked illustration as above]
    Let's focus on either one of the scales. When you speak of "being removed better", are you referring to how the "outer shades" (turquoise-marked) of the noise touch/cross over frames? I'm assuming that the "edge-crossing" lower and upper shades in my image (from either scale, of course) will make the smoothing produce "blur averaging" in those areas, where e.g. the outer edges of non-noise pixels are "touching" across frames 1-(2)-3. Gah, I hope you understand what I'm trying to... say... err... think :P Am I on the right track? Or am I making things complicated again? Yesterday I thought that could well be the solution, but then it hit me that scaling the image would naturally scale the whole image, so it would still come out +/- 0 in the end, and I was once again lost in delusions.

    My tools: Avisynth - BicubicResize & TemporalSoften.


    (EDIT)
    I have always assumed that temporal smoothers compare pixels(!) across frames, not "full frames". For example, with TemporalSoften(1,2,2,10,2) the smoother will compare pixel x:1 y:1 with (the same) pixel x:1 y:1 in the previous and next frame, then pixel x:1 y:2, and so on. I have never considered that temporal smoothers might compare frames by some "sweeping" algorithm, without actual pixel-by-pixel examination.

    Put simply: maybe temporal smoothers see (compare) the frames "as wholes", not pixel by pixel. That would then explain why different spatial contrasts (from scaling) would give different chances of spot detection (using the same threshold/radius settings).

    Hmm, after further thinking: not very likely. The easiest way for the CPU to process the frame would, after all, be per pixel, so naturally it would (as I've always assumed) examine the frames pixel by pixel. Just a thought; a rough sketch of this per-pixel model follows below.
    (/EDIT)
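
    For what it's worth, here is a rough numpy sketch of that pixel-by-pixel model: for each pixel position, average it with the same position in neighbouring frames whose difference stays within the threshold. This is only a reading of how TemporalSoften behaves, not its actual source; the real filter's handling of luma vs chroma and scene changes is not modelled.

    import numpy as np

    def temporal_soften(frames, radius, threshold):
        """frames: a list of equal-sized 2-D luma arrays, one per frame."""
        frames = [f.astype(np.float64) for f in frames]
        out = []
        for i, cur in enumerate(frames):
            lo = max(0, i - radius)
            hi = min(len(frames), i + radius + 1)
            total = np.zeros_like(cur)
            count = np.zeros_like(cur)
            for j in range(lo, hi):
                # Include a temporal neighbour only where it is within
                # `threshold` of the current pixel's value.
                mask = np.abs(frames[j] - cur) <= threshold
                total += np.where(mask, frames[j], 0)
                count += mask
            out.append(total / count)  # count >= 1: a frame always matches itself
        return out

    # A pixel flickering within the threshold gets averaged out. Note that
    # nothing here depends on the frame size, only on values over time.
    clip = [np.full((2, 2), v) for v in (100.0, 104.0, 100.0)]
    print(temporal_soften(clip, radius=1, threshold=10)[1])  # ~101.33 everywhere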
  8. Gosh, feels like my last post (above) became a big mess. Muddle-headed, to say the least.

    To make myself clear now: the only thing that is still bugging/confusing me is:

    " Medium sized (a few pixels), low contrast spots will will probably be removed better after size reduction with a sharpening resizing filter (bicubic, lanczos, etc.) because their contrast will increase. "
    How will increased 2D contrast in the frames be helpful to the 3D smoother?

    I'm sure there's raw and simple logic in it; I just need a little help finding it.
  9. Again, I don't know the particular filter you are using. But generally, a spot remover is designed to recognize small spots that appear for a single frame. It looks both at the size and obviousness of the spot in the current frame and also compares with the contents of the frames before and after. Spot removers aren't necessarily designed to remove film grain or static noise. A spot that persists through multiple frames is considered part of the picture. A spot that appears for only one frame but is not very different from the area around it may not be considered a spot. There is typically a threshold difference in contrast that must be surpassed before it is removed. A sharpening resize filter may increase the contrast of the spot (not the entire video) enough to make the spot cross that threshold.
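
    A toy single-frame spot detector along those lines: flag a pixel as a "spot" when it differs strongly from the same position in BOTH the previous and next frame (so it exists for one frame only), then patch it with the temporal average. Real spot removers also weigh spot size and shape; this sketch is illustrative only, not SpotRemover's actual algorithm, and the remove_spots helper is hypothetical.

    import numpy as np

    def remove_spots(prev, cur, nxt, contrast_threshold):
        prev, cur, nxt = (f.astype(np.float64) for f in (prev, cur, nxt))
        # A spot is a pixel far from its temporal neighbours on both sides.
        spot = (np.abs(cur - prev) > contrast_threshold) & \
               (np.abs(cur - nxt) > contrast_threshold)
        return np.where(spot, (prev + nxt) / 2, cur)

    prev = np.zeros((5, 5))
    nxt = np.zeros((5, 5))
    cur = np.zeros((5, 5))
    cur[2, 2] = 50   # a dust speck present in one frame only
    print(remove_spots(prev, cur, nxt, contrast_threshold=30)[2, 2])  # 0.0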
  10. Originally Posted by jagabo
    Again, I don't know the particular filter you are using. But generally, a spot remover is designed to recognize small spots that appear for a single frame. It looks both at the size and obviousness of the spot in the current frame and also compares with the contents of the frames before and after. Spot removers aren't necessarily designed to remove film grain or static noise. A spot that persists through multiple frames is considered part of the picture. A spot that appears for only one frame but is not very different from the area around it may not be considered a spot. There is typically a threshold difference in contrast that must be surpassed before it is removed. A sharpening resize filter may increase the contrast of the spot (not the entire video) enough to make the spot cross that threshold.
    The constants in my workflow are TemporalSoften & BicubicResize.

    I may have misunderstood the whole thing, then. I've always thought that a temporal (3D) smoother examines the same pixel position in neighbouring frames. For instance, TemporalSoften(1,3,3,10,2) would smooth at pixel location x:263 y:129 by averaging the luma/chroma with the same pixel in the previous and next frame, provided that the differences are all below(!) the threshold param. One general description of the threshold param states: "determines how close to the current pixel's value a temporal neighbor pixel has to be in order to be included in the averaging", which seem(ED!) logical.

    I thought I had just started to understand my temporal cleaner and its params. Simple and easy: frame radius, luma thresh, chroma thresh, scene change. However, now you're giving me one more aspect to consider, sort of an opposite-side threshold, if I understand this correctly. Like I stated, I was not aware that TemporalSoften() did any(!) "spatial investigation" at all, only neighbouring frames/same pixel, like I stated above. As I don't see any documentation on this threshold, I take it that TemporalSoften has a fixed value for it.


    Say I have a 704x576px clip with a few 5x5px spots. Would you apply TemporalSoften after or before BicubicResize(576,384)? Which order would achieve the best spot detection?

    Regards~
  11. You're describing a temporal smoother, not a spot remover. A spot remover removes things like specks of dust that appear on a single frame.
  12. Oh my goodness. Have I spent one and a half weeks of headaches and sleepless nights over math, colorimetry, and god knows what, all over a great misunderstanding?!

    Please add to your remark that with a temporal smoother there would be (roughly) no difference in smoothing result between BicubicResize(720,576).TemporalSoften(3,4,4,10,2) and BicubicResize(512,384).TemporalSoften(3,4,4,10,2), which would then put all the logic-schematics [I thought I had concluded] back in order.
  13. You should see a little difference between BicubicResize() applied before or after a temporal smoother. BicubicResize() is a sharpening resizer, so it will enhance noise a little bit (i.e., increase the contrast of the noise). If you resize first, noise right below the threshold for temporal smoothing may be bumped up just above the threshold.
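
    A minimal pure-Python illustration of that mechanism, using a 1-D Catmull-Rom cubic as a stand-in for a bicubic kernel (Avisynth's BicubicResize defaults to b=1/3, c=1/3, which overshoots somewhat less; the numbers are made up): cubic kernels overshoot at edges, so resampling a step raises the local contrast.

    def catmull_rom(p0, p1, p2, p3, t):
        """Interpolate between p1 and p2 at fraction t (0..1)."""
        return 0.5 * (2 * p1
                      + (-p0 + p2) * t
                      + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                      + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)

    row = [0, 0, 0, 10, 10, 10]          # a step edge, contrast = 10
    print(catmull_rom(*row[0:4], 0.5))   # -0.625  (undershoot below 0)
    print(catmull_rom(*row[2:6], 0.5))   # 10.625  (overshoot above 10)
    # Local contrast grew from 10 to 11.25, about +12%. A temporal
    # difference of 9.5 under a threshold of 10 could land above it
    # after such a resize.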
  14. Originally Posted by jagabo
    If you resize first, noise right below the threshold for temporal smoothing may be bumped up just above the threshold.
    Makes perfect sense. In fact, when I was thinking deeply about that thing/misunderstanding, that particular scenario did cross my mind, probably just adding more confusion at the time. Anyway, as you indeed boldly marked: a little difference. Slightly nominal, so to speak. The best crib sheet for temporal smoothers would probably be to do the smoothing/denoising before a (sharpening) shrink.

    Thanks a lot. I finally got closure on my dazed confusion.
  15. Or, might I add as a final note: use BilinearResize() for a non-sharpening resize/shrink, and that "little difference" should presumably be reduced to "zero difference".
  16. I think there will still be a little difference, because BilinearResize() will cause some of the noise just above the temporal filter's threshold to fall below the threshold. Again, the differences will be very minor.
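
    The mirror image of the bicubic case, as a hypothetical 1-D example (a simple pairwise average standing in for the bilinear kernel): bilinear weights only average, so a 2:1 downsize shrinks single-pixel noise.

    noisy = [100, 112, 100, 100]   # one pixel 12 above its temporal neighbours,
                                   # with a temporal threshold of 10
    halved = [(a + b) / 2 for a, b in zip(noisy[0::2], noisy[1::2])]
    print(halved)                  # [106.0, 100.0] -- the difference is now 6,
                                   # safely below the threshold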
  17. Duly noted!

    Thanks!!
  18. Sorry for the re-up, but this has been haunting me day in, day out ever since last week when you made it all so clear. It's like this: I am (obviously) somewhat of a control freak (yes, and also a bit neurotic). I'm also easily confused, and therefore I love having a rule of thumb for just about anything there is to think about here in life.

    The way you explained it, with those tiny bumps just above/below the threshold with regard to bicubic/bilinear, was superb.

    So, I just want to know this..

    Bicubic downsize will alter the "threshold catch" by something between 0 (zero!) and -0.01% (roughly; the negative value is really foobar, just to simplify my question). Now, since the algorithm is sharpening, it feels logical that this is the absolute "altering spectrum" for bicubic; i.e., this barely noticeable difference could never be something between 0 and +0.01%.

    I'm pretty confident this far. But what about bilinear? Would it be as safe to say that this possible-but-barely-noticeable change after a bilinear downsize would be in the spectrum of 0 to +0.01% (never the other way around)?

    Oh gosh. I have hesitated (and tried to keep myself from) dropping this all week now, but I just had to. Please don't think I'm intrusive/insolent; it would just feel so darn good if you could confirm/corroborate this, and then I would probably, finally, be able to rest my case and let go.

    Again, your explanation with the "bumps" was superb, even to a muddle-headed lad like myself. I just need to know if I can take these figures you gave me as consistent (and I really do hope the answer will be yes, since that would relieve me plenty).


    Ty in advance.
    Deepest Regards~
  19. sanlyn
    That's really complicated. My experience, and that of others, is to resize last. Do everything else first. It has always worked better than doing it the other way around, so I didn't ask why.
  20. Originally Posted by Gew
    Bicubic downsize will alter the "threshold catch" by something between 0 (zero!) and -0.01% (roughly; the negative value is really foobar, just to simplify my question). Now, since the algorithm is sharpening, it feels logical that this is the absolute "altering spectrum" for bicubic; i.e., this barely noticeable difference could never be something between 0 and +0.01%.

    I'm pretty confident this far. But what about bilinear? Would it be as safe to say that this possible-but-barely-noticeable change after a bilinear downsize would be in the spectrum of 0 to +0.01% (never the other way around)?
    I hate to say "never". I would agree with you in general though.
  21. sanlyn
    Some time back I saw resizing tests using a multitude of algorithms, from Lanczos to bilinear to bicubic and other labels used by every graphics app the testers got their hands on. Bicubic won every time. For really big or really tiny work, bicubic was tried in multiple stages and still won. The tests didn't furnish numbers; what they looked at were the results. Could this be why many pro apps use bicubic as the default?

    Adobe tutorials I've seen advise that resizing and sharpening should be separate steps. Be that as it may, I've found it never fails that strict methodologies break down in the face of the countless variables encountered in graphics processing. One method might work 85% of the time. It's the other 15% that requires inspiration, intuition, and even trial and error over science.
  22. lordsmurf
    People like you make it possible for people like me. Keep it up!
  23. Originally Posted by jagabo
    Originally Posted by Gew
    Bicubic downsize will alter the "threshold catch" by something between 0 (zero!) and -0.01% (roughly; the negative value is really foobar, just to simplify my question). Now, since the algorithm is sharpening, it feels logical that this is the absolute "altering spectrum" for bicubic; i.e., this barely noticeable difference could never be something between 0 and +0.01%.

    I'm pretty confident this far. But what about bilinear? Would it be as safe to say that this possible-but-barely-noticeable change after a bilinear downsize would be in the spectrum of 0 to +0.01% (never the other way around)?
    I hate to say "never". I would agree with you in general though.

    I had to do some further digging, especially since I reckoned fixed qscale to be a pretty neat way to easily discover these tiny differences. Here's what I did. I took a few (4 different) clips from DV capturing (720x576, 16:9 "squeezed" rec @ PAL 25 fps) and made four simple .avs files for each sample clip.

    #1) smoothdeinterlace().bicubicresize(640,360).temporalsoften(4,6,8,10,2)
    #2) smoothdeinterlace().temporalsoften(4,6,8,10,2).bicubicresize(640,360)
    #3) smoothdeinterlace().bilinearresize(640,360).temporalsoften(4,6,8,10,2)
    #4) smoothdeinterlace().temporalsoften(4,6,8,10,2).bilinearresize(640,360)


    Then I ran the following batch cmd on these:

    ffmpeg.exe -i dv~.avs -an -vcodec libxvid -qscale 5 -aspect 16:9 dv~.avi

    The clips were of different lengths (16 s, 32 s, 16 s and 14 s, respectively).
    The result was somewhat ambiguous, leaving me further dazed, I must confess.

    16sec clip #1->
    bicubic_then_smooth.avi - 2 691 044 b
    smooth_then_bicubic.avi - 2 752 978 b
    bilinear_then_smooth.avi - 2 638 970 b
    smooth_then_bilinear.avi - 2 700 448 b


    32s clip->
    bicubic_then_smooth.avi - 5 639 624 b
    smooth_then_bicubic.avi - 5 621 916 b
    bilinear_then_smooth.avi - 5 545 076 b
    smooth_then_bilinear.avi - 5 528 500 b


    16s clip #2->
    bicubic_then_smooth.avi - 3 156 560 b
    smooth_then_bicubic.avi - 3 154 852 b
    bilinear_then_smooth.avi - 3 101 430 b
    smooth_then_bilinear.avi - 3 101 808 b


    14s clip->
    bicubic_then_smooth.avi - 2 326 492 b
    smooth_then_bicubic.avi - 2 372 718 b
    bilinear_then_smooth.avi - 2 283 536 b
    smooth_then_bilinear.avi - 2 329 016 b


    Judging from this, I don't seem to have a rule of thumb here at all ;(

    Or have I overlooked something important (like I usually do)? E.g., if you told me that I cannot predict the actual amount of smoothing by looking at how the qscale has worked its way; that a smoother picture doesn't _necessarily_ mean a higher compression ratio; or something about how this is 3D smoothing after all, which doesn't _really_ affect the quantizer directly but is more about how a (better) temporally smoothed video may present better reference frames and suchlike... Please, fill me in.

    Like I said, it is my great wish that I have overlooked some essential detail of the encoding procedure that makes the yielded results of my so-called "tests" above _not_ properly suited to the purpose, so that the rule of thumb I almost(!) got myself ("I hate to say 'never'. I would agree with you in general though") will _still_ hold. That would be to my great satisfaction, oh yeah!
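
    One possible overlooked detail: encoder file size is an indirect proxy, since rate control, reference frames and motion estimation all feed into it. A more direct check is to compare the two orderings pixel for pixel. Below is a toy numpy simulation with stand-ins (2x2 box averaging for the resize, the thresholded temporal average from earlier in the thread for the smoother; not the real filters), showing that the two orders differ, but only slightly.

    import numpy as np

    rng = np.random.default_rng(0)

    def box_downsize(frame):
        h, w = frame.shape
        return frame.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

    def soften3(prev, cur, nxt, threshold):
        total, count = cur.copy(), np.ones_like(cur)
        for f in (prev, nxt):
            mask = np.abs(f - cur) <= threshold
            total += np.where(mask, f, 0)
            count += mask
        return total / count

    # Three frames of a static scene plus independent per-frame noise.
    base = rng.uniform(0, 255, (32, 32))
    frames = [base + rng.normal(0, 4, base.shape) for _ in range(3)]

    a = soften3(*[box_downsize(f) for f in frames], threshold=10)
    b = box_downsize(soften3(*frames, threshold=10))

    # Without the threshold both paths would commute exactly (both are
    # linear averages); the threshold is what makes the order matter.
    print(np.abs(a - b).mean())  # small but nonzero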
  24. Are there any pros out there who could shed some light on this? I'm still left in the blue, and this damn puzzle hasn't stopped bugging me. You could almost say it haunts me. I'm trying to find the connection, sort of. I have come to conclude that it's safe to say that #1 (bicubic_then_smooth) is always > #3 (bilinear_then_smooth), and likewise #2 is always > #4.

    However, that was not my original plan for this test. I had wishful hopes of finding a connection that would make #1 always either > or < #2, and the same with #3 & #4.

    I have spent so many nights thinking about this. I've come to the insight that I had (at first) been blind to the fact that resizing algorithms, per se, are either "smoothers" (bilinear) or "sharpeners" (bicubic/lanczos), which of course affects the result; a constant aspect to take into consideration in this math. But still, where's the true connection? Will this "odd" difference between #1/#2 always be marginal but unpredictable?

    Or, what have I missed?
    Please, fill me in. I so much need closure.
    My head hurts.


