VideoHelp Forum
  1. Sometimes you can't find a totally solid area for a big selection, so you end up with a sample of uneven brightness. I used neuron2's windowed equalizer, which I'm not familiar with, and didn't really like how it messed with the contrast and sharpened the already-sharp noise. I still had to increase the overall brightness to match the overall luminance of the original. Is this essentially the best one can do, or is there a better method?
    [Attached image]
  2. Not sure what algorithm that plugin uses or how it works.

    One way you might accomplish this task is with a mask (nice rhyme!): an elliptical, feathered mask. Apply the effect (like levels or curves) through the mask, using something like Photoshop, After Effects, or GIMP, then composite the layers together with Overlay in AviSynth or your video editor of choice (a rough sketch is below).

    Another way might be to crop to a smaller section and use inpainting / content-aware fill to generate a larger noise pattern.
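    For what it's worth, a rough AviSynth sketch of the mask approach (the file names and the Levels values are made up, and the feathered greyscale mask would be painted in Photoshop/GIMP beforehand):

    Code:
    src  = ImageSource("C:\noise.png").ConvertToYV12()
    fix  = src.Levels(0, 1.0, 255, 40, 255)  # a brightened copy of the whole frame
    mask = ImageSource("C:\feathered_mask.png").ConvertToYV12()
    # white areas of the mask take the brightened copy, black keeps the original,
    # and grey blends in between - that is the "feather"
    Overlay(src, fix, mask=mask, mode="blend")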
  3. Problem with that is, there's a very blurry, non-straight boundary between the bright and dark areas. I can't just select a large area and brighten it by a fixed amount. I usually do a content-aware fill on the dark/bright corners, but that doesn't perfectly fix the problem at hand and is not always applicable. This windowed equalizer filter is okay with the right settings, but I noticed it removed the chroma, not to mention that it didn't completely fix the localized brightness disparity. Running the filter many times did flatten the luminance, but then the chroma is completely gone. Do you know of any filter that can separate the chroma from the original so I can add it to the equalized photo?
  4. Run a big Gaussian blur, subtract it from the original, add an offset to restore the average.

    original, blur, subtract
    [Attached image: three.png]
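    A minimal AviSynth sketch of the blur-and-subtract idea (the GaussianBlur call and its variance parameters come from the VariableBlur plugin mentioned further down, so treat the names and numbers as assumptions; the file name is made up):

    Code:
    LoadPlugin("C:\plugins\VariableBlur.dll")  # hypothetical path
    src = ImageSource("C:\noise.png").ConvertToYV12()
    low = src.GaussianBlur(varY=225, varC=225) # heavy blur = the low-frequency brightness gradient
    # Subtract() centres the difference around luma 126, which effectively supplies the
    # offset that restores the average; only the high-frequency grain is left.
    Subtract(src, low)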
  5. Originally Posted by Mephesto
    Problem with that is, there's a very blurry, non-straight boundary between the bright and dark areas. I can't just select a large area and brighten it by a fixed amount.
    It's not a problem: that's what a "feathered mask" is for. Different luminance values in the mask represent different transparency values. 100% white (RGB 255,255,255) is 100% transparent, 100% black (RGB 0,0,0) is 100% opaque, and values in between are intermediate in transparency.

    In the example below, I just quickly "eyeballed" it. If you spend more time I'm sure you can do a better job. Or maybe you wanted it to be the "brighter" section? Anyway, I'm sure you get the idea...

    Do you know of any filter that can separate the chroma from the original so I can add it to the equalized photo?
    In AviSynth you can process the chroma separately with UToY() and VToY() (the U and V planes can be handled individually). These represent the planes as greyscale clips; you can then do whatever processing you want on them and merge them back into a Y,U,V video. Or maybe you want to merge the U plane from another video... you get the idea (small sketch below the link).

    http://avisynth.org/mediawiki/Swap
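    To make the round trip concrete, a small sketch (the file name is made up and the processing step in the middle is just a placeholder):

    Code:
    src = ImageSource("C:\noise.png").ConvertToYV12()
    u = src.UToY()        # U plane as a greyscale clip
    v = src.VToY()        # V plane as a greyscale clip
    # ...process u and v however you like here (they are just luma-only clips)...
    YToUV(u, v, src)      # put the (processed) planes back, keeping src's luma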
    [Attached image]
  6. That's pretty good. What did you use to do that? Photoshop? Can you quickly detail the procedure? The word "feather" only brings up memories of trying to make accurate selections of hard-to-select objects. I could never do it right, but all those pros on YouTube could.

    Your photo has a T-shaped bright spot in the middle-right but as you said, you did it quickly so I'll leave that be.

    I couldn't figure out how to separate and merge chroma with the info you gave so I did this:

    Code:
    # the window-equalized luma and the untouched original
    equalized = ImageSource("C:\noiseWindowEQextreme.PNG").Greyscale().ConvertToYV12()
    original  = ImageSource("C:\noiseEQtest.png").ConvertToYV12()
    
    # luma-only copy of the original
    b = original.Greyscale()
    
    # original minus its greyscale copy = the colour difference, offset around mid-grey by Subtract()
    chroma = Subtract(original, b)
    
    # a roundabout way of adding that colour difference back onto the equalized luma:
    # invert, subtract, invert again ~ equalized + chroma (give or take Subtract's built-in offset)
    color = chroma.Invert()
    Subtract(color, equalized)
    Invert()
    Don't laugh. I'm unaware of any "add" function and I forget how to use Overlay.
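    (Side note: AviSynth's stock MergeChroma can probably do this in one line - it keeps the luma of the first clip and pulls the chroma from the second - and Overlay also has an "add" mode. A rough, untested sketch using the same file names as the script above:)

    Code:
    equalized = ImageSource("C:\noiseWindowEQextreme.PNG").Greyscale().ConvertToYV12()
    original  = ImageSource("C:\noiseEQtest.png").ConvertToYV12()
    # luma from the equalized clip, chroma from the original
    MergeChroma(equalized, original)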

    Here's the result. I think I like your method better. Mine looks too unsaturated and the localized equalization is too atomic. This is due to the small (16x10) window I set in the tool, because if I set it higher the edges would be too patchy.

    TBH, I think this aggressive equalizing would do more harm than good for the NV noise profile. What do you think?
    [Attached image]
  7. Originally Posted by Mephesto
    That's pretty good. What did you use to do that? Photoshop? Can you quickly detail the procedure? The word "feather" only brings up memories of trying to make accurate selections of hard-to-select objects. I could never do it right, but all those pros on YouTube could.

    Your photo has a T-shaped bright spot in the middle-right but as you said, you did it quickly so I'll leave that be.
    I did it in AE (After Effects). It's by far the most powerful tool for mask manipulation on video. You just draw whatever shape you want with the pen tool, and you can composite different layers, different masks, and variable mask feathering. For example, if you wanted to fix the "T" (or any other shape), you just draw it. Photoshop is great for single images, but AE can use "moving masks" (rotoscoping) for video.


    Here's the result. I think I like your method better. Mine looks too unsaturated and the localized equalization is too atomic. This is due to the small (16x10) window I set in the tool, because if I set it higher the edges would be too patchy.
    This looks OK. So I assume you did want the "dark" rather than the "bright" version? You can play with the saturation if you feel it's undersaturated (just add a saturation filter, e.g. Tweak(sat=?)).


    TBH, I think this aggressive equalizing would do more harm than good for the NV noise profile. What do you think?
    I don't know; I've never tried this approach with NV before. You'll have to do some tests - the proof is in the end results. It looks like the noise pattern might be "flattened" a bit?
  8. Originally Posted by jagabo
    Run a big Gaussian blur, subtract it from the original, add an offset to restore the average.

    original, blur, subtract
    I can't believe I missed jagabo's post. This is a damn good idea: isolating the noise from the luma, since the noise is in the higher frequencies. Why didn't I think of this...

    What script/filter did you use for the Gaussian blur?

    This is perfect. Your method returned the best result.

    PDR, look at the first and third images. Do you see what I meant by the equalization being too atomic? It destroyed the smallest luma irregularities in the noise when I only wanted the large ones gone. I'll do NV video tests later and post screenshots if anyone's interested.
    [Attached image: noiseEQ.png]

  9. yes I'm interested, post the final results
  10. Originally Posted by Mephesto
    What script/filter did you use for the Gaussian blur?
    First I tried something like 50 Blur(1.0) calls, but I didn't think that was blurry enough. So I used the Gaussian blur in a paint program, radius 15. You should be able to use VariableBlur() in AviSynth; a rough sketch is below.
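    In case it helps anyone following along, the two options might look something like this in a script (the gaussianblur name and its variance parameters come from the VariableBlur plugin and are assumptions; variance is roughly radius squared, so ~225 for radius 15):

    Code:
    LoadPlugin("C:\plugins\VariableBlur.dll")  # hypothetical path
    src = ImageSource("C:\noiseEQtest.png").ConvertToYV12()
    a = src.Blur(1.0).Blur(1.0).Blur(1.0).Blur(1.0)  # chained small blurs widen only slowly
    b = src.GaussianBlur(varY=225, varC=225)         # one wide gaussian in a single call
    StackHorizontal(a, b)                            # compare the two side by side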
  11. Originally Posted by poisondeathray
    yes I'm interested, post the final results
    http://www.sendspace.com/file/xj3dbk

    NoEQ is denoised with the original noise profile, flatEQ is the window-EQ'd one, and perfectEQ is the final one I put together with jagabo's method.

    The BPPs of each video, starting with the source, all encoded at CRF18, are 0.645, 0.233, 0.299, and 0.256.

    NoEQ denoised the best and had the least entropy, but it also destroyed some legit low-frequency detail like creases in clothing. All the others retained low-frequency detail but skipped more noise, usually the brighter, sharper noise.

    The flatEQ was the worst: it left a lot of noise and destroyed some legit detail.

    I'm forced to conclude that this kind of tinkering with the noise profiles is unnecessary. The untouched profile denoised the best because it was given both bright and dark grain to work with; flattening it made it perform worse. On the other hand, the profile could have been equalized and brightened rather than darkened. Maybe that would save the very low-frequency detail AND kill the noise.
  12. Wow, forbidden from editing my own post. I think this thread is bugged.
  13. Well, thanks for sharing your experiences; at least it was an interesting little thought experiment.
  14. Sure. I just tried the brighter EQ'd sample and it did slightly better than the dark one but still visibly worse than the untouched original.

    I guess NV already analyzes and fine-tunes the samples it is given. Unchecking "very low freq noise" mitigates the problem of creases in clothing being oversmoothed.

    NV is a damn fine denoiser - the denoiser of the decade.
  15. (2Bdecided)
    What jagabo suggested is a good idea, and of course it only keeps the high-frequency noise. It's idiotic that NV can't do this itself - but it'll do almost the same thing if you just give it a small box for the noise sample. That's the quick and dirty solution, and I wonder if the results are any worse?

    Cheers,
    David.