VideoHelp Forum


  1. I am trying to inverse-scale a Blu-ray with a filter called Debilinear and then rescale it with nnedi3 to get a much clearer, sharper image.

    This usually works wonders, especially on clean sources. The filter does have a side effect of adding artifacts, but they are usually either not very noticeable or easy to minimize with a denoiser afterwards. The problem is that this source is mostly dark anime, and the filter is making it look awful, as if I had added thick grain to it. The blocky/grainy artifacts are too prevalent to clean with a light denoise on this source, and I do not wish to destroy the image with a very powerful DNR.

    Here are two comparison screenshots of the source vs. after using Debilinear. (Use the keyboard arrows or mouse clicks to switch images.)

    It looks like the filter is adding really thick grain or something to the video.

    Am I stuck with this if I use Debilinear? I did not see any parameter to adjust that might fix it. I also tried MergeLuma to see if it would help, but that pretty much defeated the purpose of using Debilinear and made it look like the source again.
    Last edited by killerteengohan; 9th Nov 2020 at 06:00.
  2. I tried an alternative method I just found, which works in 16 bits instead of 8.

    Code:
    w = 1280  # original width
    h = 720   # original height
    BilinearResize(w * 3 / 2, h * 3 / 2)
    Dither_convert_8_to_16()
    Dither_resize16(w, h, kernel="bilinear", invks=true)
    DitherPost()
    nnedi3_rpow2(2, cshift="LanczosResize", fwidth=1920, fheight=1080, nns=4)
    I was hoping the higher bit depth would help, but it looked about 95% the same and the artifacts are still there. The plain Debilinear result was sharper.
  3. I almost forgot, here's a sample video from the source if needed.

    Upload Deleted
    Last edited by killerteengohan; 9th Nov 2020 at 06:00.
  4. I would expect a de-bilinear filter to increase noise -- because a bilinear resize naturally decreases noise. That video has a bit of noise (view Histogram(mode="luma")) so what you're getting isn't surprising.
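    That relationship is easy to check numerically. Here is a quick Python/NumPy sketch rather than AviSynth; a 2x2 box average stands in for a bilinear downscale, which likewise averages neighboring pixels:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Flat gray frame plus Gaussian noise (sigma = 10)
    frame = 128.0 + rng.normal(0.0, 10.0, size=(256, 256))

    # 2x downscale by averaging 2x2 blocks: each output pixel is the
    # mean of 4 independent noisy samples, so noise std dev roughly halves
    small = frame.reshape(128, 2, 128, 2).mean(axis=(1, 3))

    print(frame.std() / small.std())  # ratio is close to 2
    ```

    Inverting that resize (which is what Debilinear approximates) has to undo the averaging, so it amplifies whatever noise survived it.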
  5. Yeah, I tried the histogram you showed, and I can see some noise-like specks. I'll have to remember that handy histogram mode.

    It looks like it is adding splotches in places that were not visible to me in the source; solid areas have now become specks in some parts.
    I expected the increase in noise, but I did not expect it to appear where it seemingly did not exist before, or to turn from tiny marks into huge ones. The luma histogram does not look as neat as it did before Debilinear was applied.

    Here is luma before and after debilinear
    https://slow.pics/c/EqOSdwE3



    Any suggestions for reducing the visibility of those artifacts? Perhaps an alternative way to get the sharper image with less of Debilinear's side effect, or a cleaning method that will not wipe away every speck of detail? It sounds like a noise cleaner may be my only option if I insist on using Debilinear.


    The McTemporalDenoise settings I usually like were not getting rid of it. TNLMeans cleaned it up much better, but it was creating banding, so I used the debanding built into McTemporalDenoise to counter that. It looks like it helps, but you might be able to do better if you try something on the sample video. I gave comparisons below.


    Source vs My script attempt
    https://slow.pics/c/7FvHKZYz
    https://slow.pics/c/DGVaHW1S

    Debilinear + NNEDI3 vs My script attempt
    https://slow.pics/c/di2jiYBw


    I'm also not a fan of the debanding smothering detail in the really dark areas of the video, which is easy to see in the bottom-right corner background, or on top of the gun in the second comparison above. The blacks kind of disappear or get smeared around unless I turn off debanding and set GFthr=1.0. Being able to reduce the noise without needing to deband would be a plus.


    Is there any kind of mask that can keep the sharpness gained from Debilinear while overlaying some of the luma from the source to reduce the artifacts' visibility? That sounds like it could be a very good option if it exists.
    Last edited by killerteengohan; 9th Nov 2020 at 06:01.
  6. If you look at the size of the noise in your source you'll see that the noise was added after upscaling. You should be looking at doing the same. I would try downscaling, smoothing with TNLMeans (with edge mask if necessary), upscaling with nnedi3_rpow2, then use a filter like GradFun3() or GradFun2DBMod() to recreate noise.

    Another approach might be to build an edge mask of the source and only replace those edges with the DeBilinear() results. Something along the lines of:

    Code:
    emask = Blur(1.0).mt_edge(mode="cartoon", thY1=2, thY2=2, chroma="-128").mt_expand(chroma="-128").Blur(1.0)
    Debilinear(1280,720)
    nnedi3_rpow2(2, cshift="LanczosResize", fwidth=1920, fheight=1080, nns=4)
    Overlay(source, last, mask=emask)
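    For anyone unsure what the masked Overlay() at the end does: per pixel, it is essentially a mask-weighted blend of the two clips. A rough Python/NumPy model (ignoring Overlay's actual rounding and chroma handling):

    ```python
    import numpy as np

    # Per-pixel model of Overlay(source, filtered, mask=emask):
    # the mask value (0..255) is the blend weight toward the filtered clip
    def masked_overlay(source, filtered, mask):
        w = mask / 255.0
        return source * (1.0 - w) + filtered * w

    source   = np.array([[100.0, 100.0]])
    filtered = np.array([[200.0, 200.0]])
    mask     = np.array([[0.0, 255.0]])  # 0 = keep source, 255 = take filtered

    print(masked_overlay(source, filtered, mask))  # [[100. 200.]]
    ```

    So with an edge mask, only the masked edge regions get the Debilinear/nnedi3 result; everything else stays as the source.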
  7. Thanks! I will give your suggestion a try right now and see how it works out for me.


    I came up with this idea just before going to bed last night: mixing in luma from the source with MergeLuma at about half influence. It actually worked very well. The result is only about 90-95% as sharp/detailed, but it looks so much better and is still much more acceptable.

    (Luma View Debilinear vs New)
    https://slow.pics/c/uWl8pZqi

    (Full View Debilinear vs New)
    https://slow.pics/c/0empYbxY

    (TNLMeans script vs MergeLuma script)
    https://slow.pics/c/MD77SJ3w

    (Luma Source vs New)
    https://slow.pics/c/4JX2RRev



    While it's not a 100% fix, it looks much better and retains more detail than my TNLMeans cleaning attempt. The noise is reduced enough that it doesn't really bother me anymore. I could reduce it further by going up to 0.6 influence, which looked even better, but then it started getting too soft for my liking.

    That's about the best I could come up with. I might just go with it unless you know of a better way to achieve the same results with alternative filters.
    Last edited by killerteengohan; 9th Nov 2020 at 06:01.
  8. I thought this looked pretty good:

    Code:
    source = last
    emask = Blur(1.0).mt_edge(mode="sobel", thY1=2, thY2=2, chroma="-128").mt_expand().mt_inpand(chroma="-128").Blur(1.0)
    Debilinear(1280,720)
    LimitedSharpenFaster(ss_x=1.0, ss_y=1.00, strength=22, overshoot=0, undershoot=0, soft=0, edgemode=0)
    nnedi3_rpow2(2, cshift="LanczosResize", fwidth=1920, fheight=1080, nns=4)
    Overlay(source, last, mask=emask)
    It sharpens most edges and leaves the rest of the image intact. Note that the mt_edge() mode changed to "sobel" to find more edges and get better alignment. I used your LSF settings. Moving LSF after the Overlay may sharpen a few more edges.
  9. Yeah, it does look nice.

    It's a tough call for me; both have their ups and downs. Your suggestion looks sharper for most things, and it looks cleaner even with no denoising used at all. Those are two positives, and overall I like it. But it also looks softer in some areas, like backgrounds and certain lines. Some lines appear thicker and blurrier, more like the source before Debilinear was used (the black lines on his fingers, for example). It's almost as if the mask only picked up half the line in some places. Slight white and dark spots also appear in random places, looking like halo artifacts in some spots. You can see what I'm talking about on this guy's hood, in the small shadow area where it meets the lighter area, just to the left of the red square.

    (Yours & Mine)
    https://slow.pics/c/0H8WLtRx

    I tried cartoon mode, and it did fix the artifacts I mentioned, but it got a lot softer overall. If you couldn't see the artifacts in the previous example, this comparison makes them easy to spot, since they disappear in cartoon mode.

    (Yours & Yours Cartoon Mode)
    https://slow.pics/c/XBfX1KBg

    The dark and white spots I was referring to can be seen around the cheekbone shadows and the left eyebrow in this one.

    (Yours & Mine)
    https://slow.pics/c/Hj4poTmI

    (Yours & Yours Cartoon mode)
    https://slow.pics/c/KF4FJ2oB


    I'm not an expert on the different modes, but is there a specific number I can change to get your result with the artifacts either gone or harder to notice?

    Sobel = "0 -1 0 -1 0 1 0 1 0"
    Cartoon = behaves like Roberts ("0 0 0 0 2 -1 0 -1 0") but only takes negative edges into account.

    I don't know a lot about the modes, but after looking at the wiki, I assume it would mean adjusting one of the sobel kernel parameters, since that seems to be the only difference between the modes? It might be easier to understand if I knew what the numbers meant. Pixel count? Radius? On/off? Strength of the pixel adjustment?
    Last edited by killerteengohan; 5th Nov 2020 at 17:47.
  10. Those nine numbers are a 3x3 convolution kernel. The first 3 numbers are the top row, the next 3 are the middle row, and the last 3 are the bottom row.

    https://en.wikipedia.org/wiki/Kernel_(image_processing)
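    To make that concrete, here is a Python/NumPy sketch of how one output sample of a 3x3 convolution is computed, using the "sobel" kernel string quoted above. (The real mt_edge also clamps and scales the result; this only shows the raw weighted sum.)

    ```python
    import numpy as np

    def convolve_pixel(patch, kernel):
        """One output sample: multiply each 3x3 neighbor by its
        kernel weight and sum the nine products."""
        return float(np.sum(patch * kernel))

    # mt_edge's "sobel" string "0 -1 0 -1 0 1 0 1 0", laid out row by row
    sobel = np.array([[ 0, -1,  0],
                      [-1,  0,  1],
                      [ 0,  1,  0]], dtype=float)

    # A 3x3 patch straddling a vertical dark-to-light transition
    patch = np.array([[0, 0, 255],
                      [0, 0, 255],
                      [0, 0, 255]], dtype=float)

    # A featureless gray patch
    flat = np.full((3, 3), 128.0)

    print(convolve_pixel(patch, sobel))  # 255.0 -- strong edge response
    print(convolve_pixel(flat, sobel))   # 0.0 -- no response on flat areas
    ```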
  11. Note that cartoon mode only sees dark to light transitions (left to right, top to bottom). So on a horizontal black line it will see the bottom edge of the line but not the top edge. On a vertical black line it will see the right edge but not the left edge. Sobel sees both light-to-dark and dark-to-light transitions. With my sequence you can play around with the initial Blur() value and the mt_edge() thresholds to get more or less lines/edges.
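    That directionality shows up in the sign of the raw convolution. A Python/NumPy sketch using the "cartoon" kernel quoted earlier (mt_edge's actual thresholding is more involved; this only shows which side of a black line produces the negative response that cartoon mode keeps):

    ```python
    import numpy as np

    def response(patch, kernel):
        # Raw weighted sum of a 3x3 neighborhood
        return float(np.sum(patch * kernel))

    # mt_edge's "cartoon" string "0 0 0 0 2 -1 0 -1 0", row by row
    cartoon = np.array([[0,  0,  0],
                        [0,  2, -1],
                        [0, -1,  0]], dtype=float)

    # Pixels on either side of a vertical black line on white:
    left_edge  = np.array([[255, 255, 0],
                           [255, 255, 0],   # center light, right neighbor dark
                           [255, 255, 0]], dtype=float)
    right_edge = np.array([[0, 0, 255],
                           [0, 0, 255],     # center dark, right neighbor light
                           [0, 0, 255]], dtype=float)

    print(response(left_edge,  cartoon))  # 255.0: positive, discarded by cartoon
    print(response(right_edge, cartoon))  # -255.0: negative, the edge cartoon keeps
    ```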

    I recommend you compare the mask by interleaving it with the source and output video. After Overlay(source, last, mask=emask) add:

    Code:
    Interleave(source,last,emask)
    That way you'll be able to see exactly which edges are being used in the mask, and how well they align with the lines/edges.

    By the way, do you have a 64 bit version of DeBilinear?
    Last edited by jagabo; 5th Nov 2020 at 18:41.
  12. Originally Posted by poisondeathray View Post
    Those nine numbers are a 3x3 convolution kernel. The first 3 numbers are the top row, the next 3 are the middle row, and the last 3 are the bottom row.

    https://en.wikipedia.org/wiki/Kernel_(image_processing)
    Yeah, I know that much from the reading already, but I don't know what they do exactly. That's why I said I'd probably understand better if I knew.

    There are 9 numbers/slots; what does each slot in the 3 rows change or do? What happens if I put a 1 or a 5 in one of the 9 slots? What does a higher or lower number change exactly? What's the difference between a positive and a negative number? What is changing a number in those 9 slots actually doing to the output? I could change numbers and settings with ease, but I wouldn't have a clue what I was adjusting.
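    Roughly: each slot is the weight applied to one neighboring pixel. A positive weight adds that neighbor's brightness to the output, a negative weight subtracts it, and the magnitude scales its influence. The sobel weights sum to zero, which is why flat areas give no response. A Python/NumPy sketch of what changing one slot does:

    ```python
    import numpy as np

    flat = np.full((3, 3), 100.0)  # a featureless gray patch

    # Sobel-style kernel: its 9 weights sum to 0, so flat areas give 0
    sobel = np.array([[ 0, -1,  0],
                      [-1,  0,  1],
                      [ 0,  1,  0]], dtype=float)
    print(np.sum(flat * sobel))   # 0.0

    # Bump the top-left slot from 0 to 1 and the weights now sum to 1:
    # every flat patch leaks its own brightness into the "edge" output
    leaky = sobel.copy()
    leaky[0, 0] = 1.0
    print(np.sum(flat * leaky))   # 100.0
    ```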

    I'll go read the link you gave me now and see if it helps me.
    Last edited by killerteengohan; 5th Nov 2020 at 20:04.
  13. Originally Posted by jagabo View Post
    By the way, do you have a 64 bit version of DeBilinear?
    No, I was not aware that one existed. I simply have the r6 version I found on the wiki page.
  14. Originally Posted by killerteengohan View Post
    Originally Posted by jagabo View Post
    By the way, do you have a 64 bit version of DeBilinear?
    No, I was not aware that one existed. I simply have the r6 version I found on the wiki page.
    Same here -- I was only able to find a 32 bit version. I was hoping you had found a 64 bit build!