VideoHelp Forum
  1. Hello, I just started learning AviSynth and upscaling. I started with this video (a 1-minute excerpt) of a lossless rip from a DVD9. The current script I'm using in AvsPmod is the following.

    Code:
    SetFilterMTMode ("QTGMC", 2)
    FFmpegSource2("Live At The Quick.mkv")
    assumetff().converttoyv12(interlaced=true)
    ColorMatrix(mode="Rec.601->Rec.709",interlaced=true)
    qtgmc(preset="Slower",tr2=0)
    Blur(0,0.2)
    crop(8,64,-32,-66)
    nnedi3_rpow2(rfactor=4,cshift="lanczosresize",fwidth=1920,fheight=1080)
    sharpen(1.0,1.0)
    AddGrainC(var=25.0, uvar=0.0, hcorr=0.0, vcorr=0.0, seed=-1, constant=false, sse2=true)
    prefetch
    
    # Extract audio track 1
    a1 = FFmpegSource2("Live At The Quick.mkv", atrack=1)
    
    # Add the audio tracks to the video
    AudioDub(last, a1)
    This is the resulting file. I'm satisfied with the result, which I find great considering it's coming from a DVD9. I downloaded the trial of Topaz Video AI, and its result is inferior to my script's even though I tried many settings.

    I'd like to know what you think and if you have any recommendations to enhance what I've achieved so far.

    Thank you very much!

    BEFORE | AFTER
    Quote Quote  
  2. The bobbed and upscaled video has clipped whites, and the file size is 4x larger than the original's. The superwhites of the original got clipped, which means any details/gradations in there are lost.
    What's the purpose (and benefit) of upscaling?

    Edit:
    Here are 2 pictures showing the loss of details. Left is from the original with the superwhites recovered. Right is from your upscaled version.
  3. lollo:
    Sharc, I am not sure the details are lost because of the levels. There is no RGB conversion in the processing, so a full YUV range of 0-255 is adequate, supposing the output files are properly handled and generated in a YUV colorspace.

    original:

    [Attachment 70687]


    upscaled (maybe not exactly the same frame):

    [Attachment 70688]


    I suspect the QTGMC() or Blur() processing more. It may be interesting to run the script step by step and see where the problem arises.
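    For example, one could truncate the chain after each stage and watch the waveform; something like this (an untested sketch, reusing the OP's filename and settings):

    Code:
    # sketch: run the chain up to a given step, then inspect the levels (untested)
    FFmpegSource2("Live At The Quick.mkv")
    AssumeTFF().ConvertToYV12(interlaced=true)
    ColorMatrix(mode="Rec.601->Rec.709", interlaced=true)
    QTGMC(preset="Slower", tr2=0)
    # Blur(0, 0.2)        # re-enable the remaining filters one at a time
    # Sharpen(1.0, 1.0)
    Histogram("levels")   # watch what happens around Y=235 after each step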

    In general, I agree with you and I'm not a big fan of upscaling (except to avoid YouTube's heavy lossy encoding).

    To the OP: while comparing with Topaz VEAI (there is nothing it can do that cannot be matched/beaten by AviSynth/VapourSynth), you should consider that Topaz VEAI is not able to deinterlace with quality, so you should feed it an already deinterlaced video, upscale, and then compare with the AviSynth/VapourSynth approach.
    Mixing the best of the two worlds is often what gives the best results.
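    If one wants to try that route, the Topaz input could be prepared with something like this (an untested sketch; the filename is taken from the OP's script) and then encoded losslessly (e.g. x264 --qp 0) before loading it into Topaz:

    Code:
    # sketch: deinterlace only, no resize, as input for Topaz VEAI (untested)
    SetFilterMTMode("QTGMC", 2)
    FFmpegSource2("Live At The Quick.mkv")
    AssumeTFF().ConvertToYV12(interlaced=true)
    QTGMC(preset="Slower")
    Prefetch(4)   # adjust thread count to your CPU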
  4. @lollo
    What I mean is this:
    The top picture is the original with superwhites in the Y=235...255 range. These won't be seen on an RGB monitor or TV, but can be recovered by bringing the luma into the 16...235 range, revealing all details. This is what I did in post #2, left picture.
    The bottom picture is the upscaled variant of the OP with crushed whites at around Y=235. Converting to RGB shows clipped R, G, B waveforms. So I would recommend adjusting the levels before encoding to bring them into the 16...235 limited luma range.
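    For instance, something like this before encoding (a sketch only; the exact values depend on the source):

    Code:
    # sketch: compress Y=16..255 into Y=16..235 so the superwhites survive the YUV->RGB conversion (untested)
    Levels(16, 1.0, 255, 16, 235, coring=false)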

    (To be more nitpicking, one would actually have to ensure RGB gamut compliance rather than just legal luma.)

    Doesn't a YUV-> RGB conversion take place when decoding and playing the video on TV, for example?
    [Attachment 70690: original (qtgmc-ed).png]
    [Attachment 70691: upscaled.png]

  5. lollo:
    Originally Posted by Sharc View Post
    The top picture is the original with superwhites in the Y=235...255 range. These won't be seen on an RGB monitor or TV, but can be recovered by bringing the luma into the 16...235 range, revealing all details.
    Correct. I am watching the video here on my computer. Any display path dealing with limited range must take that into account. We agree.

    Originally Posted by Sharc View Post
    The bottom picture is the upscaled variant of the OP with crushed whites at around Y=235. Converting to RGB shows clipped R, G, B waveforms. So I would recommend adjusting the levels before encoding to bring them into the 16...235 limited luma range.
    From what I see, there are no crushed whites (accumulated values), but whites in the 235-255 range. Again, if no RGB conversion is performed for display or further processing, that's fine. Otherwise the range must be reduced to 16-235 prior to any RGB conversion.

    Originally Posted by Sharc View Post
    So I would recommend adjusting the levels before encoding to bring them into the 16...235 limited luma range.
    That's always a good suggestion, but it's a safe and generic approach; we may not need to apply it in specific cases like the one above.

    Originally Posted by Sharc View Post
    Doesn't a YUV-> RGB conversion take place when decoding and playing the video on TV, for example?
    Yes, as far as I know.

    To summarize, I think the loss of details you correctly noticed is not related to levels (a "parameter" of the video to which all your/my comments apply anyhow).
  6. Originally Posted by lollo View Post
    To summarize, I think the loss of details you correctly noticed is not related to levels (a "parameter" of the video to which all your/my comments apply anyhow).
    There is a clear accumulation of luma IMO around Y=235, with much less 'swing' compared to the original. It looks like a luma compression in that range, for whatever reason. Even when we shift the level of the upscaled version down, it is still accumulated at a lower Y. That means details and shades are lost in that area. And my doubt remains: why upscale, just blowing the file size up 4x and losing details? What's the purpose?
  7. As I said in your other thread on this exact same subject:

    https://forum.videohelp.com/threads/409386-Need-help-with-my-DVD-upscale-project#post2687979

    you probably are not going to see any improvement and -- no surprise to me -- you may have actually made the video worse. I say "no surprise" because upscaling is not some magical process that is going to make the video look sharper, clearer, crisper, or better in any way. Your monitor is already doing the upscaling and therefore the software upscaling is going to have to be significantly better in order to justify the rather large (days of work) amount of time it will take you to do this for a large number of titles.
  8. lollo:
    Originally Posted by Sharc View Post
    Even when we shift the level of the upscaled version down, it is still accumulated at a lower Y. That means details and shades are lost in that area.
    IMO they are accumulated because of the processing in the script, not because of the levels. But that's not so important.

    Originally Posted by Sharc View Post
    And my doubt remains: why upscale, just blowing the file size up 4x and losing details? What's the purpose?
    It makes sense only if the upscaler of the TV or monitor is poor compared to nnedi3_rpow2(), which is not common, contrary to the on-board deinterlacers (I know you have a different opinion on deinterlacing).

    Originally Posted by johnmeyer View Post
    Your monitor is already doing the upscaling and therefore the software upscaling is going to have to be significantly better in order to justify the rather large (days of work) amount of time it will take you to do this for a large number of titles.
    I agree. However, the script has other filtering that improves the quality of the final result (QTGMC and Sharpen); I would limit the processing to that, remove the upscale, and check the final result against the upscaled version...
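    As an untested sketch (the sharpen strength at native resolution is just a guess), that reduced script could look like:

    Code:
    # sketch: same source, deinterlacing and sharpening, but no upscale (untested)
    SetFilterMTMode("QTGMC", 2)
    FFmpegSource2("Live At The Quick.mkv")
    AssumeTFF().ConvertToYV12(interlaced=true)
    QTGMC(preset="Slower", tr2=0)
    Crop(8, 64, -32, -66)
    Sharpen(0.5)   # hypothetical value; tune at native resolution
    Prefetch(4)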
  9. Originally Posted by Sharc View Post
    The bobbed and upscaled video has clipped whites, and the file size is 4x larger than the original's. The superwhites of the original got clipped, which means any details/gradations in there are lost.
    What's the purpose (and benefit) of upscaling?
    I've deactivated the lines of my script one by one, and the only thing that seems to change something in that direction is

    Code:
    ColorMatrix(mode="Rec.601->Rec.709",interlaced=true)
    Link for easier comparison: https://imgsli.com/MTc3MTkx

    Although I doubt this should modify the superwhites, I may be wrong. Any idea on how to fix this?
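    For reference, one way to see where ColorMatrix pushes values out of range would be to stack the waveforms before and after it (an untested sketch):

    Code:
    # sketch: compare the levels before and after ColorMatrix (untested)
    src  = FFmpegSource2("Live At The Quick.mkv").AssumeTFF().ConvertToYV12(interlaced=true)
    conv = src.ColorMatrix(mode="Rec.601->Rec.709", interlaced=true)
    StackHorizontal(src.Histogram("levels"), conv.Histogram("levels"))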

    Originally Posted by johnmeyer View Post
    ...you probably are not going to see any improvement and -- no surprise to me -- you may have actually made the video worse. I say "no surprise" because upscaling is not some magical process that is going to make the video look sharper, clearer, crisper, or better in any way. Your monitor is already doing the upscaling...
    I don't agree with this statement: even though I clipped the superwhites, my version looks better than what the monitor shows me with the DVD9 playing in VLC with a 16:9 crop. See for yourself; I made a frame comparison here.

    https://imgsli.com/MTc3MTk0
  10. Originally Posted by G22 View Post
    Although I doubt this should modify the superwhites, I may be wrong. Any idea on how to fix this?
    You have to shift the superwhites below luma 235 to recover what's in there. Otherwise they get lost when your player makes the YUV-> RGB conversion for your monitor or TV (your monitor and TV screen are RGB devices).
  11. Originally Posted by Sharc View Post
    Originally Posted by G22 View Post
    Although I doubt this should modify the superwhites, I may be wrong. Any idea on how to fix this?
    You have to shift the superwhites below luma 235 to recover what's in there. Otherwise they get lost when your player makes the YUV-> RGB conversion for your monitor or TV (your monitor and TV screen are RGB devices).
    Could you help me with how I could add this to my script?

    I've looked in the ColorMatrix documentation and couldn't find a flag for this.

    The only thing I could find is tvopt from Dither tools, which says the following:

    Can increase the actual overlap for a fixed number of slices in presence of TV-scale values (luma in the 16–235 range, and chroma in 16–240) by reducing the slice covering to the visible range. This means super blacks and super whites are clipped. The option is useful with small overlap rates.
    Although I'm a bit reluctant to add the Dither tools, MaskTools and RgTools plugins if I'm not going to use them. My plugins folder is already full of crap since I tried so many things that ended up not being the right way. Every time I ask here I get better recommendations and am pointed to totally different plugins than what Google searches show. I feel like Google is showing outdated information.

    Thank you for replying!
  12. Study the functions "Levels" and "Histogram()" in Avisynth.
    Insert in your script after the source filter something like
    Code:
    Levels(0,1.0,255,0,235,coring=false)
    It will shift the superwhites of the source from Y>235 to Y<235.
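    The remapping is linear over the whole 0-255 input range, so nothing gets clipped at the top; roughly:

    Code:
    # out = (in - input_low) / (input_high - input_low) * (output_high - output_low) + output_low
    # Y = 255  ->  255/255 * 235 = 235
    # Y = 235  ->  235/255 * 235 ~ 217
    # Y =  16  ->   16/255 * 235 ~ 15   (blacks drop very slightly)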

    Or with similar effect you may use
    Code:
    Tweak(0,0.92,0,0.92,coring=false)
    Or with similar effect:
    Code:
    ColorYUV(levels="PC->TV")
    Levels(16,1.0,235,3,235)  #restore the black level
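    (The second line is there because the PC->TV pass alone also lifts the original black from about Y=16 to about Y=30; the Levels call pulls it back down, roughly:)

    Code:
    # PC->TV:                  y'  = 16 + y * 219/255            ->  black 16 becomes ~30
    # Levels(16,1.0,235,3,235): y'' = 3 + (y' - 16) * 232/219     ->  ~30 maps back to ~17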

    Use Histogram() to visualize and check the waveform. Or try jagabo's commandline
    Code:
    ffplay.exe -hide_banner -loglevel 24 -stats -loop 0 -i "Live At The Quick.mkv" -an -sn -color_range 2 -vf "split=2[a0][b0];[a0]waveform=filter=lowpass:scale=digital:graticule=green:flags=numbers+dots:components=1:display=stack:envelope=instant[a0o];[b0][a0o]vstack[out]"


    Sidenote:
    The issue with the levels becomes aggravated in the scene from frames 867 to 946, where the levels make a sudden jump upwards of about 16 steps (a shift of the "black" level).
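    If one wanted to correct only that scene, a possible (untested) sketch using the frame numbers above and a hypothetical offset would be:

    Code:
    # sketch: apply an extra luma offset only to frames 867-946 (untested)
    a = Trim(0, 866)
    b = Trim(867, 946).ColorYUV(off_y=-16)   # hypothetical offset; measure the actual jump first
    c = Trim(947, 0)
    a ++ b ++ c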
  13. Originally Posted by Sharc View Post
    Code:
    Levels(0,1.0,255,0,235,coring=false)
    Thank you very much for your help. I finally got the histogram to work as I wanted and could verify what you said. The variant quoted above is the one that gave me the best results, even though I'm still uncertain since it depends on which part of the video I'm playing. Some did better in dim scenes whereas others worked better in bright scenes; the one quoted above generally gave a better result throughout the video. Although it might just have been my mind playing games, and "I think it's better" is more appropriate than "it is better".

    Originally Posted by Sharc View Post
    Insert in your script after the source filter
    Regarding this, I'm curious why we adjust the luma levels before converting the colorspace and color matrix. Is it because after the color conversion the luma data is lost and can't be adjusted with Levels()?