VideoHelp Forum
  1. ...without having years of experience or having to spend months learning everything about Avisynth or Hybrid and spending hundreds of hours trying and failing and tweaking settings?

    I'm currently using Topaz Video Enhance AI, but it can't deliver any kind of acceptable results when upscaling 480p source files. I've been trying to learn Avisynth and Hybrid, but my head's just spinning trying to take in all the information and make any sense of it.
  2. Why do you want to do this, and what are your expectations? Upscaling will not make the video look any better. However, if you need to match your 480p to other HD footage and mix them together, you can simply use one of the almost dozen built-in AVISynth resizing functions or plugins. You can also easily do this in almost any NLE.
  3. I'm a Super Moderator johns0's Avatar
    Join Date
    Jun 2002
    Location
    Canada
    If you upscale your 480p to 1080p you will most likely start to see artifacts that weren't apparent before, and the quality won't be any better.
    I think, therefore I am a hamster.
  4. If you share a small sample, folks can suggest settings for upscaling & co.; this might give you an idea of what can be done.
    users currently on my ignore list: deadrats, Stears555
  5. I use Topaz and I can tell you that I got some decent results using the Proteus fine-tune setting. Select Proteus and use these settings: 55, 28, 30, 30, 0, 0. I've gotten decent results taking my material off DVD and using those numbers. Not starting a debate on this, just helping the OP.
  6. Topaz costs $200.

    The best way to upscale is to do a very light denoise, then nnedi3_rpow2 it to 1080p, then use a tiny bit of warpsharp, and then about 0.2 sharpening at the end. Not the best way, but the video will most certainly look sharper, if it was a good source. Can't really tell with no sample provided.
  7. Member
    Join Date
    Aug 2004
    Location
    PA USA
    I've used Hybrid and taken some of my best 480p footage to 1440x1080; the results are good, though not great. Any time you upscale you are going to lose some quality.
    It's not important the problem be solved, only that the blame for the mistake is assigned correctly
  8. Member Cornucopia's Avatar
    Join Date
    Oct 2001
    Location
    Deep in the Heart of Texas
    "Good", "Decent"? According to what criteria, using what sensitivities? At what distance and with what FOV, and in what kind of contrast environment? Using what kind of material? Too much of what has been said positively about this is subjective, and without objective references and benchmarks.

    I have a solid field of red throughout the screen. I can upscale, downscale, whatever, and the image will look identical. Success! (But zero detail at start and at end.)
    Now take a screen with multiple objects, including splashing water with multicolored lights reflecting in sparkles on those splashes, where something important to the story is happening in those splashes, so it is detailed, in focus, shot with a narrow shutter angle. Upscaling almost always ruins this, even compared to a 480 original.

    Here's a good rule to remember in media (video and audio):
    It's not possible to create something out of nothing, and you don't get a perceptual improvement in one category without a cost/loss elsewhere. If it looks too good to be true, that's because it is and there's a catch somewhere.


    Scott
  9. Scott,

    That was a great post. Should be a sticky. You concisely provide the details for why upscaling, as a way of making something look better, simply cannot work.

    Now some of the posts above also mentioned sharpening. This is something which can improve the apparent detail, and sometimes quite remarkably, as in this example I provided over in doom9.org twelve years ago (look at the porch posts):



    This example also proves Scott's points about how the results of any enhancement effort depend a LOT on the source material: those posts on the porch railing were the ideal candidate for sharpening.

    However, sharpening is one thing, and up-resing is another. About the only thing you gain from increasing resolution is a smoothing of jaggies on well-defined diagonal lines, and even that will depend on the algorithm you use, the nature of the line, how that line moves from frame to frame, etc.

    So, IMHO, if you increase resolution thinking the result will look better, you are on a fool's errand.
  10. When you watch 480p video full screen on an HD TV, the video is going to be upscaled somewhere along the line. So the issue becomes what can do the better upscaling (and for 480i, what can perform the better deinterlacing), and whether you're willing to provide enough bitrate (file size) for the upscaled video to prevent compression artifacts.

    With your TV or media player doing the upscaling, how much control do you have over the algorithms used? What if they are not appropriate for what you're watching? Upscaling yourself gives you control over how it's done.

    This is not the simple issue some are making it out to be.
  11. Originally Posted by Cornucopia View Post
    Here's a good rule to remember in media (video and audio):
    It's not possible to create something out of nothing, and you don't get a perceptual improvement in one category without a cost/loss elsewhere. If it looks too good to be true, that's because it is and there's a catch somewhere.


    Scott
    In principle I agree with you, but here are my two cents to express the other side's point of view.

    You can make an "educated guess", I think: resynthesize missing features based on the nature of the object. This is what neural networks (machine learning) will usually do; they (re)create the "missing" features. Of course, it is up for discussion whether this is "original" or not. As many consumers don't care about "originality", a "pleasant" result, sufficiently close to natural human perception, is highly appreciated.

    For example, a super-resolution GAN (but not only that) may deliver the consumer-expected (subjectively desired) video improvements.
  12. [Attachment 66580]


    My not-so-nicely-scripted method did work pretty well with a 480p source.

    FFmpegSource2("example.mp4")

    sharpen(0.1)              # light pre-sharpen
    aWarpSharp2(depth=4)      # mild warp-sharpening before the first doubling
    nnedi3_rpow2(2)           # first 2x upscale
    aWarpSharp2(depth=10)     # stronger warp-sharpening at the intermediate size
    sharpen(0.2)
    CAS(sharpness=0.6, y=3, u=2, v=2, opt=-1)  # contrast-adaptive sharpening
    nnedi3_rpow2(2)           # second 2x upscale (4x total)
    Spline36resize(1440,1080) # downsample to the final target size
    Image Attached Files
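For what it's worth, the frame sizes that chain passes through can be sketched in a few lines of Python (my own illustration, assuming a 720x480 source; `chain_sizes` is a hypothetical helper, not part of any plugin):

```python
# Sketch of the frame sizes the AviSynth chain above passes through,
# assuming a 720x480 source. Each nnedi3_rpow2(2) call doubles both
# dimensions; Spline36resize then sets the final output size.
def chain_sizes(width, height, target=(1440, 1080)):
    sizes = [(width, height)]
    for _ in range(2):                  # two nnedi3_rpow2(2) doublings
        width, height = width * 2, height * 2
        sizes.append((width, height))
    sizes.append(target)                # final Spline36resize
    return sizes

print(chain_sizes(720, 480))
# [(720, 480), (1440, 960), (2880, 1920), (1440, 1080)]
```

Note that the chain overshoots to 2880x1920 and then downsizes, so the final resize effectively acts as supersampling.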
  13. Just for the fun of it I also did a few quick reencodes of s-mp's example.mp4:
    1. RealESRGAN
    2. SWINIR_real_sr
    3. Waifu2x_photo.mp4
    4. Spline144 + LumaSharpening(1.5)
    I didn't tweak any of these, nor did I look at the preview or change the chroma sampling like s-mp did (no clue why). Just wanted to show what the different resizers do when simply applied.

    Cu Selur
    Image Attached Files
  14. I too don't know why I changed the chroma sampling. Nonetheless, these reencodes look wonderful.
  15. I should've also posted the 4K sample, the original video I downscaled.
    Image Attached Files
  16. Originally Posted by s-mp View Post
    my not so nicely scripted method did work pretty well with a 480p source.
    The results are exactly what I would expect and what I described above: the upscaling does reduce the jaggies on the strong diagonal lines on the street sign, and the sharpening is what is responsible for the increase in apparent clarity of the tree branches.

    However, I'm not sure I would notice any difference from the upscaling while just casually watching the video.
  17. Originally Posted by johnmeyer View Post
    the upscaling does reduce the jaggies on the strong diagonal lines on the street sign, and the sharpening is what is responsible for the increase in apparent clarity of the tree branches.

    However, I'm not sure I would notice any difference from the upscaling while just casually watching the video.
    And fewer oversharpening halos. These things are pretty obvious with anime and cartoons, which tend to have little real detail but lots of sharp edges and lines.
    Last edited by jagabo; 4th Sep 2022 at 20:22.
  18. The answer is usually NO.

    But it's very source dependent, and there are cases where upscaling can show a definite improvement over what typical media players or TVs use (high-end TVs use "AI" chips and heavy processing, often with good results).

    As a general rule, certain types of animation/cartoons/anime can upscale well, and clean sources tend to upscale better than noisy sources.

    If you have a "480p" source, but it's noisy, low quality - then the actual effective resolution might only be 240p or lower, so getting something resembling "1080p" would be a stretch



    I've posted this "textbook" clean source example using basicvsr++ before in other posts.

    A) 16:9 720x480 test source as lagarith RGB (originally derived from UHD, bicubic downsample).

    B) 4x (non PAR corrected) upscale to 2880x1920 using basicvsr++ (reds model) as FFV1 (RGB, or actually bgr0).

    C and D) MP4 demo/stack previews are a central crop of a 1280x360 section (top/bottom stacked to 1280x720), non PAR corrected. It's slowed down about 3x for the demo. The temporal aliasing (shimmering) and fuzzy lines are what stand out for older upscalers. NNEDI3 improves on lanczos3 in terms of aliasing in single images, but the temporal shimmering is still there: NNEDI3 is spatial only. You could add temporal smoothing such as QTGMC in progressive mode, but that will reduce the effective resolution even more.

    (NNEDI3 is a first generation "AI" scaler, people tend to forget that. The "NN" stands for neural network.)

    src frame @ 720x480
    [Attachment 66596]


    non PAR corrected screenshots @ 2880x1920

    basicvsr++(reds)
    [Attachment 66597]


    lanczos3
    [Attachment 66598]


    nnedi3_rpow2(4, fwidth=2880, fheight=1920, cshift="spline36resize")
    [Attachment 66599]


    cropped apng comparison (crop of the 2880x1920; this should animate in browsers like Chrome and Firefox)



    "Higher resolution" usually means :

    1) presence of fine details, such as grass blades or fine textures on stone, instead of "fuzzy" coarse lines. If you had to boil it down to a single distinguishing factor, it's fine detail, i.e. high frequency. Machine learning algorithms can synthesize "false details"; the results can range from terrible-looking to very good.

    2) no temporal aliasing. Although other factors can cause aliasing, upscaling is a common cause. Deinterlacing is a form of "upscaling" in the sense that you are interpolating the missing scan lines.




    This reds basicvsr++ model only deals with very clean sources: it was trained on a clean downscale, with no compensation for noise or the issues of typical crap sources. It has limited real-world value because you rarely come across pristine SD sources. More models are required that emulate noise and compression artifacts, but training takes a long, long time and abundant hardware resources.

    Many "AI" models "guess" wrongly and you get ugly artifacts. A big killer of many "AI" frameworks and models is text/numbers; they really mess those up. basicvsr++ is one of the few that is pretty consistent with text.

    I didn't upload everything, like the nnedi3 or lanczos3 2880x1920 upscales; they are pretty easy to do yourself. If anyone wants any other versions or assets uploaded, let me know.
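As a side note on the "non PAR corrected" caveat: for an anamorphic 16:9 720x480 source, a square-pixel upscale should derive its width from the display aspect ratio rather than the stored width. A rough sketch (my own hypothetical helper, not from this thread):

```python
from fractions import Fraction

# For anamorphic sources, the stored width (720) does not match the
# display aspect ratio; a square-pixel upscale should compute its width
# from the DAR instead. Result is rounded to even, a common codec constraint.
def par_corrected_width(height, dar=Fraction(16, 9)):
    return round(dar * height / 2) * 2

print(par_corrected_width(1920))  # 3414, vs. the stored 4x width of 2880
print(par_corrected_width(1080))  # 1920
```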
    Last edited by poisondeathray; 5th Sep 2022 at 15:05.
  19. Video Restorer lordsmurf's Avatar
    Join Date
    Jun 2003
    Location
    dFAQ.us/lordsmurf
    @poisondeathray
    Impressive.
    How long did reds basicvsr++ take?
    Want my help? Ask here! (not via PM!)
    FAQs: Best Blank Discs · Best TBCs · Best VCRs for capture · Restore VHS
  20. @poisondeathray: What settings did you use for basicvsr++ with REDS?
    When I use:
    Code:
    clip = core.resize.Bicubic(clip=clip, format=vs.RGBS, range_s="limited")
    # resizing using BasicVSR
    from vsbasicvsrpp import BasicVSRPP
    clip = BasicVSRPP(clip=clip, model=0, tile_x=128, tile_y=128, tile_pad=2, fp16=True) # 2880x1920
    I get some nasty artifacts on the right side of the image.


    Same with 'fp16=False' and with different tile sizes.
    Using 1 = Vimeo-90K (BI) or 2 = Vimeo-90K (BD) I don't get these artifacts. My results look like yours when I use 'Vimeo-90K (BI)'.

    -> Did you perhaps use 'Vimeo-90K (BI)'?

    Cu Selur
  21. Originally Posted by lordsmurf View Post
    How long did reds basicvsr++ take?
    Very, very slow. It makes QTGMC placebo look instantaneous. It's not the model that is slow; the entire basicvsr++ architecture for 4x is very slow, regardless of which model is used.

    Memory is a big issue. The larger the window size (radius), the slower it is and the more memory is required. It has "tiling" with overlap to reduce memory use, but that reduces quality (the deterioration is not really visible if you use enough overlap).

    Originally Posted by Selur View Post
    @poisondeathray: What settings did you use for basicvsr++ with REDS?

    -> did you perhaps use ' Vimeo-90K (BI)' ?
    Definitely REDS. radius=14, tile_x=144, tile_y=96, tile_pad=10, model=0

    This was last year with an older version but it shouldn't make a difference
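Those tile settings can be pictured with a little arithmetic (my own back-of-the-envelope sketch; `tile_grid` is a hypothetical name, not a vs-basicvsrpp function):

```python
import math

# Each tile is processed at tile size plus padding on every side; the
# padded borders overlap neighbouring tiles, which is what hides the
# seams, at the cost of redundant computation.
def tile_grid(width, height, tile_x, tile_y, tile_pad):
    tiles = math.ceil(width / tile_x) * math.ceil(height / tile_y)
    padded = (tile_x + 2 * tile_pad, tile_y + 2 * tile_pad)
    return tiles, padded

# The tile_x=144, tile_y=96, tile_pad=10 settings above, on the 720x480 source:
print(tile_grid(720, 480, 144, 96, 10))  # (25, (164, 116))
```

So the 720x480 frame is cut into 25 tiles, each processed at 164x116 before the 4x upscale; larger tiles mean fewer seams but more memory per tile.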
  22. Radius? vs-basicvsrpp has no radius:
    Code:
    def BasicVSRPP(
        clip: vs.VideoNode,
        model: int = 1,
        interval: int = 30,
        tile_x: int = 0,
        tile_y: int = 0,
        tile_pad: int = 16,
        device_type: str = 'cuda',
        device_index: int = 0,
        fp16: bool = False,
        cpu_cache: bool = False,
    )
    source: https://github.com/HolyWu/vs-basicvsrpp/blob/master/vsbasicvsrpp/__init__.py
    Probably the same as interval in vs-basicvsrpp. -> Maybe some incompatibility between my old GeForce GTX 1070 Ti and vs-basicvsrpp.

    The entire basicvsr++ architecture for 4x is very slow, regardless of which model is used.
    Yup, upscaling is really slow with basicvsr++.

    Cu Selur
    users currently on my ignore list: deadrats, Stears555
  23. Whoops, that was from the original basicvsr. Remember I repeated that test when ++ came out? It's interval=30 for ++.

    interval=30, tile_x=144, tile_y=96, tile_pad=10, model=0

    I found ++ to be very slightly better in a few tests, but the difference is almost negligible

    Interestingly, the interval can sometimes result in something similar to "I-frame popping" in long-GOP compression: as it comes to a new "window", there can be a quality difference between the last frame of the previous "interval" and the first frame of the new "interval".

    Some other observations: it can forward/backward propagate and align on these sorts of pristine clips. If you do some small tests on a few frames, e.g. if some object was clear at frame 10 but small and unclear at frame 0, it can take the data from frame 10 and use it for frame 0. But if you cut it off at 9 by the interval, it's not as good. I.e. it's a true temporal SR implementation, and it really is similar to the "GOP" concept in compression.
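That "GOP-like" behaviour is easy to picture: with a fixed interval, frames are split into independent windows, and detail can only propagate within a window. A tiny sketch (my own illustration, not vs-basicvsrpp code):

```python
# Frames in the same window can borrow detail from each other; frames on
# opposite sides of a window boundary cannot, which is what causes the
# "popping" at interval boundaries.
def window_of(frame, interval=30):
    start = (frame // interval) * interval
    return range(start, start + interval)

print(window_of(62))                    # range(60, 90): near a boundary
print(10 in window_of(0, interval=30))  # True: frame 10 can help frame 0
print(10 in window_of(0, interval=10))  # False: the cut at 9 blocks it
```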
  24. Actually, looking more closely, video "B" was from the original BasicVSR, not BasicVSR++. I think I messed up the screenshots too, but there's not that much difference vs. ++ in most frames, and the difference between BasicVSR and lanczos or nnedi3_rpow2 is pretty clear.

    But ++ is quite a bit better on some details, like license plate details on certain frames. It seems to have better tracking and deals with occlusions better. I'll upload it later.

    This is a cropped apng of frame 62, so placed near the beginning of a new interval in both versions. As the foreground pole occlusion passes by on the right, there is more blurriness in the plate in the BasicVSR version than in the BasicVSR++ one. The left plate is also slightly clearer.

    Last edited by poisondeathray; 5th Sep 2022 at 15:03.
  25. Even with your settings:
    Code:
    clip = core.resize.Bicubic(clip=clip, format=vs.RGBS, range_s="limited")
    # resizing using BasicVSR
    from vsbasicvsrpp import BasicVSRPP
    clip = BasicVSRPP(clip=clip, model=0, interval=30, tile_x=144, tile_y=96, tile_pad=10) # 2880x1920
    I get these artifacts. :/

    => no problem, not worth investigating it further
  26. Originally Posted by Selur View Post
    Even with your settings:
    Code:
    clip = core.resize.Bicubic(clip=clip, format=vs.RGBS, range_s="limited")
    # resizing using BasicVSR
    from vsbasicvsrpp import BasicVSRPP
    clip = BasicVSRPP(clip=clip, model=0, interval=30, tile_x=144, tile_y=96, tile_pad=10) # 2880x1920
    I get these artifacts. :/

    => no problem, not worth investigating it further

    I uploaded the proper bvsrpp clip above

    Not sure why you're getting artifacts. Looking at the old files, this was my script. I usually write out image sequences, but I doubt that's the reason why.

    Code:
    import vapoursynth as vs
    core = vs.get_core()
    from vsbasicvsrpp import BasicVSRPP
    
    clip = core.avisource.AVISource(r'input.avi')
    bvsrpp = core.resize.Bicubic(clip, format=vs.RGBS)
    bvsrpp = BasicVSRPP(bvsrpp, interval=30, tile_x=144, tile_y=96, tile_pad=10, model=0)
    bvsrpp = core.resize.Bicubic(bvsrpp, format=vs.RGB24)
    bvsrpp = core.imwri.Write(bvsrpp, "PNG", "basicvsrpp_test2_model0_int30_%03d.png",firstnum=0)
    bvsrpp.set_output()
  27. How do I install basicvsr?
  28. I tried whether using imwri changed anything, but no, the artifact is still there.
    I also switched to AviSource, but that didn't change anything either (I used LWLibavSource before).
    -> No clue. ¯\_(ツ)_/¯

    -----
    How do I install basicvsr
    see: https://github.com/HolyWu/vs-basicvsr
  29. @selur - I just ran it again for a few frames and I get the same results as before using the same lagarith src as uploaded

    Your images are also "darker"; maybe some sort of props issue?


    The repeated test was from R54, i.e. <API4; notice I had core = vs.get_core().

    Trying again with API4 R57 (core = vs.core), still the same results; no issues with those (alignment?) artifacts using REDS.


    It's an I-frame lagarith src, so there should be no problems with mixing up frames (and if there were, you'd expect the same issue with the Vimeo models).
    Last edited by poisondeathray; 5th Sep 2022 at 14:49.
  30. Thank you, Selur.


