VideoHelp Forum

  1. @pm-s-: according to Topaz their software is based on machine learning,...

    ---
    for the fun of it here's what happens when applying some ml models:







    Cu Selur
    users currently on my ignore list: deadrats, Stears555
  2. Member · Sep 2009 · Brazil
    Originally Posted by Selur View Post
    @pm-s-: according to Topaz their software is based on machine learning,...

    ---
    for the fun of it here's what happens when applying some ml models:

    Cu Selur
    Friend Selur what is ML Models?

    Att.

    Druid.
  3. ml = machine learning
    model = a set of accumulated instructions/rules describing what to do, created from a base algorithm and training data
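    A toy illustration of that definition (nothing to do with Topaz's real internals, just the concept): "training" runs a base algorithm over example data and distills it into a model — here, a single learned number.

```python
# Hypothetical sketch: the "base algorithm" is "take the midpoint between
# the two class means"; the training data are (value, label) pairs.
def train_threshold(samples):
    lo = [v for v, label in samples if label == 0]
    hi = [v for v, label in samples if label == 1]
    return (sum(lo) / len(lo) + sum(hi) / len(hi)) / 2  # the learned "model"

def predict(model, value):
    # applying the model = following its accumulated rule
    return 1 if value >= model else 0

model = train_threshold([(1, 0), (2, 0), (8, 1), (9, 1)])  # model == 5.0
```

    Real image models work the same way in principle, only the "rule" is millions of learned weights instead of one number.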
  4. Video Restorer lordsmurf · Jun 2003 · dFAQ.us/lordsmurf
    Originally Posted by Selur View Post
    for the fun of it here's what happens when applying some ml models:
    So ... nothing?

    If those are before/after, they look the same to me. Was that your point? The Topaz upscale scaled nothing?

    What are we looking at here?
  5. If those are before/after, they look the same to me.
    They are not. And yes: on the left you always see the original, on the right you see the filtered output.

    The Topaz upscale scaled nothing?
    I don't have Topaz (trial is long over).

    What are we looking at here?
    I just wanted to show some examples of different ML models applied (using VSGAN and some of the models from https://upscale.wiki/wiki/Model_Database).


    Cu Selur
  6. Member · Sep 2009 · Brazil
    Originally Posted by Selur View Post
    If those are before/after, they look the same to me.
    They are not. And yes: on the left you always see the original, on the right you see the filtered output.

    The Topaz upscale scaled nothing?
    I don't have Topaz (trial is long over).

    What are we looking at here?
    I just wanted to show some examples of different ml models applied. (used VSGAN and some of the models from https://upscale.wiki/wiki/Model_Database)

    Cu Selur
    It's always really funny to see the blood in people's eyes when cognitive dissonance makes them hostage to their own little world.

    Friend Selur, when people use acronyms they usually don't say much about machine learning. I'm from the information technology field and know very well what it is. What I find funny, as I said above, is how people read what we write and get carried away by passions charged with strong emotion, to the point of distorting everything they read, even when you agree with them on several points and disagree on only one or two. We all have our cognitive dissonances, but when you get carried away by the ego, you become a fool and a real puppet of it. Of course I don't mean you directly, as you well know.

    Thank you for your answer; I understood your explanation perfectly. Acronyms mean practically nothing except as a way to avoid writing out their meaning. In a place where everyone understands the terms, acronyms save time, but this is a public forum, so it is normal to have people here who don't know much and are looking for help. And you have enough didactic skill, as you have shown in other posts, to put yourself in the position of the noobies, apprentices, and neophytes.

    Att.

    Druid.
  7. To me the pics on the left are much clearer and detailed than the supposedly enhanced ones on the right.
    I agree; in general the blurring & co. that is needed for the artifact removal does not really help the overall image quality.
    (some of it can be lessened by filtering the content before and after)
    Don't get me wrong, I do see potential in machine learning, but unless you have models, and algorithms behind those models, that are trained for your kind of content, they usually have such effects in one way or another (like the 'plastic' look in Topaz).
    -> So depending on the content these general approaches might help, but they often come at a price.

    @DruidCha: I'm sorry that my using 'ml' as short for machine learning, one sentence after writing 'machine learning', hindered your reading experience.

    Cu Selur
  9. Member · Sep 2009 · Brazil
    Originally Posted by Selur View Post
    Notice here how the harp above the violinist's head at the back is less noisy and more defined in the VEAI output with the Dione Interlaced Robust V4 preset, friend Selur.
    I think that can probably be adjusted, didn't really tweak anything, just enabled a few things I would start with.
    Main advantage of Avisynth/Vapoursynth over ML based filtering is:
    a. the possibility to tweak settings. (+ color control; tv vs pc scale and color matrix)
    b. no need for huge amounts of intermediate png files depending on the tool (which can cause color issues, depending on what the tool expects and how RGB<>YUV conversions are done)
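    As a minimal sketch of point b (my own illustration, not Hybrid's code): a BT.601 limited-range ("TV scale") RGB→YCbCr conversion. If one tool writes PNGs assuming full range or a different matrix and another reads them back with other assumptions, colors shift.

```python
def rgb_to_ycbcr_bt601(r, g, b):
    """Full-range 8-bit RGB -> limited-range ('TV scale') BT.601 YCbCr."""
    y  =  16 + ( 65.481 * r + 128.553 * g +  24.966 * b) / 255
    cb = 128 + (-37.797 * r -  74.203 * g + 112.000 * b) / 255
    cr = 128 + (112.000 * r -  93.786 * g -  18.214 * b) / 255
    return round(y), round(cb), round(cr)

# Full-range white maps to Y=235, not 255 -- the head/footroom that gets
# clipped or applied twice when tools disagree about the scale.
assert rgb_to_ycbcr_bt601(255, 255, 255) == (235, 128, 128)
assert rgb_to_ycbcr_bt601(0, 0, 0) == (16, 128, 128)
```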

    And they also say that you can't just keep installing Avisynth versions indiscriminately; this is true. Which version should I install, and how, so that it works smoothly, as I believe yours does, friend Selur?
    By default Hybrid uses Vapoursynth.
    You can switch between Avisynth and Vapoursynth usage, by changing "Filtering->Support (lower right corner)".
    You can configure filters for Vapoursynth analogously to how you would for Avisynth under "Filtering->Vapoursynth".
    And you can change the filter order under 'Filtering->Vapoursynth->Misc->Filter Order/Queue'.
    (if you understand more about Vapoursynth you can also use the 'Filtering Queue', which allows using most filters multiple times)
    For detailed setting adjustment I would also recommend to enable:
    - "Filtering->Vapoursynth->Preview->Split View" and set it to 'interleaved'
    - "Filtering->Vapoursynth->Filter view"
    and open the script and filter "Vapoursynth Preview" (lower right corner).

    What I did is:
    • Start Hybrid
    • load input file
    • made sure Hybrid uses Vapoursynth (setting "Filtering->Support" to "Vapoursynth")
    • made sure the Preview settings are as mentioned above (Split View + interleaved + Filter View)
    • I also usually have both 'Filtering->Vapoursynth preview' and 'Filtering->Vapoursynth->Script view' enabled. (both in the lower right corner)
    • configured the deinterlacing
      • set "Filtering->(De-)Interlace/Telecine->QTGMC Vapoursynth->Preset" to "Slow"
      • enabled "Filtering->(De-)Interlace/Telecine->QTGMC Vapoursynth->Bob" for bobbed output
      • enabled "Filtering->(De-)Interlace/Telecine->QTGMC Vapoursynth->OpenCL" for a bit of GPU acceleration
      • did not tweak any additional settings for denoising and sharpening
    • enable cropping (Crop/Resize->Base->Picture Crop)
    • start crop detection (Crop/Resize->Base->Picture Crop->Auto crop)
    • tell Hybrid that the output should have a PAR of 1:1, as is customary for HD content
      • enable "Crop/Resize->Base->Pixel Aspect Ratio (PAR)->Convert output to PAR"
      • set "Crop/Resize->Base->Pixel Aspect Ratio (PAR)->Convert output to PAR" to "Square Pixel"
    • adjust the resizing resolution:
      • setting "Crop/Resize->Base->Picture Resize->Auto adjust" to 'width' (since I want to specify the height)
      • setting "Crop/Resize->Base->Picture Resize->Target resolution->Height" to 1080
    • Adjusted the letterboxing to add black border to the output to reach the target resolution (1920x1080) that I wanted
      • enable letterboxing (Crop/Resize->Base->Letterbox)
      • set out resolution (Crop/Resize->Base->Letterbox->Width to 1920, Crop/Resize->Base->Letterbox->Height to 1080)
    • configure the resizer:
      • enable "Filtering->Vapoursynth->Resize->Resizer"
      • set "Filtering->Vapoursynth->Resize->Resizer" to "NNEDI3"
      • adjusted the NNEDI3 settings a bit (enable GPU, change Neighbourhood and Neurons count)
    • enabled DFTTest as Denoiser (without tweaking any settings)
    • enabled CAS (= contrast adaptive sharpening; enabled "Filtering->Vapoursynth->Sharpen->CAS") and set "Filtering->Vapoursynth->Sharpen->CAS->Sharpness" to "0.85"
    • moved the sharpening filter below the Resize filter under 'Filtering->Vapoursynth->Misc->Filter Order/Queue'
    • I then used the 'Vapoursynth Preview' to check the results by flipping a bit between them, and since I didn't see any real problems I kept the settings to get things started here.
    • configured the x265 encoder (set the Preset to slow and applied it)
    • set the output, created the job queue entries, and started the job queue processing
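    For reference, the steps above could be sketched as a VapourSynth script along these lines. This is a hypothetical hand-written sketch, not Hybrid's actual generated script: it assumes the ffms2, dfttest, and cas plugins plus havsfunc are installed, and the crop values, resize dimensions, and parameters are illustrative only (the NNEDI3 resize is stood in for by Spline36).

```python
import vapoursynth as vs
import havsfunc  # provides QTGMC

core = vs.core
clip = core.ffms2.Source("input.m2v")                 # load input file
clip = havsfunc.QTGMC(clip, Preset="Slow", TFF=True,
                      FPSDivisor=1, opencl=True)      # bobbed deinterlace, GPU-assisted
clip = core.std.Crop(clip, left=8, right=8)           # apply detected crop values
clip = core.dfttest.DFTTest(clip)                     # denoise with default settings
clip = core.resize.Spline36(clip, 1440, 1080)         # stand-in for the NNEDI3 resize
clip = core.cas.CAS(clip, sharpness=0.85)             # sharpen after the resize
clip = core.std.AddBorders(clip, left=240, right=240) # letterbox to 1920x1080
clip.set_output()
```

    Note the order: sharpening comes after the resize, matching the filter-queue move above.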

    Cu Selur

    Ps.: if you run into a bug let me know and I can send you a link to my current dev version, chances are good that I might have fixed it already or that I can fix it.

    PPs.: using Avisynth is basically the same in Hybrid you just need to:
    a. set Filtering->support to Avisynth
    b. switch 'Config->Internals->Avisynth->Avisynth type' to 64bit if you want to use Avisynth 64bit.
    c. adjust the filters under 'Filtering->Avisynth' instead of 'Filtering->Vapoursynth'
    Friend Selur, I had a problem right here, because my Hybrid does not have this Resize option. Now what?

    Image
    [Attachment 59839 - Click to enlarge]


    Att.

    Druid.
  10. Your screenshot shows 'Filtering->Vapoursynth->Misc->Script' which I never mentioned.
    Also sorry, instead of 'Filtering->Vapoursynth->Resize->Resizer' it should be 'Filtering->Vapoursynth->Frame->Resize->Resizer'.

    Cu Selur
  11. In the last picture 'Filter(s)' is not important.
    In the list at the left, where SSIQ is selected atm. scroll to 'CAS' and then use the 'arrow down' button to move 'CAS'-entry down until it's below the 'Resize'-entry.
  12. Banned · Jan 2021 · PAL
    Selur's "Hybrid" is one of the best programs I have used.
  13. happy you like it.
  14. What about upscaling stuff with Cupscale, Waifu, or Topaz up to 16K or so and shrinking?
  15. What about upscaling stuff with Cupscale, Waifu, or Topaz up to 16K or so and shrinking?
    Probably a problem with GPU RAM; at least with 8GB I can't get 8K or 16K. (CPU-based Waifu works, since I have enough system RAM, but it's hellishly slow)
    Also, what do you expect to achieve with that? (unnecessarily smooth/plastic surfaces?)
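    A quick back-of-the-envelope check (my own arithmetic; it assumes one float32 RGB buffer per frame and ignores the network's activations, which usually need far more):

```python
def frame_bytes(width, height, channels=3, bytes_per_sample=4):
    """Memory for one uncompressed float32 RGB frame buffer."""
    return width * height * channels * bytes_per_sample

for name, w, h in [("4K", 3840, 2160), ("8K", 7680, 4320), ("16K", 15360, 8640)]:
    print(f"{name}: {frame_bytes(w, h) / 1024**3:.2f} GiB per buffer")
# 16K: 1.48 GiB per buffer
```

    A single 16K buffer is already ~1.5 GiB, and an upscaler juggles several of them plus intermediate layers, so an 8GB card gives up well before 16K.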
  16. Banned · Jan 2021 · PAL
    Upscaling to 16k for what? 8k?
    Anything beyond 4k is useless
    Upscaling to 16k to return to 4k using compromise settings - sharpening without creating many artefacts / much plasticity.
  18. No benefit unless you use a 16K model trained appropriately for your specific source. Go ahead and try some small tests (but it will be a waste of time)
  19. Originally Posted by pm-s View Post
    So is it machine learning or not? Half of people are telling me it is, half are telling me it isn't
    Machine learning (ML) is a subfield of artificial intelligence (AI). ML breaks down into supervised learning, unsupervised learning, and a few other methods like reinforcement learning.

    Supervised means you have a given target = the desired output for a given input. Unsupervised means you just have inputs and let the model figure out a structure. In reinforcement learning, an agent takes actions to maximize reward; this is used, for example, to train robots.

    Within supervised learning, you have classification and regression. In the former, your targets are any number of classes, for example true or false, or house/tree/road. In the latter, your target is a number.

    Neural networks (NN) can do all of the above. By definition, NNs are always part of ML and are always part of AI. If Topaz uses a neural network, then it is ML as well as AI. From using Topaz, I have reason to believe that what happens under the hood is indeed the application of a user-selected neural net, and no reason to believe it doesn't use neural nets at all.

    Neural networks further break down into shallow and deep learning. A shallow net can have just a few "neurons". The learning from data lies in setting the weights of the neurons during the training phase. A classic algorithm for learning the weights is backpropagation. (Modern learning algos are more elaborate and much more robust.) With a small NN, you can hand-calculate its output for any one input sample.
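    That hand calculation, as a minimal sketch: a 2-2-1 net whose weights are made up for illustration (not actually learned), evaluated on one input sample.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x, W1, b1, W2, b2):
    # hidden layer: one neuron per row of W1
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    # single output neuron
    return sigmoid(sum(w * hi for w, hi in zip(W2, h)) + b2)

# 2 inputs -> 2 hidden neurons -> 1 output
W1 = [[1.0, -1.0], [0.5, 0.5]]   # hidden weights
b1 = [0.0, 0.0]                  # hidden biases
W2 = [1.0, 1.0]                  # output weights
b2 = -1.0                        # output bias
y = forward([1.0, 1.0], W1, b1, W2, b2)
# by hand: h = [sigmoid(0), sigmoid(1)] = [0.5, 0.731...]
#          y = sigmoid(0.5 + 0.731... - 1) = sigmoid(0.231...) ~ 0.5575
```

    Training would adjust W1, b1, W2, b2 (e.g. via backpropagation) until y matches the targets; deep learning just repeats this structure over many more layers and parameters.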

    Deep learning is using a large NN with many layers, each layer consisting of many neurons. Such a network can have millions of neurons and billions of weights = its learned parameters.

    Deep learning is used commercially pretty much everywhere these days. For example, Nvidia's DLSS 2.0 upscales rendered game images to higher resolutions. It was and is a huge success for Nvidia and a selling point over the competition. In some games, DLSS is faster and looks better than the default anti-aliasing, e.g. in Cyberpunk 2077. AMD's FSR was introduced just recently, uses a more manual approach (no AI) and isn't as good as DLSS.
  20. As the person who wrote the Deep Space Nine article that kicked off this thread:

    Topaz absolutely can be used to enhance detail. It's just the last step in a multi-step process.

    I can't speak to the quality of artifacts in any other project but my own, and there's plenty of room for taste when it comes to what output you do or do not like. Nevertheless, the idea that Topaz doesn't or can't enhance detail is... well, wrong. It does not help every piece of content, and how you pre-process your footage makes a huge difference, but Topaz absolutely improves image quality when used intelligently.

    I'm happy to prove it.

    First, here's an original frame of Deep Space Nine as compared to post-AviSynth output, using the "Defiant" encode model I've published.

    https://imgsli.com/NjI2NDY

    Now, here's that same frame of DS9, comparing the AviSynth output to the Artemis-HQ upscaled output.

    https://imgsli.com/NjI2NDY

    Is Artemis HQ perfect? Nope. But is the output there better than previous? Sure is.

    If you don't like output from one model, change to a different one. The old Gaia-CG 1.5.3 model offered very sharp output if you injected artificial noise into the image using QTGMC's "NoiseRestore" function, though one needs to remove ChromaNoise=True if you intend to push above NoiseRestore=0.5, or else you'll start injecting a green tint into your image.

    There are plenty of artifacts in Deep Space Nine that are present in the source. My own published work from September 2020 generated errored output, but that was because of my own deinterlacing process, not because of Topaz. My more recent work, published in June and July of 2021, fixes these issues.

    For something a little more dramatic: Here's a frame from later in the same episode. First, original frame extracted from the demuxed M2V file versus QTGMC output:

    https://imgsli.com/NjI2NTg

    Finally, the QTGMC output compared against color-corrected, upscaled output:

    https://imgsli.com/NjI2NTk

    There is a clear, obvious improvement in quality between each of these images.

    TVEAI is not a magic bullet. It does not free the author from the need to carefully compare footage. It does not automatically yield better results, and AviSynth pre-processing can be a requirement of getting good output out of Topaz. The application is sometimes cranky and does not play well with other GPU-using programs. You're better off remuxing your audio than trusting Topaz to output it properly in certain cases, and I use the application's image output mode for that reason.

    But it works.
  21. Because I realize that single images are not the be-all, end-all of video comparison, I've uploaded a couple of short clips.

    https://1drv.ms/v/s!AphTLFRW13WMkC8tc95eDg74ju7Q?e=zIVgfO

    This is a short sample of the demuxed M2V file.

    https://1drv.ms/u/s!AphTLFRW13WMkDB2ZHydicU47p-N?e=vUNlz6

    This is an upscaled, color-corrected version of that output. QTGMC and AviSynth were used to pre-process the footage before it was run through Topaz.

    I'm sure there are people who prefer the M2V file output. That's perfectly fine by me. Hopefully these two clips illustrate the benefit of TVEAI and AviSynth when used in conjunction with one another. One application is not "steak" to the other's "hamburger." Both have been vital to my work.
  22. Banned · Jan 2021 · PAL
    Topaz is just a paid BSRGAN
  23. Banned · Jan 2021 · PAL
    And QTGMC gives way better results than TVEAI.
  24. Member · Sep 2009 · Brazil
    Originally Posted by JoelHruska View Post
    Because I realize that single images are not the be-all, end-all of video comparison, I've uploaded a couple of short clips.

    https://1drv.ms/v/s!AphTLFRW13WMkC8tc95eDg74ju7Q?e=zIVgfO

    This is a short sample of the demuxed M2V file.

    https://1drv.ms/u/s!AphTLFRW13WMkDB2ZHydicU47p-N?e=vUNlz6

    This is an upscaled, color-corrected version of that output. QTGMC and AviSynth were used to pre-process the footage before it was run through Topaz.

    I'm sure there are people who prefer the M2V file output. That's perfectly fine by me. Hopefully these two clips illustrate the benefit of TVEAI and AviSynth when used in conjunction with one another. One application is not "steak" to the other's "hamburger." Both have been vital to my work.
    Friend JoelHruska could you put your step by step with avisynth and then with VEAI, which you used for this project?

    I would like to test here with Yanni's 1999 Tribute 480i video.

    Thanks a lot for the help .

    Your work was very good, I liked the results, congratulations .

    Att.

    Druid.
  25. Member · Jul 2018 · Italy
    Topaz absolutely can be used to enhance detail. It's just the last step in a multi-step process.
    Sorry, in your image comparisons I do not see more detail, just different sharpening.
  26. Originally Posted by johns0 View Post
    Nobody here cares about it because it's just another so-called AI upsizer.
    I have good reason to disagree with this comment along with the one which implied AI was a sales ploy.

    It's a shame that some people who may well have a great deal of expertise in the field of video and audio seem willing to write off a product without apparently trying it for themselves or seeing whether anyone has had positive results with it.

    So to that end I present this link from a movie that's only available in standard definition. The first part is the original DVD video and audio. The second part is after the video was upscaled using Topaz Video Enhance AI. The audio in the second part was just reprocessed using Audacity. I hope this encourages people to actually try the product before jumping to conclusions about it.

    https://d.pr/v/lDSGtA

    Matt.
  27. Member · Jul 2018 · Italy
    without apparently trying it for themselves
    I tried it on my videos and was not impressed (an example was posted in the restoration subforum).

    The first part is the original DVD video and audio. The second part is after the video was upscaled using Topaz Video Enhance AI
    Good result. But the comparison between the Topaz Video Enhance AI result and an "equivalent" AviSynth script (which is the heart of this discussion) is missing.
  28. Originally Posted by lollo View Post
    without apparently trying it for themselves
    I tried it on my videos and was not impressed (an example was posted in the restoration subforum).

    The first part is the original DVD video and audio. The second part is after the video was upscaled using Topaz Video Enhance AI
    Good result. But the comparison between the Topaz Video Enhance AI result and an "equivalent" AviSynth script is missing
    The last time I checked, the thread title was "so where's all the Topaz Video Enhance AI discussion?". My example does not use AviSynth, so I did not mention it.
  29. Member · Jul 2018 · Italy
    Ok, you just read the title.

    To my understanding, the key point was that the expert people you mentioned stated that you can obtain an equivalent/superior result with AviSynth filters. I agree with them.


