VideoHelp Forum
  1. I have read and heard many times that hardware upscaling gives better quality than any existing software upscaling. However, I still can't understand what the main advantage of hardware upscaling is. Please, anyone who has some knowledge about this, be it practical or just theoretical, give some input on this topic.
  2. Member Kakujitsu (Joined Jun 2011, United States)
    Software upscaling uses different algorithms; some handle things like interpolation better than others, and that's what causes detail to be lost to fuzziness or blur when upscaling. Here is a small example of what some of the different methods look like.

    [Attachment 58160: comparison of different upscaling methods]


    Neither hardware nor software is inherently better than the other per se, because internally they all use the same math. However, things like AI are a game changer when it comes to upscaling, because a trained model can predict what an image should look like when upscaled.

    How are images usually upscaled?
    Image upscaling algorithms fill in the missing pixels created during the image enlargement process. The most common method is bicubic interpolation.

    This method doesn't draw any detail. It just estimates new pixels from the values of the surrounding ones using a formula. It works fast enough and looks more or less appealing to the human eye compared with the big chunky "pixels" created by the more straightforward nearest-neighbor or bilinear interpolations.

    Images upscaled this way do not contain any "new" detail and appear blurry, because the formula tries to find a trade-off between pixelation artifacts and smoothing. There are several different bicubic kernels that may give different results for different kinds of images, but in general they behave the same.

    There are more advanced algorithmic methods, e.g. the widely used Lanczos interpolation or even more complicated fractal interpolation methods. They try to detect and preserve edges in the image, but all of them lack generality: their focus is on finding a universal mathematical formula to deal with different content and textures, not on adding meaningful detail to make the upscaled image look realistic.
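    To make the comparison concrete, here is a minimal Python sketch (recent Pillow assumed to be installed; the input file name and scale factor are just placeholders, not anything from the attachment above) that upscales the same image with the classic kernels mentioned here so you can compare the results yourself:

    Code:
    # Upscale one image 4x with several classic resampling kernels (recent Pillow assumed).
    from PIL import Image

    SRC = "input.png"      # hypothetical test image
    FACTOR = 4             # hypothetical upscale factor

    filters = {
        "nearest":  Image.Resampling.NEAREST,   # big chunky pixels, no smoothing
        "bilinear": Image.Resampling.BILINEAR,  # smoother, but soft
        "bicubic":  Image.Resampling.BICUBIC,   # the usual blur-vs-pixelation trade-off
        "lanczos":  Image.Resampling.LANCZOS,   # sharper edges, may ring slightly
    }

    img = Image.open(SRC)
    new_size = (img.width * FACTOR, img.height * FACTOR)

    for name, flt in filters.items():
        # None of these kernels adds real detail; they only estimate new pixels from existing ones.
        img.resize(new_size, resample=flt).save(f"upscaled_{name}.png")

    All four kernels just weight nearby pixels differently, which is why none of them can invent real detail.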

    Moreover, bicubic and other algorithmic interpolations do not help with image compression artifacts such as JPEG blocks; even worse, they enlarge them and make them more visible.
    Last edited by Kakujitsu; 31st Mar 2021 at 21:09.
  3. @Kakujitsu I appreciate your input, but I'm already familiar with all of this. What I want to know is why the videos upscaled by TV stations using expensive hardware upscalers look better than anything produced by existing software upscaling methods. What is the main advantage of hardware upscalers that makes the upscaled videos look better, not blurry, etc.?
  4. A hardware scaler is just a software scaler that has been committed to silicon for processing speed. The main advantage is speed. Hardware will never be better in terms of quality than the software algorithm it was developed from. But the general public might not have access to that specific software, or to that specific hardware for that matter. Software can improve (for example, "AI" or neural-net algorithms can be trained further to generate better models), but hardware is largely "fixed" at that level (sometimes firmware can offer minor improvements).
  5. Member Kakujitsu (Joined Jun 2011, United States)

    "Neither hardware nor software is better than the other per se, because internally they all use the same math."
    The only rational advantage of using dedicated hardware for upscaling is not needing a full computer and software.
    They both do exactly the same thing. Say you have a dedicated box that has only one function, upscaling: it might cost less to buy, cost less to run and maintain, and require less setup and user interaction.
    Last edited by Kakujitsu; 31st Mar 2021 at 22:39.
  6. So you are both saying that the hardware itself doesn't affect the quality of upscaling. That means everything depends on the quality of the source video and the upscaling algorithm being used. Some people were trying to convince me that hardware upscaling is always better than software upscaling. That would mean the algorithms implemented in hardware are better than the algorithms used in various software solutions. If that's true, I really wonder how it is possible that there are no software implementations available that upscale better than the algorithms used in hardware...
  7. Originally Posted by Santuzzu View Post
    That means everything depends on the quality of the source video and the upscaling algorithm being used.
    Yes, a hardware upscaler is just an implementation of software upscaling running on a dedicated fixed-function chip.

    Some people were trying to convince me that hardware upscaling is always better than software upscaling.
    I'd avoid terms like "always" without qualifying them.

    Some hardware scalers are good, some are bad. Some software scalers are good, some are bad....


    That would mean the algorithms implemented in hardware are better than the algorithms used in various software solutions. If that's true, I really wonder how it is possible that there are no software implementations available that upscale better than the algorithms used in hardware...
    It could be that the companies developing the hardware scalers don't want to sell the software, at least not to general consumers. It's easier to pirate, steal, or reverse-engineer software. But the software revision is ahead of the hardware in their labs; there is no other way to implement a hardware solution without software first. So the next-gen chips are being developed in software; they are being trained right now, before they make it to hardware.

    The majority of TVs use low-quality scaling algorithms in the HW scaler (they also use low-quality deinterlacing algorithms). It costs more to implement a higher-quality scaler, so you don't see those in budget and midrange TVs. The quality is bad, and you can certainly get better results in commonly available software.

    But some of the "best" consumer HW scalers are very good. Check out the X1 Extreme chip in the high-end Sonys or the Quantum processor in the high-end Samsungs. They both use machine-learning databases (neural nets, or "AI"). You can get similar results in some situations, on some types of content, in available software, but nowhere near in real time like the HW chip, even on a high-end computer with multiple high-end GPUs. That's the benefit of dedicated silicon: processing speed for a specific task.
  8. Dedicated fixed-function hardware is likely faster and more power-efficient than a software-only implementation.
  9. Originally Posted by Santuzzu View Post
    I have read and heard many times that hardware upscaling gives better quality than any existing software upscaling. However, I still can't understand what the main advantage of hardware upscaling is. Please, anyone who has some knowledge about this, be it practical or just theoretical, give some input on this topic.
    That claim is untrue; usually SW scaling offers higher quality than HW scaling, as it is less limited by available resources (HW limitations plus time constraints).
    The main advantage of HW scaling is real-time operation, i.e. it is usually performed on the fly: the image is scaled immediately as data arrive at the scaler/resizer.

    All of the above describes the traditional approach, i.e. where the pixels (the signal) are treated as a function; more or less, most of the comments here describe this case.
    There is also an alternative approach (image segmentation, shape detection, neural networks, etc.) where the image is no longer treated as a function but as a static image or video; in that case HW may (but does not necessarily) offer quality comparable to SW.

    Usually HW resizing is performed in real time, and thus SW is outperformed; time can be a critical factor, sometimes favored over quality.
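    To put the real-time constraint into numbers: at 25 fps a scaler has roughly 40 ms per frame, end to end. A rough Python sketch of that kind of budget check (OpenCV assumed installed; the frame sizes are just placeholders):

    Code:
    # Rough check: does a software resize fit a real-time frame budget? (OpenCV assumed installed.)
    import time
    import numpy as np
    import cv2

    FPS = 25
    BUDGET_MS = 1000.0 / FPS                    # ~40 ms per frame at 25 fps

    # Hypothetical SD frame (720x576) upscaled to 1920x1080.
    frame = np.random.randint(0, 256, (576, 720, 3), dtype=np.uint8)

    N = 100
    start = time.perf_counter()
    for _ in range(N):
        cv2.resize(frame, (1920, 1080), interpolation=cv2.INTER_LANCZOS4)
    per_frame_ms = (time.perf_counter() - start) * 1000.0 / N

    print(f"{per_frame_ms:.2f} ms per frame (budget {BUDGET_MS:.1f} ms)")
    print("fits real time" if per_frame_ms < BUDGET_MS else "too slow for real time")

    A simple kernel like this usually fits the budget on a modern CPU; heavier restoration and neural-network scalers generally do not, which is where dedicated silicon earns its keep.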
  10. The OP mentioned that videos upscaled by TV stations look "better" (in his opinion) than what existing software methods produce. That's because they don't only upscale; they also apply sharpening and edge-enhancement filters, and maybe even change the contrast of lines to make the picture look "better" to regular viewers.

    But I have also noticed many times that SD content upscaled by TV stations often shows visible halo artifacts from the sharpening, which is annoying.
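    For anyone who wants to see where those halos come from, here is a minimal unsharp-mask sketch (OpenCV assumed installed; the file names and "amount" value are placeholders). Pushing the amount up the way broadcast "enhancement" often does produces exactly the bright/dark outlines along edges described above:

    Code:
    # Unsharp mask applied after an upscale; a large "amount" produces visible halos along edges.
    import cv2

    img = cv2.imread("upscaled.png")            # hypothetical upscaled frame

    blurred = cv2.GaussianBlur(img, (0, 0), 3)  # low-pass copy (kernel size derived from sigma)

    amount = 1.5                                # aggressive, broadcast-style; try 0.3-0.5 for something subtler
    # Unsharp mask: original + amount * (original - blurred), done as one weighted sum.
    sharpened = cv2.addWeighted(img, 1.0 + amount, blurred, -amount, 0)

    cv2.imwrite("sharpened.png", sharpened)     # look for bright/dark fringes around strong edges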
  11. Originally Posted by poisondeathray View Post
    Originally Posted by Santuzzu View Post
    That means everything depends on the quality of the source video and the upscaling algorithm being used.
    I'd avoid terms like "always" without qualifying them.

    It could be that the companies developing the hardware scalers don't want to sell the software, at least not to general consumers. It's easier to pirate, steal, or reverse-engineer software. But the software revision is ahead of the hardware in their labs; there is no other way to implement a hardware solution without software first. So the next-gen chips are being developed in software; they are being trained right now, before they make it to hardware.
    When I wrote "always", I meant that those people claim that, for the same input video, a quality hardware upscaler will always produce a better upscaled video than any available computer-software solution.

    One thing that comes to mind is that the algorithms implemented in hardware solutions are machine-learning algorithms with specialized training datasets and feature selection for a specific source-video category. After watching these videos, I seriously doubt that source-video quality is crucial here. Some source videos seem to be of average quality, but it looks like the upscaling doesn't magnify those artifacts. As badyu17 wrote, they most likely use sophisticated sharpening and edge-processing approaches to improve the overall visual quality.

    Anyway, I uploaded two 16:9 (cropped) upscaled videos. The first is a compilation that contains some upscaled old recordings from the 90s, and the second is from 2005. To me it looks like the upscaling approach doesn't introduce any additional blurring.

    https://www.mediafire.com/file/8o5uh6xpil022ap/111.ts/file
    https://www.mediafire.com/file/cqljwts1wotejko/222.ts/file
  12. Originally Posted by Santuzzu View Post

    One thing that comes to mind is that the algorithms implemented in hardware solutions are machine-learning algorithms with specialized training datasets and feature selection for a specific source-video category. After watching these videos, I seriously doubt that source-video quality is crucial here. Some source videos seem to be of average quality, but it looks like the upscaling doesn't magnify those artifacts. As badyu17 wrote, they most likely use sophisticated sharpening and edge-processing approaches to improve the overall visual quality.
    Source characteristics, including noise, are very important. Machine-learning algorithms work ideally only on the kind of content they were trained on. When you stretch that definition, the results are not ideal; they tend to do more poorly. E.g. if you have a coarse-grain source but the training set didn't include coarse grain, then the results won't be as good.

    When you see demos, they use the exact videos/images from the training set, i.e. they are perfect marketing material. The results typically won't be as eye-poppingly good on random videos, unless those have characteristics similar to the training set.

    The advanced scaling chips do more than just interpolate and create detail: they "fix" some types of problems like noise, fix contrast, and smooth motion. So it's not just traditional "upscaling" per se; it's really a combination of processes. But an involved software process does the same thing. When you upscale, you typically perform those other corrective operations as well (or you should).
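    As a rough sketch of that "combination of processes" on the software side (OpenCV assumed installed; file names and parameter values are placeholders, not anyone's actual broadcast settings): denoise first so the artifacts don't get enlarged, then scale, then add back a little apparent sharpness:

    Code:
    # Sketch of a combined software pipeline: denoise, upscale, then a mild unsharp mask.
    import cv2

    frame = cv2.imread("sd_frame.png")                      # hypothetical SD frame

    # 1. Clean up noise/compression artifacts before they get enlarged by the scaler.
    clean = cv2.fastNlMeansDenoisingColored(frame, None, 3, 3, 7, 21)

    # 2. Upscale 2x (plain Lanczos here, standing in for whatever scaler you prefer).
    up = cv2.resize(clean, None, fx=2, fy=2, interpolation=cv2.INTER_LANCZOS4)

    # 3. Gentle unsharp mask to restore some apparent crispness lost in steps 1-2.
    blur = cv2.GaussianBlur(up, (0, 0), 2)
    out = cv2.addWeighted(up, 1.3, blur, -0.3, 0)

    cv2.imwrite("hd_frame.png", out)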


    Anyway, I uploaded two 16:9 (cropped) upscaled videos. The first is a compilation that contains some upscaled old recordings from the 90s, and the second is from 2005. To me it looks like the upscaling approach doesn't introduce any additional blurring.

    https://www.mediafire.com/file/8o5uh6xpil022ap/111.ts/file
    https://www.mediafire.com/file/cqljwts1wotejko/222.ts/file

    You didn't provide the original source(s), so we can only make assumptions about what the SD source looked like.
  13. Video Restorer lordsmurf (Joined Jun 2003, dFAQ.us/lordsmurf)
    Originally Posted by pandy View Post
    That claim is untrue; usually SW scaling offers higher quality than HW
    This isn't true, because of this:
    Originally Posted by poisondeathray View Post
    But the general public might not have access to that specific software
    So in theory, yes, software and hardware are the same. But in practice, no. A company like Faroudja (now part of STM) wasn't using open-source software, or even software available to anybody else, because that's what gave them their edge in quality. Those algorithms ("software") are patented. At best, they may be reverse-engineered, or not.

    Thinking that "anything possible in hardware is possible in software" is naive. There is decades-old on-chip processing that is still not possible in software. For example, video-game emulators still cannot emulate all hardware processes in all games.

    The main advantage of HW scaling is real-time operation, i.e. it is usually performed on the fly: the image is scaled immediately as data arrive at the scaler/resizer.
    Yep. However, again, sometimes there is no equal in the software world. What usually happens is that newer software supersedes older hardware quality, but then newer hardware sets a new, higher bar.

    Usually HW resizing is performed in real time, and thus SW is outperformed; time can be a critical factor, sometimes favored over quality.
    And very often it's not a mere case of "extra time". It's not adding another 30 minutes overall, or anything like that. We're often talking days of processing for mere minutes of footage. This is almost more of an R&D use, as those same insanely long times will be cut back using cloud/farm resources, GPUs, ASICs, etc.

    Also, very often such things are done for forensic recovery, not for viewing enjoyment, so other aspects of the image are ignored.

    I used to follow "hardware vs. software" quite closely in the 00s and into the mid-10s, mostly deinterlacing and encoding, but also scaling. In all that time, nothing ever changed: new hardware outdid software, and software tried to recreate it. It was often a case of $$$$$ hardware vs. $$$ software, so money was a main difference as well. I doubt the 2020s have radically altered this to the point where software and hardware are "the same". Laughably doubtful.
  14. Originally Posted by lordsmurf View Post
    Originally Posted by pandy View Post
    That claim is untrue; usually SW scaling offers higher quality than HW
    This isn't true, because of this:

    Originally Posted by poisondeathray View Post
    But the general public might not have access to that specific software
    So in theory, yes, software and hardware are the same. But in practice, no. A company like Faroudja (now part of STM) wasn't using open-source software, or even software available to anybody else, because that's what gave them their edge in quality. Those algorithms ("software") are patented. At best, they may be reverse-engineered, or not.

    Thinking that "anything possible in hardware is possible in software" is naive. There is decades-old on-chip processing that is still not possible in software. For example, video-game emulators still cannot emulate all hardware processes in all games.
    "Usually" is the key word in my sentence, and in the remaining part as well, at least at this level of generalization. As for the (at least doubtful) claim that some HW signal processing is impossible in SW: in my experience, every HW implementation is preceded by a SW model/prototype; HW is usually a resource-optimized (because silicon is expensive) SW algorithm. So we can say SW comes first, and later, if justified, an HW implementation is created. This matters even more for very complex algorithms, where even HW means, in real silicon, at least fixed microcode (and usually flexible microcode that can be altered by firmware; that is valid even for modern CPUs, where some aspects of the silicon can be altered through firmware or through reprogrammable FPGA-like structures).
    The examples you provided mean only that the proprietary algorithm(s) have not been recreated in a third-party software implementation, and the reason for that can be something other than algorithm complexity; perhaps some SW alternatives even provide better results. Emulation is a tricky area, where a proper emulator needs to reproduce HW/SW design flaws, something we normally try to remove from a final product.


    Originally Posted by lordsmurf View Post

    The main advantage of HW scaling is real-time operation, i.e. it is usually performed on the fly: the image is scaled immediately as data arrive at the scaler/resizer.
    Yep. However, again, sometimes there is no equal in the software world. What usually happens is that newer software supersedes older hardware quality, but then newer hardware sets a new, higher bar.
    Yes and no. The common HW implementation of an image (video) resizer is something called a polyphase filter ( https://www.ti.com/lit/an/spraai7b/spraai7b.pdf ); this accounts for roughly 90% or more of commercially and commonly available video resizers. The situation changes when neural networks are involved: IMHO SW and HW implementations offer similar quality and resource cost, and the difference is only time. Of course this is a generalization; in extreme cases SW can still be higher quality, just significantly slower.
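    To make the polyphase idea concrete, here is a minimal 1-D sketch (NumPy assumed; the Catmull-Rom kernel and test data are illustrative only, not taken from the TI note): for every output sample, the fractional position ("phase") selects a small set of kernel weights applied to the neighboring input samples. Real HW simply precomputes a fixed number of phases and runs the multiply-accumulate in silicon:

    Code:
    # Minimal 1-D sketch of kernel-based (polyphase-style) resampling with a Catmull-Rom cubic kernel.
    import numpy as np

    def cubic(x, a=-0.5):
        # Catmull-Rom / Keys cubic kernel, support of 4 samples.
        x = abs(x)
        if x < 1:
            return (a + 2) * x**3 - (a + 3) * x**2 + 1
        if x < 2:
            return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
        return 0.0

    def resample_1d(src, out_len):
        scale = len(src) / out_len
        out = np.empty(out_len)
        for i in range(out_len):
            pos = (i + 0.5) * scale - 0.5              # output sample position in input coordinates
            base = int(np.floor(pos))
            phase = pos - base                         # fractional part = the "phase" selecting the weights
            weights = np.array([cubic(phase - k) for k in (-1, 0, 1, 2)])
            samples = np.array([src[min(max(base + k, 0), len(src) - 1)] for k in (-1, 0, 1, 2)])
            out[i] = np.dot(weights, samples) / weights.sum()
        return out

    line = np.array([0, 0, 0, 255, 255, 255], dtype=float)    # a hard edge
    print(np.round(resample_1d(line, 12), 1))                 # 2x upscale; note the slight overshoot at the edge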

    Originally Posted by lordsmurf View Post
    Usually HW resizing is performed in real time, and thus SW is outperformed; time can be a critical factor, sometimes favored over quality.
    And very often it's not a mere case of "extra time". It's not adding another 30 minutes overall, or anything like that. We're often talking days of processing for mere minutes of footage. This is almost more of an R&D use, as those same insanely long times will be cut back using cloud/farm resources, GPUs, ASICs, etc.

    Also, very often such things are done for forensic recovery, not for viewing enjoyment, so other aspects of the image are ignored.

    I used to follow "hardware vs. software" quite closely in the 00s and into the mid-10s, mostly deinterlacing and encoding, but also scaling. In all that time, nothing ever changed: new hardware outdid software, and software tried to recreate it. It was often a case of $$$$$ hardware vs. $$$ software, so money was a main difference as well. I doubt the 2020s have radically altered this to the point where software and hardware are "the same". Laughably doubtful.
    HW is crucial when you need to process a huge amount of data (4K, 8K, etc.) in a bounded time (a maximum latency of, say, 1 second, 1 GOP, or 15-60 frames). If your workflow can "waste" minutes, hours, days, or months, then dedicated HW is not essential and can be substituted, with some limitations, by CPUs, GPGPUs, FPGAs, etc.

    Allow me to use emulation of real HW as an example. Modern CPUs (and most other silicon) are nowadays modeled (developed) not as HW but as SW (through dedicated HDLs like Verilog, VHDL, etc.). A CPU can be emulated in SW, but the emulation runs roughly 10^7 to 10^9 times slower than the target HW; there are also dedicated HW emulators (for example https://www.cadence.com/en_US/home/tools/system-design-and-verification/acceleration-a...ladium-z1.html ) where emulation is only about 10^3 to 10^5 times slower than the target silicon, but at a very different cost (tens of thousands of dollars vs. millions).

    The situation in video-algorithm development is similar.
  15. Well, what changed in 2010-2020 is that graphics cards came into play (plus the frameworks needed to utilize them), allowing e.g. one of the best available hardware scalers and framerate converters for broadcasters, "Alchemist", to deliver a software version that can process one Full HD stream in real time using 3 GPUs in parallel. You know, it's not like they "first" developed their stuff in software and ported it to hardware later, but the other way around: they had to wait until there was enough processing power that could be utilized via software to port what they already had.

    The key part of scaling was always deinterlacing; it requires expensive logic and therefore huge amounts of processing power. The scaling itself, after deinterlacing, was never the big topic. E.g. Lanczos definitely delivers the same result as expensive hardware upscalers do.
    In all the years I've played with related stuff, I have only found one solution that delivers results comparable with Alchemist: QTGMC. Unfortunately it runs extremely slowly, but the results are very good. In my opinion, in some scenarios (lots of small details, like bird swarms) it's A LOT better than Alchemist.
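    For reference, a minimal VapourSynth sketch of that QTGMC-then-upscale workflow (assuming VapourSynth with the ffms2 source plugin and the havsfunc script are installed; the file name, field order and target size are just placeholders):

    Code:
    # Minimal VapourSynth sketch: QTGMC deinterlace, then a plain Lanczos upscale.
    import vapoursynth as vs
    import havsfunc as haf

    core = vs.core

    clip = core.ffms2.Source("interlaced_sd.mpg")       # hypothetical interlaced SD source (720x576, TFF)

    # QTGMC is slow but high quality; FPSDivisor=2 keeps the original frame rate instead of bobbing to double rate.
    deint = haf.QTGMC(clip, Preset="Slower", TFF=True, FPSDivisor=2)

    # Scaling after a good deinterlace is the easy part; Lanczos is usually fine here (1440x1080 for a 4:3 source).
    up = core.resize.Lanczos(deint, width=1440, height=1080)

    up.set_output()

    The QTGMC step is where nearly all the processing time goes, which matches the point above: deinterlacing, not the resize itself, is the expensive part.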
    Last edited by emcodem; 2nd Apr 2021 at 06:57.
  16. Originally Posted by emcodem View Post
    Well, what changed in 2010-2020 is that graphics cards came into play (plus the frameworks needed to utilize them), allowing e.g. one of the best available hardware scalers and framerate converters for broadcasters, "Alchemist", to deliver a software version that can process one Full HD stream in real time using 3 GPUs in parallel. You know, it's not like they "first" developed their stuff in software and ported it to hardware later, but the other way around: they had to wait until there was enough processing power that could be utilized via software to port what they already had.
    lol - so you claim that they created the signal-processing algorithm in TTL logic and, after verifying that it worked correctly, transferred it to software?
    Every modern piece of HW is created in hardware description languages that are founded on programming languages such as C or Ada.
    And research is usually performed purely in software, sometimes accelerated on FPGAs...

    Originally Posted by emcodem View Post
    The key part of scaling was always deinterlacing; it requires expensive logic and therefore huge amounts of processing power. The scaling itself, after deinterlacing, was never the big topic. E.g. Lanczos definitely delivers the same result as expensive hardware upscalers do.
    In all the years I've played with related stuff, I have only found one solution that delivers results comparable with Alchemist: QTGMC. Unfortunately it runs extremely slowly, but the results are very good. In my opinion, in some scenarios (lots of small details, like bird swarms) it's A LOT better than Alchemist.
    Deinterlacing for progressive video? What for...? Scaling will always be a big topic, because mathematically perfect scaling is at the opposite pole from subjectively pleasing image (video) quality. The traditional, mathematically correct, kernel-based approach is always suboptimal from a subjective perspective...

    Deinterlacing is something similar to scaling but worse (due to temporal signal dependencies): we are trying to handle a pathological signal (an undersampled signal with half of its data brutally removed). More or less, there is no universal math to address this case, and that's why we need all the psychovisual approaches to the problem (and science is still far from creating even a simplified but modestly accurate model of human vision and perception). That's why AI (neural networks) may shine in this area: more or less, we are making a vaguely educated guess at how to add the lost information so that it fits within subjectively expected constraints.

    It is not possible to correctly add back data that were removed from a signal, whether in a friendly (correct resize) or unfriendly (interlacing) way, without understanding the nature of the signal. A resizer or deinterlacer has no clue whether grass is grass, a leaf is a leaf, or a bird is a bird; that is something a brain can do, and only if the brain has learned to recognize grass, leaves, or birds earlier... A neural network may try to follow some of the human brain's path, but without brain-like understanding it will be limited at some point...
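    A tiny sketch of why this is a guessing game (NumPy assumed; the frame is a random stand-in for real video): each field carries only half of the picture lines, and a deinterlacer has to invent the other half somehow. The naive "bob" below just averages neighboring lines, which is exactly the kind of blind, content-unaware interpolation being criticized here:

    Code:
    # Each field holds only half of the picture lines; a naive "bob" guesses the missing half by averaging.
    import numpy as np

    def bob_field(frame, top_field=True):
        # Keep every other line (half the data), then rebuild full height by vertical interpolation.
        field = frame[0::2] if top_field else frame[1::2]
        out = np.repeat(field, 2, axis=0).astype(np.float32)   # line-double first
        out[1:-1:2] = (out[0:-2:2] + out[2::2]) / 2            # replace duplicates with neighbor averages
        return out.astype(frame.dtype)

    frame = np.random.randint(0, 256, (576, 720, 3), dtype=np.uint8)   # hypothetical frame
    print(bob_field(frame).shape)                                      # (576, 720, 3), rebuilt from 288 real lines

    The interpolated lines are pure guesswork; better deinterlacers (motion-adaptive, motion-compensated, neural-network based) just make better-informed guesses.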
  17. Originally Posted by pandy View Post
    lol -
    Writing "lol" as a response to what someone said comes across as very unfriendly.

    Originally Posted by pandy View Post
    Every modern piece of HW is created in hardware description languages...
    Sure; my point was that it was not developed to run as software initially. It took about 15 years until it was possible to run it as software and deliver acceptable speed.

    Originally Posted by pandy View Post
    Deinterlacing for progressive video? What for...? ...
    Correct; of course you don't deinterlace progressive material. I should have mentioned that I'm referring to what broadcasters do/did. Most of them still produce everything interlaced. Only a small number of them have switched completely to progressive for FHD or have changed to 720p. Sure, with UHD things will get a lot easier, as there is no interlacing anymore, but we still have years' worth of interlaced SD and HD material in the national archives that wants to be upscaled whenever it is retrieved, so the deinterlacing topic will stay with us for a very long time.
    The reason I referred to deinterlacing is that we need it before scaling to retain the most possible quality; we don't just throw away the second field. And since, as you say, it is much harder than the scaling itself, I referred to it as one of the base problems when we compare hardware vs. software scaling.

    Sure, AI stuff will be the future, but it is not really there yet for a typical broadcaster, so it is not yet a topic for me.
    Last edited by emcodem; 2nd Apr 2021 at 07:50.
  18. I forgot to apologize for the "lol"; it was an impulse, not an intention to insult you. Sorry for that.

    Originally Posted by emcodem View Post
    Originally Posted by pandy View Post
    Every modern piece of HW is created in hardware description languages...
    Sure; my point was that it was not developed to run as software initially. It took about 15 years until it was possible to run it as software and deliver acceptable speed.
    Perhaps I was not precise, but from my perspective it was kind of obvious that prototyping an algorithm at the R&D stage is cheapest in software, and from a business perspective software means the lowest cost risk if the algorithm fails. Of course, for an HW target, software is used only to simulate and describe the prescription for the HW, but at some point the HW is created (with all the HW-related issues, like electrical and physical dependencies, that do not occur in the software domain but can be a serious headache in the HW world).

    Originally Posted by emcodem View Post
    Originally Posted by pandy View Post
    Deinterlacing for progressive video? What for...? ...
    Correct; of course you don't deinterlace progressive material. I should have mentioned that I'm referring to what broadcasters do/did. Most of them still produce everything interlaced. Only a small number of them have switched completely to progressive for FHD or have changed to 720p. Sure, with UHD things will get a lot easier, as there is no interlacing anymore, but we still have years' worth of interlaced SD and HD material in the national archives that wants to be upscaled whenever it is retrieved, so the deinterlacing topic will stay with us for a very long time.
    The reason I referred to deinterlacing is that we need it before scaling to retain the most possible quality; we don't just throw away the second field. And since, as you say, it is much harder than the scaling itself, I referred to it as one of the base problems when we compare hardware vs. software scaling.

    Sure, AI stuff will be the future, but it is not really there yet for a typical broadcaster, so it is not yet a topic for me.
    Whether AI is for you or not depends, of course, on your workflow. I can imagine a GPGPU performing AI upscaling, so the next iteration of software may bring neural networks into real-life use as well (not only as part of a TV set's signal chain).
    I also believe that deinterlacing is a special case of resizing, and resizing is still a huge topic where a lot of things have not been addressed yet.
    Last edited by pandy; 2nd Apr 2021 at 09:13. Reason: apologies for being rude
  19. Video Restorer lordsmurf (Joined Jun 2003, dFAQ.us/lordsmurf)
    Originally Posted by pandy View Post
    from my perspective it was kind of obvious that prototyping an algorithm at the R&D stage is cheapest in software
    Not really. Different architecture. You can't easily port from platform to platform. What runs on an ASIC or GPU (or whatever else) doesn't necessarily run on your standard PC. In fact, PC chips are probably a minority of the chips made now.

    and from a business perspective software means the lowest cost risk if the algorithm fails.
    R&D isn't about costs. Production is about costs.

    Most of them still produce everything interlaced.
    Produce? No.
    Broadcast? Sure.

    The reason I referred to deinterlacing is that we need it before scaling to retain the most possible quality; we don't just throw away the second field.
    Not necessarily. It depends on factors. Ideally, yes. Practically, sometimes no.

    Originally Posted by emcodem
    Sure AI stuff will be the future
    I need to be blunt here: THERE'S NO F'ING AI.

    That term is abused, misused, and mangled beyond recognition. I get so tired of seeing it tossed around. You may as well just call it pixie dust, or unicorn horn, or magic. It's getting ridiculous, not too different from "TBC": my toaster can probably claim to have both a TBC and AI under the weak, user-made definitions being used these days. "AI" is too often a term used by people who don't understand what's actually happening (technological magic!), not actual artificial intelligence.

    Perhaps someday we'll have actual AI for video. But right now we have a mix of really basic/stupid algorithms that are marketed as "AI". I've yet to see any "AI" not turn a video into an artifact-laden, near-unwatchable, distracting mess. Actual AI would recognize that it's creating artifacts and not do it. What I do see work well are advanced resizers and sharpeners ... but none of them are "AI". It's certainly A, but not I.

    I've written on this exact topic already: https://forum.videohelp.com/threads/399360-so-where-s-all-the-Topaz-Video-Enhance-AI-d...e3#post2602672
    Last edited by lordsmurf; 2nd Apr 2021 at 08:38.
  20. Originally Posted by lordsmurf View Post
    Originally Posted by pandy View Post
    from my perspective it was kind of obvious that prototyping an algorithm at the R&D stage is cheapest in software
    Not really. Different architecture. You can't easily port from platform to platform. What runs on an ASIC or GPU (or whatever else) doesn't necessarily run on your standard PC. In fact, PC chips are probably a minority of the chips made now.
    An ASIC or GPU just performs an algorithm, and the way the algorithm is described (through software or a gate array) is the same; HW only adds a lot of other issues on top of SW (like electromagnetic compatibility, thermal management, etc.). Developing an algorithm in software and testing its validity is simply easier...
    An example is video coding: every modern video codec is prototyped in software and deployed as HW substantially later. The same rule applies to other algorithms.

    Originally Posted by lordsmurf View Post
    and from a business perspective software means the lowest cost risk if the algorithm fails.
    R&D isn't about costs. Production is about costs.
    R&D is also under strict cost optimization; before you get money for an HW implementation, you first need to prove that your idea works and deserves to be put on silicon...
  21. Originally Posted by lordsmurf View Post
    Thinking that "anything possible in hardware is possible in software" is naive. There is decades-old on-chip processing that is still not possible in software. For example, video-game emulators still cannot emulate all hardware processes in all games.
    This thread's topic is scaling. All scaling is developed and prototyped in software. This is an indisputable fact.

    It's equally naive to make vague, unqualified, broad-based claims.

    "Hardware scaling is always better than Software scaling" is a pretty ignorant claim.



    Originally Posted by emcodem View Post
    The key part of scaling was always deinterlacing; it requires expensive logic and therefore huge amounts of processing power. The scaling itself, after deinterlacing, was never the big topic. E.g. Lanczos definitely delivers the same result as expensive hardware upscalers do.
    In all the years I've played with related stuff, I have only found one solution that delivers results comparable with Alchemist: QTGMC. Unfortunately it runs extremely slowly, but the results are very good. In my opinion, in some scenarios (lots of small details, like bird swarms) it's A LOT better than Alchemist.
    Yes, deinterlacing was (and unfortunately still is) a large part of scaling, because of consumer and broadcast content.

    It's funny; I had access to Alchemist and other solutions back then and posted my findings at Doom9. Severely overrated.
  22. Originally Posted by poisondeathray View Post
    It's funny; I had access to Alchemist and other solutions back then and posted my findings at Doom9. Severely overrated.
    I'm not sure how accurate VMAF is, but it more or less confirms your observations.

    https://jonfryd.medium.com/comparing-video-upscaling-methods-on-stock-footage-78fc81914567
  23. Video Restorer lordsmurf (Joined Jun 2003, dFAQ.us/lordsmurf)
    Originally Posted by poisondeathray View Post
    This thread's topic is scaling. All scaling is developed and prototyped in software. This is an indisputable fact.
    I'm not sure that's true. At one point in the past, I know it was false. But as I mentioned, I haven't kept up with "hardware vs. software" for some years now. I guess it's true that all hardware chips have software on them, but I think in this context it's a safe assumption that "software" means PC software (Windows, Mac, Linux, whatever), not software on chips. So I'm not sure we're disagreeing; we're just commenting on different things.

    "Hardware scaling is always better than Software scaling" is a pretty ignorant claim.
    I never claimed that (at least not any time recently). There's both good and bad in hardware and in software.
  24. Surely the main advantage of hardware scaling is that it will probably improve as you upgrade your equipment over time, whereas once you've done your software upscale it is basically fixed, unless you keep the originals and periodically redo the upscales to whatever the new standard is? Imagine how annoyed you'd be if you upscaled all your favourite vids from SD to HD, only to then upgrade to 4K, and then maybe to 8K/16K a few years later . . .
    "Well, my days of not taking you seriously are certainly coming to a middle." - Captain Malcolm Reynolds
  25. Originally Posted by pandy View Post
    Originally Posted by poisondeathray View Post
    It's funny; I had access to Alchemist and other solutions back then and posted my findings at Doom9. Severely overrated.
    I'm not sure how accurate VMAF is, but it more or less confirms your observations.

    https://jonfryd.medium.com/comparing-video-upscaling-methods-on-stock-footage-78fc81914567

    I remember that article.

    I don't put too much weight on VMAF 1.x. There are several issues, but a big one is that you can show that sharpening a picture scores higher, i.e. oversharpening halos score higher. That has been partially addressed in VMAF 2.x, but VMAF in general is very much a work-in-progress metric.

    Other metrics aren't necessarily great either, especially when you include neural-network / deep-learning scalers. They add and alter details on purpose, and sometimes the "goal" isn't similarity to the source used for downscaling, but rather something perceptually "better". That is difficult for a metric to measure accurately.
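    For anyone who wants to run this kind of comparison themselves, a rough sketch of scoring an upscale against a reference with FFmpeg's libvmaf filter, driven from Python (this assumes an FFmpeg build with libvmaf enabled; the file names are placeholders, and as said above, treat the numbers with caution):

    Code:
    # Rough sketch: score a distorted/upscaled clip against a reference with FFmpeg's libvmaf filter.
    import subprocess

    distorted = "upscaled.mp4"      # hypothetical upscaled clip (first input = distorted/"main")
    reference = "reference.mp4"     # hypothetical reference at the same resolution and frame rate

    cmd = [
        "ffmpeg", "-i", distorted, "-i", reference,
        "-lavfi", "libvmaf=log_fmt=json:log_path=vmaf.json",
        "-f", "null", "-",
    ]
    subprocess.run(cmd, check=True)  # the aggregate VMAF score is printed by ffmpeg and written to vmaf.json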


    But you don't need VMAF when you have eyes. You can download the zip package with the upscales and compare them against each other. (The source isn't provided, but if you want to pay Shutterstock, you can test against the source.)

    I was going to dig up some old Alchemist tests (from the station, not watermarked like these), but you can see right away how "bad" some of the scalers are, including Alchemist. It would be a complete waste of time.

    Of interest is the 2x set, because it includes a nearest-neighbor upscale, which can be perfectly reversed. Then you can test other scaling methods against a known original.
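    That reversal is trivial to verify in a couple of lines (NumPy assumed; the array is just a random stand-in for a decoded frame):

    Code:
    # A 2x nearest-neighbor upscale only duplicates pixels, so taking every second sample undoes it exactly.
    import numpy as np

    src = np.random.randint(0, 256, (288, 352, 3), dtype=np.uint8)   # hypothetical small frame
    up2x = src.repeat(2, axis=0).repeat(2, axis=1)                   # nearest-neighbor 2x upscale
    restored = up2x[::2, ::2]                                        # drop the duplicates

    print(np.array_equal(src, restored))                             # True: lossless round trip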

    That article was published in 2019. I'm not sure whether Pixop has been improved or trained further since, but in that snapshot Pixop has issues with oversharpening (and, not surprisingly, boosted VMAF 1.x scores) and some aliasing (yes, it has less aliasing than the scalers compared in that test, but not compared to more modern ones). You can get significantly better results with modern machine-learning scalers, without all that haloing, aliasing and fuzzy lines.


    Originally Posted by TimA-C View Post
    Surely the main advantage of hardware scaling is that it will probably improve as you upgrade your equipment over time, whereas once you've done your software upscale it is basically fixed, unless you keep the originals and periodically redo the upscales to whatever the new standard is? Imagine how annoyed you'd be if you upscaled all your favourite vids from SD to HD, only to then upgrade to 4K, and then maybe to 8K/16K a few years later . . .

    For that usage scenario, definitely. Also factor in how long proper pre/post-processing and machine-learning scalers take if you're doing it on a home PC or similar, and the electricity cost.

    But you should be keeping the originals anyway, regardless.

    Scaling is one area where there is a lot of active R&D, with significant tangible improvements over the last few years, and it will definitely improve further in the future.

    If you have a good hardware scaler, it might not be worth doing anything. But typical hardware does not perform scaling as well as expensive, higher-end equipment.


