VideoHelp Forum




  1. Hi, I was just wondering: would my 480p videos look better upscaled to 4K or 1080p, or would they look better left at 480p on a 4K TV? I've been using AVCLabs video software.
  2. You have to test it, compare, and find out.

    It depends on the specific video (some types of video scale better with some algorithms), on that specific TV's scaler (up or down), and on AVCLabs' scaler.

    Some TVs have excellent scalers with "AI" chips. Some TVs have terrible scaling, and using another method might be better.
  3. dellsam34
    For viewing on a TV (not uploading to YouTube), anything beyond 1080 will be simple line doubling, but I could be wrong. I also think the higher the resolution you upscale to, the smaller the mathematical rounding errors. The key is to choose the best upscaling algorithm; there are some out there, but I wouldn't know better than poisondeathray. It's worth experimenting and seeing for yourself. I've owned a lot of TVs, from LCD and LED to now OLED, and none of them gave good results for 480i/p material; they all sucked. It may have to do with real-time processing on TVs vs. a few frames per second in software.
  4. Banned
    YouTube does not support 480p60 or 576p50; it will show them at 30 or 25 fps, respectively. For 50/60 fps you need to upscale to at least 720p. Whether it is worth upscaling even more is a matter of opinion. I know several YouTube channels that upload SD video in high quality upscaled to 1080p.
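    For illustration, a minimal AviSynth sketch of taking a hypothetical 480p60 clip to 720p so YouTube keeps the 50/60 fps stream (the filename is a placeholder, and LWLibavVideoSource assumes the L-SMASH Works source plugin; any source filter works):
    Code:
    # Hypothetical 59.94 fps progressive SD clip
    LWLibavVideoSource("clip_480p60.mp4")
    # Resizing does not touch the frame rate, so the result is 1280x720 at 59.94 fps
    Spline36Resize(1280, 720)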
  5. I found that a 480p file with a healthy bitrate, upscaled by the TV, looked the same as upscaling the video file beforehand. The only difference was a much bigger file size.
  6. As I have always said, upscaling is only good for production, when you need to fit the footage into a project and don't want black bars. For archival purposes, such as your video library, it is not recommended or future-proof. As on-the-fly upscaling hardware improves, it's best to let the hardware do it. Also, you always lose quality when upscaling; it's impossible not to (hardware or software). Despite what anyone thinks or says, video always looks best at native resolution. A bigger up-res uses optical tricks and other techniques that make the image seem better, but not in reality. So I leave all my files at native resolution, unless I am creating a video project that requires the footage to be upscaled.
  7. dellsam34
    Hardware upscaling is not improving; I've seen upscaling artifacts on $3k OLED panels. No one cares about SD anymore and it will be left behind in the dark. Software upscaling and de-interlacing at slow speed is the way to go. Therefore future-proofing is necessary, but only up to 1080 (a.k.a. 2K); anything beyond that is just a matter of line doubling for 4K, 8K, 16K... displays. So the most lossy step is getting the footage into HD.
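    A minimal AviSynth sketch of the kind of slow software workflow meant here, for a hypothetical interlaced 4:3 NTSC capture taken to 1440x1080 (filename is a placeholder; assumes the QTGMC script and the NNEDI3 plugin with the nnedi3_rpow2 wrapper are installed):
    Code:
    # Hypothetical 720x480 interlaced capture, top field first
    LWLibavVideoSource("capture_480i.avi")
    AssumeTFF()
    # Slow, high-quality software deinterlace
    QTGMC(Preset="Slower")
    # Edge-directed 2x upscale, then resize to an exact 4:3 HD frame
    nnedi3_rpow2(rfactor=2, cshift="Spline36Resize", fwidth=1440, fheight=1080)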
  8. Member
    Originally Posted by CarnageKing1337 View Post
    Hi, I was just wondering: would my 480p videos look better upscaled to 4K or 1080p, or would they look better left at 480p on a 4K TV? I've been using AVCLabs video software.
    Leave it native, or at least keep the original. Someday there will probably be smarter AI upscalers than today, and the results will be better than with current tools.
  9. Originally Posted by dellsam34 View Post
    Hardware upscaling is not improving; I've seen upscaling artifacts on $3k OLED panels. No one cares about SD anymore and it will be left behind in the dark. Software upscaling and de-interlacing at slow speed is the way to go. Therefore future-proofing is necessary, but only up to 1080 (a.k.a. 2K); anything beyond that is just a matter of line doubling for 4K, 8K, 16K... displays. So the most lossy step is getting the footage into HD.
    I disagree. What are you comparing a hardware upscale to? A 4K OLED vs. a 1080 LED? You can't compare apples and oranges. 480 to 1080 is going to lose less quality than 480 to 2160. You need to compare software upscaling to hardware over time.

    So a better test to prove you wrong is to upscale 480 to 1080 using old software (surely you can find some), and see how the latest 1080 TV (if they even make them anymore) upscales it.

    Or better yet, if you happen to have a video where you kept the native version and an upscale from 10-15 years ago, compare them today. My iPhone would blow it away.

    Software upscaling is only good for production, not for archival.
  10. dellsam34
    Why bring a 1080 LED into this? We are comparing modern hardware vs. modern software. Hardware does it in real time, software does it at its own pace, hence software is better. This is from personal experience, not from a Google search.


    480 to 1080 is going to lose less quality than 480 to 2160. You need to compare software upscaling to hardware over time.
    WTF does this even mean? Even if there is a difference between the two approaches, 480 to 2160 is better because the rounding errors will be smaller. Besides, where did I mention upscaling from 480 to 2160 in my previous post?
  11. Originally Posted by dellsam34 View Post
    Why bring a 1080 LED into this? We are comparing modern hardware vs. modern software. Hardware does it in real time, software does it at its own pace, hence software is better. This is from personal experience, not from a Google search.
    Sure, I guess, if you put it that way, though it will vary with the hardware running the software upscale (i.e. not all computers are equal).

    480 to 1080 is going to lose less quality than 480 to 2160. You need to compare software upscaling to hardware over time.
    WTF does this even mean? Even if there is a difference between the two approaches, 480 to 2160 is better because the rounding errors will be smaller. Besides, where did I mention upscaling from 480 to 2160 in my previous post?
    Not true at all. You always lose quality by upscaling (adding a bunch of fake pixels); you can't make pixels out of thin air, can you? The more you upscale, the more quality is lost as the original keeps getting buried. This is why when you keep zooming in on a single pixel it progressively gets worse, never better.

    I was just talking about being future-proof, nothing else. Go ahead and upscale all your videos and throw away the originals. Keep doing that every time a new standard comes out.

    Upscale 480 to 720. Take that 720 and upscale to 1080. Take that 1080 and upscale to 2160. Take that 4K and upscale to 8K. You will keep making it worse and worse, as opposed to leaving it native and letting the 8K TV upscale it. Plus the video isn't permanently baked in with FAKE pixels and all the other processing you would have to add.
  12. Originally Posted by TubeBar View Post
    You always lose quality by upscaling (adding a bunch of fake pixels); you can't make pixels out of thin air, can you? The more you upscale, the more quality is lost as the original keeps getting buried.
    It is 100% true that you bury the original by upscaling. No doubt. So every upscale, with or without A.I., will be lossy relative to the original. That is also true for each and every other transformation.
    But:
    The picture can sometimes LOOK BETTER in spite of this fact. "Better" means in this case that you really can see more details. Often this works with a simple unsharp-mask algorithm, whenever the original picture already "contains" details that are just HARD TO SEE. So you make more visible what is already there; you just adapt it better to human viewing. But I would always do this with only slight sharpening.
    Concerning upscaling... I would not do it; in my opinion it NEVER makes any sense. I would just sharpen, remove dirt if necessary, correct colors, and a few other things. Upscaling does not make any sense to me. It's just the modern thing to do...

    And: for archival purposes, ALWAYS keep the original.
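    As an illustration of that kind of gentle, upscale-free cleanup, a minimal AviSynth sketch (filename is a placeholder; Sharpen() is a built-in filter and the strength is only an example):
    Code:
    # Hypothetical progressive SD source
    LWLibavVideoSource("clip_480p.mp4")
    # Mild sharpening to make existing detail easier to see; no resizing at all
    Sharpen(0.3)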
  13. lollo
    Go ahead and upscale all your videos and throw away the originals.
    TubeBar, dellsam34 will never do that; he knows his stuff.

    We all keep the original untouched raw capture as master.

    For distribution/watching, we choose to upscale in software or leave the operation to the player/TV, depending on which does it better (same as with deinterlacing).

    For YouTube upload we always upscale to reduce losses introduced by its compression.

    Upscaling techniques differ according to the source, and for best results intermediate steps (with light sharpening) are sometimes required.
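    A minimal AviSynth sketch of such a staged upscale for a YouTube upload (filename and strengths are placeholders; assumes the NNEDI3 plugin and the nnedi3_rpow2 wrapper):
    Code:
    # Hypothetical progressive 4:3 SD source
    LWLibavVideoSource("capture_480p.mp4")
    # First 2x step with an edge-directed upscaler
    nnedi3_rpow2(rfactor=2, cshift="Spline36Resize")
    # Light sharpening between steps
    Sharpen(0.2)
    # Final resize to a 4:3 HD frame for upload
    Spline36Resize(1440, 1080)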
  14. Yes, archive the original. Always. There will be better algorithms in the future.

    Hardware is improving: every few years a new "AI" chip (or revision) is released, definitely better each generation. The next-gen "AI" scaling chips are being trained/prototyped in software right now, in labs, with massive GPU farms assisting.

    For the OP: test which is better. There are cases where software is better, and cases where some TVs are better. It's source specific, scenario specific, and upscaling-algorithm specific; that's the final answer. End of story.



    In general -

    Low-quality video tends not to upscale as well. There are additional artifact issues that need to be addressed. TVs obviously don't do this very well; they are too generic to handle specific problems. They don't have an array of filters to "fix" things.

    Many types of cartoons and anime scale well. There are a bazillion types of models for specific types of content and source problems (dot crawl, halos, rainbows, etc.). It's nearly impossible to find a TV that scales better than some combination of software processing (especially with software machine-learning models), because TVs are generic and cannot handle specific problems.



    Originally Posted by TubeBar View Post
    You always lose quality by upscaling (adding a bunch of fake pixels); you can't make pixels out of thin air, can you? The more you upscale, the more quality is lost as the original keeps getting buried. This is why when you keep zooming in on a single pixel it progressively gets worse, never better.
    Not "always."

    An exception is that you can upscale losslessly by using a nearest-neighbor algorithm at powers of 2 (in the absence of lossy compression). It just duplicates pixels, and you can discard the pixels you just duplicated. The upscale will look like crap, but it's lossless and reversible.
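    A minimal AviSynth sketch of that reversible round trip, using the built-in nearest-neighbor resizer (filename is a placeholder):
    Code:
    # Hypothetical progressive SD source
    LWLibavVideoSource("clip_480p.mp4")
    # 2x nearest-neighbor upscale: every pixel is simply duplicated
    PointResize(Width()*2, Height()*2)
    # Dropping the duplicates gives back the original pixels exactly,
    # as long as nothing lossy happened in between
    PointResize(Width()/2, Height()/2)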


    Originally Posted by Quint View Post
    It is 100% true that you bury the original by upscaling. No doubt. So every upscale, with or without A.I., will be lossy relative to the original. That is also true for each and every other transformation.
    But the goal isn't about the immediate original anymore. Who cares? People want something better than the original these days.

    These days, with machine learning, it's about the Ground Truth (GT): the original original, i.e. the version before the original. The master. This is what a lot of active academic research and corporate computer-vision R&D goes into.

    You can, in some cases, improve the current "original" by also using data from adjacent frames: "temporal super resolution". You are combining data using alignment and forward/backward propagation. E.g. if an object is clearer in other frames, you can combine them and increase the resolution. It's analogous to image stacking in Photoshop, which has been a proven technique for 20 years for increasing resolution and decreasing noise (increasing PSNR) in images, but now for video. Ideally you are generating something better than the current "original". So you can measure PSNR, SSIM, VMAF, whatever metric, or run subjective evaluations, because you have the GT to measure against. If you get a higher dB than the "original", well, by that metric it's "better" than the original, because it's closer to the GT (the original original).
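    The "align the neighbouring frames, then combine them" idea also exists in a much simpler, non-"AI" form in classic motion-compensated filtering. A minimal AviSynth sketch with the MVTools plugin (filename is a placeholder; this only reduces noise by averaging aligned neighbours, it is not a super-resolution model):
    Code:
    # Hypothetical progressive SD source
    LWLibavVideoSource("clip_480p.mp4")
    super = MSuper()
    # Motion vectors to the previous and next frame (the "alignment" step)
    bv = MAnalyse(super, isb=true,  delta=1)
    fv = MAnalyse(super, isb=false, delta=1)
    # Combine the motion-compensated neighbours with the current frame
    MDegrain1(super, bv, fv)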

    In some cases software is significantly better: measurable lines of resolution are higher on test patterns, which is the gold-standard test for broadcast engineers. But those tests are flawed these days, because "AI" can be trained to perform great on a synthetic test yet not be applicable or generalizable to generic content; i.e. so what if resolution is increased on a chart? Charts no longer necessarily have high positive predictive value for some "random" video. The rules have changed, and test charts have less value. I've posted examples of test charts before, and of the increase in measurable resolution using machine learning.

    In some textbook/ideal cases you can read signs more clearly, or a license plate is now clear. I've posted several reproducible examples before, inferencing on a different test set than the training set. No temporal aliasing or flicker; not even NNEDI3 derivatives can claim that. No TV can produce results even remotely close to the demos I posted for BasicVSR++. It's way too hardware-intensive for the current generation of AI chip silicon in TVs (TVs need real-time processing, a serious limitation); it won't happen for at least a few years, but the technology is on the horizon. PSNR/SSIM/VMAF etc. are much higher than the "original". Objectively better. 10/10 people will say it's subjectively better too.
  15. Yes, yes...
    You read past my "But:"...
    But thank you for your detailed explanations; some things I didn't know. I've been out of the business too long.

    In spite of this, I keep saying: newly generated pixels, with or without A.I., are still generated, and in many cases will not improve low-resolution sources.
    Exception: you mentioned the adjacent-frames method. That is of course a source of more real detail, if you refer to the most original original: the reality that was shot.

    A.I.-generated pixels ultimately come from deep knowledge of other clips. They might often "work", or they might look awful.
    You mentioned license plates. I say: those can be made visible exactly when you could also make them visible without A.I.!
    If not, that means there is not enough information and you cannot reconstruct it, not with A.I., and not in the future either, because this is UNIQUE information. Analogously, you can sometimes reconstruct faces quite well, and sometimes they will look like monsters; it's a matter of uniqueness. A.I. can't do magic.
  16. Originally Posted by Quint View Post

    In spite of this, I keep saying: newly generated pixels, with or without A.I., are still generated, and in many cases will not improve low-resolution sources.
    Yes, in general low-resolution, low-quality and/or noisy sources will not improve much.

    But you can still get some improvement. It depends on what you are comparing it against, and the source, and the algorithms being used.

    "Upscaling" is more than just simple resampling of pixels. A bunch of other processes go into "upscaling" - antialiasing, antiringing to name a few. TV sets often have other things you can disable like local contrast enhancement. And "AI" chips fill in missing pixels (but it doesn't always work so well, it depends on the source as well)

    If you have a large upscale without antialiasing, it's going to flicker in motion, which is very annoying; much worse than a TV that scales with antialiasing. It's similar to deinterlacing algorithms: some TVs do a good job (motion adaptive, very clean and "QTGMC-like"), but some do a terrible job.

    There is a large gap between high-end and low-end TV sets in terms of processing (what do you think you're paying for?). The lowest-end set (maybe some $100 HDTV from Walmart) does almost no processing. Scaling is very close to a cubic resample. Very flickery.


    A.I.-generated pixels ultimately come from deep knowledge of other clips. They might often "work", or they might look awful.
    Yes, hit or miss. Many of the artifacts you see on high-end TVs come from "AI" misinterpolation. But overall, across multiple types of content, the high-end sets are very good, and the artifacts are very rarely distractingly bad or unwatchable.

    You mentioned license plates. I say: those can be made visible exactly when you could also make them visible without A.I.!

    If not, that means there is not enough information and you cannot reconstruct it, not with A.I., and not in the future either, because this is UNIQUE information.
    Yes, and not necessarily "AI": there are traditional deconvolution methods that can make a very blurry image suddenly clear. There are many examples of deblurring out there (some of them machine learning) that don't even use temporal analysis. But none of those single-image methods have clean motion on video; they all flicker. Lack of temporal consistency: they perform well on single-image metrics but get penalized on VMAF (which takes motion into account). This is the main distinction between a "video" algorithm and a single-image algorithm.

    But if you can improve, with any method ("AI" or not), over what you have right now, it's still an improvement. That's the bottom line, and the point many people seem to forget. It does not necessarily have anything to do with "AI". If you can get a better result, a better PSNR, a better "whatever", compared to what you get with your TV set now, isn't it worth doing? Or at least considering?

    Machine learning offers improvements over, and complements, traditional methods in many tasks: SR, denoising, deblurring. 100% of high-end TVs have "AI" chips; there is a reason for that (and it's not just marketing). They are far from perfect, but they have matured to the point where they're good enough to sell. And it only gets better each generation.
    Last edited by poisondeathray; 22nd Jul 2023 at 15:14.
  17. dellsam34
    Originally Posted by TubeBar View Post
    Not true at all. You always lose quality by upscaling (adding a bunch of fake pixels); you can't make pixels out of thin air, can you? The more you upscale, the more quality is lost as the original keeps getting buried. This is why when you keep zooming in on a single pixel it progressively gets worse, never better.
    Show me where I said upscaling is lossless. Besides, do you have a 480 monitor? I sure don't, and most of us don't, so upscaling is happening whether you want it or not. My point was: when upscaling is needed, software upscaling is always better than hardware upscaling. Software keeps improving and does not cost much; hardware costs money, so it's not worth improving since no one cares about 480/576 anymore.


    Originally Posted by TubeBar View Post
    Upscale 480 to 720. Take that 720 and upscale to 1080. Take that 1080 and upscale to 2160. Take that 4K and upscale to 8K. You will keep making it worse and worse, as opposed to leaving it native and letting the 8K TV upscale it. Plus the video isn't permanently baked in with FAKE pixels and all the other processing you would have to add.
    This is the most idiotic idea that I've ever read. Keep circling around spewing nonsense.
  18. Originally Posted by poisondeathray View Post
    But you can still get some improvement. It depends on what you are comparing it against, and the source, and the algorithms being used.
    Yes. But not because of the "upscaling"; because of good deinterlacing/antialiasing and so on BEFORE the "upscaling".

    "Upscaling" is more than just simple resampling of pixels. A bunch of other processes go into "upscaling" - antialiasing, antiringing to name a few. TV sets often have other things you can disable like local contrast enhancement.
    A question of terminology. I think you are mixing several things up. I use the term "upscaling" purely for the resampling to a higher resolution.

    The rest of what you wrote: Again very interesting to learn. Thanks.

    But if you can improve, with any method ("AI" or not), over what you have right now, it's still an improvement. That's the bottom line, and the point many people seem to forget. It does not necessarily have anything to do with "AI". If you can get a better result, a better PSNR, a better "whatever", compared to what you get with your TV set now, isn't it worth doing? Or at least considering?
    That's where I agree with you! And it's an improvement exactly when the human eye sees more detail and less disturbing noise than before.
    I only doubt that this improvement has anything to do with "upscaling".
    That's what I wrote above.
  19. NNEDI rpow2 upscaling makes sense, as it also serves as an anti-aliaser. Everything else is a line doubler.
  20. Yes, yes... So you get a doubled line with an antialiased edge. And? Where are the new details?
  21. Originally Posted by Quint View Post
    Originally Posted by poisondeathray View Post
    But you can still get some improvement. It depends on what you are comparing it against, and the source, and the algorithms being used.
    Yes. But not because of the "upscaling"; because of good deinterlacing/antialiasing and so on BEFORE the "upscaling".
    Not only "before" . If you antialias before, then upscale, you get aliasing, jaggy edges . Also during and after

    Also, many processes cannot be separated from the upscaling algorithm. They are part of the algorithm.


    "Upscaling" is more than just simple resampling of pixels. A bunch of other processes go into "upscaling" - antialiasing, antiringing to name a few. TV sets often have other things you can disable like local contrast enhancement.
    A question of terminology. I think you are mixing several things up. I use the term "upscaling" purely for the resampling to a higher resolution.
    See above - "upscaling" is no longer a simple math equation like bilinear, bicubic, lanczos, or spline, etc....That might have been valid 20 years ago. Upscaling these days involves many processes, and often you cannot separate out individual steps.

    For example, if you wanted to upscale with "model x" but disable the deblurring involved, you cannot. You'd have to train another model without deblurring.


    That's where I agree with you! And it's an improvement exactly when the human eye sees more detail and less disturbing noise than before.
    I only doubt that this improvement has anything to do with "upscaling".
    That's what I wrote above.
    Sure, see above. The problem is you cannot separate out the processes, or not very easily. So effectively, it has everything to do with "upscaling".

    E.g. if you use an anime model that handles dot crawl, rainbows, and minor noise, and it happens to be a 2x model, you cannot "upscale" only, without the other effects.

    That's what machine learning models do. They are essentially a collection of filters, a decision tree, that has been "baked" into finite decisions. It's the same with an "AI" TV chip: the processes it uses for upscaling, edge enhancement, deblurring, etc. cannot be separated out. They're part of the neural net. They're part of the upscaling routine.
  22. Originally Posted by rrats View Post
    NNEDI rpow2 upscaling makes sense, as it also serves as an anti-aliaser. Everything else is a line doubler.
    What is "everything else" ?

    How are you defining "line doubler" ?


    NNEDI3_rpow2 tends not to do well at 4x or higher scales: too much temporal aliasing, coarse lines, and edge artifacts. Spatial anti-aliasing might look OK on a still frame, but in motion there are temporal inconsistencies and flicker. NNEDI3_rpow2 is still better than, say, a plain Lanczos 3- or 4-tap (also spatial only).

    NNEDI3 was good 10 years ago, before "AI" chips came out that do it faster and typically better on higher-end TVs. NNEDI3 is "neural net edge directed interpolation", and there is a weights model, but it's an old and small neural net; you can't expect too much compared to newer neural nets.
  23. Originally Posted by poisondeathray View Post
    Originally Posted by Quint View Post
    Originally Posted by poisondeathray View Post
    But you can still get some improvement. It depends on what you are comparing it against, and the source, and the algorithms being used.
    Yes. But not because of the "upscaling"; because of good deinterlacing/antialiasing and so on BEFORE the "upscaling".
    Not only "before". If you antialias before, then upscale, you get aliasing and jaggy edges. Also during and after.
    Also, many processes cannot be separated from the upscaling algorithm. They are part of the algorithm.
    As I said: it's a question of terminology. You are right in what you say, and I understand your point. But I would rather use the term "improving procedure incl. upscaling"... whatever you do inside. You claim that, for example, anti-aliasing gives "better" results when done while upscaling. I still doubt this. You will get the same or worse results doing anti-aliasing on an upscaled source (or WHILE upscaling) than just using the original, provided the algorithm is optimised for the relevant situation. The mixing brings no benefit, A.I. or not. It's just built that way, and you can't separate the individual steps any more once you use A.I. and have created your model.

    See above - "upscaling" is no longer a simple math equation like bilinear, bicubic, lanczos, or spline, etc....That might have been valid 20 years ago. Upscaling these days involves many processes, and often you cannot separate out individual steps.
    As I said: just terminology. If you like to use the term like that, that's your decision, and of course it's the terminology of marketing. But it's less precise.
  24. dellsam34
    Most likely he means anything up to 1080 is NNEDI, and beyond 1080 it's a line doubler, since future resolutions will be integer multiples of 1080 (2K). But the inherent problem of SD is not just resolution, it's interlacing. De-interlacing is the biggest lossy step in the process. Those who work with analog tapes know what I mean, especially when you look at the date/time stamp of camcorder footage at the bottom right of the frame before and after de-interlacing.

    I have yet to see an experiment comparing SD upscaled to HD vs. directly from SD to 4K/UHD; I would love to see a member here with script skills do that comparison for the community. I know NTSC 480 upscales nicely to 1440 vertically, but there are no native panels that can display 1440 vertically, so a hardware upscale at the panel level is unavoidable.
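    A rough AviSynth sketch of how such a comparison could be set up (filename, resizer, and viewing method are placeholders; a real test would substitute whatever upscaler is actually being evaluated):
    Code:
    # Hypothetical progressive 4:3 SD source
    src = LWLibavVideoSource("capture_480p.mp4")
    # Path A: SD -> 1080 in software, then a dumb doubling to UHD (stand-in for the panel)
    hd  = src.Spline36Resize(1440, 1080)
    # Path B: SD -> UHD directly in software
    uhd = src.Spline36Resize(2880, 2160)
    # Alternate frames at the same size for A/B viewing
    Interleave(hd.PointResize(2880, 2160), uhd)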
  25. Originally Posted by Quint View Post
    You claim that, for example, anti-aliasing gives "better" results when done while upscaling.
    I still doubt this. You will get the same or worse results doing anti-aliasing on an upscaled source (or WHILE upscaling) than just using the original.
    To clarify: during, if the antialiasing is part of the upscaling algorithm. An example is NNEDI3_rpow2: the antialiasing is applied along with the upscaling (the resampling that increases pixel dimensions).

    Or after, to fix the aliasing caused by the upscaling. E.g. you upscale using Lanczos3 and it produces aliasing; you then apply an antialiasing filter to improve the "jaggies".

    Definitely not before, because if the upscaling is what produced the aliasing, you need to apply the antialiasing filter(s) afterwards. "Before" only applies if the source had aliasing in the first place, before upscaling.
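    For illustration, a minimal AviSynth sketch of the two cases (filename and filter choices are placeholders; Santiag is just one example of an anti-aliasing script, and NNEDI3 is assumed to be installed):
    Code:
    # Hypothetical progressive SD source
    LWLibavVideoSource("clip_480p.mp4")

    # "During": antialiasing is built into the edge-directed upscale
    # nnedi3_rpow2(rfactor=2, cshift="Spline36Resize")

    # "After": plain resampler first, then an AA filter to clean up the jaggies it created
    LanczosResize(Width()*2, Height()*2, taps=3)
    Santiag()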



    See above - "upscaling" is no longer a simple math equation like bilinear, bicubic, lanczos, or spline, etc....That might have been valid 20 years ago. Upscaling these days involves many processes, and often you cannot separate out individual steps.
    As I said: Just terminology. If you like to use this term like that, your decision, and of course the terminology of marketing. But it's less precise.

    But I would rather use the term "Improving procedure incl. upscaling"
    Yes, it's less precise, but it's implied in this discussion. "Improving procedure incl. upscaling" is too wordy and yet still vague.

    That's what a modern TV does: "Improving procedure incl. upscaling"

    When the OP uses AVCLabs' software to "upscale", that is also "improving procedure incl. upscaling"; this is what he is deciding on, and that's what this thread is about when the term "upscaling" is used.
  26. Originally Posted by dellsam34 View Post
    But the inherent problem of SD is not just resolution, it's interlacing. De-interlacing is the biggest lossy step in the process.
    Sure, but the OP is dealing with "480p", so why would he deinterlace?

    Another way to think of interlacing (during motion) is that it's half the spatial resolution, so I would say resolution is also a big part. But the even/odd scanlines are not aligned, and that creates additional issues.

    I have yet to see an experiment comparing SD upscaled to HD vs. directly from SD to 4K/UHD; I would love to see a member here with script skills do that comparison for the community. I know NTSC 480 upscales nicely to 1440 vertically, but there are no native panels that can display 1440 vertically, so a hardware upscale at the panel level is unavoidable.
    If I read between the lines, you're implying that with an SD source there is little to no advantage to going to UHD rather than HD.

    I mostly agree, but the other part of the equation is that it depends on how the 1920x1080 (or 1440x1080 for 4:3, which can be pillarboxed to 1920x1080) is then upscaled to UHD. Is the TV doing a good job or not? Is it creating aliasing artifacts (spatial and temporal)? Or other problems? You can get "AI" scaling issues from TV sets too.

    You're generally not going to get real effective resolution (as in measurable lines of detail) beyond 1920x1080 (or 1440x1080 for 4:3) from an SD source. Not even close. Basically, an SD source can only contain so much information. In real-world situations with low-quality source video and compression artifacts: 100% definitely not.
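    As a small aside, the pillarboxing mentioned above is trivial in AviSynth (filename is a placeholder):
    Code:
    # Hypothetical 4:3 SD source upscaled to 1440x1080,
    # then padded with black bars to a full 1920x1080 frame
    LWLibavVideoSource("capture_480p.mp4")
    Spline36Resize(1440, 1080)
    AddBorders(240, 0, 240, 0)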
  27. Originally Posted by poisondeathray View Post
    Or after, to fix the aliasing caused by the upscaling. E.g. you upscale using Lanczos3 and it produces aliasing; you then apply an antialiasing filter to improve the "jaggies".

    Definitely not before, because if the upscaling is what produced the aliasing, you need to apply the antialiasing filter(s) afterwards. "Before" only applies if the source had aliasing in the first place, before upscaling.
    So you're speaking of an SD source that has NO problem with jaggy edges? I thought we were talking about the typical pulldowned-then-slightly-compressed-then-IVTCed NTSC source, where aliasing is baked in before you touch it...
    You're speaking of a non-aliased source, where this whole anti-aliasing is only necessary to fix what the upscaling caused? THEN I would prefer the original source even more...

    Yes, it's less precise, but it's implied in this discussion. "Improving procedure incl. upscaling" is too wordy and yet still vague.

    That's what a modern TV does: "Improving procedure incl. upscaling"

    When the OP uses AVCLabs' software to "upscale", that is also "improving procedure incl. upscaling"; this is what he is deciding on, and that's what this thread is about when the term "upscaling" is used.
    Yes, and that's a pity... because the real upscaling part does nothing...

    Don't take it badly, please. It's just fun to talk about this.
  28. Originally Posted by Quint View Post
    Originally Posted by poisondeathray View Post
    Or after, to fix the aliasing caused by the upscaling. E.g. you upscale using Lanczos3 and it produces aliasing; you then apply an antialiasing filter to improve the "jaggies".

    Definitely not before, because if the upscaling is what produced the aliasing, you need to apply the antialiasing filter(s) afterwards. "Before" only applies if the source had aliasing in the first place, before upscaling.
    So you're speaking of an SD source that has NO problem with jaggy edges? I thought we were talking about the typical pulldowned-then-slightly-compressed-then-IVTCed NTSC source, where aliasing is baked in before you touch it...
    Well, all the bases are covered. It's pretty common sense. Nothing tricky.

    If you have aliasing to begin with, filter it before (obvious)

    If the upscaling is the cause of the aliasing (it is almost always the cause, or it almost always contributes additional aliasing, unless you have vector scaling), you need to filter during or after.

    When you upscale by large factors (4x or more), you're going to create massive aliasing with most approaches, both spatial and temporal. That's why all modern upscaling involves an "improving procedure incl. upscaling". The buzzing, fuzzy edges are very annoying, especially in motion.


    You're speaking of a non-aliased source, where this whole anti-aliasing is only necessary to fix what the upscaling caused? THEN I would prefer the original source even more...
    How would you "watch" the original non-aliased source?

    So when a certain TV upscales and causes aliasing, you would prefer that over a proper upscale without the added aliasing?
  29. dellsam34
    Originally Posted by poisondeathray View Post
    Sure, but the OP is dealing with "480p", so why would he deinterlace?
    Yes, but most 480p content came from interlaced video tapes (analog and digital alike), with the exception of film content that was scanned directly from film into a progressive format, not into an interlaced format such as Betacam via SD telecine machines. I've seen badly de-interlaced content from hardware processing throughout the years of SD broadcasting, when we moved to digital TV and DVD. My point was that most losses occur during that de-interlacing step, which further proves my point that computer processing is better than hardware processing.
  30. Originally Posted by dellsam34 View Post
    Originally Posted by poisondeathray View Post
    Sure, but the OP is dealing with "480p", so why would he deinterlace?
    Yes, but most 480p content came from interlaced video tapes (analog and digital alike), with the exception of film content that was scanned directly from film into a progressive format, not into an interlaced format such as Betacam via SD telecine machines.
    When someone says "480p" in 2023, a DVD source, YouTube, or some SD stream would be a more common guess. "Interlaced video tape" would be near the very bottom of my list.

    I've seen badly de-interlaced content from hardware processing throughout the years of SD broadcasting, when we moved to digital TV and DVD. My point was that most losses occur during that de-interlacing step, which further proves my point that computer processing is better than hardware processing.
    I agree bad deinterlacing causes many issues, in the past and now, but you're making many assumptions about what the OP's source was.

    I don't see how those assumptions and observations "prove" that "computer processing is better than hardware processing". Where is the "proof"? I don't see the connection. "Computer processing" can be VERY bad too... anyone who reads a few threads on video forums will learn that quickly.

    But I can make educated guesses about what you're referring to: QTGMC for deinterlacing is very good; that is your "computer processing" example (QTGMC has some issues, but it's still the best overall on most sources).

    The high-end TV processing chips have very good deinterlacing too; I daresay they are getting very close, and may even surpass it soon (while low-end displays are still terrible, similar to a bob deinterlace). Deinterlacing does not get high priority for development, because nobody cares: 99.99% of video is progressive now. Think of all the cell phones, GoPros, point-and-shoot cameras, DSLRs; those are the majority of consumer video. Consider that TempGaussMC_beta2 (QTGMC's precursor) was released around 2008. That's a long run for open-source software.
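    For anyone who wants to see that gap for themselves, a minimal AviSynth sketch comparing a crude bob against QTGMC on the same clip (filename is a placeholder; assumes QTGMC and its dependencies are installed):
    Code:
    # Hypothetical interlaced SD capture, top field first
    src = LWLibavVideoSource("capture_480i.avi").AssumeTFF()
    # Crude bob deinterlace, roughly what a low-end display does
    cheap = src.Bob()
    # Slow, high-quality software deinterlace
    good = src.QTGMC(Preset="Slower")
    # Alternate frames for A/B comparison; both are double rate
    Interleave(cheap, good)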

    Theoretically, computer processing for *any* video processing task will always be potentially "better", because of the limitations of fixed hardware. Sure, you can get some firmware revisions and minor improvements for a chip, but a software setup potentially has more resources and doesn't have a real-time deadline. A software solution can be customized to a specific situation. Besides, all hardware is prototyped in software first anyway, before committing to silicon, so hardware is always going to be one or more generations behind state-of-the-art developments. But you, me, and Joe Public do not necessarily have access to that state-of-the-art computer processing; e.g. I don't have access to Sony's or Samsung's training or models for their next-gen chips. The only public access to their state-of-the-art processing is to buy their next TV.


