VideoHelp Forum

  1. Member ziggy1971
    Join Date: Jan 2007
    Location: Canada
    Originally Posted by KarMa
    I took a look at the x264 and QS AVC videos, which I made into a lossless comparison. Since I was using lossless video, I got rid of 29 out of every 30 frames and made it play at 1 fps. It comes out to 192 frames. I probably should have done 1 out of 24 frames, but whatever. The top left and bottom right squares are QS, and the top right and bottom left are the x264 sample. Encoded with lossless FFV1.

    As an AVC hardware encoder it certainly looks like one of the best around, and I would certainly use it for realtime streaming or whatever. But if you have the time, x264 will still look better at any given bitrate (assuming you use one of the slower x264 presets).


    Code:
    LoadPlugin(".........LSMASHSource.dll")
    
    # Top row: A = top-right quadrant of the x264 encode, B = top-left quadrant of the QS encode
    A=LWLibavVideoSource("............S05-E06 - Unbowed, Unbent, Unbroken x264-19.9.mkv").crop(962,0,0,-542).AddBorders(2,0,0,2).subtitle(align=9, "x264", text_color=$7CFC00)
    B=LWLibavVideoSource("............S05-E06 - Unbowed, Unbent, Unbroken.mkv").crop(0,0,-960,-542).AddBorders(0,0,0,2).subtitle(align=7, "QS")
    
    # Bottom row: AA = bottom-left quadrant of the x264 encode, BB = bottom-right quadrant of the QS encode
    AA=LWLibavVideoSource("...........S05-E06 - Unbowed, Unbent, Unbroken x264-19.9.mkv").crop(0,540,-960,0).AddBorders(0,0,2,0).subtitle(align=1, "x264", text_color=$7CFC00)
    BB=LWLibavVideoSource("..........S05-E06 - Unbowed, Unbent, Unbroken.mkv").crop(962,540,0,0).subtitle(align=3, "QS")
    
    # Reassemble the four quadrants into one frame
    N=StackHorizontal(B,A)
    S=StackHorizontal(AA,BB)
    
    # Keep 1 frame out of every 30 and play the result back at 1 fps
    StackVertical(N,S).SelectEvery(30, 0).AssumeFPS(1)
    I'd still put every penny I own on the simple fact that most normal people would simply select either movie 1 or movie 2 as posted on #8 here: https://forum.videohelp.com/threads/390413-x264-vs-Quick-Sync-Comparison#post2531190

    They'd sit down, watch the movie and move on with their lives. They wouldn't apply any scripts to gloriously exaggerate any flaws in any movie WHILE WATCHING it.

    Put another way, it's the exact same thing as converting a lossless audio track to any other audio format. During the process, some of the details that virtually nobody would notice under normal circumstances are removed, thus compressing it.

    (Mind you, my explanation will probably bring another uprising and the end to the natural world, but hey, who's perfect, except the select few who seem to think they are in this thread.)

    I wonder, really, how this entire thread would have gone had I posted two examples of x264 from just 2 different runs, yet still titled it "x264 vs Quick Sync Comparison". I wonder how many experts here could actually have figured out that they were both the same clip with the same settings?

    Just for the fun of it, why don't the experts here take the source clip found here and convert it to any lossless codec they choose, then apply the same scripts they used here on this thread:

    Code:
    LoadPlugin(".........LSMASHSource.dll")
    A=LWLibavVideoSource("............S05-E06 - Unbowed, Unbent, Unbroken x264-19.9.mkv").crop(962,0,0,-542).AddBorders(2,0,0,2).subtitle(align=9, "x264", text_color=$7CFC00)
    B=LWLibavVideoSource("............S05-E06 - Unbowed, Unbent, Unbroken.mkv").crop(0,0,-960,-542).AddBorders(0,0,0,2).subtitle(align=7, "QS")
    
    AA=LWLibavVideoSource("...........S05-E06 - Unbowed, Unbent, Unbroken x264-19.9.mkv").crop(0,540,-960,0).AddBorders(0,0,2,0).subtitle(align=1, "x264", text_color=$7CFC00)
    BB=LWLibavVideoSource("..........S05-E06 - Unbowed, Unbent, Unbroken.mkv").crop(962,540,0,0).subtitle(align=3, "QS")
    
    N=StackHorizontal(B,A)
    S=StackHorizontal(AA,BB)
    
    StackVertical(N,S).SelectEvery(30, 0).AssumeFPS(1)
    or this:
    Code:
    # hdragc() applies adaptive gamma correction, lifting dark areas so shadow detail differences are easier to spot
    q=FFVideoSource("S05-E06 - Unbowed, Unbent, Unbroken.mkv").hdragc().subtitle("quicksync")
    x=FFVideoSource("S05-E06 - Unbowed, Unbent, Unbroken x264-19.9.mkv").hdragc().subtitle("x264")
    
    # alternate QS and x264 frames so you can flip between them in the editor
    interleave(q,x)
    or this:
    Code:
    source = WhateverSource("source.ext")
    enc1 = WhateverSource("enc1.ext")
    enc2 = WhateverSource("enc2.ext")
    # each encode ends up bracketed by source frames: source, enc1, enc2, source, ...
    Interleave(source, enc1, enc2, source)
    I'm sure that each and every codec will have its own little quirks and inaccuracies, and not one of them is truly lossless. They still remove information that is not clearly visible to a normal person's eyes under normal circumstances.

    When you start scrutinizing under highly altered conditions, like using a script to enhance flaws, does it really matter what kind of system you have or whether it's calibrated to any specific settings?

    I doubt anyone will bother with the transcoding to any lossless format or anything else for that matter; they've already done it all, seen it all, and know it all.

    Anyway, for my purposes - portable versions of, say, a TV series so I can watch it on the bus or train on my tablet - less than perfect is good enough for me (NO, I don't apply scripts while watching videos to over-exaggerate flaws).
    Last edited by ziggy1971; 11th Oct 2018 at 22:58.
  2. Originally Posted by ziggy1971

    I'd still put every penny I own on the simple fact that most normal people would simply select either movie 1 or movie 2 as posted on #8 here: https://forum.videohelp.com/threads/390413-x264-vs-Quick-Sync-Comparison#post2531190

    They'd sit down, watch the movie and move on with their lives. They wouldn't apply any scripts to gloriously exaggerate any flaws in any movie WHILE WATCHING it.
    For sure, but look at where you are posting. Lots of abnormal people here - some videophiles, some pros, some technicians in the AV industry, etc... It's not your regular sample or cross-section of random people. How many people in the general public even know what a script is, let alone run one?

    Of course, the scripts are there to show the differences more clearly. Nobody "watches" the scripts for fun; it's an analytical tool - the point is to analyze the outputs and compare the encoded videos, since it seemed that you couldn't pick out the differences very easily. Other people can spot them right away, without any scripts, just watching it.

    Other people might think they look the same. Some people think YouTube has the same quality as a Blu-ray, or that their 96kbps MP3 sounds as good as a FLAC master. Not all people have good eyes or ears. Maybe grandma can't see so well and has a hearing aid, and so forth... The point is that people have different perceptions.

    Put another way, it's the exact same thing as converting a lossless audio track to any other audio format. During the process, some of the details that virtually nobody would notice under normal circumstances are removed, thus compressing it.
    Lossy encoding, yes, but there is lossless audio encoding too, e.g. FLAC, ALAC, etc.
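    If anyone wants to convince themselves that the lossless path really is lossless, a quick round trip is enough. A minimal sketch, assuming ffmpeg is installed and the file names are just placeholders:

    Code:
    ffmpeg -i master.wav -c:a flac master.flac    # lossless encode
    ffmpeg -i master.wav  -map 0:a -f md5 -       # MD5 of the original decoded PCM
    ffmpeg -i master.flac -map 0:a -f md5 -       # MD5 of the PCM decoded from the FLAC
    # the two MD5 lines should be identical (assuming no sample format conversion happened)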

    For lossy encoding, there is a continuum of quality. What is "acceptable" quality is going to be different for different people and different scenarios. Some people are OK with that 96kbps MP3 and YouTube - and that's perfectly fine for them. For others it's terrible and they wouldn't touch it with a 10ft pole.

    But to say those videos you posted are similar would be wrong. You can clearly demonstrate that encoder A needs "x" more bitrate to achieve a similar level of quality to encoder B under scenario C. You can prove it with your choice of dozens of objective metrics or subjective assessments. Just because a person can't see it doesn't mean there aren't significant quality differences.



    Just for the fun of it, why don't the experts here take the source clip found here and convert it to any lossless codec they choose, then apply the same scripts they used here on this thread:

    I'm sure that each and every codec will have its own little quirks and inaccuracies, and not one of them is truly lossless. They still remove information that is not clearly visible to a normal person's eyes under normal circumstances.
    I think you are confusing lossy vs. lossless. There are lossless codecs available. Truly lossless. Both audio and video. For example, x264 has a lossless mode. The filesize is going to be enormous, but the decoded video will be bit for bit identical. The lossless compression is with respect to the decoded, uncompressed data - so it's still compressed relative to the uncompressed data, just with nothing lost on decode.
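    For example, a minimal sketch (file names are placeholders, and it assumes the source is already 8-bit 4:2:0 so no colorspace conversion sneaks in):

    Code:
    ffmpeg -i source.mkv -map 0:v:0 -an -c:v libx264 -qp 0 lossless_x264.mkv   # qp 0 = x264 lossless mode
    ffmpeg -i source.mkv        -map 0:v:0 -f framemd5 source.framemd5         # per-frame hashes of the source
    ffmpeg -i lossless_x264.mkv -map 0:v:0 -f framemd5 lossless.framemd5       # per-frame hashes after decoding
    # the two framemd5 files should match line for line - that's what "bit for bit identical" means here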

    When you start scrutinizing under highly altered conditions, like using a script to enhance flaws, does it really matter what kind of system you have or whether it's calibrated to any specific settings?
    Probably not, but if you had a calibrated system it might help you see the video the way it's meant to be seen. Cheaper displays tend to crush shadows and clip brights. Lower bit depth displays definitely don't show details as nicely in those regions; there are abrupt transitions instead of gradual falloffs. That can hide flaws instead of showing clear separation in detail.

    So the scripts are to help people that can't see the differences clearly, for whatever reason. Maybe bad display, maybe driver issue, maybe grandma's bad eyes etc...

    But other people can see differences right away, just looking at it without any scripts or enhancements. It's that obvious to them. A similar analogy might be that trained musicians can hear tones or notes that the general public might not be able to discern.



    I doubt anyone will bother with the transcoding to any lossless format or anything else for that matter; they've already done it all, seen it all, and know it all.
    There are many scenarios where lossless formats are used: bridging different programs (e.g. you bring a video edit into a grading program or an effects program), filtering intermediates, lossless captures (film, analog to digital e.g. VHS, etc...), and compatibility (e.g. if you had a VC-1 Blu-ray and you wanted to make an edit for a fan film, it wouldn't be supported by most NLEs; often people would use a lossless intermediate).
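    For the VC-1 / fan edit case, the workflow might look something like this. Just a sketch - file names are placeholders, it assumes an ffmpeg build with the UT Video encoder, and you'd pick whatever lossless intermediate your NLE actually accepts:

    Code:
    # decode the VC-1 stream once and store it in a lossless, edit-friendly intermediate
    ffmpeg -i vc1_source.mkv -c:v utvideo -c:a pcm_s16le intermediate.avi
    # edit/grade the intermediate in the NLE, export another lossless master,
    # and only do the final lossy encode from that master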



    Anyway, for my purposes - portable versions of, say, a TV series so I can watch it on the bus or train on my tablet - less than perfect is good enough for me (NO, I don't apply scripts while watching videos to over-exaggerate flaws).
    Sure, that's great. It meets your needs

    In some scenarios I use NVEnc because it's much faster. It meets my needs and is "good enough" there.

    But there are significant differences between the two videos you posted. You'd need substantially more bitrate using Quick Sync AVC for a similar level of quality. That's what tests show - you do serial runs at different bitrates and compare them, e.g. 5Mb/s, 7Mb/s, 9Mb/s, etc.
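    The serial runs are trivial to script. A rough sketch (bash, placeholder file names, assuming your ffmpeg build has both libx264 and the Quick Sync encoder):

    Code:
    for br in 5M 7M 9M; do
        ffmpeg -i source.mkv -an -c:v libx264 -preset slow -b:v $br x264_${br}.mkv
        ffmpeg -i source.mkv -an -c:v h264_qsv            -b:v $br qsv_${br}.mkv
    done
    # then compare the pair at each bitrate, by eye or with metrics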

    Other people have different needs... they do whatever works for them.
  3. Dinosaur Supervisor KarMa
    Join Date: Jul 2015
    Location: US
    Originally Posted by ziggy1971
    Just for the fun of it, why don't the experts here take the source clip found here and convert it to any lossless codec they choose, then apply the same scripts they used here on this thread:

    I'm sure that each and every codec will have its own little quirks and inaccuracies, and not one of them is truly lossless. They still remove information that is not clearly visible to a normal person's eyes under normal circumstances.
    Lossless is lossless; in order for it to be labeled lossless it needs to retain every single pixel value. Everything else is lossy. I have heard rumors every couple of years about certain older versions of HuffYUV not being 100% lossless, or talk about Lagarith being floating point or something and having extremely rare rounding errors. I've never seen any information on how to reproduce these alleged errors, so I'm not going to go searching for these mystical unicorns. I used FFV1, which is used for lossless video archiving by many groups like the Library of Congress. FFV1 is also the basis of the newish lossless image format FLIF, which the developer hopes will replace PNG (another lossless image format).
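    For reference, an FFV1 archive copy can be made along these lines. A sketch only, assuming ffmpeg and placeholder file names; -slicecrc adds per-slice checksums, which is part of why the archiving crowd likes it:

    Code:
    ffmpeg -i source.mkv -map 0:v:0 -c:v ffv1 -level 3 -slices 16 -slicecrc 1 archive_ffv1.mkv
    # bit-exactness can be verified by comparing "-f framemd5" output of the source and the archive copy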
  4. Member Bernix
    Join Date: Apr 2016
    Location: Europe
    Yes, lossless is when each pixel's color exactly matches the original pixel's color. In IT terms, imagine if Zip, tar, 7-Zip or RAR or any other archiver changed one bit in a 1 TB file because it could save 80% of the final size. The data has to be bit-for-bit exact when uncompressed, otherwise you have lost information (a pity English has no plural for "information") and everything is wrong.
    Same for video, just with matched pixels instead of bits. That is also why any lossy compressed video, re-encoded losslessly, will be several times bigger than the original: you have to preserve everything, compression artifacts included, and spend many bits in places where the codec that created those artifacts saved a lot of bits.

    Just as a side note, is it still true that x264 lossless is more efficient than x265 lossless?
  5. Dinosaur Supervisor KarMa
    Join Date: Jul 2015
    Location: US
    Originally Posted by Bernix
    Just as a side note, is it still true that x264 lossless is more efficient than x265 lossless?
    I doubt there is much of a difference, but x264 might have the edge in the lossless niche since x264 has been worked on for so long - though I'm completely guessing. YULS has been the top lossless performer for a while, but it's extremely slow and does not even support 1080p. Then there is FFV1, which is the runner-up to YULS but is actually usable, being faster and having great support.
  6. Originally Posted by Bernix
    Just as a side note, is it still true that x264 lossless is more efficient than x265 lossless?
    Yes. For 8-bit 4:2:0, using 8-bit binaries and default settings, on most types of content, most of the time x264 lossless has a compression advantage over x265 lossless. Usually ~5-15%.

    e.g. using his source from the other thread:
    x264 3.0GB
    x265 3.34GB
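    Both lossless runs are easy to reproduce with default settings. A sketch with placeholder file names, assuming 8-bit libx264/libx265 builds:

    Code:
    ffmpeg -i source.mkv -map 0:v:0 -an -c:v libx264 -qp 0 lossless_x264.mkv
    ffmpeg -i source.mkv -map 0:v:0 -an -c:v libx265 -x265-params lossless=1 lossless_x265.mkv
    # compare the resulting file sizes; both decode back to identical frames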
  7. Member Bernix
    Join Date: Apr 2016
    Location: Europe
    Probably H.265, with all its psychovisual features, is too aggressive to do lossless with better speed and compression ratio.


    To the topic: why just QS, and not also NVENC on Pascal and later cards, plus AMD's equivalent (VCE)? That would make the tests relevant to a wider spectrum of people. The problem is setting the encoders up with settings as close to each other as possible. The strength of QS, NVENC and the AMD encoder is speed, not quality. They are invaluable for capturing video (with an overshooting bitrate), for example in OBS Studio and other programs, and for streaming live footage. But matching x264 quality at the same bitrate isn't really possible. And yes, for the majority the result from HW GPU encoding will look as good as software x264, but at the same bitrate the metrics will land somewhere else.
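    Setting them "as close as possible" basically comes down to locking the bitrate and leaving everything else at defaults, roughly like this (a sketch with placeholder names; it assumes an ffmpeg build with NVENC, Quick Sync and AMF support, which depends on your hardware and drivers):

    Code:
    ffmpeg -i source.mkv -an -c:v libx264 -preset medium -b:v 6M x264_6M.mkv
    ffmpeg -i source.mkv -an -c:v h264_qsv               -b:v 6M qsv_6M.mkv
    ffmpeg -i source.mkv -an -c:v h264_nvenc             -b:v 6M nvenc_6M.mkv
    ffmpeg -i source.mkv -an -c:v h264_amf               -b:v 6M amf_6M.mkv
    # same source, same target bitrate - then compare visually or with metrics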
    It reminds me of the current situation in chess. There is a deep-learning engine (Leela) that is getting better and better; it is GPU based. But the best engine, Stockfish, is CPU based. And there are two camps: Leela fans, who are happy when a 1080 Ti beats Stockfish on an ordinary PC, and Stockfish fans, who claim SF is best while running it on a 48-core (or so) Xeon against Leela on a top GPU. There can simply never be balance. When comparing hardware, what can be taken into account - electricity bills, hardware cost in USD, anything else? Nothing will be precise. Also, Leela is unable to solve positions, while SF is able to play different chess variants.
    With all this I'm trying to say that fairly comparing CPU and GPU things is impossible everywhere, not just in the video industry.
    BTW, Google made AlphaZero and it beat SF, but the rules of the match were set by Google. One minute per move limited SF a lot because of its enhanced scaling algorithm. Also, A0 ran on a Google supercomputer while SF had very limited memory and so on. It was theater to present Google as the best AI firm in the world, so it was an unfair match played under unfair conditions.


    Probably nobody will read what I write, but hopefully someone will understand what I was trying to say.



    Bernix
  8. OP: You are wasting your time. x264, and to a lesser degree x265, enjoys a sort of cult following that cannot be reasoned with, that will adamantly defend it to the end, and that will attack anyone who dares to offer an opinion that it is not that great.

    AMD has achieved that type of loyal cult following with Ryzen among some users, and Linux has done the same among many Linux users - and this comes from a guy whose main machine is running a 1600 + Ubuntu.

    For the sake of argument, let's assume that software encoders, such as x264 and x265, do in fact produce better results at a given bitrate, at least at the lower end of the bitrate spectrum. So what? The reality is that anyone who knows what they are doing does not do straight encodes; in nearly every case the source file has filters applied to it, which negates any advantage the software encoder has in bitrate savings.

    My current rig, as I said above, is an R5 1600, 8GB of RAM and a GTX1050. If I want to transcode anything, which I don't see the point of doing under most circumstances, I fire up Shotcut, load the source, nearly always apply filters (unless I'm just testing something) - GPU-accelerated filters like color correction, sharpening, brightness, contrast and/or temperature - and then I always encode using NVENC. I see no point in spending hours encoding with x264/x265 + a slow preset to try to gain 10% or less in bitrate savings instead of applying the filters, using a bit more bitrate, and being done in a fraction of the time with a fraction of the electricity cost.

    People who get all hung up on using x264/x265 because they are "the best" are people who do not know what they are doing; every single professionally produced video you see is filtered, be it a Blu-ray or a video meant for a streaming service such as Hulu, Netflix or YouTube.

    I have no intention of spending my hard-earned cash on faster processors or faster RAM to try to get a few more fps of encoding speed from a software-based encoder when in a few months I will be able to buy an RTX2050 (or whatever NVIDIA is going to call the GTX1050 replacement with raytracing) and get an updated NVENC that will almost certainly offer significantly faster encoding speed and better quality.
  9. Member Bernix
    Join Date: Apr 2016
    Location: Europe
    Hi,
    You will pay a lot of money for RT, and its real power is questionable. That is why prices from the 1080 Ti down to the 1050 Ti didn't drop, at least here. Even the most powerful RTX cards nowadays are working at their limit to achieve acceptable results in RT games. It was supposed to be a big bang, but I'm not sure it was that big. Expectations were much bigger...
    I don't know prices in other countries, but I suppose the situation is the same. And you will be forced to upgrade the CPU as well if you use the PC for more than just GPU encoding - minimum CPU specs keep rising.


    Bernix
  10. Originally Posted by sophisticles
    For the sake of argument, let's assume that software encoders, such as x264 and x265, do in fact produce better results at a given bitrate, at least at the lower end of the bitrate spectrum. So what? The reality is that anyone who knows what they are doing does not do straight encodes; in nearly every case the source file has filters applied to it, which negates any advantage the software encoder has in bitrate savings.
    The compression advantage does not suddenly 'disappear' when you apply a filter

    The filtered source becomes the "new" source, and you can do your new tests on that. Guess what? The compression advantage is still there.
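    That claim is easy to test: filter once to a lossless intermediate, then encode that intermediate with both encoders and compare. A sketch with placeholder names and an arbitrary filter chain:

    Code:
    # apply the filters once and keep the result lossless - this is the "new" source
    ffmpeg -i source.mkv -an -vf "eq=contrast=1.05:brightness=0.02,unsharp" -c:v ffv1 filtered.mkv
    # now run the usual comparison on the filtered source
    ffmpeg -i filtered.mkv -c:v libx264 -preset slow -b:v 6M x264_filtered.mkv
    ffmpeg -i filtered.mkv -c:v h264_nvenc          -b:v 6M nvenc_filtered.mkv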

    I see no point in spending hours encoding with x264/x265 + a slow preset to try to gain 10% or less in bitrate savings instead of applying the filters, using a bit more bitrate, and being done in a fraction of the time with a fraction of the electricity cost.
    Sometimes it's ~30-80% more bitrate. If you use his test here, it works out to be about 1.5-1.6x more bitrate required in this testing scenario. For many it doesn't matter; they don't care if it's 2-3x the filesize. For some people it does. It doesn't matter what they choose - just stick to the facts.

    Please provide evidence to the contrary. All the evidence is against your assertion that advantages are suddenly "negated" when applying a filter.

    You're confusing the facts with a personal bias. Just post the facts and data; they speak for themselves. Or are you someone who cannot see the differences here either? The proof is there, both subjective and objective. If you want PSNR/SSIM graphs and charts, I can provide them too.
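    The numbers themselves take one command per encode. A sketch with placeholder file names; both clips need the same resolution and frame count:

    Code:
    ffmpeg -i encoded.mkv -i source.mkv -lavfi "[0:v][1:v]psnr" -f null -
    ffmpeg -i encoded.mkv -i source.mkv -lavfi "[0:v][1:v]ssim" -f null -
    # the average PSNR / SSIM is printed at the end of each run's log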

    If something sucks, I'll say it sucks. x264 has significant weaknesses - have a look at fades, for example. It's quite weak on shadow detail too. It's just a lot better than the other AVC alternatives. Quality-wise, way more pros than cons. Sure it's slower than NVEnc or QS, but they just don't have the quality/compression ratio. I use NVEnc all the time for its speed.

    Most of the time I don't bother with slow presets either - a waste of time. But that is a personal choice based on the facts, not because of some anti-whatever bias.

    and get an updated NVENC that will almost certainly offer significantly faster encoding speed and better quality.
    I hope it does - any word on that? Or B-frames for HEVC?
    Last edited by poisondeathray; 12th Oct 2018 at 15:50.
  11. Originally Posted by poisondeathray
    Originally Posted by sophisticles
    and get an updated NVENC that will almost certainly offer significantly faster encoding speed and better quality.
    I hope it does - any word on that? Or B-frames for HEVC?
    Nvidia said Turing (RTX 20xx) has up to 25% HEVC and up to 15% AVC bitrate encoding improvement. Nvidia themselves are comparing it to x264 preset fast (though they don't explicitly say whether "it" is HEVC or AVC. I guess "streaming to Twitch/Youtube" implies AVC?).

    [Attachment 46919]

    https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/technologies/t...Whitepaper.pdf
  12. If you think about the claims - and if they are true - then, at least for me, this effectively ends the debate about software vs. hardware encoders. Taken one by one:

    Encode HEVC 8K30 HDR in real time - anyone who has ever tried to do 4K HDR knows how slow it is. If these RTX-based cards can really encode 8K30 HDR HEVC in real time, I think they will effectively kill the market for software-based encoding.

    HEVC up to 25% bit rate savings and up to 15% bit rate savings for H264, both compared to x264 fast - this claim is very interesting. Saying that the new NVENC found on RTX cards offers that much bitrate savings is the same as saying that, at the same bitrate as x264+fast, RTX's NVENC HEVC offers up to 25% better quality and its AVC encoder offers up to 15% better quality, which means NVENC HEVC may come close to x264+slow and NVENC AVC may come close to x264+medium.

    While I realize that these results will be very much source-dependent, I do know that the developers of DaVinci Resolve have said their software will make use of the RTX's Tensor Cores to improve quality and performance, and it's been rumored that Adobe Premiere will also make use of the new features.

    If NVIDIA sticks true to form, the reasonably priced RTX2050 and RTX2060 models (I'm assuming that's what they will be called) should be out early next year, and I will end up buying the GTX1050 equivalent. This time, though, I will buy a 4GB model, not the 2GB model. I was doing a test encode using the Meridian sample that Netflix created for codec testing, and after applying a couple of GPU-accelerated filters and using NVENC HEVC through Shotcut, I found I was at 7.7GB out of 8GB of system RAM (I also had a bunch of Firefox tabs open) plus 5GB of swap, and with the exception of 48MB of VRAM, the entire video card's RAM was being used up.

    Luckily Ubuntu is not Windows: I could still use the system without it locking up or freezing, and I was able to shut down Firefox and free up some system RAM (though about 5.5GB was still in use). But I will definitely spring for the larger frame buffer this time around.
  13. Originally Posted by sophisticles View Post
    HEVC up to 25% bit rate savings and up to 15% bit rate savings for H264, both compared to x264 fast.
    I think the 25% and 15% respectively are meant in comparison to Pascal. But granted, the paper isn't all that explicit on that either.
  14. Originally Posted by sneaker
    Originally Posted by sophisticles
    HEVC up to 25% bit rate savings and up to 15% bit rate savings for H264, both compared to x264 fast.
    I think the 25% and 15% respectively are meant in comparison to Pascal. But granted, the paper isn't all that explicit on that either.
    Rereading the chart, I believe you are right: it shows Pascal losing to x264 fast, which in turn loses to Turing, so the percentage improvement is relative to Pascal, which makes Turing slightly better than x264 fast.

    Still, I think most users would be happy with x264 fast quality, which should be roughly equal to x265 ultrafast quality, if it means they can encode 1080p video at 300+ fps (I can hit that with a GTX1050 with the right ffmpeg settings).
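    For what it's worth, nothing exotic is needed to get into that speed range with NVENC. A sketch with placeholder file names; exact presets and speeds depend on the card, driver and ffmpeg build:

    Code:
    ffmpeg -i input_1080p.mkv -an -c:v h264_nvenc -b:v 8M -benchmark out_nvenc.mp4
    # -benchmark prints timing; the 300+ fps figure above is the poster's own number for a GTX1050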

    BTW, did anyone else notice 2 new features that seem very exciting:

    AI SUPER REZ
    AI Super Rez increases the resolution of an image or video by 2x, 4x or 8x. Unlike traditional filtering methods which stretch out the existing pixels and filter between them, AI Super Rez creates new pixels by interpreting the image and intelligently placing data (see Figure 26). This results in a sharper enlargement that correctly preserves the depth of field and other artistic aspects. The video super-resolution network, which is highly optimized, can run in real-time (~30 fps) for 1080p to 4K upscaling, with PSNR 1-2 dB higher than bicubic interpolation.

    DENOISING FILTERING
    In addition to acceleration structures that aid in improving ray tracing performance, various advanced filtering techniques can also improve performance and image quality without requiring additional rays to be cast. One such filtering technique is called denoising. Denoising can significantly improve the visual quality of noisy images that might be constructed of sparse data, have random artifacts, visible quantization noise, or other types of noise. In fact, many types and causes of image noise exist, and similarly many types of denoising methods also exist. Denoising filtering is especially effective at reducing the time ray-traced images take to render and can produce high fidelity images from ray tracers that appear visually noiseless.
    Currently, NVIDIA is making use of both AI-based and non-AI-based algorithms for denoising, choosing whatever is best for a particular application. In the future we expect AI-based denoising to continue to improve and replace non-AI-based methods.


