VideoHelp Forum
  1. Member
    Join Date
    Jan 2007
    Location
    Canada
    Originally Posted by KarMa View Post
I took a look at the x264 and QS AVC videos, which I made into a lossless comparison. Because I was using lossless video I got rid of 29 out of 30 frames and made it play at 1fps. It comes out to 192 frames. I probably should have done 1 out of 24 frames but whatever. The top left and bottom right squares are QS, and the top right and bottom left are the x264 sample. Encoded with lossless FFV1.

As an AVC hardware encoder it certainly looks like one of the best around and I would certainly use it for realtime streaming or whatever. But if you have the time, x264 will still look better at any given bitrate (assuming you use some of the slower x264 presets).


Code:
# 2x2 quadrant comparison: QS in the top-left and bottom-right, x264 in the top-right and bottom-left
LoadPlugin(".........LSMASHSource.dll")
A=LWLibavVideoSource("............S05-E06 - Unbowed, Unbent, Unbroken x264-19.9.mkv").crop(962,0,0,-542).AddBorders(2,0,0,2).subtitle(align=9, "x264", text_color=$7CFC00)
B=LWLibavVideoSource("............S05-E06 - Unbowed, Unbent, Unbroken.mkv").crop(0,0,-960,-542).AddBorders(0,0,0,2).subtitle(align=7, "QS")

AA=LWLibavVideoSource("...........S05-E06 - Unbowed, Unbent, Unbroken x264-19.9.mkv").crop(0,540,-960,0).AddBorders(0,0,2,0).subtitle(align=1, "x264", text_color=$7CFC00)
BB=LWLibavVideoSource("..........S05-E06 - Unbowed, Unbent, Unbroken.mkv").crop(962,540,0,0).subtitle(align=3, "QS")

N=StackHorizontal(B,A)
S=StackHorizontal(AA,BB)

# reassemble the mosaic, keep 1 frame in 30 and play it back at 1 fps
StackVertical(N,S).SelectEvery(30, 0).AssumeFPS(1)
    I'd still put every penny I own on the simple fact that most normal people would simply select either movie 1 or movie 2 as posted on #8 here: https://forum.videohelp.com/threads/390413-x264-vs-Quick-Sync-Comparison#post2531190

    They'd sit down, watch the movie and move on with their lives. They wouldn't apply any scripts to gloriously exaggerate any slights to any movie WHILE WATCHING it.

    Put another way, it's the exact same thing as converting a lossless audio track to any other audio format. During the process, some of the details that virtually nobody would notice under normal circumstances are removed, thus compressing it.

    (Mind you, my explanation will probably bring another uprising and the end to the natural world, but hey, who's perfect, except the select few who seem to think they are in this thread.)

I wonder, really, how this entire thread would have gone had I posted two examples of x264 from just 2 different runs, yet still titled it "x264 vs Quick Sync Comparison". I wonder how many experts here could've actually figured out that they were both the same clip with the same settings?

    Just for the fun of it, why don't the experts here take the source clip found here and convert it to any lossless codec they choose, then apply the same scripts they used here on this thread:

    Code:
    LoadPlugin(".........LSMASHSource.dll")
    A=LWLibavVideoSource("............S05-E06 - Unbowed, Unbent, Unbroken x264-19.9.mkv").crop(962,0,0,-542).AddBorders(2,0,0,2).subtitle(align=9, "x264", text_color=$7CFC00)
    B=LWLibavVideoSource("............S05-E06 - Unbowed, Unbent, Unbroken.mkv").crop(0,0,-960,-542).AddBorders(0,0,0,2).subtitle(align=7, "QS")
    
    AA=LWLibavVideoSource("...........S05-E06 - Unbowed, Unbent, Unbroken x264-19.9.mkv").crop(0,540,-960,0).AddBorders(0,0,2,0).subtitle(align=1, "x264", text_color=$7CFC00)
    BB=LWLibavVideoSource("..........S05-E06 - Unbowed, Unbent, Unbroken.mkv").crop(962,540,0,0).subtitle(align=3, "QS")
    
    N=StackHorizontal(B,A)
    S=StackHorizontal(AA,BB)
    
    StackVertical(N,S).SelectEvery(30, 0).AssumeFPS(1)
    or this:
Code:
# hdragc() (HDRAGC plugin) applies adaptive gamma correction, lifting the dark areas so small differences stand out
q=FFVideoSource("S05-E06 - Unbowed, Unbent, Unbroken.mkv").hdragc().subtitle("quicksync")
x=FFVideoSource("S05-E06 - Unbowed, Unbent, Unbroken x264-19.9.mkv").hdragc().subtitle("x264")

# alternate frames between the two encodes for stepping back and forth
interleave(q,x)
    or this:
    Code:
    source = WhateverSource("source.ext")
    enc1 = WhateverSource("enc1.ext")
    enc2 = WhateverSource("enc2.ext")
    Interleave(source, enc1, enc2, source)
I'm sure that each and every codec will have its own little quirks and inaccuracies and not one of them is truly lossless. They still remove information that is not clearly visible to a normal person's eyes under normal circumstances.

    When you start scrutinizing under highly altered conditions, like using a script to enhance flaws, does it really matter what kind of system you have or whether it's calibrated to any specific settings?

    I doubt anyone will bother with the transcoding to any lossless format or anything else for that matter; they've already done it all, seen it all, and know it all.

Anyway, for my purposes, portable versions of, say, a TV series so I can watch it on the bus or train on my tablet, less than perfect is good enough for me (NO, I don't apply scripts while watching videos to over-exaggerate flaws).
    Last edited by ziggy1971; 11th Oct 2018 at 22:58.
  2. Originally Posted by ziggy1971 View Post

    I'd still put every penny I own on the simple fact that most normal people would simply select either movie 1 or movie 2 as posted on #8 here: https://forum.videohelp.com/threads/390413-x264-vs-Quick-Sync-Comparison#post2531190

    They'd sit down, watch the movie and move on with their lives. They wouldn't apply any scripts to gloriously exaggerate any slights to any movie WHILE WATCHING it.
For sure, but look at where you are posting. Lots of abnormal people here - some videophiles, some pros, some technicians in the AV industry, etc... It's not your regular sample or cross section of random people. How many people in the general public even know what a script is, let alone run one?

Of course, the scripts are there to show the differences more clearly. Nobody "watches" the scripts for fun; they're an analytical tool - the point is to analyze the outputs and compare the encoded videos, since it seemed you couldn't pick out the differences very easily. Other people can spot them right away, without any scripts, just watching it.

    Other people might think they look the same. Some people think youtube has the same quality as a blu-ray. Or their 96kbps mp3 sounds as good as a flac master . Not all people have good eyes or ears. Maybe grandma can't see so well and has a hearing aid , and so forth... The point is people have different perceptions

    Put another way, it's the exact same thing as converting a lossless audio track to any other audio format. During the process, some of the details that virtually nobody would notice under normal circumstances are removed, thus compressing it.
    Lossy encoding yes, but there is lossless audio encoding too . e.g. Flac, ALAC etc...

For lossy encoding, there is a continuum of quality. What is "acceptable" quality is going to be different for different people and different scenarios. Some people are OK with that 96kbps mp3 and youtube - and that's perfectly fine for them. For others it's terrible and they wouldn't touch it with a 10ft pole.

    But to say those videos you posted are similar would be wrong. You can clearly demonstrate that encoder A needs "x" more bitrate to achieve a similar level of quality as encoder B under scenario C . You can prove it with your choice of dozens of objective metrics, subjective assessments . Just because a person can't see it, doesn't mean there aren't significant quality differences



    Just for the fun of it, why don't the experts here take the source clip found here and convert it to any lossless codec they choose, then apply the same scripts they used here on this thread:

I'm sure that each and every codec will have its own little quirks and inaccuracies and not one of them is truly lossless. They still remove information that is not clearly visible to a normal person's eyes under normal circumstances.
I think you are confusing lossy vs. lossless. There are lossless codecs available - truly lossless - both audio and video. For example, x264 has a lossless mode. The filesize is going to be enormous, but the decoded video will be bit for bit identical. The "lossless" is with respect to the decoded, uncompressed data: the file is still compressed relative to raw uncompressed video, but it decodes back to it exactly.
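For what it's worth, this is easy to verify yourself with a short Avisynth script. A minimal sketch, assuming LSMASHSource as in the comparison scripts above (the filenames are placeholders; the lossless file would be something like an x264 --qp 0 encode of the same clip):

Code:
LoadPlugin("LSMASHSource.dll")
src = LWLibavVideoSource("source.mkv")           # original clip
enc = LWLibavVideoSource("lossless_encode.mkv")  # e.g. an x264 --qp 0 encode of the same clip
# Subtract maps "no difference" to a flat mid-grey frame;
# any visible texture means the encode is not bit-identical to the source
Subtract(src, enc)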

    When you start scrutinizing under highly altered conditions, like using a script to enhance flaws, does it really matter what kind of system you have or whether it's calibrated to any specific settings?
Probably not, but if you had a calibrated system it might help you see the video how it's meant to be seen. Cheaper displays tend to crush shadows and clip brights. Lower bit depth displays definitely don't show details as nicely in those regions; there are abrupt transitions instead of gradual falloffs. That can hide flaws instead of showing clear separation in detail.

    So the scripts are to help people that can't see the differences clearly, for whatever reason. Maybe bad display, maybe driver issue, maybe grandma's bad eyes etc...

But other people can see differences right away just looking at it, without any scripts or enhancements. It's that obvious to them. A similar analogy might be that trained musicians can hear tones or notes that the general public might not be able to discern.



    I doubt anyone will bother with the transcoding to any lossless format or anything else for that matter; they've already done it all, seen it all, and know it all.
    There are many scenarios where lossless formats are used. Bridging different programs (e.g. you bring a video edit into a grading program, or effects program), filtering intermediates, lossless captures (film, analog to digital e.g. VHS, etc...) , compatibility (e.g. if you had a VC-1 Blu-ray , and you wanted to make an edit for a fan film - it wouldn't be supported by most NLE's. Often people would use a lossless intermediate)



    Anyway, for my purposes, portable versions of, say, a TV series so I can watch it on the bus or train on my tablet; less the perfect is good enough for me (NO, I don't apply scripts while watching videos to over-exaggerate flaws).
    Sure, that's great. It meets your needs

    In some scenarios, I use NVenc because it's much faster . It meets my needs and is "good enough" in some scenarios

But there are significant differences between the two videos you posted. You'd need substantially more bitrate using Quick Sync AVC for a similar level of quality. That's what tests show - you do serial runs at different bitrates and compare them, e.g. 5Mb/s, 7Mb/s, 9Mb/s etc...

    Other people have other different needs... they do whatever works for them
  3. Dinosaur Supervisor KarMa's Avatar
    Join Date
    Jul 2015
    Location
    US
    Originally Posted by ziggy1971 View Post
    Just for the fun of it, why don't the experts here take the source clip found here and convert it to any lossless codec they choose, then apply the same scripts they used here on this thread:

I'm sure that each and every codec will have its own little quirks and inaccuracies and not one of them is truly lossless. They still remove information that is not clearly visible to a normal person's eyes under normal circumstances.
    Lossless is lossless; in order for it to be labeled lossless it needs to retain every single pixel value. Everything else is lossy. I have heard rumors every couple of years about certain older versions of HuffYUV not being 100% lossless or talk about Lagarith being floating point or something, and having extremely rare rounding errors. I've never seen any information on how to repeat these alleged errors and so I'm not going to go searching for these mystical Unicorns. I used FFV1 which is used for lossless video archiving by many groups like the Library of Congress. FFV1 is also the basis of the newish Lossless Image Format FLIF, which the developer hopes replaces PNG (another lossless Image format).
  4. Member Bernix's Avatar
    Join Date
    Apr 2016
    Location
    Europe
Yes, lossless is when each pixel's color exactly matches the original pixel's color. In IT terms, imagine if Zip, tar, 7-Zip or RAR (or any other archiver) changed one bit in a 1TB file because it could save 80% of the final size. The data has to be bit-for-bit exact when uncompressed, otherwise you've lost information (pity English has no plural for "information") and everything is wrong.
Same for video, just with matched pixels instead of bits. That's also why a lossy-compressed video re-encoded to a lossless format will be several times bigger than the original: everything has to be preserved, compression artifacts included, so many bits get spent exactly where the codec that created those artifacts saved them.

As an aside, is it still true that x264 lossless is more efficient than x265 lossless?
  5. Dinosaur Supervisor KarMa's Avatar
    Join Date
    Jul 2015
    Location
    US
    Search Comp PM
    Originally Posted by Bernix View Post
As an aside, is it still true that x264 lossless is more efficient than x265 lossless?
I doubt there is much of a difference, but x264 might have the edge in the lossless niche since x264 has been worked on for so long - I'm completely guessing, though. YULS has been the top lossless performer for a while, but it's extremely slow and does not even support 1080p. Then there is FFV1, which is the runner-up to YULS but is actually usable, being faster and having great support.
  6. Originally Posted by Bernix View Post
As an aside, is it still true that x264 lossless is more efficient than x265 lossless?
Yes. For 8bit 4:2:0, using 8bit binaries and default settings, on most types of content, most of the time x264 lossless has a compression advantage over x265 lossless. Usually ~5-15%.

e.g. using his source from the other thread:
x264 3.0GB
x265 3.34GB
  7. Member Bernix's Avatar
    Join Date
    Apr 2016
    Location
    Europe
H.265 is probably too aggressive, with all its psychovisual features, to do lossless with better speed and compression ratio.


Back to the topic: why just QS, and not also NVENC on Pascal and later cards, plus AMD's equivalent, to make the tests relevant to a wider spectrum of people? The problem is getting the settings as close to identical as possible for all of them. The strength of QS, NVENC and AMD's encoder is speed, not quality. They are invaluable for capturing video (with a generous bitrate), for example in OBS Studio and other programs, and for streaming live footage. But comparing quality with x264 at the same bitrate isn't really possible. And yes, for the majority the result from hardware GPU encoding will look as good as software x264, but the metrics at the same bitrate will be somewhere else.
It reminds me of the current situation in chess. There is a deep-learning engine (Leela) that is getting better and better, and it is GPU-based. But the best engine, Stockfish, is CPU-based. And there are two camps: Leela fans, who are happy when a 1080 Ti beats Stockfish on an ordinary PC, and Stockfish fans, who claim SF is best while running a 48-core (or so) Xeon against a top GPU running Leela. There simply can never be balance. How do you compare the hardware - electricity bills, hardware cost in USD, anything else? Nothing will be precise. Also, Leela is unable to solve positions, while SF can play different chess variants.
With all this I'm trying to say that comparing CPU and GPU things fairly is impossible everywhere, not just in the video industry.
BTW, Google made AlphaZero and it beat SF, but the rules of the match were set by Google. One minute per move limited SF a lot because of its scaling algorithm, and A0 ran on a Google supercomputer while SF had very limited memory and so on. It was theater to present Google as the best AI firm in the world, so it was an unfair match under unfair conditions.


I probably won't re-read what I've written, but hopefully someone will understand what I was trying to say.



    Bernix
OP: You are wasting your time. x264, and to a lesser degree x265, enjoys a sort of cult following that cannot be reasoned with, that will adamantly defend it to the end and will attack anyone who dares to offer an opinion that it is not that great.

AMD has achieved that type of loyal cult following with Ryzen among some users, and Linux has done the same among many Linux users - and this comes from a guy whose main machine is running a 1600 + Ubuntu.

For the sake of argument, let's assume that software encoders, such as x264 and x265, do in fact produce better results at a given bitrate, at least at the lower end of the bitrate spectrum. So what? The reality is that anyone who knows what they are doing does not do straight encodes; in nearly every case the source file has filters applied to it that negate any advantage the software encoder has in bitrate savings.

My current rig, as I said above, is an R5 1600, 8GB of RAM and a GTX1050. If I want to transcode anything, which I don't see the point of doing under most circumstances, I fire up Shotcut, load the source and nearly always apply filters (unless I'm just testing something) - and I use GPU-accelerated filters, like color correction, sharpening, brightness, contrast and/or temperature - and then I always encode using NVENC. I see no point in spending hours encoding using x264/x265 + a slow preset to try to gain 10% or less in bitrate savings instead of applying the filters and using a bit more bitrate and being done in a fraction of the time with a fraction of the electricity cost.

People who get all hung up on using x264/x265 because they are "the best" are people who do not know what they are doing; every single professionally produced video you see is always filtered, be it a Blu-ray or a video meant for a streaming service such as Hulu, Netflix or YouTube.

I have no intention of spending my hard earned cash to buy faster processors or faster RAM to try and get a few more fps of encoding speed using a software based encoder when in a few months I will be able to buy an RTX2050 (or whatever NVIDIA is going to name the GTX1050 replacement with raytracing) and get an updated NVENC build that will most certainly offer significantly faster encoding speed and better quality.
  9. Member Bernix's Avatar
    Join Date
    Apr 2016
    Location
    Europe
    Hi,
You will pay a lot of money for RT, and its real-world benefit is questionable. That's why prices from the 1080 Ti down to the 1050 Ti haven't gone down, at least here. Even the most powerful RTX cards today are working at their maximum in RT games to achieve acceptable results. It was supposed to be a big bang, but I'm not sure it was that big - expectations were much higher...
I don't know prices in other countries, but I suppose the situation is the same. As for the CPU, you will be forced to upgrade it as well if you use the PC for more than just GPU encoding; minimum CPU specs keep rising.


    Bernix
  10. Originally Posted by sophisticles View Post
For the sake of argument, let's assume that software encoders, such as x264 and x265, do in fact produce better results at a given bitrate, at least at the lower end of the bitrate spectrum. So what? The reality is that anyone who knows what they are doing does not do straight encodes; in nearly every case the source file has filters applied to it that negate any advantage the software encoder has in bitrate savings.
    The compression advantage does not suddenly 'disappear' when you apply a filter

The filtered source becomes the "new" source, and you can do your new tests on that. Guess what? The compression advantage is still there.
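If you want to test that yourself, the same kind of comparison script from earlier in the thread works. A rough sketch - the filenames and the filter chain are only placeholders: apply whatever filters you'd normally use, treat the filtered clip as the new reference, encode it with both encoders, then step through the interleaved frames:

Code:
LoadPlugin("LSMASHSource.dll")
src      = LWLibavVideoSource("source.mkv")
filtered = src.Sharpen(0.3).Tweak(cont=1.05)     # stand-in for your usual filter chain
# encode "filtered" with encoder A and encoder B outside this script, then load the results:
encA = LWLibavVideoSource("filtered_x264.mkv")
encB = LWLibavVideoSource("filtered_qs.mkv")
Interleave(filtered, encA, encB, filtered)       # compare each encode against the new (filtered) reference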

I see no point in spending hours encoding using x264/x265 + a slow preset to try to gain 10% or less in bitrate savings instead of applying the filters and using a bit more bitrate and being done in a fraction of the time with a fraction of the electricity cost.
    Sometimes it's ~30-80% more bitrate . If you used his test here, it works out to be about 1.5-1.6x more bitrate required in this testing scenario. For many it doesn't matter , they don't care if it's 2-3x the filesize. For some people it does. It doesn't matter what they choose - just stick to the facts .

    Please provide evidence to the contrary. All the evidence is against your assertions that advantages are suddenly "negated" when applying a filter .

You're confusing the facts with a personal bias. Just post the facts and data; they speak for themselves. Or are you someone who cannot see the differences here either? The proof is there, both subjective and objective. If you want PSNR/SSIM graphs and charts I can provide them too.

If something sucks, I'll say it sucks. x264 has significant weaknesses - have a look at fades, for example. It's quite weak on shadow detail too. It's just a lot better than the other AVC alternatives. Quality wise, way more pros than cons. Sure, it's slower than NVEnc or QS, but they just don't have the quality/compression ratio. I use NVEnc all the time for its speed.

    Most of the time, I don't bother with slow presets either - a waste of time. But that is personal choice based on the facts, not because of some anti-whatever bias

    and get an updated nvenc build that will most certainly significantly faster encoding speed and better quality.
    I hope it does - any word on that ? or b-frames for HEVC ?
    Last edited by poisondeathray; 12th Oct 2018 at 15:50.
  11. Originally Posted by poisondeathray View Post
    Originally Posted by sophisticles View Post
and get an updated NVENC build that will most certainly offer significantly faster encoding speed and better quality.
    I hope it does - any word on that ? or b-frames for HEVC ?
    Nvidia said Turing (RTX 20xx) has up to 25% HEVC and up to 15% AVC bitrate encoding improvement. Nvidia themselves are comparing it to x264 preset fast (though they don't explicitly say whether "it" is HEVC or AVC. I guess "streaming to Twitch/Youtube" implies AVC?).

[Attachment 46919 - encoder quality comparison chart from the NVIDIA whitepaper]

    https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/technologies/t...Whitepaper.pdf
  12. If you think about the claims, if they are true, then, at least for me, it effectively ends the debate about software vs hardware encoders. Taken one by one:

Encode HEVC 8K30 HDR in real time - Anyone who has even tried to do 4K HDR knows how slow it is; if these RTX based cards can really encode 8K30 HDR HEVC in real time, I think they will effectively kill the market for software based encoding.

HEVC up to 25% bit rate savings and up to 15% bit rate savings for H264, both compared to x264 fast. This claim is very interesting: saying that the new NVENC found on RTX cards offers that much bit rate savings is the same as saying that, at the same bit rate as x264+fast, RTX's NVENC HEVC offers up to 25% better quality and its AVC encoder offers up to 15% better quality, which means NVENC HEVC may come close to x264+slow and NVENC AVC may come close to x264+medium.

While I realize that these results will be very much source dependent, I do know that the developers of DaVinci Resolve have said their software will make use of the RTX's Tensor Cores to improve quality and performance, and it's been rumored that Adobe Premiere will also make use of the new features.

If NVIDIA sticks true to form, the reasonably priced RTX2050 and RTX2060 models (I'm assuming that's what they will be called) should be out early next year, and I will end up buying the GTX1050 equivalent. This time though I will buy a 4GB model, not the 2GB model. I was doing a test encode using the Meridian sample that Netflix created for codec testing, and after applying a couple of GPU-accelerated filters and using NVENC HEVC through Shotcut, I found that I was using 7.7GB out of 8GB of system RAM (I also had a bunch of Firefox tabs open), 5GB of swap and, with the exception of 48MB of VRAM, the entire video card's RAM was being used up.

Luckily Ubuntu is not Windows and I could still use the system without it locking up or freezing, and I was able to shut down Firefox and free up some system RAM (though about 5.5GB was still in use), but I will definitely spring for the larger buffer this time around.
  13. Originally Posted by sophisticles View Post
    HEVC up to 25% bit rate savings and up to 15% bit rate savings for H264, both compared to x264 fast.
    I think the 25% and 15% respectively are meant in comparison to Pascal. But granted, the paper isn't all that explicit on that either.
  14. Originally Posted by sneaker View Post
    Originally Posted by sophisticles View Post
    HEVC up to 25% bit rate savings and up to 15% bit rate savings for H264, both compared to x264 fast.
    I think the 25% and 15% respectively are meant in comparison to Pascal. But granted, the paper isn't all that explicit on that either.
Rereading the chart, I believe you are right: it shows Pascal losing to x264 fast, which in turn loses to Turing. So the percentage improvement is relative to Pascal, which makes Turing slightly better than x264 fast.

Still, I think most users would be happy with x264 fast quality, which should be roughly equal to x265 ultrafast quality, if it means they can encode 1080p video at 300+ fps (I can hit that with a GTX1050 with the right ffmpeg settings) with x264 fast quality.

    BTW, did anyone else notice 2 new features that seem very exciting:

AI SUPER REZ
AI Super Rez increases the resolution of an image or video by 2x, 4x or 8x. Unlike traditional filtering methods which stretch out the existing pixels and filter between them, AI Super Rez creates new pixels by interpreting the image and intelligently placing data (see Figure 26). This results in a sharper enlargement that correctly preserves the depth of field and other artistic aspects. The video super-resolution network, which is highly optimized, can run in real-time (~30 fps) for 1080p to 4K upscaling, with PSNR 1-2 dB higher than bicubic interpolation.

DENOISING FILTERING
In addition to acceleration structures that aid in improving ray tracing performance, various advanced filtering techniques can also improve performance and image quality without requiring additional rays to be cast. One such filtering technique is called denoising. Denoising can significantly improve the visual quality of noisy images that might be constructed of sparse data, have random artifacts, visible quantization noise, or other types of noise. In fact, many types and causes of image noise exist, and similarly many types of denoising methods also exist. Denoising filtering is especially effective at reducing the time ray-traced images take to render and can produce high fidelity images from ray tracers that appear visually noiseless.
Currently, NVIDIA is making use of both AI-based and non-AI-based algorithms for denoising, choosing whatever is best for a particular application. In the future we expect AI-based denoising to continue to improve and replace non-AI-based methods.
  15. Member
    Join Date
    Jan 2007
    Location
    Canada
and get an updated NVENC build that will most certainly offer significantly faster encoding speed and better quality.
    I hope it does - any word on that ? or b-frames for HEVC ?
    Quick Sync HEVC does, now, include all the I, P, B-frames. I'll be redoing my tests in another thread with the Intel UHD 630, y'all can debate the results there when done.

    Just for the sake of argument, have a look at some comparisons of current CPU benchmarks with regard to encoding:
    http://www.legitreviews.com/intel-core-i7-8700k-core-i5-8400-processor-review-coffee-lake_198473/6

My current system is an i7-4930K OC'd to 4.6GHz. I could sell it (best offer I've got is $450 CAD) and buy a Threadripper system - 16+ cores, motherboard, DDR4 RAM, etc. - and we're looking at a complete rebuild and over $3000 for the new system. All that just to improve my encoding frame rate by what, ~33 fps? (I benched my system using the same software and settings as listed in the article. For the Fast 1080p30 setting I got 49.03 fps vs. 82.63 fps for the Threadripper 1950X: 166.67% more cores/threads just to gain 68.53% more fps. For the Legacy Normal setting I got 72.13 fps vs. 119.0 fps for the 1950X: 166.67% more cores/threads just to gain 64.98% more fps.)

    Instead, I opted to upgrade my NAS system. An Intel i7-8700K that can serve as my NAS, streaming, and dedicated encoding machine. So I can now use 2 separate 6 core/12 thread systems, at least doubling my current encoding capabilities, for less than $1000. I'll be replacing my current i7-4930K with a 12 core/24 thread Xeon soon and relegating its duties to the NAS and moving the i7-8700K to my new daily user.

    Yes, I think that there is a huge bias as to the capabilities of Quick Sync.

    I also think it's akin to what a person drives to work with. Both a Lamborghini and a Pinto will get you from home to work and back again, in all likelihood at the same pace (assuming responsible driving, abiding by speed limits, traffic lights, etc.) However, the guy in the Lamborghini is all flash and no class (which, technically, the bank owns because of the loan) while the Pinto driver owns his car and is confident enough in his own life that he doesn't need any "compensating" toys.

    Go ahead spend all the hours you want encoding a video a hundred times over just to figure out that one tiny setting may save you 0.00001% bitrate. I'd rather use the extra 5% bitrate (or whatever it may need) to get 99.5% of the quality of the original and spend my days doing something more useful than lounging around a forum all day, complaining about what others do.

    Another note on x264's Fast or Very Fast settings. Where is it said that this setting is set in stone? For all us low level consumers, the Very Fast setting of today's x264 encoder could very well be the Placebo setting of just 1 or 2 years ago. The encoder has changed over the years; it's SOFTWARE so it can change virtually overnight.

    Maybe we can all play nice now. I doubt it though; some people just argue for the sake of arguing, they really don't say anything.

I am curious though: for people who can become experts in certain fields, why do they always limit themselves to such a meaningless task as video editing/encoding? With such brain power even I could think of several hundred better things to do, like finding a cure for cancer, solving the global warming issue, quantum physics. Do something useful.

Another question I have: why are all tests compared to x264 fast or very fast? With all the advancements in technology you'd think they would move up the ladder and target higher quality presets like Slow or Slower. For instance, Quick Sync: why not just build it so that it can, by default, produce a higher quality result (as compared to x264)? After all the iterations since Sandy Bridge, why not just improve it into very high quality hardware? But then again, it's Intel. They'd been monkeying around for years with their core counts until Threadripper came along and, well, put an end to that.
    Last edited by ziggy1971; 19th Oct 2018 at 23:36.
  16. Member
    Join Date
    Jan 2007
    Location
    Canada
Since posting facts seems to be an issue here, perhaps someone could post an x264 setting that will produce a very good quality output and won't take 3 days to encode a 30 minute episode of a TV show.




As for lossless encoding: I used CineForm with Film Scan 2, the highest level available in VirtualDub, to encode the same source files I used here, and there is clear evidence that it is NOT a lossless codec according to the definition described above. Yet it's the codec used in GoPro cameras (don't quote me on this, I don't have a camera such as the GoPro so I'm not positive what's available on those systems).

    I don't use Lagarith or HuffYUV. I'm not experienced enough to comment on those codecs.
    Last edited by ziggy1971; 19th Oct 2018 at 23:44.
  17. Originally Posted by ziggy1971 View Post
Another question I have: why are all tests compared to x264 fast or very fast? With all the advancements in technology you'd think they would move up the ladder and target higher quality presets like Slow or Slower. For instance, Quick Sync: why not just build it so that it can, by default, produce a higher quality result (as compared to x264)? After all the iterations since Sandy Bridge, why not just improve it into very high quality hardware? But then again, it's Intel. They'd been monkeying around for years with their core counts until Threadripper came along and, well, put an end to that.
Certain encoding complexities do not lend themselves to parallelization on a GPU. The GPU is better at certain tasks than the CPU, and vice versa. Why can't I play my video games at 120+ FPS at UHD on a CPU? Similar reasons.

In fact, the tests where the Intel MSS HEVC encoder (not Quick Sync) scores better than x265 do so because it's actually primarily a CPU-based encoder, with some GPU acceleration on the side. As a result, its FPS is much lower than the consumer HEVC Quick Sync version's.

The more parallelization and the more threads, the fewer redundancies can be exploited - and that is the bottom-line basis of modern codec compression. Thus compression efficiency is negatively impacted. The absolute highest quality is always with 1 thread, but that's just not feasible in most situations (way too slow), and the difference is usually negligible for low thread counts, except in very low bitrate scenarios.

And now the advancements are in new codecs, like AV1, which just wipe the floor with x264 and x265 but are much slower. I'd love to see some GPU acceleration there.



    Originally Posted by ziggy1971 View Post
Since posting facts seems to be an issue here, perhaps someone could post an x264 setting that will produce a very good quality output and won't take 3 days to encode a 30 minute episode of a TV show.
If you can't see the difference, why even bother? If you're happy with QS or NVEnc, why not just use that? Just set and forget; don't tinker so much if you have better things to do.

    Or why even bother encoding at all ? HDD's are becoming less and less expensive . I just copy my BD's to external HDD and play off that - even faster than NVenc or QS, even less energy consumption. And no quality loss .

    Or what's wrong with the other settings you used earlier ? Or even "default" with tunings ? Even default settings will yield higher quality/compression ratio than QS, I think that's what you used in the other thread.

    Or just use a bit higher bitrate, or bit lower crf, or slower settings, etc....


As for lossless encoding: I used CineForm with Film Scan 2, the highest level available in VirtualDub, to encode the same source files I used here, and there is clear evidence that it is NOT a lossless codec according to the definition described above. Yet it's the codec used in GoPro cameras (don't quote me on this, I don't have a camera such as the GoPro so I'm not positive what's available on those systems).
That's right, it's not a lossless codec (it's known as a "visually lossless" codec, which isn't quite truly lossless).

But the official version doesn't support 4:2:0 either; you would be upsampling to 4:2:2. The official version does not support 8bit either, so you would be upsampling the bit depth to 10 or 12 (BD is 8bit 4:2:0). All those conversions and manipulations mean it's technically not lossless either (unless they are up/down sampled in a specific way).

You could use something like the UT Video codec or Lagarith if you wanted lossless files. Official Huffyuv does not support 4:2:0, so that technically would not be lossless unless you upsampled using nearest neighbor and downsampled back down to 4:2:0 using a nearest neighbor algorithm as well.
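To illustrate the nearest-neighbor point: in Avisynth+ (or 2.6) the chroma resampler can be forced to "point" (nearest neighbor), so a 4:2:0 -> 4:2:2 -> 4:2:0 round trip just copies the chroma samples back and forth. A rough sketch, assuming a YV12 source and the chromaresample parameter of the convert functions; Subtract lets you check whether the round trip really came back untouched:

Code:
src  = LWLibavVideoSource("source.mkv")               # 8bit 4:2:0 (YV12) source
up   = src.ConvertToYUV422(chromaresample="point")    # pad chroma for a 4:2:2-only codec
down = up.ConvertToYV12(chromaresample="point")       # back to 4:2:0 after decoding
Subtract(src, down)                                   # a flat grey frame means nothing was altered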
  18. Member
    Join Date
    Jan 2007
    Location
    Canada
    As for lossless encoding, my work or play isn't that important to be of serious concern so CineForm is good enough for me.

    I think, perhaps, you touched on something I know virtually nothing about; color spaces. I have no idea what color spaces are being used, converted to and from, what software uses what color spaces and so on.

    MeGUI adds ConvertToYV12() at the end of the script before transcoding to x264, why, I couldn't tell you. Perhaps some explanation may help.

I will be encoding some SD video in the near future; it's the old MASH series. It's 11 seasons long and I would like to get really good quality on that one especially. The rest are so-so, not really all thaaaaat important.

    Suggestions?

As for the x264 encoder settings I used for the testing in this thread, well, that was more or less just winging it without knowing what it all means. For all I know, it could have been high quality x264 settings vs mediocre settings in Quick Sync. I just threw them together, hoping to gain some insight, learn a few things and move on. It was not meant to become WWIII by any means.

I liked my other thread on Quick Sync HEVC better. Just some opinions, insight, pros and cons, etc., reviewing the aspects of the encoder - not biases, utter dislikes/hatred and so forth. Just good, clean debates, which is what I hope for with the new HEVC encodes I'll post soon (also in 10 bit, at least that's what the box says).
  19. Originally Posted by ziggy1971 View Post
    As for lossless encoding, my work or play isn't that important to be of serious concern so CineForm is good enough for me.

    I think, perhaps, you touched on something I know virtually nothing about; color spaces. I have no idea what color spaces are being used, converted to and from, what software uses what color spaces and so on.

    MeGUI adds ConvertToYV12() at the end of the script before transcoding to x264, why, I couldn't tell you. Perhaps some explanation may help.
If you are coming from a CineForm source, that's why it's added. You will likely be decoding it at 4:2:2, but your final format goal is 4:2:0. Most distribution sources are 4:2:0.

So all that is unnecessary if you have sources like BD or DVD, or even web sources like youtube and vimeo - they are all 8bit 4:2:0. Not only does going back and forth through those conversions add processing time, it introduces avoidable quality loss if it's not performed with a nearest neighbor algorithm (if you just plugged it into vdub and let CineForm do everything, it's not using nearest neighbor).

What you should be doing is using the source directly instead of a CineForm intermediate. The commonly used GUIs like MeGUI and StaxRip can use Avisynth, and you can load the source directly. Also, the CineForm version will be much larger than the original. (CineForm is useful for other workflows where you use other programs like video editors, compositing, effects or grading, but not for straight encoding or light filtering.)
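In script form that just means pointing the source filter at the original file; no ConvertToYV12() is needed when the source is already 8bit 4:2:0. A minimal sketch with placeholder names:

Code:
LoadPlugin("LSMASHSource.dll")
LWLibavVideoSource("episode_source.mkv")   # BD/DVD/web sources are typically already YV12 (8bit 4:2:0)
# ...any light filtering goes here...
# no ConvertToYV12() needed - feed this script straight to x264 via MeGUI/StaxRip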



I will be encoding some SD video in the near future; it's the old MASH series. It's 11 seasons long and I would like to get really good quality on that one especially. The rest are so-so, not really all thaaaaat important.

    Suggestions?
    I honestly wouldn't even bother, I'd just stream copy it . But some devices can't play dvd/mpeg very well so....

Otherwise, use decent settings, even default settings with tunings (e.g. "film"). Use a lower crf if you want higher quality. Use some slower presets if you don't mind slower encoding. For me the best switch is crf; I tend to use a lower crf and larger filesizes. I don't bother with things like "very slow", but that's just me. In the end, more bitrate/lower crf is always the most effective for quality.

If the source has issues, you might need to prefilter it (e.g. if the source was noisy, maybe denoise it first, etc...); that can help compression immensely.
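As a concrete example of that kind of prefiltering, a mild temporal denoise before encoding can help compression a lot on noisy sources. A rough sketch, assuming the commonly used FFT3DFilter plugin (the filename and the strength values are placeholders to tune per source):

Code:
LoadPlugin("LSMASHSource.dll")
LoadPlugin("fft3dfilter.dll")                    # external denoiser plugin
src = LWLibavVideoSource("noisy_source.mkv")
# light spatio-temporal denoise; raise sigma for heavier grain, keep it low to avoid smearing detail
src.FFT3DFilter(sigma=1.5, bt=3)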

    Or if I need it fast with ok watchable quality, I'd use NVEnc , just my personal preference
  20. Member
    Join Date
    Jan 2007
    Location
    Canada
Hard drives aren't becoming cheaper. Over 6 years ago I purchased 4 x 1TB Western Digital Black hard drives for $79.99 each. Ever since then I have yet to see them drop in price below $99.99 each. (Yes, they are all still running, and all approaching 50,000 power-on hours.) The best price I got on 3TB Seagate hard drives was over 4 years ago; now they are more expensive than they were back then.

I was thinking of just using the source vob/mpg files, but as far as I know they don't support subtitles that my media players can read. I also tried muxing the m2v, ac3/dts and idx/sub files into an mkv container; however, I think that introduces aspect ratio errors (not exactly clear on that). So I was thinking of ripping them and encoding to h/x264 to make them playable on all my media streaming devices.

HEVC would be an option if my media players had better support for it. As it is currently, playing HEVC requires a lot of CPU and battery power on devices such as tablets and phones; it's fine for my TV as it has a hardware decoder. Hardware decoding would be nice, but it's not implemented in my other devices.

With a series such as MASH, since I want a very high quality result, I really don't care if the h/x264 encoded files end up being the same file size as the original vobs. Ripped vobs don't play properly; mkv with ac3/dts and idx/sub files plays perfectly fine everywhere I need it to.

    As for CineForm, I usually only use it to create custom "fan clips" applying some other effects and stuff.
  21. Originally Posted by ziggy1971 View Post
Hard drives aren't becoming cheaper. Over 6 years ago I purchased 4 x 1TB Western Digital Black hard drives for $79.99 each. Ever since then I have yet to see them drop in price below $99.99 each. (Yes, they are all still running, and all approaching 50,000 power-on hours.) The best price I got on 3TB Seagate hard drives was over 4 years ago; now they are more expensive than they were back then.
    price/GB, sure they have

    You can get a cheap external 8TB, usb3 with enclosure + cables, free shipping for about $180 CDN (I've seen it as cheap as $160 at the end of summer). I bought a few over the year. Prices fluctuate a bit, and we get shafted on exchange rates, but the same 8TB external 2 years ago was ~ $240 . That's my strategy these days, just external HDD's. Easy to backup (multiple backups of whatever, data, projects, videos), easy to plug in to TV's/media players , and a lot less clutter than 10,000 discs with cases. The time saved alone swapping discs and burning is worth it. 5TB used to be the "sweet spot" , but 8TB seems to be it now


I was thinking of just using the source vob/mpg files, but as far as I know they don't support subtitles that my media players can read. I also tried muxing the m2v, ac3/dts and idx/sub files into an mkv container; however, I think that introduces aspect ratio errors (not exactly clear on that). So I was thinking of ripping them and encoding to h/x264 to make them playable on all my media streaming devices.

HEVC would be an option if my media players had better support for it. As it is currently, playing HEVC requires a lot of CPU and battery power on devices such as tablets and phones; it's fine for my TV as it has a hardware decoder. Hardware decoding would be nice, but it's not implemented in my other devices.

With a series such as MASH, since I want a very high quality result, I really don't care if the h/x264 encoded files end up being the same file size as the original vobs. Ripped vobs don't play properly; mkv with ac3/dts and idx/sub files plays perfectly fine everywhere I need it to.
    I completely agree there - it depends on which target devices you're expecting to be playing it on .

The other streams matter too - some devices won't play AC3 or DTS or idx/sub. So do some tests before you waste a bunch of time.
  22. Member
    Join Date
    Jan 2007
    Location
    Canada
Yeah, I haven't looked at 8TB drives. When I copy 3TB of data onto another drive and consider the time it takes, it's a lot of data to lose (if one dies), so I haven't been able to trust anything above 3TB yet. All my drives are internal drives.

    I see you're listed as in Canada so take a look at this:
    https://www.bestbuy.ca/en-ca/product/western-digital-wd-my-book-pro-10tb-usb-3-0-deskt...10416656.aspx?

and that's just a simple 2-bay NAS system. For that price I built a pretty decent 2nd PC with 8-drive capacity plus 2 SSDs. Yes, the total cost was about $100-200 more, but the expansion capability (8 drives as opposed to 2) is well worth it. Currently I have 8x 3TB Seagate IronWolf drives plus a Samsung 120GB SSD as the system drive. I had a cheaper i3-6100 in it before for NAS duties, but I decided to upgrade the motherboard and CPU so I could also use it to encode videos without tying up my daily PC. I just built it a few days ago, so I haven't done any real encoding yet. I may have to get another liquid cooler, I'm not sure yet.

As for the media files, AC3/DTS/TrueHD all work with my media player via streaming - except for one format, E-AC3; my media player will not play files with DD+ or 6.1 audio. The video plays, but the audio is silent and my receiver doesn't show that anything is getting to it. It's weird, but OK.
    Last edited by ziggy1971; 20th Oct 2018 at 11:46.
  23. Originally Posted by poisondeathray View Post
    What I'd like to see is HEVC encodes from the Intel Media SDK . They made it free a while back for Windows ( Intel Media SDK 2018 R1, I think there is an R2 now). This is not the same thing as the general quicksync that everyone uses; this is the full version with bells and whistles hardware+software encoder that beats x265 according to PSNR/SSIM tests from MSU (PSNR/SSIM aren't great tests , but the fact that it scores higher would make it interesting to look at) . If you can get that working, that would be interesting. It's supposed to be difficult to get running
Here's the thing: I think Intel has intentionally downplayed the Intel Media SDK, on both Windows and Linux, and here's why I think that. There was a time you could build ffmpeg with QSV support on Linux - Red Hat based distros were supported, such as CentOS, and so was Ubuntu - but now it's a convoluted mess to try and build ffmpeg with QSV support on Linux. You need a specific version of CentOS in order to even install the Media SDK on Linux (and believe me, I have tried every hack I could find to work around the OS detection routine), and even if you grab the supported version you still need a specific kernel revision based on the 3.xx branch, you can't update the OS or else it breaks functionality, and you need to use a specific graphics driver that's a pain in the ass to get working.

You can use the open source graphics stack that Intel provides, but this only allows you to use the VAAPI interface, which is meant to be generic (you can use it with AMD's GPUs also, even though it's developed by Intel) and it only supports a subset of all the Quick Sync features.

    Reading through the Intel developer forums you get conflicting answers from Intel engineers, with one claiming that Intel doesn't support the latest Media SDK with anything newer than Sky Lake and others claiming that he is wrong, others claiming that it only supports the "pro" graphics on Xeons and so on.

    Further adding to the mess is that vp9 encoding is not supported on Windows, only on Linux (presumably BSD as well) and only through vaapi.

I have spent a lot of free time getting VAAPI working on Ubuntu with a now-dead i7100, and I have come to one conclusion: Intel, and AMD for that matter, does not want the general public using their hardware encoders - and if you think about it, why would they?

    CPU's have gotten to the point where for most tasks they are "fast enough", there's really very little reason for most people to upgrade their systems and video encoding is one of the few things that still drive upgrades. Why would either Intel or AMD promote a technology that effectively ends the upgrade cycle?

Coffee Lake's iGPU supports HDR encoding, supposedly higher quality encodes, and a bunch of other features - but have you ever seen any review that makes any real mention of QS? Once in a while I will see a review where they do a half-assed test with QS via HandBrake or a similar app, but never any quality comparisons, never any in-depth talk about the features. Intel itself barely mentions it in their marketing materials, usually just to say it's there.

AMD does the same thing. Have they ever mentioned that their APUs, like the 2400G with its Vega based iGPU, are capable of hardware encoding? They barely support it via AMF and only on Windows, though rumor has it that you can make use of it via VAAPI on Linux.

I honestly think that both Intel and AMD developed these hardware encoders for their enterprise customers. I strongly suspect that the hardware vp9 encoder was made just for Google, and QS has filtering capabilities that no app I know of exposes.

    In a nutshell, I think NVIDIA is the average consumers only hope for a viable hardware encoding solution, because they would not be competing with themselves by releasing a hardware encoder.
  24. Originally Posted by sophisticles View Post
    Reading through the Intel developer forums you get conflicting answers from Intel engineers, with one claiming that Intel doesn't support the latest Media SDK with anything newer than Sky Lake and others claiming that he is wrong, others claiming that it only supports the "pro" graphics on Xeons and so on.
The Intel SDK HEVC encoders that approach "CPU" encoders quality-wise in the objective metrics are actually primarily CPU encoders themselves. The speed is way down, not much faster than CPU only (I've seen some reports of ~20fps for 1080p on an i7-7700K in quality mode). NVEnc encodes HEVC about 50x faster (!), albeit with quality way down. And I haven't seen the actual test material, only numbers - which are pretty much meaningless. SSIM only has limited correlation; it's not that useful for quality testing by itself and has to be combined with a bunch of other things for the whole analysis. But that's why I'd like to see more testing in that area, because there isn't much. Then again, with early testing it looks like AV1 is going to steamroll everything, so maybe it would be a waste of time.


    CPU's have gotten to the point where for most tasks they are "fast enough", there's really very little reason for most people to upgrade their systems and video encoding is one of the few things that still drive upgrades. Why would either Intel or AMD promote a technology that effectively ends the upgrade cycle?
I agree to some extent, especially with the no-reason-to-upgrade comment, but for other reasons too. Video encoding is a really small piece of the pie; it's not a main driver. Our perspectives are massively skewed here because it's a video site. The majority of the public, or even of people making CPU/system purchases, don't do any encoding. At all.

The majority of the general public don't care much about video or audio quality. Youtube is perfectly fine. Facebook video is perfectly fine. A 128kbps "tin can" mp3 is fine. So in that respect, the superfast encoders like NVEnc are perfectly adequate. Speed is more important.


Coffee Lake's iGPU supports HDR encoding, supposedly higher quality encodes, and a bunch of other features - but have you ever seen any review that makes any real mention of QS? Once in a while I will see a review where they do a half-assed test with QS via HandBrake or a similar app, but never any quality comparisons, never any in-depth talk about the features. Intel itself barely mentions it in their marketing materials, usually just to say it's there.
None of the typical hardware review sites do adequate video encoding testing, to the level that would satisfy an enthusiast. They are more concentrated on games and some applications. But in-depth testing would be dozens of pages long and take days/weeks; it will never happen on those sites. They just run premade benchmark suites and print fancy graphs.



    In a nutshell, I think NVIDIA is the average consumers only hope for a viable hardware encoding solution, because they would not be competing with themselves by releasing a hardware encoder.
There is a distinction for NVEnc between the consumer and enterprise space too. The consumer GTX/RTX cards are artificially limited to 2 streams per card by drivers; Quadros are not (even if the hardware is essentially identical within the same generation). It's the same for 3D applications: the consumer version, even if it has identical HW specs, runs much slower in professional apps like CAD and 3D visualization software.


