Hey, you have a Skylake!!! I hate to impose, but could you do this forum a big favor? Recently MSU did their annual codec comparison and found that Intel's Quick Sync, used via Intel Media Server Studio, gave superior quality to x264+placebo+tune ssim and to x265+veryslow+tune ssim (when the Intel encoder was configured for the highest possible quality).
The question is: do the implementations of Quick Sync used by most users also provide similar quality?
Do you think you could run some encoding tests with StaxRip? Use any source you want (the higher quality the better), pick a decent bitrate (like 5 Mbit/s for 1080p), run encodes with x264, x265 and Quick Sync (configured for the highest quality), then post the samples for us to see and report the encode times?
I know I'm asking a lot but I also know that such a test would attract a ton of interest; it would also help me decide whether to put together a cheap encoding pc based on a Skylake.
Sorry to impose.
-
-
Hi, that sounds like at least 2-3 evenings for the tests. I'm afraid that's a bit too much.
If you could provide a batch-script that does the tests, I would gladly run them.
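Since you offered: here is a minimal sketch of such a test harness in Python (the clip name, preset list and CRF values are placeholders; the command syntax is the stock x264/x265 CLI, not StaxRip's):

```python
import itertools
import subprocess

SOURCE = "clip_1080p60.y4m"  # placeholder name; any short raw/y4m clip

PRESETS = ["ultrafast", "fast", "medium", "slow", "veryslow", "placebo"]
CRFS = [18, 23, 28, 33]

def build_cmd(encoder, preset, crf):
    """Build one encode command (stock x264/x265 CLI syntax)."""
    out = f"{encoder}_{preset}_crf{crf}.raw"  # raw bitstream output
    return [encoder, "--preset", preset, "--crf", str(crf), "-o", out, SOURCE]

def run_all(dry_run=True):
    """Print (or actually run) every encoder/preset/CRF combination."""
    for enc, preset, crf in itertools.product(["x264", "x265"], PRESETS, CRFS):
        cmd = build_cmd(enc, preset, crf)
        print(" ".join(cmd))
        if not dry_run:
            subprocess.run(cmd, check=True)  # requires the encoders in PATH
```

Timing each `subprocess.run` call and parsing the encoder's own bitrate report would give you the two axes of your graphs.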
In the meantime (and a little to distract you), I have run some encoding-tests comparing x264 and x265.
For this, I encoded the same 10-second-video (1080p60, Sony HDR-AS200V) with about 200 different settings:
As you can see, x265 and x264 behave very differently. While the bitrate goes down with processing time in x264, it goes up in x265.
Currently, VQMT is measuring the MS-SSIM value to create a nice overview of quality vs. bitrate. Let's see where this goes.
From the first glance at the encodes:
Ultrafast with CRF 33 creates about the same quality as placebo with CRF 40. But ultrafast uses 67 kbit/s, while placebo needs only 30. -
The conclusion is simple: CRF is not a measure of quality. The same CRF with different settings will produce different quality. That fact has been known since the early days of x264, but many people ignore or have forgotten it. The same goes for bitrate/filesize: there is no direct correlation between CRF and presets, or between presets at the same CRF.
The only thing slower presets provide is a better compression ratio. The slower the preset, the better the quality you get for the resulting bitrate/filesize. In other words, you get better quality per bit used.
In your own test you need a quality measure. CRF is not one, so you cannot draw a conclusion from it.
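Using the ultrafast/placebo bitrates quoted earlier in the thread (67 vs. 30 kbit/s at roughly equal quality), the "quality per used bit" point is just arithmetic; a quick sketch:

```python
# Bitrates reported above for visually similar output
ultrafast_rate = 67.0  # kbit/s, ultrafast @ CRF 33
placebo_rate = 30.0    # kbit/s, placebo  @ CRF 40

# Fraction of bits the slower preset saves at (roughly) equal quality
savings = 1 - placebo_rate / ultrafast_rate
print(f"placebo needs {savings:.0%} fewer bits")
```

So at equal quality, the slower preset here spends a little over half the bits, which is exactly the "better compression ratio" claim.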
Although x264 and x265 have similar names and similar settings, the encoders are very different, their presets are very different, and there is no direct correlation between their settings, values and results.
Last edited by Detmek; 21st Sep 2016 at 05:03.
-
Well, I did not use StaxRip, but QSVEnc, to do some tests.
The encoding is surprisingly quick. Even at setting "best", the encode ran at about 30fps. That did not feel right.
Well, and the results are very bad as well.
Please compare yourself:
- all images are from the same source: 1080p60 action-cam
- all videos are encoded at about 16 Mbps
- all are zoomed to 200%
x265, crf 19, ultrafast, encoded with 40fps
I am pretty bad at judging these images.
IMO, x265 added too much noise to the surfaces, while QSVEnc lost more details.
Just for comparison:
x265, crf 20, placebo, encoded with 0.25 fps
Is there anything I could have done wrong with QSVEnc?
Last edited by Epaminaidos; 23rd Sep 2016 at 14:23.
-
@Epaminaidos
Just to make sure: the QSVEnc video is H.265 and not H.264? You might also want to post source images, as we have no idea what quality of image the encoder was given. -
BTW:
The quality-measurements just finished:
SSIM:
95.35% for QSVEnc
95.69% for x265 - ultrafast
96.77% for x265 - placebo
With MS-SSIM, QSV is a bit better.
98.53% for QSVEnc
98.41% for x265 - ultrafast
99.06% for x265 - placebo
But that still does not seem right:
The best quality QSVEnc can deliver seems to be only as good as x265's ultrafast preset...
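For anyone who wants to sanity-check such numbers independently of VQMT, here is a single-scale, global SSIM sketch. Note the assumption: VQMT computes SSIM over a sliding window, so its values will differ somewhat; this only illustrates the statistic itself.

```python
import numpy as np

def global_ssim(x, y, L=255.0):
    """Global (non-windowed) SSIM between two grayscale frames,
    given as float arrays with values in [0, L]."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Feed it luma planes extracted with ffmpeg (or any YUV reader); identical frames score exactly 1.0, and every distortion pushes the score below that.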
Whoever wants to check the results himself, please be my guest:
https://www.dropbox.com/sh/rdgrht7nxgzbv7y/AADqT707e_gUg3jH90JMyTn-a?dl=0 -
IMHO, to properly perform a codec comparison, multiple video clips (with different characteristics) must be used, and the source should be as high quality as possible - without compression and quantization distortions - which implies using an ultra-high-definition source resized down by 1/4 or more if possible; 8K downsized 16x should be relatively free of most compression artifacts.
To judge codec quality, good methodology is vital... -
MSU doesn't even seem to use downscaled 4K-8K sources. Also, Epaminaidos' source was a 50 Mbit H.264 1080p video, which seemed to come right off the device. The only thing that could have been better would be to use a camera with better lenses and sensor, rather than a GoPro. But the video is still pretty good anyway.
-
Well, this is not about an objective codec-comparison, but about the best quality I can get in my use-case.
I read the MSU-Codec-Comparison. There, Intel MSS HEVC Encoder is superior to x265 in almost all settings in terms of "Quality per bitrate".
Thus I tried it myself (hoping that QSVEnc uses the Intel-Encoder).
And the result does not at all confirm the results by MSU: QSV's best setting can barely compete with x265 on ultrafast.
And I have no idea, why...
@KarMa: Yes, the footage comes straight from the device (Sony HDR-AS200V). The only thing I did was trim it to 10 s without re-encoding. -
MSU used very specific settings. For x265 they used single pass ABR mode. That's probably severely crippling x265. For hevc_qsv they used mfx_transcoder.exe + mfxplugin64_hevce_gacc.dll. I don't know if QSVEnc uses the same libs and settings. You can see the command lines used for each encoder in MSU's report.
I suspect Intel paid for this report. And they got what they paid for. -
Well... so prepare to be misled - or use the same source as MSU.
And which QSV implementation - software, hybrid, or pure hardware? Intel provides various encoding tools (and I assume the software one may aim for high quality where the hardware one is focused on speed)...
And source quality is not about bitrate but about lossy compression already performed - things not visible but present in the signal. There is no brain inside a codec - it is strict math, and that math may be pushed into particular decisions by things not normally perceived.
From my perspective, x265 may behave differently than expected and differently than research sources claim - settings that should slow encoding down may work exactly the opposite way. My strong conclusion is that H.265 is a very young codec and the x265 implementation is far from reliable enough to compete with x264 in quality-focused (computationally expensive) encoding. -
I find your theory fascinating and wish to subscribe to your newsletter.
Please explain to me the "meat" of your theory, in particular:
MSU has been conducting these tests for at least a decade and Intel has participated in them since Sandy Bridge was released; why then did Intel wait until this iteration to bribe MSU (as your theory requires) and not bribe them earlier?
Why didn't Nvidia and AMD offer greater bribes for this or previous tests in order to place higher in the standings?
When did Moscow State University decide to conduct tainted testing in exchange for money?
There was one time about 10 years ago that Main Concept's H264 won the test, did they also bribe MSU?
How about all the times that x264 won, did they also bribe MSU for that winning streak?
When x265 won, did they also bribe MSU? If so, then why did neither the x264 team nor the x265 team offer a bribe?
Did Intel bribe the x264 team and the x265 team this time around? MSU asked each participant what settings they wanted used for this test and the x265 team chose a single pass ABR with preset very slow and tune ssim, did Intel bribe them into asking for those settings to be used?
Did Intel also bribe the x264 and x265 teams into keeping their mouths shut? Neither one has said a word about this test, even when they were asked for a statement to be included in the final report.
Lastly, where does "Kingsoft", a company that I have never heard of, come into play? They also beat x265 and x264 and in some tests beat Intel as well, did they offer a bigger bribe?
So, do you want to double down on your theory, or do you perhaps want to embrace the theory that Intel, which spent 11.7 BILLION DOLLARS in 2015, finally developed a hardware codec that beats pure software codecs? -
Uh oh, Jagabo seems to have triggered the guy who said "x265 is a steaming pile of garbage that shouldn't be used by anyone for anything, unless you have a raw sewage fetish." In this very thread mind you. Bait which everyone surprisingly managed to live with quietly.
Did Intel go out of their way to rig the test? Idk, probably not, but this won't be proven one way or the other on Videohelp. However, it doesn't help Intel's case that they have been known to be anti-competitive and to practice intimidation to undercut AMD, for which Intel ended up paying nearly 3 billion dollars in fines (in the US/EU). That can make everything Intel has a hand in suspect right off the bat. Still, I'm interested in comparing videos encoded with Intel's hardware.
Should point out that there is visible IPB quality pulsing on all the videos, when I use subtract(A,B), so comparing any arbitrary frames is not ideal.Last edited by KarMa; 25th Sep 2016 at 09:05.
-
The thought that Intel rigged this test requires us to make the following assumption:
That Intel was able to coerce a well-known Russian university into complying with its demands.
Once you embrace that scenario, it casts a suspicious light on all previous tests where a software encoder won. It begs the question why Nvidia and AMD did not also try to influence the results, it makes one wonder why Intel waited so long to mess with the results, and why all the other participants haven't cried foul.
The tests the poster did with QSVEnc are deeply flawed for many reasons:
His source is mediocre at best; GoPro cameras are not exactly top-of-the-line video equipment.
He only used a 10-second clip. At 30 frames per second that's 300 frames, meaning with a GOP of 250 frames there are only one or two I-frames in the entire clip for the other frames to reference. This does not give the analysis algorithms each codec uses an adequate chance to get the best picture quality. For proper tests the source should be of pristine quality and, more importantly, previously uncompressed or losslessly compressed, or at the very least an intra-frame-only source, like AVC Intra or MPEG Intra or something similar.
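The frame arithmetic behind that point, assuming a fixed keyframe interval of 250 and no extra scene-cut I-frames (both assumptions; real encodes usually insert more):

```python
import math

clip_seconds, fps, keyint = 10, 30, 250
frames = clip_seconds * fps            # 300 frames in the clip
# Encoders place an IDR at frame 0; with a keyint of 250 the next one
# lands at frame 250, so a 300-frame clip gets only two keyframes.
keyframes = math.ceil(frames / keyint)
print(frames, keyframes)
```

With a 10-second clip nearly every frame sits inside a single GOP, so the rate-control and lookahead machinery barely gets exercised.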
But the biggest mistake is using QSVEnc; this implementation of Quick Sync is severely limited, making use of only some of the settings QS is capable of.
And let's not forget that the screenshots he offered as evidence have been upscaled 4x, which greatly distorts the results - and in all fairness, I can't see that much difference between any of the screenshots anyway.
I appreciate him taking a whole 10 seconds out of his busy schedule to run a test, but it would have been nice if he had spent more time on testing than he did on putting together his graphs and responding to this thread.
Basically it would have been better off if he didn't bother testing at all, because that's pretty much what he did. -
Now that you brought it up, mentioning a proper sample and stressing its importance: lots of us are still waiting for your sample comparing CRF vs. 2-pass VBR using x264 (same settings, same final size), where you insisted 2-pass is better.
-
-
At the very least he could share the command line he used for QSVEnc; my guess is he didn't use anywhere near the most aggressive settings, which is why I suggested he use StaxRip, as it would make it easier for him.
Or, how about this:
https://trac.ffmpeg.org/wiki/HWAccelIntro
Or he could try the trial version of this:
http://tmpgenc.pegasys-inc.com/en/product/tvmw6.html
Or he could stick with QSVEnc and do something remotely approaching a proper test. I mean really: take a 10-second clip, encode it using Lord knows what settings, take a screenshot, blow it up, and then post it as proof that Quick Sync offers lower quality than x264 and x265?
Why not just tell me he doesn't want to be bothered? I understand it's an imposition; I would have understood if he had just said he doesn't feel like it (as he originally did), and I would have no problem with that. But to do such a lame, half-hearted test and then spend orders of magnitude more time posting the "results" than actually performing the test?
That's just a big F U to this forum and a slap in the face. -
What I don't understand about these 'official' tests is why the fück developers provide such crappy settings. Dark Shikari used a qcomp of 1.0 when showcasing how great his codec was with its new mb-tree, for example, despite that being the worst qcomp to use, especially with mb-tree.
If you can't trust a video codec's own developer that actually built it then who the hell do you trust? A random fanboy on a video hobbyist forum? Messed up. Just messed up. Really makes you wonder if you're the one who is insane. -
Dark Shikari could never be trusted, he was a liar, disingenuous and clearly not all up there based on his actions and decisions in the past couple of years.
At least a decade ago, when CUDA was first introduced, he began working on x264 CUDA, a CUDA-powered x264 variant, and he was very excited about it. When asked how it was coming along, he said that the company financing x264 would have the final decision as to when to release it. Then he revealed that the company had decided to discontinue x264 CUDA and instead focus on ASIC-accelerated x264 because of significantly lower power consumption. This variant has never been released to the public; despite x264 being GPL'd software, they made sure they didn't have to release it by forking x264 into x264llc, a proprietary-licensed variant. He then went on one tirade after another complaining about GPU-powered codecs and convinced almost everyone that GPUs were somehow poorly suited for encoding video.
As for the settings he used for his silly psychovisual algorithms, the sad reality is that, with the exception of AQ, all the psychovisual algorithms the x264 and x265 developers created are garbage. They function on the principle of robbing Peter to pay Paul: they offer minor benefits in very specific use cases, those being when the person encoding is out of his mind and feels like seeing how little bitrate he can use while still keeping the video barely watchable. -
But that's the thing, why would people who are otherwise well-above average intelligence make retarded decisions like these? Is Dark Shikari the only one? I think not.
I came across such inane lunacy more than once. On the soundexpert website that crowdsources ABX tests for audio codecs, the admin used a bad combination to test an AAC codec. I gave him instructions on a configuration to improve the quality, but he told me "sorry, I only take advice from the developers." I'm wondering why the developer would advise him on such an idiotic setup. Nero AAC is well known to be absolute shit at sample rates lower than 44.1 kHz, which is why one should never, under any circumstances, use SBR for anything below that, as it will make the annoying, scratchy artifacts far more audible. And according to 300+ anonymous listeners who rated it in a blind, controlled test, I was right.
So if an illiterate newb on a video forum who wasn't even out of highschool yet knows a product better than the senseless dumb fücks who actually wrote it, I only see two scenarios. Either I'm completely insane or they are. But given 300+ people unanimously agreed with me that the "developer's advice" produced far worse quality, I'm humbly leaning towards the idea that I'm perfectly normal, which doesn't comfort me.
-
It's not a GoPro. And it was a source video of 1080p 50Mbit in H.264.
The source video had just over 580 frames. Now when you look at the 2016 MSU report, you will see that they used clips in the range of 300 frames to 1500 frames. So 580 frames is well within MSU standards at least. Why you all of a sudden have a problem with it is interesting. You also told Epaminaidos to use any high quality source he wanted.
Then insist he uses staxrip.
Then ask him to share his settings used.
He provided the original source video, along with the 3 encoded videos via drop box. And x264 was not part of his quicksync/hevc test, which makes me think you are not really paying attention in this thread.
Also MSU's tests are built around testing clips of similar frame number sizes.
Your request of Epaminaidos in post #31 left a lot of room for interpretation. Epaminaidos seems to have done everything you asked for besides use x264; nor did he use tune SSIM in x265, which I'm fine with, as I hold low confidence in SSIM.
I pretty much see you slapping Epaminaidos in the face. After he posted his tests, you could have said "hey, you forgot this" or "could you maybe post this" or something. Instead you went straight to complaining.
Also Epaminaidos even started a new thread yesterday to try and understand YUV to RGB conversions to help with his comparisons. The guy is certainly trying to help. -
Thank you for that.
I currently have about 350 different encodes of this clip and am still in the middle of analysing them. There are also some encodes with x264, since I am interested in the effect of the CRF settings. I also did some encodes of the same clip at 30fps, just to get a feeling for how much additional bitrate is necessary for the same quality (also x264 and x265, of course).
If anybody needs more information about the details, just ask.
What I will not do is the same test with a perfect-quality source, since that is just not what I will use in real life. But I am glad to share the batch files I used to generate the encodes, frame captures, zooms, encoding times and quality measurements.
Why I did not use StaxRip? I clicked on the link and saw a GUI. That's just not usable to test a lot of different settings.
QSVEnc seemed to be a simpler way to get hardware-accelerated encodes. Well, I got them. But it seems like QSVEnc does not do the best job. Perhaps I really did get the settings wrong. Encoding H.265 at 30fps at the best settings just does not feel right. -
StaxRip uses rigaya's QSVEnc for QS; it's not going to have "more" options than using the binary directly; if anything it will be more limited.
As for MSS, it might have some "special sauce" or other options, but the cost of entry is $5K
I don't know what options TVMW uses, but it's probably not much different from QSVEnc. -
@KarMa: I do consider MSU's tests where they used short 10-second clips to be useless, for the same reasons I outlined earlier.
But MSU also did archive usage tests, where they took full blu-rays and reencoded them to 5mb/s and where they used placebo + tune ssim for x264 and very slow + tune ssim for x265 and this was where Quick Sync beat both x264 and x265.
These are the results I'm really interested in. I don't encode 10-second clips at a time; who really does that? But creating a pristine backup of a Blu-ray at a greatly reduced size? That's something I can see a use for, and I think that's what most users would be interested in. -
Does StaxRip use something else?
Here's the command line I used:
%qsvEnc% -i %original60fps% -o "%targetDirectory%\%filename%" --codec hevc --vbr %vbr% --quality %preset% >> %log%
Parameters were: vbr: 16000, preset: best
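For anyone wanting to reproduce it, substituting those parameter values into the batch line gives roughly the following; the executable name and file paths here are placeholders, not taken from the original post:

```python
# Expand the batch-variable template from the post into a concrete
# QSVEncC command line (paths/exe name are hypothetical examples).
template = ('{exe} -i {src} -o "{dst}" '
            '--codec hevc --vbr {vbr} --quality {preset}')

cmd = template.format(
    exe="QSVEncC64.exe",     # placeholder executable name
    src="source60fps.mp4",   # placeholder input file
    dst="out\\qsv_best.mp4", # placeholder output path
    vbr=16000,               # kbps, as stated in the post
    preset="best",
)
print(cmd)
```

That makes it easy for others to spot whether any quality-relevant option was left at its default.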
The sample frame is the frame in the middle of the clip (i.e. at 5.00 s).
But MSU also did archive usage tests, where they took full blu-rays and reencoded them to 5mb/s and where they used placebo + tune ssim for x264 and very slow + tune ssim for x265 and this was where Quick Sync beat both x264 and x265.
And even if I did, I would probably not have the patience for that. On my hardware, encoding a full 2-hour movie with x264 placebo would take more than 60 hours (given that "tune ssim" does not accelerate the encode).
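That estimate is easy to check, assuming a 24 fps feature and the ~0.25 fps placebo speed measured on the 10-second clip (both assumptions; encode speed varies with content):

```python
movie_hours, movie_fps = 2, 24  # assumed feature length and frame rate
encode_fps = 0.25               # placebo speed measured on the test clip

frames = movie_hours * 3600 * movie_fps    # total frames to encode
encode_hours = frames / encode_fps / 3600  # encode time in hours
print(encode_hours)
```

Even under these mild assumptions the estimate lands well above 60 hours, so the "no patience for that" point stands.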
BTW: What is so important about "tune ssim"? Feels a bit like cheating to get a good rating.
These are the results I'm really interested in, I don't encode 10 second clips at a time, who really does that?
I just opened a few movies and hand-counted the duration between camera switches (what do you call that duration?). The average was only about 6-7 s. Much of the previous shot cannot be used for reference anyway, I guess.
Last edited by Epaminaidos; 26th Sep 2016 at 17:49.
-
Give up this personal war against DS - Nvidia was unable to create any video encoder based on CUDA for so long... AMD was unable for GCN, Intel for its own GPUs - all those companies use a specialized video encoder core. Ever ask yourself why there is no GPGPU video encoder?
Grow up... or pay some developers to make your dream come true, then sell that product on the market - based on your description it would easily outperform any video encoder on the market. -
After looking at the change logs for Staxrip, it seems like they just use QSVEncC too. Staxrip is just a GUI for QSVEncC along with a few other codecs like x264 and x265. So at the end of the day, sophisticles recommended you use QSVEncC lol.
Link and or page number? As I'm only seeing tests for Fast, Universal, and Ripping. All of which used short clips in the 2016 HEVC tests. -
I do not have any "personal war" against anyone, I work 60+ hours a week, I don't have the time or energy to engage in any, I don't even now what to call it, with someone I don't know. All I did was point out some factual information, if you have a problem with it, that's too bad.
Nvidia never tried to create a CUDA-based encoder, and in fact one of their engineers over at the Nvidia forums said that they had no plans to do so. What Nvidia did was create the programming framework, CUDA, and then release an SDK and some code samples showing how one could go about creating an encoder that ran on their GPU. They released a template, and their engineer explicitly said that Nvidia expected developers to take that skeleton program and build a proper encoder on top of it, but he also said that Nvidia would not be doing it themselves.
As to why AMD, Nvidia and Intel all switched to ASICs for encoding and decoding, it's the same reason Avail Media, the company that was (and possibly still is) the financial backer of x264, switched to ASICs to accelerate x264, and why bitcoin and other crypto-currency miners switched to ASICs: power consumption and speed. A GPU running at full tilt uses massive amounts of power and generates a ton of heat; ASICs are both faster and much lower in power consumption.
But GPU powered encoders have been created, look up GPEG2, a GPU powered MPEG-2 encoder, written in HLSL that ran on any DX9 class card or greater.
As for "my dream", I have no idea what you are talking about, I just stated that I can't stand any of the psycho-visual algorithms, except AQ, that x264 and x265 use, why do you have a problem with that? -
Hmmm, I just went to check and I can't seem to find it, all I find is the following:
HEVC Codec Testing Objectives
The main goal of this report is the presentation of a comparative evaluation of the quality of new HEVC codecs and codecs of other standards using objective measures of assessment. The comparison was done using settings provided by the developers of each codec. Nevertheless, we required all presets to satisfy minimum speed requirement on the particular use case. The main task of the comparison is to analyze different encoders for the task of transcoding video—e.g., compressing video for personal use.
HEVC Codec Testing Rules
The comparison was performed on a Core i7-6700K (Skylake) @ 4 GHz, 8 GB RAM, Windows 8.1. For this platform we considered three key use cases with different speed requirements.
Fast/High Density – 1080@60fps
Universal/Broadcast VQ – 1080p@25fps
Ripping/Pristine VQ – 1080p@1fps and SSIM-RD curve better than x264-veryslow
Video sequences selection
Video sequences for all MSU video codec comparison reports were chosen by the MSU team through manual selection. Various videos were selected to help find the strengths and weaknesses of video encoders. This comparison's test dataset was significantly updated. Our goal was to create a dataset with videos that encoders face in everyday life. For this purpose, 30,000 videos from the Vimeo service were analyzed, and 885 4K videos with high bitrate were examined. These videos were clustered (by spatial and temporal complexity) and 27 videos from the clusters were chosen.
When you look at the settings used, at the developers' request they used x264+placebo+tune ssim for the "Ripping" use case. I guess I made the Blu-ray association in my mind because it says "Ripping" use case and 1080p, so to me that meant ripping Blu-rays. I don't know where I came up with the 5mb/s figure though, as it seems they tested at various bitrates. Interestingly enough, Intel's encoder beat x264 in the "Ripping" test convincingly: with x264 normalized to 100%, Intel's encoder needed just 75% of the bitrate to achieve the same quality as measured by YUV-SSIM.
But your point is well taken, I have the same issue with the MSU clip selections, too short to give the analysis algorithms a chance to do their thing, especially in a 2 pass mode.