I finally caught up with all the work I had to do and so did some new Ice Lake QSV test encodes.
For source I used the samples from here:
Specifically I downloaded all the test samples with the following specs:
Camera: Sony F65
Resolution: 4096 x 2160 pixels
Frame Rate: 50 fps (progressive)
Bit depth: 16 bit
Data format: RAW
It seems that this was the format these sequences were filmed in; after grading, they were mastered as 10-bit 4:2:0 3840x2160p50 YUV RAW. I took these, converted them to y4m, and ingested those into Shotcut on Ubuntu MATE 20.10 (Groovy Gorilla).
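For anyone who wants to do the same conversion, here's a rough sketch of the raw-to-y4m step with ffmpeg's rawvideo demuxer. The input filename is a placeholder, and I'm assuming the pixel format is yuv420p10le (the usual layout for 10-bit 4:2:0 raw); check your source's documentation:

```shell
# Wrap 10-bit 4:2:0 3840x2160p50 raw YUV in a y4m container.
# "source.yuv" is a placeholder; yuv420p10le is an assumption about the raw layout.
# -strict -1 is needed because 10-bit y4m is a non-official extension in ffmpeg.
ffmpeg -f rawvideo -pix_fmt yuv420p10le -video_size 3840x2160 -framerate 50 \
       -i source.yuv -strict -1 output.y4m
```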
From there I ran a number of test encodes, including two baseline encodes: one x264+preset slow+8 b-frames at CRF 23, and one x265+preset fast+8 b-frames at CRF 23. For Ice Lake I tested using qsv_h264 and qsv_hevc; for h264 I used 8 b-frames and 16 reference frames, and for hevc I used 8 b-frames and 8 reference frames.
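If anyone wants to reproduce these, the settings above map roughly onto the following ffmpeg invocations. Filenames are placeholders; note that in ffmpeg the QSV encoders are named h264_qsv/hevc_qsv, and your build's exact option support may differ:

```shell
# Software baselines (CRF 23, 8 b-frames)
ffmpeg -i source.y4m -c:v libx264 -preset slow -crf 23 -bf 8 x264_slow.mp4
ffmpeg -i source.y4m -c:v libx265 -preset fast -crf 23 \
       -x265-params bframes=8 x265_fast.mp4

# Ice Lake QSV hardware encodes (b-frame/reference counts as described above)
ffmpeg -i source.y4m -c:v h264_qsv -bf 8 -refs 16 qsv_h264.mp4
ffmpeg -i source.y4m -c:v hevc_qsv -bf 8 -refs 8  qsv_hevc.mp4
```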
I had managed to get qsv_vp9 working with Ubuntu 20.04 LTS, but then I somehow managed to screw it up and I haven't been able to get it working again.
The test system was this one:
Allow me to say that this system should be considered a "bare bones" kit if you are planning on getting a laptop for hardware encoding. The SSD is way too small, the HDD is way too slow (it limited nearly all my test encodes to about 18 min 30 sec of encode time for a 1 min 30 sec source), and the 8 GB of RAM is easily maxed out by many workloads; remember, the iGPU uses system memory.
Having said this, the price does fluctuate between $500 and $570; I picked it up for $550. If you can get it at the $500 price point, add another $300 for two fast 1 TB SSDs to replace the default drives and bump the RAM to 16 GB, and you end up with a 17" screen laptop that can output to a 5K display and has 5G WiFi.
Last edited by sophisticles; 25th Oct 2020 at 21:10.
I watched a couple of the H264 ones (the top one and then the 10M one). I was surprised how good it looks. Which iGPU does this laptop use? It certainly looks better than some of the encodes I have done using my GTX1070. I wonder how much better a Turing card would do compared to my current one. I'm still going to argue that hardware encoding is good enough for a fast encode I might watch on my laptop or phone while traveling. But at home I want the Blu-ray without any additional compression.
This laptop uses the i5-1035G1, meaning it's an Ice Lake CPU (features AVX-512) and the Gen 11 G1 iGPU. This is the QSV version that Intel claimed was able to beat x265+Placebo+PSNR.
Tiger Lake and Intel's new dGPU Xe are supposed to be even higher quality than this Ice Lake's GPU.
The only thing holding Intel's QSV back is the lack of software support for it. I truly believe they built it for their corporate customers, but they realize it presents a serious threat to their CPU market share, because video is one of the few use cases left that still drives upgrading, games and 3D rendering being the other consumer/prosumer market drivers.
Ice Lake QSV is capable of hardware encoding of mpeg-2, avc, hevc, vp8, vp9, and mjpeg, as well as decoding of a bunch of formats, plus a bunch of hardware filters, like de-interlacing, sharpening, etc. Good luck trying to get them all to work: avc, hevc, and vp8 I can get working easily, vp9 is a nightmare, mpeg-2 I got to work a few times, mjpeg never, and the filters I have only been able to get working one at a time. The de-interlace filter is actually pretty good.
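For reference, you can check which QSV encoders and filters your ffmpeg build actually exposes, and the de-interlacer is reachable through the deinterlace_qsv filter. A rough sketch (the input filename is a placeholder, and this assumes a working VA-API/QSV setup on Linux):

```shell
# List the QSV encoders and filters compiled into this ffmpeg build
ffmpeg -hide_banner -encoders | grep qsv
ffmpeg -hide_banner -filters  | grep qsv

# Hardware de-interlace with QSV, then re-encode with h264_qsv
# (interlaced.ts is a placeholder input)
ffmpeg -init_hw_device qsv=hw -filter_hw_device hw \
       -i interlaced.ts \
       -vf 'hwupload=extra_hw_frames=64,deinterlace_qsv' \
       -c:v h264_qsv -b:v 8M progressive.mp4
```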
Provide a link to a test video you want to see encoded and I will do it.
I also included x264+medium+8-bframes, both 1 and 2 pass, for the sake of comparison.
Obviously, one has to be insane to encode 2160p50 @ 5M with any codec.
You repeated Intel's claim that they beat out x265+Placebo+PSNR. So it was implied you should do a x265 comparison. I was kinda shocked at the quality until I figured out it was x264.
I would do the test myself but downloading 10GB would take me forever. If I did the test I would use "veryslow" and no PSNR tuning.
But I do want to see for myself if Ice Lake QSV is capable of beating x265+veryslow. I know it can easily beat x264+medium, but if it can beat x265+veryslow then I would say it's game over for software-based encoding.
As requested, I did the same tests with x265+veryslow, no tune, 5M, both 1 pass and 2 pass VBR. Note, for these encodes I used the SSD boot drive for both read and write, in order to minimize the I/O bottleneck associated with the slow 5400 rpm secondary drive this laptop has.
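For anyone following along, a 2-pass 5M x265 run like this looks roughly like the following in ffmpeg (filenames are placeholders; note that with libx265 the pass number goes inside -x265-params rather than the usual -pass flag):

```shell
# Pass 1: analysis only, video discarded
ffmpeg -y -i source.y4m -c:v libx265 -preset veryslow -b:v 5M \
       -x265-params pass=1 -an -f null /dev/null

# Pass 2: actual encode using the stats file written by pass 1
ffmpeg -i source.y4m -c:v libx265 -preset veryslow -b:v 5M \
       -x265-params pass=2 out_2pass.mp4
```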
Here's the kicker: the QSV encodes took about 30 seconds for each 10-second source, while the x265 encodes took over 40 minutes per pass, so the 1-pass encodes took 40 minutes each and the 2-pass encodes took 80 minutes each, for a 10-second source.
Obviously a full 90 minute movie encoded to x265+veryslow on hardware like this laptop is a nonstarter.
Edit: I need to redo the crowd run 2 pass because I accidentally deleted it. Upload tomorrow.
Last edited by sophisticles; 2nd Nov 2020 at 21:47.
I've had other things on my mind these past few days, but I'll just say that there is a very obvious quality difference between the two samples, with Intel on the losing side. Blatantly. I asked for such a low bitrate because it simply makes it easier to see how efficient a given codec is by focusing on the very low end. The idea being that as you up the bitrate, that efficiency gap will remain but simply narrow until you reach lossless.
The Intel claim seems to be fairly bogus. Though it's certainly worthwhile if you need a low-wattage way to encode 4K content, as long as you accept that you will lose some efficiency and need more bitrate to compensate.
Last edited by KarMa; 8th Nov 2020 at 19:41.
Also, as I mentioned, the x265 encodes on my system took 40 minutes per pass for a 10-second clip. Using some basic math, (40 min) x (60 sec/min) = 2400 seconds to complete each pass. To do this encode in real time, per pass, using x265+veryslow, you would need a system 240 times faster than my laptop, and you're not going to find any consumer, or even prosumer, grade computer that fast.
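The arithmetic above can be sanity-checked in a couple of lines of shell (the 40 min/pass and 10-second clip length are the numbers from my runs):

```shell
# 40 minutes per pass for a 10-second clip => slowdown factor vs. real time
clip_seconds=10
pass_seconds=$((40 * 60))              # 2400 seconds per pass
echo $((pass_seconds / clip_seconds))  # 240x slower than real time
```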
And lastly, as I mentioned, no one would ever encode 2160p50 at 5M; IIRC, when I did this encode with x264+slow+crf 23 it used something like 60M, so it's not fair to pick a scenario that has no practical application in order to compare encoders.
Comparing hardware encoders (like NVENC) and x264, I always end up preferring x264. Maybe the newer NVENC is better than my GTX 1070's, but I noticed I needed a lot more bitrate with NVENC to match x264. NVENC H.265 just looked unacceptable to me.