VideoHelp Forum
  1. This question popped into my head due to the other discussion on H.264 encoders. Recently I ran across this:

    https://software.intel.com/en-us/articles/evolution-of-hardware-hevc-encode-on-tenth-g...ore-processors

    Where Intel claims that the Quick Sync HEVC encoder found in its new mobile i7-1065G7 (with Iris Plus graphics) beats x265 + preset veryslow + tune PSNR in quality across the board.

    Since tune PSNR turns off all psycho-visual optimizations, I know people will dismiss those claims as inconsequential, even if true.

    This got me thinking: does x265+veryslow+psnr beat x264+slow? My reasoning is this: if it does, with x264 using all the psycho-visual optimizations available to it at that preset, and if Intel's claims are true, then Intel's new Quick Sync on its new processors beats x264+slow.

    So I decided to replicate Intel's test. Since Intel specifically stated they used test sequences from Xiph's collection, namely rush_field_cuts and park_joy, I downloaded these files and encoded them on Ubuntu via HandBrake, using a 2-pass 5 Mb/s encode for each of:

    x264, x264 10-bit and x265.
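For anyone who wants to reproduce the x265 leg of the test outside HandBrake's GUI, it can be sketched as a direct ffmpeg 2-pass invocation. This is only a sketch: the filenames are placeholders, and the exact mapping of HandBrake's settings onto ffmpeg flags is my assumption, not HandBrake's actual output.

```python
# Sketch of a 2-pass libx265 encode roughly matching the test above.
# Filenames and the HandBrake-to-ffmpeg parameter mapping are assumptions.

def x265_two_pass(src, out, bitrate="5M"):
    """Build the two ffmpeg command lines for a 2-pass libx265 encode."""
    base = ["ffmpeg", "-y", "-i", src,
            "-c:v", "libx265", "-b:v", bitrate,
            "-preset", "veryslow", "-tune", "psnr"]
    # First pass writes only statistics, so audio and output are discarded.
    first_pass = base + ["-x265-params", "pass=1", "-an", "-f", "null", "/dev/null"]
    second_pass = base + ["-x265-params", "pass=2", out]
    return first_pass, second_pass

p1, p2 = x265_two_pass("park_joy_1080p50.y4m", "park_joy_x265.mkv")
print(" ".join(p1))
print(" ".join(p2))
```

The x264 variant is the same idea with `-c:v libx264` and ffmpeg's native `-pass 1` / `-pass 2` flags instead of `-x265-params`.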

    Some notes: x265+veryslow+psnr is nearly unusable for any actual encoding; it's just too slow. On an i3-7100 + 16 GB DDR4 system, the x264 encodes finished in under a minute, while the x265 encodes took 20 minutes for the park clip and 13 minutes for the field one. That works out to less than 1 fps encode speed; at that rate a full movie would take an entire night.
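The "less than 1 fps" figure is easy to sanity-check. Note the 500-frame length is my assumption (the common Xiph park_joy_1080p50 sequence is 500 frames); the 20-minute wall-clock time is from the test above.

```python
# Encode speed from frame count and wall-clock time.
# 500 frames for park_joy is an assumption (the usual Xiph
# park_joy_1080p50 length); 20 minutes is the measured time above.

def encode_fps(frames, wall_minutes):
    """Average encoded frames per second over the whole run."""
    return frames / (wall_minutes * 60)

fps = encode_fps(500, 20)
print(f"{fps:.2f} fps")  # ~0.42 fps, well under real time
```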

    Anyway, I was more concerned about quality difference, here are the encodes.

    On my 50 inch 1080p60 tv, I would be lying if I said I can see a difference.
    Image Attached Files
  2. I'm wondering why they only used constant-quantizer encoding, without adaptive B-frames and with scene-cut detection disabled; those don't really seem like the settings most folks would use. So I'm not surprised when people have problems with a general ranking of encoders based on such a test.

    Some notes, x265+veryslow+psnr is nearly unusable for any actual encoding, it's just too slow.
    Man, you'll really have fun with AV1, which is even slower, but its film grain modeling really can produce nice output at seemingly insanely low bit rates.
    see: https://forum.doom9.org/showthread.php?p=1904806#post1904806


    As a side note:
    Personally I like GPU encoding for casual personal viewing, but for anything where quality is important, per-clip adjusted settings for encoding and filtering should be used.

    a full movie would take an entire night
    *gig* we really have different speed requirements

    ----
    So, sorry for the somewhat off-topic comments; have fun with your quest to prove that GPU encoding is better in all cases than software encoding.
    (if that isn't your goal, I must have misunderstood the post)


    Cu Selur
    users currently on my ignore list: deadrats, Stears555
  3. Originally Posted by Selur View Post
    So, sorry for the a bit off-topic comments, have fun with your quest to prove that gpu encoding is better in all cases than software encoding.
    (if that isn't your goal, I must have misunderstood the post)
    The goal was to see if Intel's new QSV could beat x264+slow, assuming their claims about beating x265+veryslow+psnr are true.

    For me, power consumption, system noise and fast encoding are just as important as any minor differences in quality at bit rates so low as to be unusable anyway.

    If you look at this performance test done by the Handbrake people:

    https://handbrake.fr/docs/en/1.3.0/technical/performance.html

    A 22C/44T Broadwell with 55 MB of cache and 32 GB of RAM can only achieve 1.8 fps encoding a cropped 4K source to 1080p. That was a $4000 CPU when it was new, and it will probably still set you back at least $500, plus the cost of a motherboard, RAM, a beefy power supply and high-end liquid cooling. If Intel really managed to create a hardware encoder that can match or exceed that quality at a 15 W TDP (25 W max), that would be insane. I don't see any mention of encoding speed, but I think it's safe to expect a desktop variant to encode 1080p30 in real time at a minimum, so this would be a game changer.
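To put that 1.8 fps figure in perspective, here is the wall-clock time it implies for a full-length film. The 2-hour / 24 fps source is my assumed "typical movie", not a figure from the HandBrake page.

```python
# Wall-clock encode time implied by a given encode speed.
# A 2-hour source at 24 fps is an assumption for a "typical" film;
# the 1.8 fps figure is from the HandBrake performance test above.

def encode_hours(duration_hours, source_fps, speed_fps):
    """Hours of wall-clock time to encode the whole source."""
    total_frames = duration_hours * 3600 * source_fps
    return total_frames / speed_fps / 3600

print(f"{encode_hours(2, 24, 1.8):.1f} hours")  # ~26.7 hours on that Broadwell
```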
  4. assuming their claims regarding their beating x265+veryslow+psnr are true.
    Like I said, the settings they used make it hard to take that claim seriously, but go ahead.

    For me power consumption, system noise and fast encoding are just as important as any minor differences in quality at bit rates so low as to be unusable as far as I'm concerned.
    Okay.
    Personally I use a fully water-cooled system (in use for 10+ years, with minor adjustments for the current CPU & GPU), so noise is of no real concern. I can sleep happily with my head next to my main system.
    Speed is only important to me when I do casual encoding just for viewing.
    For serious archiving and production purposes, speed isn't that important as long as I get a ~2 hour clip done in a day. (for archiving, 2 days is fine too)

    But it is right and important that you state what matters to you, since GPU encoding usually is:
    a. quieter and less power-hungry (since all the work is done by a single chip on the board)
    b. faster, since the methods used are developed specifically for that chip.

    Software encoding, on the other hand, can offer (sometimes really) slow but more precise methods, where the algorithms can't be multithreaded.

    -> I see now where you are coming from, and my main concerns are:
    a. I can't take the publication that motivated you seriously, due to the settings they used.
    b. You simply seem to have completely differently weighted evaluation goals than I do. (for me, quality usually weighs more than most of the other points you mentioned)

    Cu Selur


