VideoReDo does a great job of transcoding to .m2ts from the file formats I use, but the downer for me is that they don't plan to support hardware encoding acceleration. It's all CPU based.
Software like VRD that runs on my eight cores still takes a while (almost two hours to transcode a 90+ minute feature) and all the while my system fans are blasting. Rejig with the default settings uses the GPU but screws up the output file so that it's all jerky or has major video sync issues.
So I'm looking for recommendations (including payware) that do a good, fast job of transcoding. The problem I've seen with most paid options is that they tend to be expensive and bloated.
You'll find that all the GPU-based encoders deliver lower quality and are slower than x264 at the veryfast preset. If you don't want to use the CLI encoder, use one of the graphical front ends like Handbrake or Vidcoder.
If your CPU is an Intel with Quick Sync the QS encoder is about 2x faster than x264 but still delivers lower quality.
Can we be honest with ourselves for a second? You haven't tested all the GPU-based encoders, have you? You've tested the basic CUDA encoder that apps like Badaboom and MediaCoder offer, but I'm willing to bet you haven't tested all of the following:
AMD's APP encoder, the Quick Sync version featured on Haswell CPUs, the OpenCL- and CUDA-powered MainConcept encoders, the Sony OpenCL- and CUDA-powered encoder, Elemental's high-end CUDA encoder, the new hardware encoder revised for Maxwell-based GPUs, or even the OpenCL capabilities built into the latest builds of x264.
If you haven't tested every single one of these yourself, you shouldn't be giving people inaccurate information.
And realistically, if the rumors about the upcoming GTX 880 are true (word is that it will feature a three-core Denver-based ARM processor built into the GPU), pure software encoding will soon go the way of the dodo, assuming CPUs themselves don't eventually become little more than basic processors that shovel data to add-in cards that do all the actual processing.
The thread title is "Affordable transcode options with OpenCL / GPU support". How many of the encoders you've mentioned fit the thread topic?
The number one rule when accusing someone of providing inaccurate information would be to provide evidence to the contrary. There's nothing in your post to that effect.
I can't recall the last time I read an article on h264 encoding where the x264 encoder wasn't used as the quality benchmark. I'm yet to read an article where anybody's claimed a hardware encoder matches x264 for quality. Can you provide a link?
In my testing QS on a Haswell is slightly better than QS on a Sandy Bridge but still worse than x264 at veryfast.
The only options I know of now are the CUDA encoder inside the Hybrid program, or the external encoder feature in Virtualdub driving that same CUDA encoder.
I've heard that future NVIDIA graphics cards will do better than what's possible now. In my experimenting, CUDA was no faster than x264 superfast, with quality at best equal and possibly worse, which isn't saying a lot since x264 superfast is not great quality. The other downfall of CUDA in my testing was that the file size was almost twice that of the x264 file, and larger than the original file I was trying to compress. So my conclusion was that CUDA H.264 does a much worse job of compressing a file at the same quality and speed as x264 superfast.
I would recommend using the external encoder feature to encode with x264, via either the Hybrid program or Handbrake, which is the easiest to use. Selur can help with the Hybrid program; there is a thread here just for his program, which can drive most CLI encoders and ships with them.
The Virtualdub forum has instructions on how to use the external encoder feature. Virtualdub doesn't come with its own encoders, so you'll need to track those down (the forum also lists where to download these files), or you could cheat and just download the Hybrid program and point Virtualdub at the CLI encoders it contains.
All of these programs are just frontends for the x264.exe encoder, which can be run from a command prompt, or for ffmpeg, which has libx264 built in.
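For anyone who wants to bypass the frontends, the direct invocations look roughly like this. This is a sketch only: the filenames are hypothetical and the CRF value is just an example.

```python
import subprocess  # only needed if you actually run the commands

SOURCE = "input.avi"   # hypothetical input file
OUTPUT = "output.mp4"  # hypothetical output file

# Direct x264.exe invocation (what the GUI frontends build for you):
x264_cmd = [
    "x264", "--preset", "veryfast", "--crf", "20",
    "--output", OUTPUT, SOURCE,
]

# Equivalent ffmpeg invocation using the built-in libx264:
ffmpeg_cmd = [
    "ffmpeg", "-i", SOURCE,
    "-c:v", "libx264", "-preset", "veryfast", "-crf", "20",
    OUTPUT,
]

# subprocess.run(x264_cmd, check=True)  # uncomment to actually encode
print(" ".join(ffmpeg_cmd))
```

Every GUI mentioned in this thread ends up constructing a command line like one of these two.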
MediaCoder is free; yes, I have tested it.
The hardware H.264 encoding chip found in Maxwell- and Kepler-class cards is free, as in built into the GPU. No, I haven't tested it; the only app that supports it is MediaCoder, again free.
MainConcept's CUDA-based encoder varies in cost depending on which apps integrate the SDK. I have tested it; the price ranges from as little as $50 for some apps to as much as $1500 for TotalCode. I have not tested the OpenCL version by MainConcept; pricing is similar.
I have tested Sony's CUDA-powered encoder; cost is as little as $50. I have not tested the OpenCL version.
AMD's APP encoder is built into video cards that use the GCN architecture. It's supported by a few apps, notably A's Video Converter; cost: free (it also supports MS's H.264 encoder).
I have tested Intel's QS encoder, but only the H.264 portion and only the Ivy Bridge version; the cost is built into the processor.
I'm yet to read an article where anybody's claimed a hardware encoder matches x264 for quality. Can you provide a link?
If you have the cash, Elemental makes a GPU-powered encoder that is world class. They're the guys behind Badaboom: they used the general public as guinea pigs to develop their CUDA-powered encoder, drove interest by partnering with Adobe on the "Mercury" engine, and then went after the professional market. These guys already have a GPU-powered HEVC encoder that can do real-time 4K encoding, and there's an admin over at the Doom9 forums who has used it personally and swears it matches x264 quality but flat-out smokes it in throughput.
GPUs are awesome for video encoding; it's just that the people who write software for casual users are lazy, good-for-nothing jackasses who are barely passably competent as programmers.
They suck and they know it.
Edit: I found one of the threads over at Doom9 where the poster known as Blu Misfit talks about Elemental's stuff. This guy is known as an x264 cheerleader, so for him to say this says something about Elemental's product.
This is one area where I feel companies like Nvidia, AMD, and Intel have really dropped the ball: they spend millions developing these technologies (I read one report that claimed Intel spent 5 years and $100 million developing QS), and then they wait for some third-party developer to write software that uses them.
They have so many engineers on staff; they should write basic software in-house that fully exploits all the hardware's capabilities and then release that code as open source so that adoption is rapid.
Movie Studio features a CUDA encoder capable of 2-pass encoding:
Dial the reference frames up to 16 and test it out for yourself; it also supports OpenCL for those with an AMD-based card.
There's no mention of the x264 encoder in the article anywhere. No mention of the names of any software used for comparison encodes. No real comparison encodes, for that matter.
Desktop 1, 2, & 3? Seriously??
"In terms of performance, the only close-to-equivalent test bed that I could configure was a 3.33 GHz 12-core workstation running a fast software enterprise encoder program that shall remain nameless."
The majority of the article covers topics like de-interlacing and closed caption encoding which have nothing to do with the actual encoding quality.
Are you supporting jagabo's claim now?
My contention has always been that the relatively poor showing of GPU-powered encoders on the desktop has zip to do with the underlying technology and everything to do with other factors: developers with ulterior motives, developers with inadequate coding skills, and special-interest segments with a vested interest in keeping software-based encoders at the forefront.
So is there anything in the "affordable GPU encoding" area which competes with x264 for quality while encoding at a higher speed?
If you are talking about bitrate-starved encodes done at stupidly low bitrates, say 1500 kbps for 1080p, then no, there is no GPU-powered encoder available to the general public that can beat any software-based encoder, primarily because software-based encoders blur images significantly more than GPU-powered encoders, thus masking artifacts.
In fact, much of x264's vaunted quality comes from its deblocking filter, which can be cranked up real high, and its so-called psy-rd optimizations, which also try to hide artifacts by blurring the image in some parts while sharpening it in others. The deblocking filter in particular acts like a customizable blurring filter, and at low bitrates this software encoder is unmatched, if only because it kills detail to the point where artifacts can no longer be seen.
If the quality test is at sane bitrates, i.e. Blu-ray-level bitrates for 1080p, then yes, GPU-powered encoders can match software encoders. In fact one of the main developers, DS, has repeatedly complained about "unfair comparisons in which the tester uses too much bit rate and then concludes that there's no difference between encoder quality," and when pressed he admitted to me a number of times over at Doom9 that "of course if you use enough bit rate all encoders will look good, even MPEG-2."
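The deblocking and psy-rd settings mentioned above are ordinary x264 command-line options, so the claim is easy to test yourself. A sketch, with hypothetical filenames, of two bitrate-starved encodes that differ only in the deblock offsets (x264's --deblock takes alpha:beta values; negative means lighter deblocking):

```python
# Shared settings: a low bitrate makes artifact masking easy to see.
base = ["x264", "--preset", "medium", "--bitrate", "1500",
        "--output", None, "input.y4m"]

def variant(name, deblock):
    cmd = list(base)
    cmd[cmd.index(None)] = name        # fill in the output file name
    cmd[1:1] = ["--deblock", deblock]  # insert the deblock offsets
    return cmd

weak   = variant("weak.mkv", "-3:-3")  # lighter deblocking, more detail, more blocking
strong = variant("strong.mkv", "3:3")  # heavier deblocking, smoother, blurrier
```

Encode both and compare still frames; the difference in how artifacts are hidden is exactly what's being argued about here.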
As far as speed and cost are concerned, that's a bit of a head fake. Yes, x264 is free and many apps that support it are free, but the hardware required to run it isn't. Saying that i7 + x264 + veryfast is faster than a CUDA encoder conveniently ignores the fact that one has to spend a considerable amount to buy an i7, the motherboard, and the RAM, and anyone who wants faster encoding usually has to upgrade all three (maybe not the RAM) and then reinstall the OS and apps.
With a GPU-powered encoder, like the NVENC block built into Kepler- and Maxwell-class Nvidia cards, or even a CUDA-based encoder, you have a handful of free apps. But say you have to spend $100 on a piece of software like TMPGEnc or Sony's apps to get good GPU encoding, on top of a fast card, say a good mid-range card in the $250 range. That's still cheaper than going the x264 route, and it will be much cheaper and easier to upgrade.
In fact, AMD's cards offer much better value: the GCN architecture is tailor-made for GPU compute, all the benchmarks have it flying when used with the OpenCL encoders found in Sony's apps, and you can get a good card for about $100.
Don't dismiss GPU encoding just because some guys seem to have a bitrate-starvation fetish, where they love seeing how badly they can mess up their video by dropping the bitrate before it becomes unwatchable.
I tried the Sony Movie Studio 13 Platinum trial, and it has a pretty interface, but I don't know that it does any better. I can afford to pay something over $100, but my experience is that the prettier the interface, the less effort went into quality programming in the long run ...
I neglected to mention that x264 DOES have some rudimentary GPU acceleration via OpenCL, in which only lookahead is offloaded to the GPU.
Test results have been a mixed bag. On my system (X6 1045T, 8 GB DDR3, 9600GSO), enabling OpenCL results in a slightly slower encode, by about 3-5 fps, when the encode using ultrafast is in the 130 fps range. With slower presets there is no statistical difference in encode speed.
One of the main x264 developers has said that lookahead performance can increase by about 40% on the latest AMD APUs and by a factor of 2 on the latest AMD discrete video cards. My tests were done with a relatively old and slow card that uses DDR2 and a narrow memory bus, and most of the benchmarks I have seen were also done by people with older cards where memory bandwidth was an issue.
There have been some reports that enabling OpenCL in x264 results in slightly lower quality, but in my tests I could see no difference, and in all honesty, if better hardware let you use higher lookahead settings with OpenCL than without, that should offset any quality differences.
Try it yourself with MediaCoder and see what you think.
I have a SAPPHIRE 100352-3L Radeon HD 7950, and thought that would have been supported. Wonder what I missed?
"H.264 encoding GPU acceleration (Intel QuickSync, nVidia CUDA, OpenCL)"
Does this only come with the premium version? (Answer: YES)
GPU edition only supports CUDA. Bite me. Over $200 invested in hardware and no help.
Last edited by lasitter; 13th Apr 2014 at 11:57.
You're confusing CUDA with OpenCL, and the two available CUDA encoders with the OpenCL patch for x264. And no, it's not only available in the paid premium version.
On the video tab choose x264 from the drop-down menu, go to Advanced, and you will see "Enable OpenCL if possible".
I just tested it on my system (X6 1045T, 8 GB DDR3, 9600GSO). The source was a 12 Mb/s 1080p VC-1 file and the target was 12 Mb/s encoded with x264 at the medium preset, 1-pass, no resizing, no de-interlacing, no sharpening, and all other settings left at default. Without OpenCL it took 535.5 seconds to finish the test encode (the source was 4 min 11 sec long); with OpenCL the encode was done in 589.3 seconds, and GPU-Z showed it barely loading the GPU or its onboard RAM. Quality-wise I see no difference between the two encodes.
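Quantifying that test from the numbers above (one run each, so take the precision with a grain of salt):

```python
clip_seconds = 4 * 60 + 11   # the 4 min 11 sec source
no_ocl   = 535.5             # encode time without OpenCL, seconds
with_ocl = 589.3             # encode time with OpenCL, seconds

slowdown = with_ocl / no_ocl - 1.0
print(f"OpenCL run was {slowdown:.1%} slower")  # about 10% slower
print(f"Speed without OpenCL: {clip_seconds / no_ocl:.2f}x realtime")
```

So on this old card the OpenCL lookahead costs roughly 10%, which matches the "mixed bag" results reported earlier in the thread.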
Keep in mind that the X6 was released in the second quarter of 2010, and the 9600GSO was released in late 2008 and was just a rebadged 8800GS, which in turn was released in 2007.
I would love to see someone with a high-end hexacore Intel CPU do a similar test, using OpenCL both with the built-in Intel GPU and with a discrete high-end video card like an R9 290 or a 780 Ti.
And from my handbrake log ... looks like everything is being detected otherwise ...
AMD Radeon HD 7900 Series - 220.127.116.11
Temp Dir: C:\Users\CHUCKL\AppData\Local\Temp\
Install Dir: D:\Apps\MMedia\Handbrake
Data Dir: C:\Users\CHUCKL\AppData\Roaming\HandBrake\HandBrake\0.0.0.6162
[17:00:07] hb_init: starting libhb thread
HandBrake svn6162 (2014041301) - MinGW i686 - http://handbrake.fr
8 CPUs detected
Opening E:\KTTMG\High Tension (12_28_2013).mpg...
[17:00:07] - logical processor count: 8
[17:00:07] OpenCL device #1: Advanced Micro Devices, Inc. Tahiti
[17:00:07] - OpenCL version: 1.2 AMD-APP (1348.4)
[17:00:07] - driver version: 1348.4 (VM)
[17:00:07] - device type: GPU
[17:00:07] - supported: YES
I know their web site claims it:
"H.264 encoding GPU acceleration (Intel QuickSync, nVidia CUDA, OpenCL)"
But even when I go into the advanced interface and try to check "forced OpenCL if available" (something like that) it doesn't seem to matter. I'll try reinstalling it and see if I get lucky.
Last edited by lasitter; 13th Apr 2014 at 19:17.
OK, so I re-installed it, went to the ffmpeg subdirectory for MediaCoder, did "ffmpeg" without arguments, and I think THAT is supposed to tell me all the build info for that build of ffmpeg. I got:
configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libcaca --enable-libfreetype --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libx264 --enable-libxavs --enable-libxvid --enable-zlib
How wrong am I about this? I don't see the arguments in there for OpenCL support ... and from the ffmpeg help ...
"When FFmpeg is configured with --enable-opencl, it is possible to set the options for the global OpenCL context.
The list of supported options follows:
‘build_options’
Set build options used to compile the registered kernels.
‘device_idx’
Select the index of the device used to run OpenCL code. The specified index must be one of the indexes in the device list which can be obtained with ffmpeg -opencl_bench or av_opencl_get_device_list()."
It's not ffmpeg that supports OpenCL, it's x264; I know for a fact that the x264 build contained in MediaCoder is built with OpenCL support. Did you try with the 32-bit MediaCoder build or the 64-bit? (I used the 64-bit.)
Since you have a 7950 handy, give A's Video Converter a shot:
It supports Microsoft's H.264 encoder, QS, and AMD's VCE encoder.
Your video card supports version 1 of the encoder, so you're the perfect person to test the quality and speed (not too many apps support this hardware).
video encoders -> x264 -> "Enable OpenCL if possible" (checked)
And the Video (tab) -> Encoder (x264) ... checking the "GPU" checkbox gives me "No supported acceleration hardware detected"
In the "Recent Updates" tab at the bottom ... I get ...
[03-09] Updated x264 to r2273 and FFmpeg to 1.1.3 (even though 2.2.1 is available)
And I'd really appreciate it if someone could parse this page for me:
MediaCoder GPU encoding quality improved practically
There's talk about Intel MSDK and CUDA but no mention of ATI or OpenCL. I'm sorry, but that confuses me.
On the features page I also get: "MediaCoder utilizes Intel QuickSync and nVidia CUDA technologies to accelerate H.264 video transcoding. Transcoding time for HD video can be significantly shortened and CPU utilization reduced."
Again, no mention of ATI / OpenCL. Where's the love for my hardware?
"Supporting GPU accelerated H.264 encoding" has the checkmark available only on the last two premium versions.
Thanks guys for helping me sort this out.
MediaCoder is set up a bit confusingly. That "GPU" checkbox is meant to enable the two CUDA encoders; when it is checked, the encoder is changed to the CUDA encoder or NVENC, both of which are Nvidia-only.
You don't need to check the "GPU" box in order to use x264's OpenCL capabilities.
With regards to "improving quality practically": what the MediaCoder author has done is enable a high-quality denoise filter by default when a hardware encoder is chosen, such as Intel's Quick Sync or Nvidia's CUDA.
OpenCL is not an ATI/AMD-only GPU compute framework, even though it's come to be synonymous with AMD. Nvidia released their own framework for general-purpose computing on their GPUs and called it CUDA; OpenCL is the open-standard alternative (initially developed by Apple and maintained by the Khronos Group, with AMD as a major backer) that runs on all GPUs, including Intel's and Nvidia's.
The reality is more complex than that, as general-purpose computing on a GPU can be achieved via DX9-class HLSL (High Level Shader Language), DX10-class DirectCompute, OpenCL, CUDA (which is essentially C for Nvidia GPUs), Fortran (Nvidia has a Fortran compiler for their GPUs), and probably one or two more that I'm not familiar with (I heard rumors a while ago of a Java compiler designed for GPGPU).
This is why it annoys me to no end when people make broad statements like "GPUs are no good for video encoding": it completely ignores the numerous intertwined variables that make up the GPGPU compute landscape.
With regards to you: do as I said before for enabling OpenCL in MediaCoder, but ignore the "GPU" checkbox (leave it unchecked).
jagabo's position is, and always has been, that the very basic underlying technology behind CUDA, and GPGPU in general, is somehow poorly suited for video encoding.
Anyway......... I ran an x264 CRF18 encode (default settings) on an AVI I had handy. This old E6750 CPU managed about 45fps. Using the default MediaCoder/CUDA settings I was encoding at over 100fps, resulting in an encode of just over half the bitrate and about a tenth of the quality (if it can be measured that way). I couldn't seem to coax anything better out of a CUDA encode while selecting the quality based encoding method, and I couldn't fix the problem of a few rows of pixels worth of crud down one side. The source video was mod16 and I didn't resize.
Maybe it's just this antiquated PC. An 8600GT is the newest video card I have. I'm not a gamer, so I don't upgrade video cards often. The same applies to the video card drivers, if that's likely to make a difference; the drivers I'm using would be over a year old. I'm running XP.
The difference though, is despite all that, the quality of the x264 encode was fine.
Seeing as I couldn't find a way to run a quality-based CUDA encode which gave me acceptable output (it was the same, i.e. awful, regardless of the quality value I chose), I tried an average-bitrate encode instead, using the bitrate the CRF18 x264 encode gave me. The result was much better, although encoding speed slowed to around 80fps. Still nearly twice the speed of x264 on this PC, though. It reduced the keyframe pumping significantly, but didn't eliminate it. Not that it matters; I prefer quality-based encoding. If another encoder doesn't offer an equivalent of x264's CRF encoding, I can't see myself using it even if it is much faster. MediaCoder/CUDA doesn't even seem to allow 2-pass encoding.
Last edited by hello_hello; 14th Apr 2014 at 01:51.
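The workflow described above (run one CRF encode, then feed its resulting bitrate to an ABR encoder) needs only the finished file's size and duration. A small helper, with made-up example numbers:

```python
def avg_bitrate_kbps(file_size_bytes: float, duration_seconds: float) -> float:
    """Average bitrate implied by a finished encode."""
    return file_size_bytes * 8 / duration_seconds / 1000

# Hypothetical example: a CRF18 encode that came out at 50 MB over 5 minutes.
target = avg_bitrate_kbps(50 * 1000 * 1000, 5 * 60)
print(round(target))  # -> 1333 (kbps to hand to the ABR encoder)
```

The ABR encode then matches the CRF encode's size, which makes the quality comparison between the two rate-control modes fair.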
Could you upload the test AVI you used? I'd like to try it myself. Also, what bitrate did x264 CRF 18 give you, and what preset did you use?
And yes, the drivers do make a difference. It's been my experience that some driver revisions seem to kill quality, usually when Nvidia is trying to improve overall performance (the latest beta drivers improved performance considerably but definitely affected quality with some encoders).
This is why even the MainConcept CUDA- and OpenCL-powered encoders only feature a subset of the features of their full software-based encoder.
Now, if these developers had the desire, I have no doubt they could build an H.264 / MPEG-2 / MPEG-4 ASP encoder from the ground up that ran entirely on a GPU, and fast at that; hell, the gpeg-2 people did.
The sad thing is we're seeing the same crap repeated with HEVC, a standard built from the ground up with an eye toward GPU acceleration and easy parallelism. You know the threading model I suggested encoders like x264 use in order to benefit from GPUs, namely segmenting the video on GOP boundaries, the one you said would lead to cache thrashing? You might find it interesting that x265 uses GOP-level parallelism for its threading model.
Then we have this question: Intel uses inclusive cache hierarchies, meaning the data in the L1 is mirrored in the L2 and so on, while AMD uses exclusive cache hierarchies. Do you believe one is more susceptible than the other? And what about Intel's new L4, the 128 MB eDRAM found on some Haswells that will eventually be included in all Intel CPUs? That won't help eliminate thrashing?
Lastly, we have MediaCoder, which already features segmented video encoding; I have yet to see any thrashing take place in my tests.
In short, I know you're an old-school programmer, but I think maybe you're a bit too old school; you still think in terms of pre-Pentium CPUs. I honestly don't think it would thrash with 500 threads, and certainly not on any modern video card.
Virtualdub's External Encoder which I could.
EDIT: I just used the CUDA encoder at default settings and the results were way worse than I had remembered. Looking at the numbers, you would think the CUDA H.264 encoder would have produced the best-looking picture, but the picture was pathetic. The DivX265, x265, and x264 pictures looked identical to my "human" eye.
Using the same 4096x2160 input AVI in Virtualdub with the external encoder feature, I compared CUDA H.264 at default settings, x264 at the superfast preset, DivX265 at the fastest preset, and x265 at the ultrafast preset...
[i] VideoEnc: fieldMode: 0, dieMode: 0, currentProfile: high. INFO: Reading input from stdIn,...
[i] VideoEnc: INFO: Create the timer for frame time measurement,..
[i] VideoEnc: INFO: Creating encoder api interface,..
[i] VideoEnc: INFO: Created a NVEncoder instance,..
[i] VideoEnc: INFO: Using H.264 encoder,...
[i] VideoEnc: INFO: Detected 1 GPU(s) capable of GPU Encoding.
[i] VideoEnc: INFO: GPU Device 0 : GeForce GTS 450
[i] VideoEnc: INFO: Compute Capability = SM 2.1
[i] VideoEnc: INFO: Total Memory = 1024 MBytes
[i] VideoEnc: INFO: GPU Clock = 1764000 Hz
[i] VideoEnc: INFO: Multiprocessors = 4
[i] VideoEnc: INFO: GPU Encoding Mode:
[i] VideoEnc: INFO: CPU: Entropy Encoding
[i] VideoEnc: INFO: GPU: Full Offload of Encoding
[i] VideoEnc: INFO: Using device with index 0,...
[i] VideoEnc: PARAM: NVVE_GPU_OFFLOAD_LEVEL 8
[i] VideoEnc: PARAM: NVVE_OUT_SIZE 4096 2160
[i] VideoEnc: PARAM: NVVE_ASPECT_RATIO 1 1 1
[i] VideoEnc: PARAM: NVVE_FIELD_ENC_MODE 0
[i] VideoEnc: PARAM: NVVE_P_INTERVAL 1
[i] VideoEnc: PARAM: NVVE_IDR_PERIOD 250
[i] VideoEnc: PARAM: NVVE_DYNAMIC_GOP 1
[i] VideoEnc: PARAM: NVVE_RC_TYPE 3
[i] VideoEnc: PARAM: NVVE_AVG_BITRATE 1500000
[i] VideoEnc: PARAM: NVVE_PEAK_BITRATE 300000000
[i] VideoEnc: PARAM: NVVE_QP_LEVEL_INTRA 10
[i] VideoEnc: PARAM: NVVE_QP_LEVEL_INTER_P 12
[i] VideoEnc: PARAM: NVVE_QP_LEVEL_INTER_B 15
[i] VideoEnc: PARAM: NVVE_FRAME_RATE 25 1
[i] VideoEnc: PARAM: NVVE_DEBLOCK_MODE 1
[i] VideoEnc: PARAM: NVVE_PROFILE_LEVEL 65380
[i] VideoEnc: PARAM: NVVE_SET_DEINTERLACE 0
[i] VideoEnc: PARAM: NVVE_DISABLE_CABAC 0
[i] VideoEnc: PARAM: NVVE_CONFIGURE_NALU_FRAMING_TYPE 0
[i] VideoEnc: PARAM: NVVE_DISABLE_SPS_PPS 0
[i] VideoEnc: PARAM: NVVE_SLICE_COUNT 0
[i] VideoEnc: INFO: Register the callback structure,...
[i] VideoEnc: INFO: Create the hw resources for encoding..
[i] VideoEnc: INFO: Starting encoding,...
[i] VideoEnc: INFO: Colorspace: IYUV
[i] VideoEnc: INFO: measuring FPS: true
[i] VideoEnc: INFO: showFramestats: 0
[i] VideoEnc: Starting the encoding,...
[i] VideoEnc: Finished encoding,...
[i] VideoEnc: INFO: Number of Coded Frames : 374
[i] VideoEnc: INFO: Elapsed time : 42981 ms
[i] VideoEnc: INFO: End to End FPS : 8.70152
[i] VideoEnc: INFO: CPU utilization : 33.5006
[i] VideoEnc: (user: 12.2134%, kernel: 11.8686%) / 4 cores
File size: .............. 7.16 MB
Bit rate: ............... 3777 Kbps
Maximum bit rate: ........ 300 Mbps
Width: .................. 4096 pixels
Height: ................. 2160 pixels
Frame rate: ........... 24.000 fps
[i] VideoEnc: raw [info]: 4096x2160p 0:0 @ 24/1 fps (cfr)
[i] VideoEnc: x264 [info]: kb/s:1105.22
[i] VideoEnc: encoded 375 frames, 8.41 fps, 1105.22 kb/s
File size: .......... 2.07 MB
Overall bit rate: ... 1109 Kbps
Bit rate: ........... 1087 Kbps
Width: .............. 4096 pixels
Height: ............. 2160 pixels
Frame rate: ....... 24.000 fps
DivX265 (version 18.104.22.168)
[i] VideoEnc: DivX 265/HEVC Encoder (version 22.214.171.124)
[i] VideoEnc: Profile: DivX 4K
[i] VideoEnc: Encoding
[i] VideoEnc: Format: ................. Main@5.0, 4096x2160 1:1
[i] VideoEnc: Number of coded frames .. 375
[i] VideoEnc: Total encoding time ..... 62219 ms
[i] VideoEnc: Pure encoding time ...... 45602 ms
[i] VideoEnc: Average time per frame .. 165.917 ms
[i] VideoEnc: Average speed achieved .. 6.0 fps
[i] VideoEnc: Average CPU load ........ 93.8 % (4 pictures, 8 threads)
[i] VideoEnc: Peak memory usage ....... 1296.570 Mb
[i] VideoEnc: Average bitrate ......... 317.37 kbit/sec @ 24.000 Hz (Const QP)
File size: ......................................... 613 KB
[i] VideoEnc: yuv [info]: 4096x2160 fps 24000/1000 i420 unknown frame count
[i] VideoEnc: x265 [info]: HEVC encoder version 0.9+29-83ccf2f1453f
[i] VideoEnc: x265 [info]: build info [Windows][GCC 4.6.3][64 bit] 8bpp
[i] VideoEnc: encoded 375 frames in 80.23s (4.67 fps), 411.63 kb/s
File size: .......... 795 KB
Overall bit rate: ... 417 Kbps
Bit rate: ........... 408 Kbps
Width: ............. 4096 pixels
Height: ............ 2160 pixels
Frame rate: ...... 24.000 fps
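As a sanity check, the overall bitrates MediaInfo reports above can be recomputed from file size and duration: 375 frames at 24 fps is 15.625 seconds. (This assumes MediaInfo's MB/KB are binary units; the small mismatches come from the rounded sizes it displays.)

```python
frames, fps = 375, 24.0
duration = frames / fps  # 15.625 seconds

def kbps(size_bytes: float) -> float:
    # average bitrate in kilobits per second
    return size_bytes * 8 / duration / 1000

x264_rate = kbps(2.07 * 1024**2)  # 2.07 MB file -> ~1111 kbps (reported: 1109)
x265_rate = kbps(795 * 1024)      # 795 KB file  -> ~417 kbps  (reported: 417)
print(round(x264_rate), round(x265_rate))
```

The recomputed figures land within a fraction of a percent of the reported ones, so the size and bitrate numbers in these logs are internally consistent.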