It might've been because I was logged in as sudo (root user).
I've done some more serious tests and I'm getting amazing speed considering I'm on a 32-bit host OS emulating a 64-bit one.
I hope to take advantage of the new "para-virtualization" feature of VirtualBox and get even faster speeds, but I had trouble getting the new 5.0 release to work.
And I'm so glad I tested out system save states in the middle of an x265 encode, because they do not work. Upon resuming, it encodes 100 more frames and then fails with a "frame doesn't match the type from first pass" error; the same thing happens when suspending the process and resuming from a VM saved state.
But at least Sleep/Stand-by on Windows works while the VM is running with an x265 encode. So that problem is solved, in case anyone had the same difficulty.
I understand that, but 8 bit has 256 calculations and 10 bit has 1024, so wouldn't it have 4 times more cycles to go through, or am I misunderstanding some fundamental point here?
Code:
64-bit code for 10 bit depth uses assembler optimization with MMX / SSE instructions too, and also more and wider registers.
-
Absolutely, you do misunderstand completely.
Parameters with 8 bit resolution can have 256 different values, that's true.
Parameters with 10 bit resolution can have 1024 different values, that's true.
But why do you believe that an encoder's purpose is to go through all possible values? A parameter can be calculated directly. The number of required calculations is fairly constant; it doesn't change much with the precision of the values (except for a few additions to scale and limit the precision).
To keep it simple, imagine that normalized float values (between -1 and 1, typically like sine and cosine) are calculated with more or less precision, and because we know they will be normalized, we only need to store the fractional part of the number (only digits to the right of the decimal point). This is the most common kind of calculation in transforms from pixel values to a discrete frequency spectrum, limited variants of the Fourier series.
Older algorithms in the MPEG standards used the DCT (Discrete Cosine Transform) on 8x8 blocks of luminance or chrominance differences; from AVC on, there are slightly different integer transforms with the same purpose: describing the gradient of pixel values over small partitions of the video frame as frequency spectra. The difference between bit depths is only the precision of the frequency parameter values; the required calculations are basically the same.
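A quick way to sanity-check this, assuming a multilib x265 build that accepts --output-depth (the preset and filenames below are placeholders): encode the same clip at 8-bit and 10-bit depth and compare the reported fps. The 10-bit run is slower mainly because of wider arithmetic and memory traffic, not because four times as many calculations are performed, so the gap is typically nowhere near 4x.
Code:
x265 --y4m --preset medium --output-depth 8 input.y4m -o test_8bit.hevc
x265 --y4m --preset medium --output-depth 10 input.y4m -o test_10bit.hevc
-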
I get it, it doesn't have to go through all possible values, but I had the impression an encoder had to.
New problem though, I can't do the second pass (no error is shown).
http://pasteboard.co/2yY54zyK.png
First pass completed successfully; the second one does nothing. A 2-pass test on a smaller 4000-frame video was a success. I don't remember seeing any stats files, but the first pass does create a lower-quality .HEVC.
What's going on here? -
JavaScript was enabled. But I guess it decided not to support Opera 12 (Presto) anymore, despite it being a quite standards-compliant engine.
You forgot a closing double-quote after the output name; your shell interpreter waited forever for the rest of the filename...
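For illustration, this is what happens in an interactive bash session (the command and filename are just placeholders): with the closing quote missing, the shell treats the newline as part of the quoted string, prints its continuation prompt and waits indefinitely for the rest instead of ever starting the encode.
Code:
$ x265 --y4m input.y4m -o "output.hevc
>
-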
AAAAAAAAAAAAHHHHHH!!!!!!!!! What a blithering idiot! I remembered to delete both quotes in the first pass but not the second one? Reminds me of the time my internet was cut for 3 days, so I had to run in circles punctuated by nerdraging, only to find out hours later that the reason my x264 script wasn't working was that one of the commands had one hyphen instead of two.
I can't thank you enough for putting up with me, LigH.de.
Also, I just figured out that the VM was using only 40% of the CPU on average the whole time because I set its cores to 4. I thought cores meant cores, not threads, but I guess I was wrong. So the Linux vs. Windows tests I did are not accurate after all. I'll have to redo them and probably expect double the speed. If a 64-bit Linux VM performs better than the 32-bit Windows host, I will shit bricks.
EDIT: The 'cores' setting in VirtualBox is cores after all, because setting it to 8 slowed everything down. But why the VM is taking only an average of 45% of the CPU is a mystery. On the host machine it's closer to 75%. Extrapolating this, the Linux 8-bit test should be 4.55 fps and the 10-bit one 3.93.
-
Finally was able to do an anime test with a 45-minute episode. The x264 encode was 190MB and the x265 encode 120MB. I am extremely impressed by this increase in efficiency. In 2003, this very same video was being shared as a VHS rip at 352x240, 15fps, 90MB total size, and the quality SUCKED.
Now with x265 we are approaching very close to that same size at twice the resolution, twice the framerate and excellent quality.
It took 12 years to get this far, but we are finally here: DVD-quality cartoons at only 400 kb/s.
But I've run into a technical hitch, so this is only an apples-to-oranges comparison. The last x264 test I did in 2011 received a much higher SSIM even though duplicate frames were left intact and no deblending was done. So either I denoised more thoroughly last time or I don't know what the hell is up.
SSIMs are 0.98990 for x264 and 0.98856 for x265. x265 edges were more blurred while x264 edges were noisier. The x265 artifacts are more pleasing to me, but I'll have to agree with the SSIM that the x264 encode is slightly better, so this needs a retest altogether; I'm not waiting 40 hours to re-encode with x265, though, so I'll adjust x264.
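For reference, SSIM numbers like these can be computed with ffmpeg's ssim filter, comparing an encode against the source; a minimal sketch, assuming the encode and the source are frame-aligned (filenames are placeholders):
Code:
ffmpeg -i encode_x265.mkv -i source.avi -lavfi "[0:v][1:v]ssim=stats_file=ssim_x265.log" -f null -
The overall SSIM is printed at the end of the console output, and the stats file holds per-frame values.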
All in all, it's a 60% improvement, which I'm very pleased with even if the encoding is 23 times slower. -
Thought I would post this here. Some interesting discussion about Handbrake, Skylake, overclocking, stability, and x265 with Tom Vaughn of MCW at IDF. It would be good if Tom could elaborate further for the forum members.
http://www.anandtech.com/show/9533/intel-i7-6700k-overclocking-4-8-ghz/2 -
I would guess the heavy use of AVX instructions in x265 made the CPU overheat in small areas of the die, whereas a wider mix of instructions would spread the heat over a larger area.
Certainly an interesting find, thanks. So be warned, encoding with overclocked CPUs may be expensive... -
I encoded another animated feature-length movie which was an early codec experiment of mine back in 2008. Back then I emphasized compression over quality, so I encoded it to 200MB (including audio) and the quality sucked, but it was watchable. Part of the reason I was so careless is that I intended to archive this film (I haven't watched it since that day 7 years ago).
I remember reading and drooling about H.265 that month on h265.net, wondering how awesome it would be.
Now that x265 is out I wanted to see what it would look like today at the same size. But first I encoded it all over again with the latest x264, and with proper parameters this time. I can't believe my eyes, look how awesome the quality is. All three of them are at the same bitrate, about 0.050 BPP.
However, x264 has been massively improved since 2008, so x265 regrettably has only about 25% better efficiency than its fully mature predecessor.
-
Hmm, weird. The only difference is that the 2008 encode was 8-bit and the new ones are 10-bit. I fixed it now by converting with mspaint.
You can't really notice much of a visual difference between mature x264 and immature x265 unless you compare them directly. -
Originally Posted by -Habanero-
Thank goodness we're not enduring some long era of "blurvision" with x265 though, otherwise I would have completely sat out this round this time. -
We've updated x265 performance presets to incorporate the algorithms we've developed over the past months. In many cases you will see 2x the performance, with little to no effect on compression efficiency. See http://x265.org/performance-presets/ for details. The settings are documented here... http://x265.readthedocs.org/en/default/presets.html
-
Version 1.9 has been released as a new milestone. From the developer mailing list:
Originally Posted by Deepthi Nandakumar -
Where can I find a changelog? I can't find the "new features" list at http://x265.readthedocs.org/en/stable/
I have found all the commit changes at https://bitbucket.org/multicoreware/x265/commits/all but I would like the summarized changelog as well. Or is it only available on the mailing list?
-
The summary in the announcement post is probably about all you get as a summary of what happened since v1.8; the commit list is the usual detailed change log. I doubt that a change log with a level of detail somewhere in between will be created. But only the staff can answer that authoritatively...
The "full documentation" is the documentation of the current state. There is no history included. -
There seems to be a regression, introduced somewhere between versions 1.8+167-e951ab673b1c and 1.8+201-769081eb5f4c. In 2-pass VBR mode, x265 may miss the target bitrate, returning results with about 75-80% of the desired bitrate, up to an extreme case where the bitrate curve would not converge in the 2nd pass, making x265 believe that the maximum QP (= 51) would be sufficient.
P.S.: The reason seems to be that the encoding was restricted to only one GOP, using "--frames 100 --keyint 100".
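For reference, a minimal sketch of the kind of restricted-GOP two-pass run described above (preset, bitrate and filenames are placeholders, not the exact settings from the report; replace NUL with /dev/null outside Windows):
Code:
x265 --y4m --preset medium --bitrate 1500 --frames 100 --keyint 100 --pass 1 --stats 2pass.log input.y4m -o NUL
x265 --y4m --preset medium --bitrate 1500 --frames 100 --keyint 100 --pass 2 --stats 2pass.log input.y4m -o output.hevc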
-
Anyone know a source for pre-compiled x265 binaries for Mac OS X?
(I compiled them myself without NUMA support so far, but seeing that newer x265 versions perform badly on multi-CPU systems without NUMA support, I was wondering if there are some build sites available that I don't know of.)
Cu Selur
P.S.: the VLC site offers Mac x264 binaries, but no Mac x265 binaries.
Building numactl (for NUMA support) always fails:
Code:MacMini:numactl-2.0.11 selur$ autoconf MacMini:numactl-2.0.11 selur$ ./configure checking for a BSD-compatible install... /usr/bin/install -c checking whether build environment is sane... yes checking for a thread-safe mkdir -p... build-aux/install-sh -c -d checking for gawk... no checking for mawk... no checking for nawk... no checking for awk... awk checking whether make sets $(MAKE)... yes checking whether make supports nested variables... yes checking whether make supports nested variables... (cached) yes checking build system type... x86_64-apple-darwin15.3.0 checking host system type... x86_64-apple-darwin15.3.0 checking how to print strings... printf checking for style of include used by make... GNU checking for gcc... gcc checking whether the C compiler works... yes checking for C compiler default output file name... a.out checking for suffix of executables... checking whether we are cross compiling... no checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether gcc accepts -g... yes checking for gcc option to accept ISO C89... none needed checking whether gcc understands -c and -o together... yes checking dependency style of gcc... gcc3 checking for a sed that does not truncate output... /usr/bin/sed checking for grep that handles long lines and -e... /usr/bin/grep checking for egrep... /usr/bin/grep -E checking for fgrep... /usr/bin/grep -F checking for ld used by gcc... /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ld checking if the linker (/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ld) is GNU ld... no checking for BSD- or MS-compatible name lister (nm)... /usr/bin/nm checking the name lister (/usr/bin/nm) interface... BSD nm checking whether ln -s works... yes checking the maximum length of command line arguments... 196608 checking whether the shell understands some XSI constructs... yes checking whether the shell understands "+="... yes checking how to convert x86_64-apple-darwin15.3.0 file names to x86_64-apple-darwin15.3.0 format... func_convert_file_noop checking how to convert x86_64-apple-darwin15.3.0 file names to toolchain format... func_convert_file_noop checking for /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ld option to reload object files... -r checking for objdump... no checking how to recognize dependent libraries... pass_all checking for dlltool... no checking how to associate runtime and link libraries... printf %s\n checking for ar... ar checking for archiver @FILE support... no checking for strip... strip checking for ranlib... ranlib checking command to parse /usr/bin/nm output from gcc object... ok checking for sysroot... no checking for mt... no checking if : is a manifest tool... no checking for dsymutil... dsymutil checking for nmedit... nmedit checking for lipo... lipo checking for otool... otool checking for otool64... no checking for -single_module linker flag... yes checking for -exported_symbols_list linker flag... yes checking for -force_load linker flag... yes checking how to run the C preprocessor... gcc -E checking for ANSI C header files... yes checking for sys/types.h... yes checking for sys/stat.h... yes checking for stdlib.h... yes checking for string.h... yes checking for memory.h... yes checking for strings.h... yes checking for inttypes.h... yes checking for stdint.h... yes checking for unistd.h... yes checking for dlfcn.h... yes checking for objdir... 
.libs checking if gcc supports -fno-rtti -fno-exceptions... yes checking for gcc option to produce PIC... -fno-common -DPIC checking if gcc PIC flag -fno-common -DPIC works... yes checking if gcc static flag -static works... no checking if gcc supports -c -o file.o... yes checking if gcc supports -c -o file.o... (cached) yes checking whether the gcc linker (/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ld) supports shared libraries... yes checking dynamic linker characteristics... darwin15.3.0 dyld checking how to hardcode library paths into programs... immediate checking whether stripping libraries is possible... yes checking if libtool supports shared libraries... yes checking whether to build shared libraries... yes checking whether to build static libraries... yes checking for gcc... (cached) gcc checking whether we are using the GNU C compiler... (cached) yes checking whether gcc accepts -g... (cached) yes checking for gcc option to accept ISO C89... (cached) none needed checking whether gcc understands -c and -o together... (cached) yes checking dependency style of gcc... (cached) gcc3 checking for thread local storage (TLS) class... __thread checking whether C compiler accepts -ftree-vectorize... yes checking that generated files are newer than configure... done configure: creating ./config.status config.status: creating Makefile config.status: creating config.h config.status: executing depfiles commands config.status: executing libtool commands MacMini:numactl-2.0.11 selur$ make /Applications/Xcode.app/Contents/Developer/usr/bin/make all-am CC libnuma.lo In file included from libnuma.c:37: ./numaint.h:53:9: warning: 'howmany' macro redefined [-Wmacro-redefined] #define howmany(x,y) (((x)+((y)-1))/(y)) ^ /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.11.sdk/usr/include/sys/types.h:184:9: note: previous definition is here #define howmany(x, y) __DARWIN_howmany(x, y) /* # y's == x bits? 
*/ ^ libnuma.c:317:1: error: only weak aliases are supported on darwin make_internal_alias(numa_pagesize); ^ ./numaint.h:18:73: note: expanded from macro 'make_internal_alias' #define make_internal_alias(x) extern __typeof (x) x##_int __attribute((alias(#x), visibility("hidden"))) ^ libnuma.c:670:1: error: only weak aliases are supported on darwin make_internal_alias(numa_max_node); ^ ./numaint.h:18:73: note: expanded from macro 'make_internal_alias' #define make_internal_alias(x) extern __typeof (x) x##_int __attribute((alias(#x), visibility("hidden"))) ^ libnuma.c:691:1: error: only weak aliases are supported on darwin make_internal_alias(numa_max_possible_node_v1); ^ ./numaint.h:18:73: note: expanded from macro 'make_internal_alias' #define make_internal_alias(x) extern __typeof (x) x##_int __attribute((alias(#x), visibility("hidden"))) ^ libnuma.c:692:1: error: only weak aliases are supported on darwin make_internal_alias(numa_max_possible_node_v2); ^ ./numaint.h:18:73: note: expanded from macro 'make_internal_alias' #define make_internal_alias(x) extern __typeof (x) x##_int __attribute((alias(#x), visibility("hidden"))) ^ libnuma.c:780:1: error: only weak aliases are supported on darwin make_internal_alias(numa_node_size64); ^ ./numaint.h:18:73: note: expanded from macro 'make_internal_alias' #define make_internal_alias(x) extern __typeof (x) x##_int __attribute((alias(#x), visibility("hidden"))) ^ libnuma.c:857:1: error: only weak aliases are supported on darwin make_internal_alias(numa_police_memory); ^ ./numaint.h:18:73: note: expanded from macro 'make_internal_alias' #define make_internal_alias(x) extern __typeof (x) x##_int __attribute((alias(#x), visibility("hidden"))) ^ libnuma.c:873:8: warning: implicit declaration of function 'mremap' is invalid in C99 [-Wimplicit-function-declaration] mem = mremap(old_addr, old_size, new_size, MREMAP_MAYMOVE); ^ libnuma.c:873:45: error: use of undeclared identifier 'MREMAP_MAYMOVE' mem = mremap(old_addr, old_size, new_size, MREMAP_MAYMOVE); ^ libnuma.c:916:1: error: only weak aliases are supported on darwin make_internal_alias(numa_alloc_interleaved_subset_v1); ^ ./numaint.h:18:73: note: expanded from macro 'make_internal_alias' #define make_internal_alias(x) extern __typeof (x) x##_int __attribute((alias(#x), visibility("hidden"))) ^ libnuma.c:917:1: error: only weak aliases are supported on darwin make_internal_alias(numa_alloc_interleaved_subset_v2); ^ ./numaint.h:18:73: note: expanded from macro 'make_internal_alias' #define make_internal_alias(x) extern __typeof (x) x##_int __attribute((alias(#x), visibility("hidden"))) ^ libnuma.c:1051:1: error: only weak aliases are supported on darwin make_internal_alias(numa_set_membind_v2); ^ ./numaint.h:18:73: note: expanded from macro 'make_internal_alias' #define make_internal_alias(x) extern __typeof (x) x##_int __attribute((alias(#x), visibility("hidden"))) ^ libnuma.c:1157:1: error: only weak aliases are supported on darwin make_internal_alias(numa_get_mems_allowed); ^ ./numaint.h:18:73: note: expanded from macro 'make_internal_alias' #define make_internal_alias(x) extern __typeof (x) x##_int __attribute((alias(#x), visibility("hidden"))) ^ libnuma.c:1399:1: error: only weak aliases are supported on darwin make_internal_alias(numa_node_to_cpus_v1); ^ ./numaint.h:18:73: note: expanded from macro 'make_internal_alias' #define make_internal_alias(x) extern __typeof (x) x##_int __attribute((alias(#x), visibility("hidden"))) ^ libnuma.c:1400:1: error: only weak aliases are supported on darwin 
make_internal_alias(numa_node_to_cpus_v2); ^ ./numaint.h:18:73: note: expanded from macro 'make_internal_alias' #define make_internal_alias(x) extern __typeof (x) x##_int __attribute((alias(#x), visibility("hidden"))) ^ libnuma.c:1538:1: error: only weak aliases are supported on darwin make_internal_alias(numa_run_on_node_mask_v2); ^ ./numaint.h:18:73: note: expanded from macro 'make_internal_alias' #define make_internal_alias(x) extern __typeof (x) x##_int __attribute((alias(#x), visibility("hidden"))) ^ 2 warnings and 14 errors generated. make[1]: *** [libnuma.lo] Error 1 make: *** [all] Error 2 MacMini:numactl-2.0.11 selur$
-
True, though I exaggerated and starved my encodes of bitrate back then because I didn't want to accept that x264 was only 50% better than Xvid; I wanted it to be twice as good. But now I can use the same low bitrate and get actual high quality, so dreams do come true.
Anyway, has anyone else noticed that x265's quality has not improved over the last year? I did my last test exactly a year ago, and the average SSIM has decreased after slowly but consistently increasing before that. Visual inspection makes it hard to tell the difference, but there are definitely a lot of frames that are worse than with x265 1.4.
I've been told the devs were focusing on increasing speed lately so maybe that's a factor? I used the same veryslow preset as always.
I've been gone all this time to come back and see this? What's happening? -
hxxp://www.sendspace.com/file/we54wq (**** off cleverbot)
Well shit, x265 has produced worse quality than x264 for this test file. For other files it does a little better than x264, but still worse than x265 1.4. If this is only happening to me, I'd like to know exactly what I'm doing wrong.
Commandline used:
Code:avs4x26x.exe --x26x-binary x265 ng2.avs --crf 41.1 --preset veryslow --ref 16 --bframes 16 --keyint 600 --no-psy-rd --no-psy-rdoq --rc-lookahead 250 --qcomp 0.7 --allow-non-conformance -o "ng2.hevc"
-
Presets changed a lot over the last year, so writing 'used the same veryslow preset as always' basically means you used totally different options.
--no-psy-rd --no-psy-rdoq
--qcomp 0.7
ng2.avs
--crf 41.1
For general amusement I attached my first try at encoding such stuff with x265 (without Avisynth)... to me it looks way better than your encodes.
-
Can you post your commandline? Your encode is a lot better than mine despite the fact that you used only 6 refs vs. my 16 refs, which is THE most crucial factor for this kind of video. This is really messed up. It can't just be because your me-range was double mine.
Also, why is your encode 2295 frames instead of 2334? I know you didn't just trim because there's no consistent cut. You need to update whatever splitter you're using because it didn't decode the source frame-accurately.
--no-psy-rd --no-psy-rdoq
...and thus you dropped the main quality gain factors which came into play the last year
--qcomp 0.7
why?
What does the script look like? Posting the source without the script basically means that one can't reproduce the stuff you do,...
Code:
crop( 0, 8, 0, 0)
converttoYV12
-
Why not? I like constant quality.
why is your encode 2295 frames instead of 2334?
Your encode is a lot better than mine despite that you used only 6 refs vs. my 16 refs which is THE most crucial factor for this kind of video.
btw. the SEI should still hold the encoding settings and MediaInfo should be able to show them.
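For example, a MediaInfo CLI call along these lines should print that settings string, assuming MediaInfo exposes the Encoded_Library_Settings field for the stream (the filename is a placeholder):
Code:
mediainfo --Inform="Video;%Encoded_Library_Settings%" encode.hevc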
as decoder I used:
Code:ffmpeg -y -threads 8 -i "C:\Users\Selur\Desktop\ng2.avi" -map 0:0 -an -sn -vsync 0 -r 6010000/100000 -pix_fmt yuv420p -strict -2 -f yuv4mpegpipe - | ...
=> it seems to be some incompatibility with libav.
-
Avisynth script:
Code:
SetMemoryMax(768)
# loading source: C:\Users\Selur\Desktop\ng2.avi
SetMTMode(5) # change MT mode
AviSource("C:\Users\Selur\Desktop\ng2.avi")
SetMTMode(2) # change MT mode
# cropping
Crop(0,0,0,-8)
# adjust color to YV12
ConvertToYV12(matrix="PC.601")
return last
Code:
ffmpeg -y -loglevel fatal -threads 8 -i "H:\Temp\encodingTempAvisynthSkript_10_20_09_3710.avs" -an -sn -vsync 0 -r 6010000/100000 -pix_fmt yuv420p -strict -2 -f yuv4mpegpipe - | x265 --preset veryslow --pmode --pme --input - --y4m --allow-non-conformance --ctu 32 --merange 100 --keyint 600 --bframes 16 --bframe-bias 25 --ref 16 --crf 41.10 --nr-intra 500 --nr-inter 500 --psy-rdoq 15.00 --aq-mode 2 --aq-strength 1.50 --range full --colormatrix bt470bg --output "H:\Temp\10_20_09_3710_02.265"
Btw., since compatibility isn't the goal here: why not use 10-bit encoding?
-
it's not for constant quality, but constant quantizer,....
btw. since compatibility isn't the goal here: Why not use 10bit encoding?
Anyway, the only differences I see between your commandline and mine are that you use bframe-bias 25 while I don't use any bframe bias at all, and you use merange 100 instead of my 57, which seems to be the default for the veryslow preset. And you use psychovisuals. So is that it, or am I missing something? I'll do new tests tomorrow.
Also, I would reconsider how I encode from now on if I were you, because ffmpeg seems to be decoding some videos incorrectly for you. Why do you need ffmpeg? Use x265 directly. -
qcomp controls the variability of the quantizer: 0.0 is essentially constant bitrate, and 1.0 is a constant quantizer.
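A quick way to see the effect (the CRF value and filenames are placeholders): encode the same clip with the default qcomp 0.6 and again with qcomp 1.0, then compare how much the frame sizes and QPs swing between simple and complex scenes in the two results.
Code:
x265 --y4m --crf 28 --qcomp 0.60 input.y4m -o qcomp060.hevc
x265 --y4m --crf 28 --qcomp 1.00 input.y4m -o qcomp100.hevc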
Also, I would reconsider how I encode from now on if I were you because ffmpeg seems to be incorrectly decoding some videos for you.
Use x265 directly.
x264 can be compiled with libav, but it will then also show the same problem with Lagarith (and some other formats), depending on the libav version.
The only reliable way to handle Lagarith is through the VFW decoder (as AviSource does); otherwise it's always a question of which libav version is used.
So are these it or am I missing something?
The main boost should be due to extended merange, and the psychovisual settings.
I'm on a 32-bit OS.
-
I didn't get what you just said. Rephrase?
You need to use x265 in conjunction with avs4x26x; you'll be able to use AVS scripts that way. I use AviSource for Lagarith files because it's faster to type than FFVideoSource.
Anyway, I used merange 100, psy-rd and psy-rdoq, and the quality is now much better, but still less than x264's. psy-rdoq 15 increased the quality most profoundly. bframe-bias 25 decreased quality. My long-time wisdom has been that it's better to let x264 decide what the optimal P/B frame balance is.
Here's my new encode. If it can still be improved, can you post a new encode with a proper decoder? Make sure it's not larger than mine.
It's interesting that psy-rd and psy-rdoq increased quality so much; I thought they were noise retainers?