VideoHelp Forum




  1. It might've been because I was logged in as root (via sudo).

    I've done some more serious tests and I'm getting amazing speed considering I'm on a 32-bit host OS emulating a 64-bit one.
    I hope to take advantage of the new "para-virtualization" feature of VirtualBox and get even faster speeds, but I had trouble getting the new 5.0 to work.

    And I'm so glad I tested out system save states in the middle of an x265 encode, because they do not work. Upon resuming, it encodes 100 more frames and then fails with a "frame doesn't match the type from first pass" error; the same thing happens when suspending the process and resuming from a VM saved state.
    But at least Sleep/Stand-by on Windows works while the VM is running with x265 encoding. So that problem is solved if anyone had the same difficulty.

    Code:
    64-bit code for 10 bit depth uses assembler optimization with MMX / SSE instructions too, and also more and wider registers.
    I understand that, but 8 bit has 256 calculations and 10 bit has 1024, so wouldn't it have 4 times more cycles to go through, or am I misunderstanding some fundamental point here?
  2. Member (joined Aug 2013, Central Germany)
    Originally Posted by -Habanero- View Post
    I understand that, but 8 bit has 256 calculations and 10 bit has 1024, so wouldn't it have 4 times more cycles to go through, or am I misunderstanding some fundamental point here?
    Absolutely, you do misunderstand completely.

    Parameters with 8 bit resolution can have 256 different values, that's true.
    Parameters with 10 bit resolution can have 1024 different values, that's true.

    But why do you believe that an encoder's purpose is to go through all possible values? A parameter can be calculated straight. The number of required calculations is rather constant, it doesn't matter much how exactly it is calculated (except for a few additions to scale and limit the precision).

    To keep it simple, imagine that normalized float values (between -1 and 1, typically like sine and cosine) are calculated with more or less precision, and because we know that they will be normalized, we know that we only need to store a fractional part of the number (only digits to the right of the decimal). This is the most usual kind of calculations in transformations from pixel values to a discrete frequency spectrum, limited variants of the Fourier sequence.

    Older algorithms in the MPEG standards used the DCT (Discrete Cosine Transform) with 8x8 samples of luminance or chrominance differences; from AVC on, there are slightly different integer transforms but with the same purpose: Describing the gradient of pixel values over small partitions of the video frame as frequency spectrums. The difference between bit depths is only the precision of the frequency parameter values, but the required calculations will basically be almost the same.
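The point above can be illustrated with a toy numeric sketch (not x265's actual transform code): a naive 8x8 DCT performs exactly the same number of multiply-accumulates whether its input samples are 8-bit or 10-bit; only the value range of the inputs changes.

```python
import math
import random

def dct_2d_8x8(block):
    """Naive 8x8 2-D DCT-II; returns (coefficients, number of multiply-accumulates)."""
    ops = 0
    out = [[0.0] * 8 for _ in range(8)]
    for u in range(8):
        for v in range(8):
            s = 0.0
            for x in range(8):
                for y in range(8):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / 16)
                          * math.cos((2 * y + 1) * v * math.pi / 16))
                    ops += 1  # one multiply-accumulate per sample per coefficient
            out[u][v] = s / 4
    return out, ops

rnd = random.Random(0)
block8  = [[rnd.randrange(256)  for _ in range(8)] for _ in range(8)]   # 8-bit samples
block10 = [[rnd.randrange(1024) for _ in range(8)] for _ in range(8)]   # 10-bit samples
_, ops8  = dct_2d_8x8(block8)
_, ops10 = dct_2d_8x8(block10)
print(ops8, ops10)  # 4096 4096 -> identical work regardless of bit depth
```

The operation count depends only on the block size (8^4 = 4096 here), never on how many distinct values each sample could theoretically take.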
  3. I get it, it doesn't have to go through all possible values, but I had the impression an encoder had to.

    New problem though, I can't do the second pass (no error is shown).
    http://pasteboard.co/2yY54zyK.png

    First pass completed successfully, the second one does nothing. A 2-pass test on a smaller 4000-frame video was a success. I don't remember seeing any stats files, but the first pass does create a lower-quality .HEVC.

    What's going on here?
  4. Member (joined Aug 2013, Central Germany)
    Originally Posted by -Habanero- View Post
    Image not found
  5. Enable javascript. Or alternatively: http://postimg.org/image/eykp7zbjv/
  6. Member (joined Aug 2013, Central Germany)
    JavaScript was enabled. But I guess the site decided not to support Opera 12 (Presto) anymore, despite it being a quite standards-compliant engine.

    You forgot a closing double-quote after the output name; your shell interpreter waited eternally for the whole filename...
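The same failure mode can be reproduced outside the shell; here Python's shlex tokenizer stands in for the shell's quote parsing (the x265 command line shown is a hypothetical example):

```python
import shlex

good = 'x265 --output "out.hevc" input.y4m'   # balanced quotes: parses fine
bad  = 'x265 --output "out.hevc input.y4m'    # missing closing quote

print(shlex.split(good))
try:
    shlex.split(bad)
except ValueError as e:
    # A real interactive shell reacts to the same mistake by printing its
    # continuation prompt and waiting for the rest of the "filename".
    print("parse error:", e)
```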
  7. AAAAAAAAAAAAHHHHHH!!!!!!!!! What a blithering idiot! I remembered to delete both quotes in the first pass but not the second one? Reminds me of the time my internet was cut for 3 days, so I had to run in circles punctuated by nerdraging, only to find out hours later that the reason my x264 script wasn't working was that one of the commands had one hyphen instead of two.

    I can't thank you enough for putting up with me, LigH.de.

    Also, I just figured out that the VM was using only 40% of the CPU on average the whole time because I set its cores to 4. I thought cores meant cores, not threads but I guess I was wrong. So the Linux vs. Windows tests I did are not accurate after all. I'll have to redo them and probably expect double speed. If a 64-bit Linux VM performs better than the 32-bit Windows host, I will shit bricks.
    EDIT: The 'cores' setting in VirtualBox does mean cores after all, because setting it to 8 slowed everything down. But why the VM is using only an average of 45% of the CPU is a mystery; on the host machine it's closer to 75%. Extrapolating this, the Linux 8-bit test should be 4.55 fps and the 10-bit one 3.93.
    Last edited by -Habanero-; 8th Aug 2015 at 13:57.
  8. Finally was able to do an anime test with a 45-minute episode. The x264 encode was 190MB and the x265 encode 120MB. I am extremely impressed by this increase in efficiency. In 2003, this very same video was being shared as a VHS rip at 352x240, 15fps, 90MB total size, and the quality SUCKED.
    Now with x265 we are approaching very close to that same size at twice the resolution, twice the framerate and excellent quality.
    It took 12 years to get this far, but we are finally here: DVD-quality cartoons at only 400 kb/s.

    But I've run into a technical hitch, so this is only an apples-to-oranges comparison. The last x264 test I did in 2011 received a much higher SSIM even though duplicate frames were left intact and no deblending was done. So either I denoised more thoroughly last time, or I don't know what the hell is up.

    SSIMs are 0.98990 for x264 and 0.98856 for x265. x265 edges were more blurred while x264 edges were more noisy. x265 artifacts are more pleasing to me but I'll have to agree with the SSIM that the x264 encode is slightly better so this needs a retest altogether, but I'm not waiting 40 hours to re-encode with x265 so I'll adjust x264.

    All in all, it's a 60% improvement, which I'm very pleased with even if the encoding is 23 times slower.
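The "60% improvement" figure roughly checks out as arithmetic on the two file sizes quoted above:

```python
x264_mb, x265_mb = 190, 120          # sizes of the two encodes, in MB
gain = x264_mb / x265_mb - 1         # how much more data the x264 encode needed
print(f"x264 needed {gain:.0%} more bits for similar quality")
```

190/120 works out to about 58% more bits for the x264 encode, close to the rounded 60% claim.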
  9. Thought I would post this here. Some interesting discussion about Handbrake, Skylake, OC, stability, and x265 with Tom Vaughn of MCW at IDF. It would be good if Tom could elaborate further for the forum members.

    http://www.anandtech.com/show/9533/intel-i7-6700k-overclocking-4-8-ghz/2
  10. Member (joined Aug 2013, Central Germany)
    I would guess the heavy use of AVX instructions in x265 made the CPU overheat in small areas of the die, whereas a wider spectrum of instructions spread the heat over a larger area.

    Certainly an interesting find, thanks. So be warned, encoding with overclocked CPUs may be expensive...
  11. I encoded another feature-length animated movie, which was an early codec experiment of mine back in 2008. Back then I emphasized compression over quality, so I encoded it to 200MB (including audio); the quality sucked, but it was watchable. Part of the reason I was so careless is that I intended to archive this film (I haven't watched it since that day 7 years ago).
    I remember reading and drooling about H.265 that month on h265.net, wondering how awesome it would be.

    Now that x265 is out I wanted to see what it would look like today at the same size. But first I encoded all over again with the latest x264 and with proper parameters this time. I can't believe my eyes, look how awesome the quality is. All three of them are the same bitrate, about 0.050 BPP.

    However, x264 has been massively improved since 2008 so x265 regrettably has only about 25% better efficiency than its fully-mature predecessor.
    Attached thumbnails: cwnewx264.PNG (560.0 KB), cwx265.PNG (578.8 KB), cwoldx264.png (571.3 KB)

    Last edited by -Habanero-; 3rd Sep 2015 at 02:51.
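For reference, BPP (bits per pixel) relates bitrate to resolution and frame rate as bitrate = BPP × width × height × fps. A quick sanity check; the 640×352 at 24 fps dimensions here are assumed for illustration, since the post doesn't state the actual resolution:

```python
def bitrate_bps(bpp, width, height, fps):
    """Bits per second implied by a bits-per-pixel budget."""
    return bpp * width * height * fps

# 0.050 BPP at an assumed 640x352 @ 24 fps:
print(bitrate_bps(0.050, 640, 352, 24))  # 270336.0 bits/s, i.e. about 270 kb/s
```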
  12. Member (joined Aug 2013, Central Germany)
    Originally Posted by -Habanero- View Post
    And I have no idea why the first image was brightened when I uploaded it but that's not how it should look at all. What's wrong with this site?
    Nothing. But your earlier encode or decode/screenshot may not have respected the TV/PC scale conversion (TV luma range = 16-235; PC luma range = 0-255).
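The TV-to-PC scale conversion described here is a simple linear map; a minimal sketch (the clipping of out-of-range input is my assumption, not a spec quote):

```python
def tv_to_pc_luma(y):
    """Expand TV-range luma (16-235) to PC range (0-255)."""
    y = min(max(y, 16), 235)           # clip out-of-range input first
    return round((y - 16) * 255 / 219) # 219 = 235 - 16, the TV-range span

print(tv_to_pc_luma(16), tv_to_pc_luma(126), tv_to_pc_luma(235))  # 0 128 255
```

Skipping this conversion (or applying it twice) is a classic cause of screenshots that look brighter or more washed out than the source.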
  13. Hmm, weird. The only difference is the 2008 encode was 8-bit and the new ones are 10-bit. I fixed it now by converting with mspaint.
    You can't really notice much of a visual difference between mature x264 and immature x265 unless you directly compare.
  14. Member (joined Aug 2013, Central Germany)
    Originally Posted by Deepthi Nandakumar
    x265 version 1.8 has been released. This release supports 12bit input depths, a large amount of AVX2 optimizations, entropy coding optimizations, as well as new quality features.

    Full documentation is available at http://x265.readthedocs.org/en/stable/

    ==================== API Changes ====================
    • Experimental support for Main12 is now enabled. Partial assembly support exists.
    • Main12 and Intra/Still picture profiles are now supported. Still picture profile is detected based on x265_param::totalFrames.
    • Three classes of encoding statistics are now available through the API.
      • x265_stats - contains encoding statistics, available through x265_encoder_get_stats()
      • x265_frame_stats and x265_cu_stats - contains frame encoding statistics, available through recon x265_picture
    • --csv
      • x265_encoder_log() is now deprecated
      • x265_param::csvfn is also deprecated
    • --log-level now controls only console logging, frame level console logging has been removed.
    • Support added for new color transfer characteristic ARIB STD-B67
    ==================== New Features ====================
    • limit-refs
      • This feature limits the references analysed for individual CUs.
      • Provides a nice tradeoff between efficiency and performance.
    • aq-mode 3
      • A new aq-mode that provides additional biasing for low-light conditions.
    • An improved scene cut detection logic that allows ratecontrol to manage visual quality at fade-ins and fade-outs better.
    ==================== Preset and Tune Options ====================
    • tune grain
      • Increases psyRdoq strength to 10.0, and rdoq-level to 2.
    • qg-size
      • Default value changed to 32.
    A current build will be released soon in the mingw thread...
  15. Member PuzZLeR (joined Oct 2006, Toronto, Canada)
    Originally Posted by -Habanero-
    However, x264 has been massively improved since 2008 so x265 regrettably has only about 25% better efficiency than its fully-mature predecessor.
    Improvement since 2008? You can say that again. You brought back memories of that horrid blur in the old days of x264. Ugghh. <*shuddering*> My tests were similar, and now trying with x265.

    Thank goodness we're not enduring some long era of "blurvision" with x265 though, otherwise I would have completely sat out this round this time.
    I hate VHS. I always did.
  16. Member x265 (joined Aug 2013, Sunnyvale, CA)
    We've updated x265 performance presets to incorporate the algorithms we've developed over the past months. In many cases you will see 2x the performance, with little to no effect on compression efficiency. See http://x265.org/performance-presets/ for details. The settings are documented here... http://x265.readthedocs.org/en/default/presets.html
  17. Member (joined Aug 2013, Central Germany)
    Version 1.9 has been released as new milestone. From the developer mailing list:

    Originally Posted by Deepthi Nandakumar
    x265 version 1.9 has now been released. This release supports many new features as well as additional assembly optimizations for Main12, intra prediction and SAO. Recently added features lookahead-slices, limit-refs and limit-modes have been enabled by default in the supported presets.

    Full documentation is available at http://x265.readthedocs.org/en/stable/

    ========================================== New Features ==============================================
    Quant offsets: This feature allows block level quantization offsets to be specified for every frame. An API-only feature.
    --intra-refresh: Keyframes can be replaced by a moving column of intra blocks in non-keyframes.
    --limit-modes: Intelligently restricts mode analysis.
    --max-luma and --min-luma for luma clipping, optional for HDR use-cases
    Emergency denoising is now enabled by default in very low bitrate, VBV encodes
    =========================================== API Changes ==============================================
    x265_frame_stats returns many additional fields: maxCLL, maxFALL, residual energy, scenecut and latency logging
    --qpfile now supports frametype 'K'
    x265 now allows CRF ratecontrol in pass N (N greater than or equal to 2)
    Chroma subsampling format YUV 4:0:0 is now fully supported and tested
    ====================================== Presets and Performance ==========================================
    Recently added features lookahead-slices, limit-modes, limit-refs have been enabled by default for applicable presets.
    The default psy-rd strength has been increased to 2.0
    Multi-socket machines now use a single pool of threads that can work cross-socket.

    Thanks,
    Deepthi Nandakumar
    Engineering Manager, x265
    Multicoreware, Inc
    New build will arrive shortly...
  18. I'm a MEGA Super Moderator Baldrick (joined Aug 2000, Sweden)
    Where can I find a changelog? I can't find the "new features" list at http://x265.readthedocs.org/en/stable/

    I have found all the commit changes at https://bitbucket.org/multicoreware/x265/commits/all, but I would like the summarized changelog as well. Or is it only available on the mailing list?
    Last edited by Baldrick; 29th Jan 2016 at 05:21.
  19. Member (joined Aug 2013, Central Germany)
    The summary in the announcement post is probably all you will get as a summary of what happened during v1.8; the commit list is the usual detailed change log. I doubt that a "detailed change log" somewhere in the middle of both will be created. But that is something only the staff can answer for sure...

    The "full documentation" is the documentation of the current state. There is no history included.
  20. Member (joined Aug 2013, Central Germany)
    There seems to be a regression, introduced somewhere between versions 1.8+167-e951ab673b1c and 1.8+201-769081eb5f4c. In 2-pass VBR mode, x265 may miss the target bitrate, returning results with about 75-80% of the desired bitrate, up to an extreme case where the bitrate curve would not converge in the 2nd pass, making x265 believe that the maximum QP (= 51) would be sufficient.
    __

    P.S.: The reason seems to be that the encoding was restricted to only one GOP, using "--frames 100 --keyint 100".
    Last edited by LigH.de; 24th Feb 2016 at 10:15.
  21. Anyone know a source for pre-compiled x265 binaries for Mac OS X?
    (I compiled them myself without NUMA support so far, but seeing that newer x265 versions perform badly on multi-CPU systems without NUMA support, I was wondering if there are some build sites available I don't know of.)

    Cu Selur

    P.S.: The VLC site offers Mac x264 binaries, but no Mac x265 binaries.
    Building numactl always fails:
    Code:
    MacMini:numactl-2.0.11 selur$ autoconf 
    MacMini:numactl-2.0.11 selur$ ./configure 
    checking for a BSD-compatible install... /usr/bin/install -c
    checking whether build environment is sane... yes
    checking for a thread-safe mkdir -p... build-aux/install-sh -c -d
    checking for gawk... no
    checking for mawk... no
    checking for nawk... no
    checking for awk... awk
    checking whether make sets $(MAKE)... yes
    checking whether make supports nested variables... yes
    checking whether make supports nested variables... (cached) yes
    checking build system type... x86_64-apple-darwin15.3.0
    checking host system type... x86_64-apple-darwin15.3.0
    checking how to print strings... printf
    checking for style of include used by make... GNU
    checking for gcc... gcc
    checking whether the C compiler works... yes
    checking for C compiler default output file name... a.out
    checking for suffix of executables... 
    checking whether we are cross compiling... no
    checking for suffix of object files... o
    checking whether we are using the GNU C compiler... yes
    checking whether gcc accepts -g... yes
    checking for gcc option to accept ISO C89... none needed
    checking whether gcc understands -c and -o together... yes
    checking dependency style of gcc... gcc3
    checking for a sed that does not truncate output... /usr/bin/sed
    checking for grep that handles long lines and -e... /usr/bin/grep
    checking for egrep... /usr/bin/grep -E
    checking for fgrep... /usr/bin/grep -F
    checking for ld used by gcc... /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ld
    checking if the linker (/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ld) is GNU ld... no
    checking for BSD- or MS-compatible name lister (nm)... /usr/bin/nm
    checking the name lister (/usr/bin/nm) interface... BSD nm
    checking whether ln -s works... yes
    checking the maximum length of command line arguments... 196608
    checking whether the shell understands some XSI constructs... yes
    checking whether the shell understands "+="... yes
    checking how to convert x86_64-apple-darwin15.3.0 file names to x86_64-apple-darwin15.3.0 format... func_convert_file_noop
    checking how to convert x86_64-apple-darwin15.3.0 file names to toolchain format... func_convert_file_noop
    checking for /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ld option to reload object files... -r
    checking for objdump... no
    checking how to recognize dependent libraries... pass_all
    checking for dlltool... no
    checking how to associate runtime and link libraries... printf %s\n
    checking for ar... ar
    checking for archiver @FILE support... no
    checking for strip... strip
    checking for ranlib... ranlib
    checking command to parse /usr/bin/nm output from gcc object... ok
    checking for sysroot... no
    checking for mt... no
    checking if : is a manifest tool... no
    checking for dsymutil... dsymutil
    checking for nmedit... nmedit
    checking for lipo... lipo
    checking for otool... otool
    checking for otool64... no
    checking for -single_module linker flag... yes
    checking for -exported_symbols_list linker flag... yes
    checking for -force_load linker flag... yes
    checking how to run the C preprocessor... gcc -E
    checking for ANSI C header files... yes
    checking for sys/types.h... yes
    checking for sys/stat.h... yes
    checking for stdlib.h... yes
    checking for string.h... yes
    checking for memory.h... yes
    checking for strings.h... yes
    checking for inttypes.h... yes
    checking for stdint.h... yes
    checking for unistd.h... yes
    checking for dlfcn.h... yes
    checking for objdir... .libs
    checking if gcc supports -fno-rtti -fno-exceptions... yes
    checking for gcc option to produce PIC... -fno-common -DPIC
    checking if gcc PIC flag -fno-common -DPIC works... yes
    checking if gcc static flag -static works... no
    checking if gcc supports -c -o file.o... yes
    checking if gcc supports -c -o file.o... (cached) yes
    checking whether the gcc linker (/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ld) supports shared libraries... yes
    checking dynamic linker characteristics... darwin15.3.0 dyld
    checking how to hardcode library paths into programs... immediate
    checking whether stripping libraries is possible... yes
    checking if libtool supports shared libraries... yes
    checking whether to build shared libraries... yes
    checking whether to build static libraries... yes
    checking for gcc... (cached) gcc
    checking whether we are using the GNU C compiler... (cached) yes
    checking whether gcc accepts -g... (cached) yes
    checking for gcc option to accept ISO C89... (cached) none needed
    checking whether gcc understands -c and -o together... (cached) yes
    checking dependency style of gcc... (cached) gcc3
    checking for thread local storage (TLS) class... __thread
    checking whether C compiler accepts -ftree-vectorize... yes
    checking that generated files are newer than configure... done
    configure: creating ./config.status
    config.status: creating Makefile
    config.status: creating config.h
    config.status: executing depfiles commands
    config.status: executing libtool commands
    MacMini:numactl-2.0.11 selur$ make
    /Applications/Xcode.app/Contents/Developer/usr/bin/make  all-am
      CC       libnuma.lo
    In file included from libnuma.c:37:
    ./numaint.h:53:9: warning: 'howmany' macro redefined [-Wmacro-redefined]
    #define howmany(x,y) (((x)+((y)-1))/(y))
            ^
    /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.11.sdk/usr/include/sys/types.h:184:9: note: previous definition is here
    #define howmany(x, y)   __DARWIN_howmany(x, y)  /* # y's == x bits? */
            ^
    libnuma.c:317:1: error: only weak aliases are supported on darwin
    make_internal_alias(numa_pagesize);
    ^
    ./numaint.h:18:73: note: expanded from macro 'make_internal_alias'
    #define make_internal_alias(x) extern __typeof (x) x##_int __attribute((alias(#x), visibility("hidden")))
                                                                            ^
    libnuma.c:670:1: error: only weak aliases are supported on darwin
    make_internal_alias(numa_max_node);
    ^
    ./numaint.h:18:73: note: expanded from macro 'make_internal_alias'
    #define make_internal_alias(x) extern __typeof (x) x##_int __attribute((alias(#x), visibility("hidden")))
                                                                            ^
    libnuma.c:691:1: error: only weak aliases are supported on darwin
    make_internal_alias(numa_max_possible_node_v1);
    ^
    ./numaint.h:18:73: note: expanded from macro 'make_internal_alias'
    #define make_internal_alias(x) extern __typeof (x) x##_int __attribute((alias(#x), visibility("hidden")))
                                                                            ^
    libnuma.c:692:1: error: only weak aliases are supported on darwin
    make_internal_alias(numa_max_possible_node_v2);
    ^
    ./numaint.h:18:73: note: expanded from macro 'make_internal_alias'
    #define make_internal_alias(x) extern __typeof (x) x##_int __attribute((alias(#x), visibility("hidden")))
                                                                            ^
    libnuma.c:780:1: error: only weak aliases are supported on darwin
    make_internal_alias(numa_node_size64);
    ^
    ./numaint.h:18:73: note: expanded from macro 'make_internal_alias'
    #define make_internal_alias(x) extern __typeof (x) x##_int __attribute((alias(#x), visibility("hidden")))
                                                                            ^
    libnuma.c:857:1: error: only weak aliases are supported on darwin
    make_internal_alias(numa_police_memory);
    ^
    ./numaint.h:18:73: note: expanded from macro 'make_internal_alias'
    #define make_internal_alias(x) extern __typeof (x) x##_int __attribute((alias(#x), visibility("hidden")))
                                                                            ^
    libnuma.c:873:8: warning: implicit declaration of function 'mremap' is invalid in C99 [-Wimplicit-function-declaration]
            mem = mremap(old_addr, old_size, new_size, MREMAP_MAYMOVE);
                  ^
    libnuma.c:873:45: error: use of undeclared identifier 'MREMAP_MAYMOVE'
            mem = mremap(old_addr, old_size, new_size, MREMAP_MAYMOVE);
                                                       ^
    libnuma.c:916:1: error: only weak aliases are supported on darwin
    make_internal_alias(numa_alloc_interleaved_subset_v1);
    ^
    ./numaint.h:18:73: note: expanded from macro 'make_internal_alias'
    #define make_internal_alias(x) extern __typeof (x) x##_int __attribute((alias(#x), visibility("hidden")))
                                                                            ^
    libnuma.c:917:1: error: only weak aliases are supported on darwin
    make_internal_alias(numa_alloc_interleaved_subset_v2);
    ^
    ./numaint.h:18:73: note: expanded from macro 'make_internal_alias'
    #define make_internal_alias(x) extern __typeof (x) x##_int __attribute((alias(#x), visibility("hidden")))
                                                                            ^
    libnuma.c:1051:1: error: only weak aliases are supported on darwin
    make_internal_alias(numa_set_membind_v2);
    ^
    ./numaint.h:18:73: note: expanded from macro 'make_internal_alias'
    #define make_internal_alias(x) extern __typeof (x) x##_int __attribute((alias(#x), visibility("hidden")))
                                                                            ^
    libnuma.c:1157:1: error: only weak aliases are supported on darwin
    make_internal_alias(numa_get_mems_allowed);
    ^
    ./numaint.h:18:73: note: expanded from macro 'make_internal_alias'
    #define make_internal_alias(x) extern __typeof (x) x##_int __attribute((alias(#x), visibility("hidden")))
                                                                            ^
    libnuma.c:1399:1: error: only weak aliases are supported on darwin
    make_internal_alias(numa_node_to_cpus_v1);
    ^
    ./numaint.h:18:73: note: expanded from macro 'make_internal_alias'
    #define make_internal_alias(x) extern __typeof (x) x##_int __attribute((alias(#x), visibility("hidden")))
                                                                            ^
    libnuma.c:1400:1: error: only weak aliases are supported on darwin
    make_internal_alias(numa_node_to_cpus_v2);
    ^
    ./numaint.h:18:73: note: expanded from macro 'make_internal_alias'
    #define make_internal_alias(x) extern __typeof (x) x##_int __attribute((alias(#x), visibility("hidden")))
                                                                            ^
    libnuma.c:1538:1: error: only weak aliases are supported on darwin
    make_internal_alias(numa_run_on_node_mask_v2);
    ^
    ./numaint.h:18:73: note: expanded from macro 'make_internal_alias'
    #define make_internal_alias(x) extern __typeof (x) x##_int __attribute((alias(#x), visibility("hidden")))
                                                                            ^
    2 warnings and 14 errors generated.
    make[1]: *** [libnuma.lo] Error 1
    make: *** [all] Error 2
    MacMini:numactl-2.0.11 selur$
    Last edited by Selur; 9th Mar 2016 at 06:32.
    users currently on my ignore list: deadrats, Stears555, marcorocchini
  22. Originally Posted by PuzZLeR View Post
    Originally Posted by -Habanero-
    However, x264 has been massively improved since 2008 so x265 regrettably has only about 25% better efficiency than its fully-mature predecessor.
    Improvement since 2008? You can say that again. You brought back memories of that horrid blur in the old days of x264. Ugghh. <*shuddering*> My tests were similar, and now trying with x265.

    Thank goodness we're not enduring some long era of "blurvision" with x265 though, otherwise I would have completely sat out this round this time.
    True, though I exaggerated and starved my encodes of bitrate back then because I didn't want to accept that x264 was only 50% better than Xvid; I wanted it to be twice as good. But now I can use the same low bitrate and get actual high quality, so dreams do come true.

    Anyway, has anyone else noticed that x265's quality has not improved over the last year? I did my last test exactly a year ago and noticed the average SSIM decreased, after slowly but consistently increasing before. Visual inspection makes it hard to tell the difference, but there are definitely a lot of frames that are worse than in x265 1.4.
    I've been told the devs were focusing on increasing speed lately so maybe that's a factor? I used the same veryslow preset as always.
    I've been gone all this time to come back and see this? What's happening?
  23. hxxp://www.sendspace.com/file/we54wq (**** off cleverbot)
    Well shit, x265 has produced worse quality than x264 for this test file. For others, they do a little better than x264 but worse than x265 1.4. If this is only happening to me I'd like to know exactly what I'm doing wrong.

    Commandline used:
    Code:
    avs4x26x.exe --x26x-binary x265 ng2.avs --crf 41.1 --preset veryslow --ref 16 --bframes 16 --keyint 600 --no-psy-rd --no-psy-rdoq --rc-lookahead 250 --qcomp 0.7 --allow-non-conformance -o "ng2.hevc"
  24. Presets changed a lot over the last year, so writing 'used the same veryslow preset as always' basically means you used totally different options.

    --no-psy-rd --no-psy-rdoq
    ...and thus you dropped the main quality gain factors which came into play over the last year

    --qcomp 0.7
    why?

    ng2.avs
    What does the script look like? Posting the source without the script basically means that one can't reproduce the stuff you do,...
    --crf 41.1
    to me everything looks ugly at such a quantizer,..
    For general amusement I attached my first try at encoding such stuff with x265 (without Avisynth)... to me it looks way better than your encodes.
    Image Attached Files
    Last edited by Selur; 10th Apr 2016 at 01:42.
  25. Can you post your command line? Your encode is a lot better than mine even though you used only 6 refs vs. my 16 refs, which is THE most crucial factor for this kind of video. This is really messed up. It can't be just because your me-range was double mine.
    Also, why is your encode 2295 frames instead of 2334? I know you didn't just trim, because there's no consistent cut. You need to update whatever splitter you're using, because it didn't decode the source frame-accurately.

    --no-psy-rd --no-psy-rdoq
    ...and thus you dropped the main quality gain factors which came into play the last year
    In early 2015, psy-rd and psy-rdoq would decrease the quality and make shit blurrier. I'll try experimenting again. But if I recall right, it's for retaining film grain. My video literally has zero film grain so why would that setting make a difference?

    --qcomp 0.7
    why?
    Why not? I like constant quality. It keeps mb-tree in check.

    What does the script look like? Posting the source without the script basically means that one can't reproduce the stuff you do,...
    Sorry, I forgot to include it. It's only cropping the top by 8 pixels and converting to YV12.
    Code:
    Crop(0, 8, 0, 0)
    ConvertToYV12()
    But that doesn't explain why your encode doesn't match the frame order of the original.
    Quote Quote  
  26. Why not? I like constant quality.
    it's not for constant quality, but constant quantizer,....

    why is your encode 2295 frames instead of 2334?
    Probably some incompatibility with ffmpeg or mencoder,..

    Your encode is a lot better than mine even though you used only 6 refs vs. my 16 refs, which is THE most crucial factor for this kind of video.
    I don't use 16 refs since they break all tier@profile@level restrictions of HEVC.

    btw. the SEI should still hold the encoding settings and MediaInfo should be able to show them.
    as decoder I used:
    Code:
    ffmpeg -y -threads 8 -i "C:\Users\Selur\Desktop\ng2.avi" -map 0:0 -an -sn -vsync 0 -r 6010000/100000  -pix_fmt yuv420p  -strict -2 -f yuv4mpegpipe -  | ...
    and ended up with 2295 frames -> just checked, when using Avisynth&AviSource all frames are there.
    => it seems to be some incompatibility with libav
    Last edited by Selur; 10th Apr 2016 at 02:52.
    users currently on my ignore list: deadrats, Stears555, marcorocchini
    Quote Quote  
  27. Avisynth script:
    Code:
    SetMemoryMax(768)
    # loading source: C:\Users\Selur\Desktop\ng2.avi
    SetMTMode(5) # change MT mode
    AviSource("C:\Users\Selur\Desktop\ng2.avi")
    SetMTMode(2) # change MT mode
    # cropping
    Crop(0,0,0,-8)
    # adjust color to YV12
    ConvertToYV12(matrix="PC.601")
    return last
    Encoding call:
    Code:
    ffmpeg -y -loglevel fatal -threads 8 -i "H:\Temp\encodingTempAvisynthSkript_10_20_09_3710.avs" -an -sn  -vsync 0 -r 6010000/100000  -pix_fmt yuv420p  -strict -2 -f yuv4mpegpipe - | x265 --preset veryslow --pmode --pme --input - --y4m --allow-non-conformance --ctu 32 --merange 100 --keyint 600 --bframes 16 --bframe-bias 25 --ref 16 --crf 41.10 --nr-intra 500 --nr-inter 500 --psy-rdoq 15.00 --aq-mode 2 --aq-strength 1.50 --range full --colormatrix bt470bg --output "H:\Temp\10_20_09_3710_02.265"
    Using 16 refs does help with that source, but the output is larger.

    btw. since compatibility isn't the goal here: Why not use 10bit encoding?
    Image Attached Files
    Last edited by Selur; 10th Apr 2016 at 03:33.
    users currently on my ignore list: deadrats, Stears555, marcorocchini
    Quote Quote  
  28. it's not for constant quality, but constant quantizer,....
    You really wanna play with words? qcomp controls the variability of the quality. 0.0 is a fixed bitrate essentially and 1.0 is fixed quality. The default 0.5/0.6 was fine in the old days before mb-tree. After mb-tree it became a little more complicated, since different regions of the same frame get allocated different amounts of bits, and I do NOT want the moving regions of the frame to be garbage while the background looks perfect. The background can suffer a little to provide more consistent quality. I don't like variable quality, but I'm not a communist. That's why I pick 0.7 instead of 1.0.
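    For what it's worth, the curve qcomp controls can be sketched in a few lines. This is a simplified illustration (assumption: it mirrors the qscale = complexity^(1 - qcomp) relation from x264-style rate control, ignoring the scaling toward the target bitrate and the clipping a real encoder applies):

```python
def qscale(complexity: float, qcomp: float) -> float:
    # Simplified x264-style rate-control curve: a frame's quantizer scale
    # grows with its complexity, and qcomp flattens that growth.
    return complexity ** (1.0 - qcomp)

# Compare an "easy" frame (complexity 10) with a "hard" one (complexity 1000):
for qc in (0.0, 0.6, 0.7, 1.0):
    print(f"qcomp={qc}: easy={qscale(10.0, qc):.1f}, hard={qscale(1000.0, qc):.1f}")
```

    At qcomp 1.0 both frames get the same qscale (constant quality); at 0.0 the qscale tracks complexity one-to-one (roughly constant bitrate); 0.7 sits between the two, a bit flatter than the 0.6 default.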

    btw. since compatibility isn't the goal here: Why not use 10bit encoding?
    I'm on a 32-bit OS, and x265 (as of 2015) doesn't have optimizations for its 10-bit mode, so it would take an hour to encode even that tiny, short video. No thanks.

    Anyway, the only differences I see between your commandline and mine are that you use bframe-bias 25 while I don't use any bframe bias at all, you use merange 100 instead of my 57 (which seems to be the default for the veryslow preset), and you use psychovisuals. So are these it or am I missing something? I'll do new tests tomorrow.

    Also, I would reconsider how I encode from now on if I were you because ffmpeg seems to be incorrectly decoding some videos for you. Why do you need ffmpeg? Use x265 directly.
    Quote Quote  
  29. qcomp controls the variability of the quality. 0.0 is a fixed bitrate essentially and 1.0 is fixed quality.
    From how I understood the source code. How I understood it was that it determines whether the rate control aims for a constant bitrate or a constant quantizer, and thus is more an option between cbr and vbr.

    Also, I would reconsider how I encode from now on if I were you because ffmpeg seems to be incorrectly decoding some videos for you.
    I didn't check that earlier: your video is Lagarith, which is known to cause problems with libav-based tools (ffmpeg/mencoder/vlc/ffvideosource/...).

    Use x265 directly.
    x265 only supports raw video (optionally y4m-wrapped) -> that isn't really an option.
    x264 can be compiled with libav, but then it will show the same problem with Lagarith (and some other formats), depending on the libav version.
    The only reliable way to handle Lagarith is through the VFW decoder (as AviSource does); otherwise it's always a question of the libav version used.

    So are these it or am I missing something?
    I also use the internal denoiser of x265. (but that probably doesn't help much)
    The main boost should be due to extended merange, and the psychovisual settings.
    I'm on a 32-bit OS
    my condolences
    users currently on my ignore list: deadrats, Stears555, marcorocchini
    Quote Quote  
  30. Originally Posted by Selur View Post
    From how I understood the source code. How I understood it was that it determines whether the rate control aims for a constant bitrate or a constant quantizer, and thus is more an option between cbr and vbr.
    I didn't get what you just said. Rephrase?

    You need to use x265 in conjunction with avs4x26x; that way you'll be able to use AVS scripts. I use AviSource for Lagarith files because it's faster to type than FFVideoSource.

    Anyway, I used merange 100, psy-RD and psy-RDOQ, and the quality is now much better, but still less than x264's. RDOQ 15 increased the quality most profoundly. bframe-bias 25 decreased quality. My long-time wisdom has been that it's better to let x264 decide the optimal P/B-frame balance.

    Here's my new encode. If it can still be improved, can you post a new encode with a proper decoder? Make sure it's not larger than mine.

    It's interesting that psy-RD and psy-RDOQ increased quality so much; I thought they were noise-retainers?
    Image Attached Files
    Quote Quote  


