Using the High 10 profile produces the exact same result as Auto; both gave me High@L3.1 videos. I heard 10-bit encoding can give better compression even on 8-bit videos, so I wanted to see how it differs from 8-bit encodes, but x264vfw doesn't give better compression results when I choose High 10. It almost seems as if x264vfw doesn't do 10-bit encoding at all. Is there some other secret option I need to use to activate 10-bit encoding in x264vfw? Maybe it only accepts certain color spaces or something?
Can I only do 10-bit encoding with the x264 command-line binary?
Last edited by muffinman123; 7th Aug 2014 at 17:38.
Nice! So I just choose the High 10 profile, then select the 10-bit build as the x264 exe path, and I'm good, right?
There is a high bit depth checkbox. When you check it, it will automatically select the 10-bit x264 exe version.
Yes, you have to set the paths (they might be set automatically, I can't remember), but also for the AAC encoder (e.g. if you pick Nero, you have to point it to where neroaacenc.exe is placed; if you pick another AAC encoder, set the path for that one).
When I checked high bit depth, it added the option "--input-depth 16", but I just want to try 10-bit compression on 8-bit videos. Will this cause any problem? Or should I leave high bit depth unchecked and load the 10-bit exe path manually?
EDIT: build 2389 (the one that comes with AviUtl) let it through right away and gave me working results.
Build 2453, which I downloaded from the VideoLAN forum, gave me "not compiled with mp4 output support". I wonder if there's a site that regularly compiles x264 with MP4 support so I don't have to build it myself.
It does accept MKVs though, which is great.
So my file's depth is 8-bit, and using "--input-depth 16" versus not using it only causes a minor difference in file size and compression rate.
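For reference, here's roughly what the same comparison looks like with the command-line binaries. This is a sketch: the binary names (x264 / x264-10bit) and file names are assumptions. A 10-bit build accepts ordinary 8-bit input directly and expands it internally, so --input-depth is only needed when the input really is high bit depth (e.g. the stacked 16-bit that AviUtl pipes in).

```shell
# 8-bit encode with a regular 8-bit x264 build:
x264 --profile high --crf 20 -o out_8bit.mkv input.avs

# 10-bit encode of the same 8-bit source with a 10-bit build.
# No --input-depth needed; the source is plain 8-bit and x264
# expands it to its internal bit depth by itself:
x264-10bit --profile high10 --crf 20 -o out_10bit.mkv input.avs
```

Note that the two CRF values above are not directly comparable between the 8-bit and 10-bit binaries (see the testing advice later in this thread).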
Last edited by muffinman123; 8th Aug 2014 at 01:27.
You should be aware that few devices other than a full-blown computer will play 10-bit H.264.
Last edited by jagabo; 8th Aug 2014 at 06:49.
That's how it works: your 8-bit file is converted to stacked 16-bit, dithered down to 10-bit, and sent to the 10-bit x264. All of that is done "behind the scenes" in AviUtl. Otherwise you could use AviSynth + the dither tools and do it manually.
x264 can only be compiled with either 8-bit or 10-bit support, chosen at compile time. There is no 16-bit output.
The 10bit thing...
A while back, when the 10-bit thing first emerged, there was a trend among anime communities to use 10-bit encodes.
Yet currently the heat seems to be gone, and I think most fansubbers use typical 8-bit encodes. There are many reasons for this:
-- Machines that depend on video chipset acceleration to play video (e.g. AMD's APU series) will have a hard time decoding the material
-- Most standalone players cannot play it correctly
-- Bitrate and file size generally increase
-- If the source is 8-bit, there is little point re-encoding it in 10-bit
The high bit depth becomes valuable when your source is in a lossless format and has lots of color gradients. In that case, the gradients may come out smoother than in an 8-bit encode.
I have taken note of the x264 10-bit executable matter and will update it (actually, just use komisar's build) when I release the new pack later this month (18-20th).
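To put a number on why gradients benefit: each extra bit of depth doubles the number of per-channel levels available to represent a smooth ramp. A trivial sketch of the arithmetic:

```shell
# Per-channel levels at each bit depth (2^bits, via bit shift):
LEVELS_8=$(( 1 << 8 ))     # 8-bit  -> 256 levels
LEVELS_10=$(( 1 << 10 ))   # 10-bit -> 1024 levels
echo "8-bit: ${LEVELS_8} levels, 10-bit: ${LEVELS_10} levels"
```

Four times as many levels means dithered gradients need less noise to look smooth, which is where the compression advantage on gradient-heavy sources comes from.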
That's the thing: they said 10-bit gives better compression, but I couldn't notice it. Maybe my video source is crap?
Last edited by muffinman123; 10th Aug 2014 at 18:44.
It does give slightly better compression. It's most noticeable on gradients, especially anime sources and live-action blue skies.
You have to do your tests more scientifically. You can't compare using CRF encoding directly, because the quantizer scale is different between 8-bit and 10-bit x264, i.e. CRF 20 for the 8-bit binary isn't the same thing as CRF 20 for the 10-bit binary. You can only evaluate "quality" at the same bitrate (file size), so do 2-pass encodes, or incremental fractional-CRF testing (e.g. 20.1, 20.2, etc.) until you reach the desired bitrate.
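The 2-pass approach above can be sketched as follows. Binary names and files are assumptions; the point is that both encodes target the same bitrate, so the resulting files are the same size and quality can be compared directly. Separate --stats files keep the two first passes from clobbering each other.

```shell
BITRATE=1500  # kbit/s, identical for both encodes

# 8-bit binary, two passes:
x264 --bitrate $BITRATE --pass 1 --stats 8bit.log -o /dev/null   input.avs
x264 --bitrate $BITRATE --pass 2 --stats 8bit.log -o out_8bit.mkv input.avs

# 10-bit binary, two passes at the same bitrate:
x264-10bit --bitrate $BITRATE --pass 1 --stats 10bit.log -o /dev/null    input.avs
x264-10bit --bitrate $BITRATE --pass 2 --stats 10bit.log -o out_10bit.mkv input.avs
```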
In 2-pass encoding, should I encode audio first, video first, or both at the same time? The default is video first, but if I'm doing 2-pass at a fixed file size, shouldn't it determine the audio size first and then work out the video size from that?
If you're doing tests just to see the differences between 8bit and 10bit x264, then don't bother with audio
There is a bitrate calculator function in the adv x264 export GUI in AviUtl - you should use it. The bitrates of audio and video are added up for the total bitrate.
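The calculation such a bitrate calculator performs is simple enough to sketch by hand (the numbers here are made-up examples): convert the target file size to kilobits, divide by the duration to get the total bitrate, then subtract the audio bitrate to get the video bitrate.

```shell
TARGET_MB=200     # desired total file size in MB
DURATION_S=1200   # video length in seconds (20 minutes)
AUDIO_KBPS=128    # chosen audio bitrate in kbit/s

# 1 MB = 8 * 1024 = 8192 kbit; integer arithmetic is close enough here
TOTAL_KBPS=$(( TARGET_MB * 8192 / DURATION_S ))
VIDEO_KBPS=$(( TOTAL_KBPS - AUDIO_KBPS ))
echo "total: ${TOTAL_KBPS} kbit/s, video: ${VIDEO_KBPS} kbit/s"
```

This mirrors the poster's intuition above: the audio size is fixed first, and the video bitrate is whatever budget remains.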
I'm doing both, I guess: one goal is to upload some stuff to YouTube, the other is to experiment with the options in x264.
YT re-encodes everything, so don't bother tweaking this or that in the encoding settings; the end result will be almost the same as long as you upload something of decent quality. There are, however, processing things you can do to make it look better, and upscaling properly is good for YT because it allocates more audio & video bitrate at higher resolutions. For some reason even the SD versions then get more bitrate than if you had uploaded only an SD version.
If you're doing a bunch of tests to see the effect of various encoding settings, then don't bother with audio; it will just waste time and eat CPU cycles.
So in the x264 export, I just check "no sound" and then adjust the options in the video encoding section, and it should be good, right? The GUI doesn't show the option to disable audio there, but its parent window does.
You also mentioned something important for me: what is the proper way to upscale interlaced sources? I usually just use Yadif first and then a Lanczos-3 resize, because I can't stand the speed of AviSynth, but is there another way with roughly the same speed that produces better results?
Last edited by muffinman123; 11th Aug 2014 at 02:01.
If all you do is frameserve a video with Avisynth while adding Yadif de-interlacing to a script and then resize, it should be as fast as any other decoding/encoding method.
Yadif can de-interlace to "half frame rate", i.e. 29.97fps progressive for NTSC, or it can de-interlace to "full frame rate", i.e. 59.94fps progressive for NTSC. The latter will generally look much smoother and helps make de-interlacing artefacts less noticeable. Encoding will probably take longer, as there are twice as many frames to encode, but for a given CRF value the file sizes tend not to increase all that much. I'd never go back to "half frame rate" de-interlacing.
I don't know what program you're using, but the full frame rate de-interlacing option is often referred to as "Yadif with Bob" or something similar. I think Handbrake simply calls it Bob de-interlacing. I don't think there's any special method of de-interlacing for up-scaling. The only de-interlacer I know of which is much better than Yadif is a script-based AviSynth de-interlacer (QTGMC). It is very slow.
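As a sketch, the "Yadif with Bob" plus upscale workflow described above looks like this as an AviSynth script fed to x264 (file names are assumptions, and the Yadif AviSynth plugin must be installed; mode=1 is its bob/double-rate mode):

```shell
# Write a minimal AviSynth script: bob de-interlace, then resize.
cat > deint.avs <<'EOF'
AviSource("input.avi")
Yadif(mode=1)             # mode=1 = bob: one frame per field, 59.94fps
LanczosResize(1280, 720)  # upscale AFTER de-interlacing, never before
EOF

x264 --crf 20 -o out.mkv deint.avs
```

The ordering matters: resizing an interlaced frame before de-interlacing mixes the two fields together and ruins the result.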
There are some small Yadif and QTGMC, 25fps vs 50fps sample encodes here. I'm not necessarily trying to sell you on using QTGMC, just "full frame rate" de-interlacing.
If I play the sample encodes using my PC connected to my Plasma, the original interlaced video being de-interlaced by my video card looks pretty much the same as the encode de-interlaced by Yadif to 50fps.
Last edited by hello_hello; 11th Aug 2014 at 09:12. Reason: spelling
De-interlacing with Yadif shouldn't be any slower in AviSynth than in, say, VirtualDub.
Are you sure de-interlacing is appropriate for your source? If it's film-based, an inverse telecine will give better results.
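For a film-based NTSC source, the inverse telecine mentioned above can be sketched with the TIVTC plugin's TFM and TDecimate filters in an AviSynth script (file names here are assumptions):

```shell
# Minimal inverse-telecine script: match fields back into film
# frames, then drop the duplicate frames telecine inserted.
cat > ivtc.avs <<'EOF'
AviSource("input.avi")
TFM()        # field matching: reconstructs the original film frames
TDecimate()  # decimation: 29.97fps -> 23.976fps by dropping duplicates
EOF

x264 --crf 20 -o out_film.mkv ivtc.avs
```

Unlike de-interlacing, this recovers the original progressive frames losslessly, which is why it compresses and looks better on film-based material.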