Although technically if it's a list of defaults I guess CRF should be 23.
I spent a bit of time looking through my previous test encodes. To the point where I've no idea what I'm looking at any more. I think before I run any more testing I'm going to need to find a video to encode that exhibits a noticeable loss of detail in dark areas because at the moment I'm struggling to find an example.
Given I spent a bit of time encoding and comparing, I thought I'd post an example of the result. Maybe I'm missing something and someone will be able to tell me what it is. Below are some screenshots of a dark scene. Each encode screenshot was saved as a bitmap, then I boosted the gamma for a good look at what was encoded and what wasn't. In the interest of full disclosure, the "original" is itself an encode, although a fairly high quality one. And there's a slight gamma/brightness difference between the original and the encodes. That seems to be because I saved the original by opening it in a script, and for some reason MPC-HC was displaying it slightly differently. I didn't bother investigating. They're all saved as JPEG (100% quality), as for the purpose of this exercise I thought lossless was probably overkill.
Original vs CRF18 --aq-mode 1 vs CRF23, all three aq modes.
Original with boosted gamma
CRF 18 --aq-mode 1
CRF 23 --aq-mode 1
CRF 23 --aq-mode 2
CRF 23 --aq-mode 3
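For reference, the comparison above boils down to command lines like these. The filenames are hypothetical, and the `run` helper only prints each command rather than executing it, so you don't need x264 installed to inspect the sketch:

```shell
# Print the x264 command lines for the four encodes compared above.
# 'run' just echoes its arguments; drop it to actually encode (needs x264).
run() { echo "$@"; }

SRC="source.avs"   # hypothetical script serving the source video

run x264 --crf 18 --aq-mode 1 --output crf18_aq1.mkv "$SRC"
for MODE in 1 2 3; do
  run x264 --crf 23 --aq-mode "$MODE" --output "crf23_aq${MODE}.mkv" "$SRC"
done
```

Remove the `run` wrapper to produce the actual encodes for screenshot comparison.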
Most of the time I thought the various AQ modes just caused the video to be encoded slightly differently, but not necessarily better or worse. Sometimes for dark scenes though, as per the example above (tell me if I'm wrong), --aq-mode 3 may have a slight edge over --aq-mode 1 (better in some areas, worse in others), and --aq-mode 2 comes last, but that's reflected in the bitrates too. And I'm not convinced --aq-mode 1 doesn't look better for brighter scenes.
And the differences above are going to be hard to spot when watching a movie. I'm still trying to find an example of a dark scene which is noticeably lower quality under normal viewing conditions. Or maybe my TV is way off being correctly calibrated......
I probably should get motivated to repeat the above at a fixed bitrate but given --aq-mode 3 hasn't excited me all that much yet and I generally use CRF encoding.....
Last edited by hello_hello; 4th Feb 2015 at 20:15.
Whether that's logical or not I don't know, but so far I'm not seeing any major quality improvement due to --aq-mode 3 despite the fairly large increase in bitrate (at least at low CRF values).
But yes, I'd tend to agree, the quality of the encode is fairly bitrate dependent, and how you get there may not necessarily be critical..... whether you tweak, nudge setting A, disable setting B, or lower the CRF value etc. Maybe "tweaking" is more critical in a fixed bitrate environment. I remember saying something similar in another thread recently.
I just noticed, looking at the other thread, Mephesto has been banned. Anyone know the gossip there?
Last edited by hello_hello; 4th Feb 2015 at 19:55.
True, but if mode 2 improves on mode 1 by distributing the bits better for the same perceived quality at a lower bitrate, and mode 3 does the same, only not to the detriment of dark areas, I'd expect the bitrate resulting from mode 3 still wouldn't exceed that of mode 1. Or at least not by much on average.
But since aq mode 3 is supposed to address the problems x264 has with blacks and dark scenes, I'm assuming it simply bumps the bit rate in those areas, almost like a very targeted "zones" patch.
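For what it's worth, x264's actual --zones option lets you do by hand, crudely, something like what aq-mode 3 is meant to do automatically: spend more bits on a chosen stretch of frames. A sketch, with made-up frame numbers and filenames (`run` only prints the command, so nothing is executed):

```shell
# Encode at CRF 23 overall, but at a stronger CRF 18 for a dark scene
# spanning frames 1200-1799 (hypothetical range). 'run' only echoes.
run() { echo "$@"; }

run x264 --crf 23 --zones 1200,1799,crf=18 --output out.mkv source.avs
```

The difference, of course, is that aq-mode works per-macroblock rather than per-scene, so this is only a loose analogy.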
I'd disagree, because I think the ultimate definition of a quality encode is one that looks identical to the source, no matter what the quality of the source might be. In other words, how accurate it is. I could argue, though, that for a very high quality source even minor encoding artefacts are going to be easily spotted, whereas for a lower quality, noisier source it'd be less critical. Therefore, the higher the quality of the source, the harder it might be to obtain a high quality encode.
But I recently did a slew of encoding tests with very high quality y4m sources and noticed a number of interesting things:
If you want the absolute highest quality encode with x264, go with the "placebo" preset; you shouldn't even bother trying any other preset or custom settings. As measured by both PSNR and SSIM, placebo results in values that reach the "mastering" quality level.
There are some sources where it's very easy to achieve a high quality encode, even with relatively little bit rate, and then there are sources that, despite being similar content-wise, are very hard or impossible to encode at high quality without sending the bit rate skyrocketing. Two examples: the Sintel trailer is an extremely easy source, where a low bit rate achieves a PSNR of 50dB and an SSIM of .995, but Elephant's Dream, especially the sequence that starts at about the 2 minute mark, is impossible to encode at those values unless you use more bit rate than even I would think is too much for that resolution.
x265 walks all over x264 once you start cranking up the settings. An interesting side note with x265: if you crank up all the quality settings, such as setting sub pixel to 7 and me to star, but leave RDO at 0, and compare that to cranking RDO up to 5 but lowering all the other settings as far as possible, you get nearly the exact same quality as measured by PSNR, SSIM and your eyes. In fact, the single setting in x265 that most affects encoding speed is RDO, and it also seems to have the biggest impact on quality. If you max out the other settings and then start incrementally increasing RDO, you find that the quality jumps noticeably but encoding speed nosedives.
VP9 is awesome; it's too bad it's so slow. If you have the patience, try a few test encodes with VP9 and the quality settings cranked up: you won't believe your eyes. I remember hearing rumors a while back about an xvp9 encoder. If someone created a VP9 encoder with the x264 treatment, i.e. lots of assembler optimizations, I would use it almost exclusively.
As for the CRF encodes you wanted, I did do some, but they proved inconclusive. Some encodes did in fact come out slightly better with CRF, but that's because the bit rate was all over the place. I much prefer controlling the bit rate and file size to letting some program decide for me how much bit rate it thinks it should use.
You need a dark, noisy shot with shallow gradients. It's also easiest to see on standard definition material, since it gets enlarged more when you play it full screen. Here's a sample AVI with the UT Video codec (BicubicResize() from a Blu-ray source). Even at the medium preset at CRF 18 you'll see obvious posterization artifacts. The effect of aq-mode=3 on reducing that posterization isn't huge.
I really don't know how the CRF encodes could prove inconclusive. The debate was whether x264's CRF is better or worse than "insert your preferred encoder here" at a given bitrate. The rest of us are of the opinion that CRF and x264's 2 pass produce the same quality at the same bitrate, so it wouldn't matter how you did it: a CRF encode first vs a 2 pass encode using "insert your preferred encoder here" at the same bitrate, or x264 2 pass encoding vs "insert your preferred encoder here" 2 pass encoding at the same bitrate.
The "at the same bitrate" part is what we were debating and that's easily done.
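One way to do the "at the same bitrate" part: run the CRF encode first, read off the average bitrate it produced, then feed that number to the 2 pass encode so both land on the same size. A rough sketch; filenames and the 3500 kbps figure are hypothetical, and `run` only prints each command instead of executing it:

```shell
# Step 1: CRF encode. Step 2: read its average bitrate (MediaInfo reports it)
# and reuse that number for a 2 pass encode of the same source.
run() { echo "$@"; }

run x264 --crf 20 --preset slow --output crf.mkv source.avs
KBPS=3500   # hypothetical average bitrate read off crf.mkv after step 1
run x264 --pass 1 --bitrate "$KBPS" --preset slow --output /dev/null source.avs
run x264 --pass 2 --bitrate "$KBPS" --preset slow --output 2pass.mkv source.avs
```

With the bitrates matched this way, any quality difference between the two outputs is down to the rate-control mode alone.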
Last edited by hello_hello; 4th Feb 2015 at 21:05.
hello_hello, jigabo, and sophisticles....thanks for all of the great information. You've taken what I had just started dabbling in and you've blown my mind. I see I have more testing to do, and more tweaking to play with.
I'm thinking Gravity might be a good option to play with, as there is a lot of black. I'm also going to be playing around with The Desolation of Smaug. I'm still playing around with the RF 19, 18, and even 17 values, at 720p. Today marks the first day I've played with the x264 presets other than Medium. Going to mess around with slow and slower to see what the differences are. Hoping the chart provided by jigabo is still correct, since I'd like to know what I'm applying. At this rate, it sounds like an exercise in frustration to start messing with the aq-mode, since there's no clear, great increase in quality of dark scenes.
I've got 2 quick questions for you.
Is there a difference from encoding from a full BD rip with all of the files and BD structure, vs from the MKV you make with MakeMKV? (Do I automatically lose some quality when I use MakeMKV?)
Also, do you find it worthwhile to use the x264 tune options for film and animation?
Ok...that's 3 questions. Thanks.
I must say that I am shocked, people talking here about 'improving' the results by playing endlessly with codec options while at the same time they have no issue cutting the resolution of the video in half.
A case of not seeing the forest for the trees!
It's about finding the sweet spot, which is clearly going to be very personal. Which preset I ultimately choose is going to be a choice of quality vs file size. I don't need to keep my files at 1080p...especially when it's not a specifically visually stunning movie. Right now, I'm 98% happy with the settings I'm using, and I'm not eating up my storage space like it's free.
Specifically what we've been talking about recently is when you reach a quality and file size combo that you're happy with, how do you then target the trouble areas (e.g., low color gradient scenes), when you are already happy with everything else.
Look....when I encode using HandBrake, I'm basically saying that I'm OK with a degree of loss in visual quality. How much is up to me. BUT....for a 90% decrease in file size (40GB to 4GB), I am definitely NOT decreasing the visual quality by 90%. The picture is still great! It's a trade-off.
If I wanted all of my files at their original size and resolution, I wouldn't even be using HandBrake.
Last edited by natebetween; 5th Feb 2015 at 10:28.
--crf 18 --preset slow --ref 4 --tune film --vbv-bufsize 30000 --vbv-maxrate 30000
And then you have to have a concrete scene to start testing variations on, but are you going to do that kind of testing for every movie?
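One way to keep per-movie testing manageable is to encode only a short stretch around the problem scene rather than the whole film, which x264's --seek and --frames options allow. The frame numbers and filenames below are hypothetical, and `run` only prints the commands:

```shell
# Encode ~500 frames starting at a troublesome dark scene, once per settings
# variation, so each test takes minutes instead of hours. 'run' only echoes.
run() { echo "$@"; }

START=42000   # hypothetical first frame of the problem scene
run x264 --seek "$START" --frames 500 --crf 18 --preset slow --output test_aq1.mkv source.avs
run x264 --seek "$START" --frames 500 --crf 18 --preset slow --aq-mode 3 --output test_aq3.mkv source.avs
```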
About that 720p downscale, and then being on a mission to make it the best it can be: that's true, it's like trying to calculate a number to the 10th decimal place starting from an already rounded number. I never really understood it either. Resize to 720p, just use that, and be done with it.
For some bad Blu-rays maybe 720p is a good idea, but my point was that spending an enormous amount of time on those 720p encodes seems quite counterproductive, that's all.
Were I to keep all of my movies at 1080p, I would still encode to get a better file size...of course at the expense of a little bit of quality.
So, my questions are aimed at those who encode to get better file sizes for small decrease in quality. If you keep your 30-50GB BD rips, more power to you. That is not me, even though I have 9TB at my disposal via my NAS (I do KEEP my original rips, at least so far. I just serve up my smaller encoded files to my movie jukebox).
I encode to 720p also, and have that sitting around for watching on any device, anywhere. Keeping a full copy is only for chosen ones, like Star Trek and a few more. I understand it gets out of hand to back up originals at 30GB or so each.
But, since you brought it up....a 5 GB file costs me 20 cents...but a 50GB file costs me $2. Multiplied by 100 movies....$20 vs $200 (and that's only for my SMALL movie collection).
I don't know about you, but I have good uses for $180 other than digital storage! Like a date weekend with the wife. Or more movies!
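The arithmetic above checks out. As a quick sanity check, 20 cents per 5 GB file works out to 4 cents per GB:

```shell
# Back-of-envelope storage cost from the figures above: 4 cents per GB,
# 100 movies, comparing 5 GB encodes against 50 GB rips.
CENTS_PER_GB=4
MOVIES=100
small=$(( MOVIES * 5 * CENTS_PER_GB ))    # total cents for the 5 GB encodes
large=$(( MOVIES * 50 * CENTS_PER_GB ))   # total cents for the 50 GB rips
echo "5GB: \$$(( small / 100 )), 50GB: \$$(( large / 100 )), saved: \$$(( (large - small) / 100 ))"
# prints: 5GB: $20, 50GB: $200, saved: $180
```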
See the "downscale upscale" test earlier, which shows that just because a video is 1080p, it doesn't mean it has 1080p worth of detail. And of course none of those codec options we're playing with encode individual pixels. They encode mathematical approximations of sections of the picture.
It makes sense to me you should be able to reduce the resolution, and if it doesn't reduce the picture detail (or only reduces it by a tiny amount) what's left should be encoded more accurately for a given bitrate.
I don't just downsize to 720p willy-nilly. I test different resolutions before I encode. However, I've encoded enough video to know that assuming 1080p always looks better than 720p would be pretty silly. And even if it does, the difference between 1080p and 720p is usually fairly small, and for the average person using an average size TV at an average viewing distance there'd probably be none at all.
Maybe you should try encoding a few videos yourself and return when you know what you're talking about.
Last edited by hello_hello; 5th Feb 2015 at 13:07.
That's like saying the difference between $200 and $100 is fairly small.
Last edited by newpball; 5th Feb 2015 at 13:17.
Like most tweaks, the lower the CRF value, the less visual difference there'll probably be.
Tune animation does the opposite to a certain extent. It's for encoding animation with large flat areas of colour and not much fine detail. It does seem to increase compression, but you'd only use it on Simpsons-type animation. It increases the number of B and reference frames, changes deblocking to --deblock 1:1 and reduces psy-RD strength to --psy-rd 0.40:0.00. I think that's it.
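To make that concrete, the description above corresponds roughly to setting the flags by hand like this. The exact values baked into --tune animation can differ between x264 builds, so the numbers below are illustrative only, and `run` just prints the commands:

```shell
# Roughly what --tune animation is described as doing above: deblock 1:1,
# weaker psy-RD, more B/reference frames. Values are illustrative, not exact.
run() { echo "$@"; }

run x264 --crf 18 --deblock 1:1 --psy-rd 0.40:0.00 --bframes 5 --ref 5 --output out.mkv input.avs
# ...which should behave much like simply:
run x264 --crf 18 --tune animation --output out.mkv input.avs
```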
For a list of what those settings do, see: x264 settings
Obviously the idea that just because something has a 1080p resolution it doesn't necessarily mean it contains 1080p worth of picture detail is too much for you.
Obviously the concept of encoders not encoding individual pixels and throwing detail away in the process doesn't fit into the "1080p must be better" reality you've created for yourself.
If you increase the gamma or brightness and then re-encode it, I'm not sure the result is any different. The posterization artefacts are still there. As the brightness increases they may get harder to see, or they're less obvious, but cranking up the brightness doesn't cause the encoder to "retain more detail" as such.
If anything it indicates to me there's always going to be some video that's particularly hard to encode and when it's darker the artefacts might be more noticeable, but it hasn't convinced me to sign up for a membership of the "noticeable loss of details in dark areas" club just yet.
For instance, if we suppose that 50% of a 1080p video is garbage, then reducing the resolution by 50% still makes it twice as bad; the scaling algorithm cannot distinguish.
Frankly your argument makes no sense at all. By reducing the resolution by 50% you always make things worse.
If my argument made no sense at all you'd have no trouble countering any of the points I made rather than offer a meaningless generalisation, or maybe you'd have looked at the example I posted which you keep ignoring, presumably because it contradicts your theory, and explained why downscaling and up-scaling made things worse.
If I take a 720p picture and duplicate each pixel horizontally and then again vertically I get a picture with lots more pixels. Quadruple the resolution if I'm not mistaken, but exactly the same amount of picture detail. If I then reduce the resolution to 720p I'm back where I started, but how have I lost picture detail in the process?
I don't know which part of a 1080p image not necessarily containing 1080p worth of picture detail you're unable to understand. Sure, maybe each and every time you resize a 1080p video down to 720p "something" is lost, but that doesn't necessarily mean it's picture detail you can see. The source video doesn't have each and every pixel encoded. It's compressed.
I don't need to argue about it. I've compared the two countless times. I've posted a screenshot as an example. I know sometimes it's possible to downscale 1080p to 720p and not lose anything in respect to visible detail. I know sometimes you can't. I know even if there's a difference it's generally not all that huge, and I know at the same bitrate you can encode 720p at a higher quality than 1080p. I'd rather watch high quality 720p than low quality 1080p, because any loss in picture detail due to resizing down is usually far less noticeable than a loss of detail due to compression or an increase in compression artefacts. It's not all about the resolution.
Last edited by hello_hello; 5th Feb 2015 at 15:40.
Perhaps I was wrong!
If your standards are such that visible artifacts are acceptable then obviously the resolution argument goes straight down the drain.
I think nothing will be more annoying to the next generation than this generation's idiotic acceptance of totally unnecessary compression artifacts in videos. They'll understand it for prior generations, when there were technological limits, but for this generation? A generation where we can buy a 3TB drive for under $120 and 64GB sticks are commonplace?
Not to mention your resolution argument obtained zero credibility the first time you ignored the example I posted and has maintained zero credibility ever since.
It doesn't matter how many times you repeat the cost of hard drives; there are always reasons for wanting to reduce the size and/or compress more efficiently etc. If that wasn't the case we'd all be encoding with mpeg2 and there'd be no need for h265.
Unless there are valid reasons (and yes, valid reasons do exist, for instance when video is streamed with limited bandwidth, or when the bitrate is too high for the storage media to play back in real time), it is ridiculous to sacrifice quality.
You can buy more hard drives and choose not to re-compress? Really? Well I never. Who'd have guessed.....
And I certainly don't see how advising to resize to 720p isn't consistent with a "high quality and compress the life out of it" goal, so I don't really know what point you're trying to make. Would the quality be higher at the same bitrate for 1080p?
You're the one insisting that resizing down to 720p always sacrifices quality. You've done nothing to prove that's the case and made no attempt to show why my example is wrong, so I reject your "sacrificing quality" premise due to its irrelevance and lack of credibility.
Last edited by hello_hello; 5th Feb 2015 at 16:58.
How can you stubbornly keep telling people what they should do with their video?
What is the purpose of encoding? To make video smaller or make it ready for a device. What's wrong with that?