I was wondering what would happen if, for example, you had an 8-bit 720p BDRip anime episode encoded at CRF 18 and you re-encoded it to 10-bit at CRF 16, i.e. a lower CRF than the source's.
Another example would be that I have ep20 of [a-S]'s 8-bit 1080p encode of Ghost In The Shell S.A.C. 2nd GIG at CRF 22. I re-encoded to 10-bit 720p (I resized) with CRF 18.
The overall bit rate of the 1080p source is:
2 613 Kbps
The overall bit rate of my finished re-encode (720p) is:
1 230 Kbps
I was wondering if it was possible to have a finished product with an overall bit rate HIGHER than the source itself. Since the source I used was originally 8-bit and 1080p and not 10-bit and 720p, I'm not too sure about the projected overall bit rate.
What if for example, the source WAS 10-bit and 720p already, and I simply just re-encoded it using a lower CRF, would it be possible to reach or go beyond the source's overall bit rate at all?
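One way to sanity-check this is to compare bits per pixel rather than raw bitrate. Here's a quick back-of-the-envelope calculation using the numbers above (23.976 fps is an assumption, typical for film-sourced anime; the thread doesn't state the actual frame rate):

```python
# Back-of-the-envelope: average bits spent per pixel per frame.
# 23.976 fps is an assumed frame rate (typical for anime BDRips).
FPS = 23.976

def bits_per_pixel(bitrate_kbps, width, height, fps=FPS):
    """Average bits allocated to each pixel of each frame."""
    return bitrate_kbps * 1000 / (width * height * fps)

source_bpp = bits_per_pixel(2613, 1920, 1080)   # the 1080p source
reencode_bpp = bits_per_pixel(1230, 1280, 720)  # the 720p re-encode

print(f"source:    {source_bpp:.4f} bpp")
print(f"re-encode: {reencode_bpp:.4f} bpp")
# The re-encode's overall bitrate is lower, yet it spends slightly
# MORE bits per pixel, because 720p has well under half the pixels
# of 1080p.
```

So yes, a lower-CRF re-encode can absolutely end up with a higher bitrate than its source; here the downscale to 720p is what keeps the overall number smaller.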
The more you lower the CRF value, the bigger your file gets... you can set it all the way down to zero, which gets you a really big file.
This "higher quantizer" you speak of... the CRF value?
I thought the lower the CRF, the better the quality, while the higher the CRF, the lower the quality?
Last edited by zetsu_shoren; 11th Jan 2013 at 02:10.
Well yes, that's the thing.
No laptop in this house can play 1080p smoothly in MPC-HC while fullscreen. The laptop I'm using can, but there's still that tearing (those horizontal 'split' or 'chop' lines, whatever they're properly called) when the picture is moving, especially when the video has rapidly changing images.
By turning the 8-bit 1080p to 10-bit 720p, I am able to save up to half the total size. And anyway, 720p is more standard since 1080p is already a bit on the higher end.
What I'm wondering is whether there's some kind of math involved, like being able to predict which CRF value retains the most quality while shrinking both the file size and the resolution.
Also, "quality" is very subjective. What might be "acceptable" quality to one person might be completely unacceptable to another.
All you have to do is encode some representative video at different CRF values and examine the results. Decide what CRF range gives a quality level you can live with. Then just use that range for all your encodes.
Be aware that virtually nothing but a computer will play 10 bit h.264 video. If you ever get a TV or Blu-ray player that plays media files they won't play your 10 bit video.
You can use vbv-maxrate and vbv-bufsize to box in your bitrate a bit. For example, for internet streaming you might need a steady 1000 kbps; I'd use a lower resolution, sure, and CRF 20 with --vbv-bufsize 1300 --vbv-maxrate 900. That caps the peaks at roughly 1100-1200 kbps, while the lower values approach that 1000. If those buffer settings were much higher, the max bitrate could spike up toward 2000 while some scenes stayed down at 400-500, or even lower with a higher CRF. Those low-bitrate stretches tend to be low-light scenes and gradients, and it's better to let them stay higher for a better result. It depends on your source; with a cleaner source you might try a higher CRF, etc.
Not sure how to make this applicable to a high-bitrate, non-streaming scenario; maybe others have tried and failed. But for streaming over the web, that's how I understand it can be worked with.
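To make the capped-CRF idea concrete, the full command line would look something like the sketch below. This is illustrative, not a tested command; the input/output file names are placeholders, and the numbers are the ones from the post above:

```shell
:: CRF encode with a VBV cap aimed at a ~1000 kbps stream.
:: --vbv-maxrate limits the sustained bitrate; --vbv-bufsize controls
:: how far short-term peaks may overshoot it before the buffer drains.
x264 --preset veryslow --crf 20 ^
     --vbv-maxrate 900 --vbv-bufsize 1300 ^
     -o output.mkv input.mkv
```

Without the two VBV flags, a plain CRF encode has no bitrate ceiling at all, which is exactly why peaks can wander anywhere the content demands.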
Last edited by _Al_; 12th Jan 2013 at 17:02.
I assume 8-bit and 10-bit refer to the color depth per channel. My question is: where will the other 768 values for each of R, G and B come from?
Using an 8bit source means he doesn't have true 10-bit data
Then it depends what method he's using for the 8bit to 10bit conversion. You can either interpolate the values by scaling, and/or pad the data +/- dithering
If he's using x264 for the 8=>10 bit conversion , values are scaled
"legal range" Y' 16-235 would scale to Y' 64=>940
no values are clipped in the conversion (super brights/darks are kept 0-1023, if they existed in the original 0-255)
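In other words, the scaling is (to my understanding) just a left-shift of each code value by two bits, i.e. multiply by 4, which matches the 16 => 64 and 235 => 940 numbers above; no new detail is invented, and real converters may additionally dither. A minimal sketch:

```python
def scale_8_to_10(value_8bit):
    """Scale an 8-bit code value (0-255) to 10-bit by multiplying
    by 4 (a left-shift of 2 bits). Nothing is clipped, so
    super-blacks/brights below 16 or above 235 survive the trip.
    Note a pure shift tops out at 1020, not the full 1023."""
    return value_8bit << 2

# Legal-range luma endpoints map as described above:
print(scale_8_to_10(16))   # 64
print(scale_8_to_10(235))  # 940
# Full-range extremes are preserved too:
print(scale_8_to_10(0))    # 0
print(scale_8_to_10(255))  # 1020
```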
Last edited by poisondeathray; 12th Jan 2013 at 22:46.
There are even 2 types of 10-bit? What the heck
Well, all I use is the 10-bit x264 .exe file, and then I made 3 different .bat files: one to re-encode 1280x720 to 10-bit 1280x720, one for 960x720 to 10-bit 960x720, and one for 1920x1080 to 10-bit 1280x720.
These are the lines in them respectively:
C:\Users\JJM\x264.exe --preset veryslow --crf 19 -o C:\Users\JJM\Desktop\2ndgig20.mkv.mkv C:\Users\JJM\Desktop\2ndgig20.mkv
C:\Users\JJM\x264.exe --vf resize:960,720 --preset veryslow --crf 19 -o C:\Users\JJM\Desktop\ber25.mkv.mkv C:\Users\JJM\Desktop\ber25.mkv
C:\Users\JJM\x264.exe --vf resize:1280,720 --preset veryslow --crf 18 -o C:\Users\JJM\Desktop\mush6.mkv.mkv C:\Users\JJM\Desktop\mush6.mkv
Color space : YUV
Chroma subsampling : 4:2:0
I don't like re-encode groups. I USED to depend on them, but that changed once I got a new laptop with 1 TB of storage. Another reason is that they kill the audio, what with their 7 MB audio tracks. I decided to learn to re-encode on my own since the new laptop opened up the opportunity, and because otherwise I'd have to wait for whatever I want, and when it finally comes, it comes in bad quality. The ideal size I'm aiming for is 180-220 MB per 24-minute episode (it also depends on whether it's a series I love or just an "average" series; the ones I like more get bigger sizes). Sometimes it goes lower, around 170 MB, and sometimes as high as 300 MB (usually that happens when a lot of episodes of the same series come out big when re-encoded, like 270 MB, 350 MB, etc.).
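For what it's worth, a target file size translates directly into an average bitrate, which is handy for sanity-checking whether a CRF encode landed in your range. A rough calculator (the 30 MB audio allowance is my own assumption; adjust it to whatever your audio tracks actually weigh):

```python
def avg_video_kbps(target_mb, minutes, audio_mb=30):
    """Average video bitrate (kbps) needed to hit a target file size.
    audio_mb is a rough guess for the audio track's share.
    Uses 1 MB = 2**20 bytes, as file managers usually report."""
    video_bits = (target_mb - audio_mb) * 1024 * 1024 * 8
    return video_bits / (minutes * 60) / 1000

# A 200 MB, 24-minute episode with ~30 MB of audio:
print(f"{avg_video_kbps(200, 24):.0f} kbps")  # roughly 990 kbps
```

If a CRF 18 test clip comes out far above that number, the source is simply harder to compress than the size budget allows.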
Another series I'm re-encoding at CRF 18, since it's visually heavy and I want it at rather high quality, came out at 550+ MB (a 2.20 GB 10-bit 1080p source at 13.4 Mbps overall bitrate became a 561 MB 10-bit 720p file at 3,341 kbps overall). For some ghastly reason, that 2.20 GB source produced a bigger 720p file than a 3.77 GB 1080p episode I also took down to 720p, which came out at 414 MB, 147 MB smaller than the 561 MB one. This I truly don't get: shouldn't it be the other way around, with the 3.77 GB source producing the bigger resulting file?
If the source starts out at a higher CRF, like that CRF 22, I'd have to use a lower CRF, especially if I'm converting 1080p to 720p. Well, I don't entirely understand how CRF works; it's truly a mystery, and I'm not sure what percentage of the detail is retained when I switch CRF values for sources with different bitrates. And I don't know if some of you have what we call "media players" over here, but anyway, that's what laptops are for :P
I don't really get why "not everything" can play 10-bit. Does it have anything to do with the codecs or whatever is installed on the device beforehand?
CRF is a watchdog for quality: if the encoder decides it needs more bitrate, it will give it. So a video might end up a smaller size, or a bigger one, than you'd expect. That's the point of using CRF: let the bitrate go where it wants to go, just to keep the quality relatively constant.
In dark, shadowy scenes x264 will drop to lower bitrates (sometimes we'd like them higher). You can use zones to fix that a bit, but who knows beforehand, right? Play your video in VLC and watch the bitrate as well: Tools > Media Information > Statistics > Input bitrate (you might be surprised).
I looked up that resizer; it uses bicubic by default, and there are sharper resizers and more sophisticated methods. So why not look for precision there instead of in 10-bit? You're only hurting your playback compatibility with 10-bit. Just an idea.
If the source uses a higher bitrate, then I guess there's a chance it's fairly "clean" (fewer compression artefacts than one with a lower bitrate), which might make it easier to recompress and result in a smaller file size for any given CRF value. Bitrate aside, one 1080p video may have more grain or noise than another, which also affects the overall file size when re-encoding at a specific CRF value. Basically, the easier the video is to compress, the lower the bitrate required for a given quality when re-encoding, so you can't look at the original bitrate or CRF value alone and determine how easy it'll be to re-compress. For all you know, one 1080p version may have been encoded at a much higher bitrate than was actually required for decent quality (or at a quite low CRF value), so the bitrate it's using gives you no indication of the bitrate required to re-encode it at your chosen CRF value.
If for some reason I'm re-encoding h.264 video, I'll usually pick a CRF value slightly lower than I'd otherwise use (i.e. CRF 16 or 17 instead of CRF 18), because I'm re-encoding an encode rather than the original, and that way I reduce any further quality loss a bit. Without resizing, it wouldn't be uncommon for the new encode to be larger than the original. It's not like re-encoding MPEG-2 video, where you know the resulting file will invariably be smaller because you're re-compressing with a more efficient encoder; but even then, I've re-encoded old, low-quality MPEG-2 video without reducing the file size much, because noisy video is hard to compress.
So if you're aiming for an ideal file size, you might be better off using 2-pass encoding while specifying a file size, and accept that no two encodes at the same file size will have the same quality relative to their sources; or use the same CRF value each time and accept that the resulting file sizes won't always be ideal.
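A two-pass encode targeting an average bitrate (and therefore a predictable file size) would look roughly like this, in the same Windows batch style as the commands earlier in the thread. The 1100 kbps figure is just an illustrative value, not a recommendation, and the file names are placeholders:

```shell
:: Pass 1 only gathers statistics, so its video output is discarded.
:: Pass 2 reads those statistics and produces the real encode.
x264 --pass 1 --bitrate 1100 --preset veryslow -o NUL input.mkv
x264 --pass 2 --bitrate 1100 --preset veryslow -o output.mkv input.mkv
```

The trade-off is exactly the one described above: 2-pass pins the size and lets quality float, while CRF pins the quality and lets size float.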