Hi,
I was tasked with converting a number of drum lesson DVDs for a friend. They have quite varied characteristics (4:3 or 16:9, progressive or interlaced, one or two audio tracks...), but I think I managed to apply the correct settings. Since the priority here is size reduction over pristine picture quality, I encoded them all with MeGUI / x264 at CRF = 24, and so far the results have been satisfying (average bitrate around 1000 kbps).
However, one of them is giving me trouble: it seems much harder to compress. It's 1h47min long, in 4:3, and, resized to 640x480, it produced a 2GB MKV file (from a 4.35GB DVD, average bitrate 2700 kbps), so it compressed far less than the others. That surprises me, because it's mostly static footage (at least the first two thirds or so; then come performances where the camera work gets more mobile, and indeed the bitrate rises significantly in that part according to Bitrate Viewer), and the picture looks clean to me (no obvious grain or artefacts that I know how to detect).
So I'd like to know whether I can significantly reduce the file size while preserving the same level of subjective quality (it took about 14 hours to produce that 2GB file, so I'll start over only if I can make it significantly smaller, say at least 30%, with no noticeable quality drop). Is there some not-so-obvious picture defect that makes it harder to compress, and if so, how can I deal with it? Or does the picture have an inherent level of complexity (colors, textures...) that genuinely requires a relatively high bitrate compared with the other videos?
Also, was I right to treat this footage as truly interlaced and deinterlace it with QTGMC, resulting in a frame rate of 59.94 fps? And is it wise in this case to resize to 640x480, or would I be better off keeping the original 720x480 resolution (in terms of encoding efficiency and resulting quality)?
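As a sanity check, those figures are internally consistent: file size is just average bitrate times duration. A quick sketch (the 1h47min duration and 2700 kbps figure are from the post; the 192 kbps AC3 audio bitrate is read off the .ac3 filename further down, so treat it as an assumption):

```python
# Sanity check: average bitrate (kbps) x duration -> file size in GiB.
# Figures from this post: 1h47min of video at ~2700 kbps, plus a 192 kbps AC3 track.

def size_gib(video_kbps: float, audio_kbps: float, seconds: float) -> float:
    """File size in GiB for the given stream bitrates and duration."""
    total_bits = (video_kbps + audio_kbps) * 1000 * seconds
    return total_bits / 8 / 2**30

duration = 107 * 60  # 1h47min = 6420 s
print(round(size_gib(2700, 192, duration), 2))  # ~2.16 GiB, i.e. the reported ~2GB MKV
```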
Here are two short samples, one from the first part (lessons, mostly static footage), another from the second part (performances, more dynamic):
http://www.mediafire.com/watch/kg21oy3of6nxby3/Horacio_Hernandez_VTS_02_1_16m28-17m20.demuxed.m2v
http://www.mediafire.com/watch/j43ai0c8lalprsw/Horacio_Hernandez_VTS_02_1_1h12m28-1h13...15.demuxed.m2v
The AVS script I used:
LoadPlugin("C:\Program Files\MeGUI_2507_x86\tools\dgindex\DGDecode.dll")
LoadPlugin("C:\Program Files\MeGUI_2507_x86\tools\avisynth_plugin\NicAudio.dll")
global MeGUI_darx = 4
global MeGUI_dary = 3
Vid = MPEG2Source("E:\FullDisc\Horacio Hernandez - 2006 - Conversations in Clave - Technical study of four-way independance in afro-cuban rhythms - VTS_02_1.d2v")
VidDeint = Vid.QTGMC(Preset="Medium")
VidResize = VidDeint.LanczosResize(640,480)
Aud = NicAC3Source("E:\FullDisc\Horacio Hernandez - 2006 - Conversations in Clave - Technical study of four-way independance in afro-cuban rhythms - VTS_02_1 T80 2_0ch 192Kbps DELAY 0ms.ac3")
Mix = AudioDub(VidResize, Aud)
Return(Mix)
And a screencap showing Bitrate Viewer's analysis:
http://www.cjoint.com/c/EJgxZ3Qk01y
Thanks in advance.
Since you're using QTGMC(), try: QTGMC(Preset="Medium", EZDenoise=1.0). You can go higher, but small low-contrast details will blur.
And with MPEG2Source() I recommend enabling the DCT ringing artifact removal with the argument CPU2="ooooxx". Deblocking will help too, but you'll lose a little detail: CPU2="xxxxxx", or CPU=6.
The video is already very sharp and doesn't need further sharpening. So using a less sharp resizer will help too. Try BilinearResize() or Spline36Resize(). You're accentuating noise (the reason for your high bitrates) with LanczosResize().
Using:
Code:
Mpeg2Source("Horacio.d2v", CPU2="xxxxxx", Info=3)
BilinearResize(640,480)
QTGMC(preset="medium", EZDenoise=1.0)
Last edited by jagabo; 6th Oct 2015 at 19:30.
What encoding settings? Increasing the max number of b-frames will improve compression, especially in low-motion areas.
What about x265? What is the intended playback? A device target?
Wow, that was fast! :^p (Where I am it's the middle of the night.)
@ poisondeathray: I used the same parameters for all those DVDs, so I thought the exact encoding parameters weren't relevant. I simply used the x264 "slow" preset (which is already slow enough on my not-so-recent computer with a Pentium Dual Core E5200 CPU), and profile High@L4.1 if that matters.
But examining the profile I created for that project, I realized I had selected "Constant Quantizer" (qp) = 24, not "Constant Quality" (crf): what is the difference? (In any case, I still used the same settings for all those videos.)
x265 is very recent and I haven't tried it yet, but I'm not sure it would be a good idea: I assume it's even more power-hungry than x264 when encoding, and the people who would use those videos aren't that up to date either, technologically speaking. My friend told me he'd like to watch the videos on a touch tablet while practicing, and I doubt that kind of device (especially an older one) can decode x265 comfortably. I may be wrong about that, but I'd rather stick with the tried and tested.
@ jagabo: What exactly is DCT ringing, and where can you (or I, provided I'm similarly equipped with eyes) see it in that video? Do you see blocking artifacts, and how significant is the loss of detail when using deblocking? Are those two settings (CPU2="xxxxxx" vs. CPU=6) different, or do they have the same effect? And what does Info=3 do?
So Lanczos resizing doesn't just keep the native sharpness, it enhances it?
Does EZDenoise improve compressibility even when there are no obvious noise artifacts (and with little to no loss of actual detail)?
Which of those two clips did you test, the first one I suppose? I encoded both of them with my original settings (in fact with qp=24 rather than crf=24, as I wrote above) and got 1925 kbps for the first one and 4667 kbps for the second, so that's indeed a very significant improvement. If I encode the first clip with your settings and the x264 "slow" preset at qp=24 (as before), I get 1618 kbps; but if I switch to crf=24 and change nothing else, I get 746 kbps, less than half! So apparently those are very different settings despite the similar names, and I was using a much higher quality level than I thought...
Everywhere. https://en.wikipedia.org/wiki/Ringing_artifacts. Also oversaturation, busted highlights, aliasing, macroblocks, line twitter and shimmer... very annoying samples. That's what people like these days. People are jaded; they need crude techniques to get their attention.
Last edited by LMotlow; 6th Oct 2015 at 22:23.
- My sister Ann's brother -
That wikipedia picture of DCT ringing artifacts is way overdoing it. This is more like what you will see with DVDs:
https://forum.videohelp.com/threads/294144-Viewing-tests-and-sample-files?p=1792973&vie...=1#post1792973
I realize the wiki page shows an extreme case, but I thought ringing was the kind of echo/halo effect pictured there, and that it would be easy to see. Is that not correct? What would you call those bright and dark edge ghosts, other than edge ghosts or halos? I see those halos very often on poorly made DVDs, especially in this forum and in DV-AVI captures.
Are you sure the little arrow labeled "DCT ringing" is pointing at ringing and not at block noise/compression noise? How do those little squares of DCT ringing differ from block noise?
The Wikipedia image is mostly oversharpening halos. There's a little DCT ringing too. If you use a lot of bitrate you will reduce DCT blocking but you won't reduce DCT ringing. DCT ringing is the result of representing an infinite series of cosine functions with a finite series.
https://en.wikipedia.org/wiki/Fourier_series
https://en.wikipedia.org/wiki/Discrete_cosine_transform
Last edited by jagabo; 6th Oct 2015 at 23:05.
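That truncation effect can be demonstrated numerically. The sketch below is an illustration only (a hypothetical 8-sample edge and a crude "keep the lowest half of the coefficients" truncation, not a real codec's quantizer): it takes the DCT-II of a hard step, zeroes the high-frequency coefficients, and reconstructs. The result overshoots above 1 and undershoots below 0 next to the edge, which is the ringing pattern being described.

```python
import math

def dct2(x):
    """DCT-II of a sequence (unnormalized, textbook form)."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * k * (2 * n + 1) / (2 * N)) for n in range(N))
            for k in range(N)]

def idct2(X):
    """Exact inverse of dct2 above (a scaled DCT-III)."""
    N = len(X)
    return [X[0] / N + (2 / N) * sum(X[k] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                                     for k in range(1, N))
            for n in range(N)]

edge = [0, 0, 0, 0, 1, 1, 1, 1]        # a sharp edge inside one 8-pixel block
coeffs = dct2(edge)
truncated = coeffs[:4] + [0, 0, 0, 0]  # discard the 4 highest-frequency coefficients
ringing = idct2(truncated)

print([round(v, 3) for v in ringing])
# values dip below 0 and rise above 1 on either side of the edge -> ringing
```

With all eight coefficients the reconstruction is exact; it's only the discarded high frequencies that leave the oscillation behind, which is why extra bitrate alone doesn't cure it.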
qp is an ineffective method of rate control - it means a constant quantizer for each macroblock's frame type. In contrast, crf uses variable quantizers per macroblock; they fluctuate and adjust according to things like motion and similarities. So by definition, qp cannot use features such as AQ (adaptive quantization) or mbtree (and those are some of the best features of x264 compared to other encoders). crf does not actually mean "constant quality" - that's just an "easy" way of thinking about it and labelling it for GUIs. qp is never used in actual encoding (except maybe for lossless encoding, --qp 0).
Definitely, mild denoising always improves compressibility, but on the encoding side, increasing the max number of b-frames to a reasonable value will also help. When --b-adapt 2 is enabled (it is for --preset slow), b-frames are only placed "smartly". For footage with low motion and lots of similar frames, it will place b-frames. They are low cost, and x264 is notable for very high quality b-frames when slower settings are used; b-frame quality is significantly worse with the faster settings. You can think of it as stuffing in lower-cost frames to replace more expensive P and I frames, with very little impairment in visual quality - thus it improves compression. The penalty is moderate in terms of encoding speed: the --b-adapt 2 decision is poorly multithreaded, and the larger the max b-frames number, the slower the processing. There are diminishing returns - on live action a good number might be 4 or 5, but sometimes even 3 is too much; it really depends on the source. On cartoons with duplicate frames, 7 or 8 might even be good. You can look at the log file to see what percentage of b-frames was used. If you're using MeGUI it should be in a log folder; it will say something like this:
x264 [info]: consecutive B-frames: 3.5% 10.8% 20.6% 28.2% 16.8% 4.5% 15.6%
Damn, that's highly informative, thanks... I'll have to ponder almost every sentence.
qp is an ineffective method of rate control
qp is never used in actual encoding
Definitely mild denoising always improves compressibility, but on the encoding side increasing the number of max b-frames to a reasonable number will also help. [...]
x264 [info]: consecutive B-frames: 3.5% 10.8% 20.6% 28.2% 16.8% 4.5% 15.6%
This means 3.5% used zero, 10.8% used 1 consecutive, 20.6% used 2 consecutive, and so forth... This example could "stuff" in quite a few long strings of b-frames (36.9% of them beyond 3). If you had set it to 3, those last three numbers would have been folded back into the 28.2% entry, using more expensive P or I frames instead. When the last number is zero you're wasting time; some people set it absurdly high, like 16, and just waste time for nothing. There is no way to know ahead of time what you should use, but since you already did an encode, you have an idea if you look at the log. People draw the "cutoff" line at different values. Is 5% good enough? etc...
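To make that arithmetic concrete, here's a small hypothetical helper (the function names are mine, not MeGUI's or x264's) that parses such a log line and sums the tail of the distribution - the 36.9% above is just the sum of the entries past the fourth bucket:

```python
def bframe_buckets(log_line: str) -> list:
    """Extract the consecutive-B-frames percentages from an x264 log line."""
    tail = log_line.split("consecutive B-frames:")[1]
    return [float(tok.rstrip("%")) for tok in tail.split()]

def tail_mass(buckets: list, max_bframes: int) -> float:
    """Share of B-frame strings longer than max_bframes, i.e. what a lower setting would forgo."""
    return sum(buckets[max_bframes + 1:])

line = "x264 [info]: consecutive B-frames: 3.5% 10.8% 20.6% 28.2% 16.8% 4.5% 15.6%"
buckets = bframe_buckets(line)
print(round(tail_mass(buckets, 3), 1))  # 36.9 -> share of strings of 4+ B-frames
```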
Code:
--[Information] [27/07/2015 14:29:33] resolution: 640x480
--[Information] [27/07/2015 14:29:33] frame rate: 60000/1001
--[Information] [27/07/2015 14:29:33] aspect ratio: 4:3 (1.333)
--[Information] [27/07/2015 14:29:33] target device selected: DXVA
--[Information] [27/07/2015 14:29:33] Job commandline: "C:\Program Files\MeGUI_2507_x86\tools\x264\x264.exe" --level 4.1 --preset slow --qp 24 --keyint 599 --qpfile "E:\FullDisc\Horacio Hernandez - 2006 - Conversations in Clave - Technical study of four-way independance in afro-cuban rhythms - VTS_02_0 - Chapter Information {modifié}.qpf" --sar 1:1 --output "E:\FullDisc\3h3u2ej1.smt\Horacio Hernandez - 2006 - Conversations in Clave - Technical study of four-way independance in afro-cuban rhythms - VTS_02_1_Video.264" "E:\FullDisc\3h3u2ej1.smt\Horacio Hernandez - 2006 - Conversations in Clave - Technical study of four-way independance in afro-cuban rhythms - VTS_02_1.avs"
--[Information] [27/07/2015 14:29:33] Process started
--[Information] [27/07/2015 14:29:33] Standard output stream
--[Information] [27/07/2015 14:29:33] Standard error stream
---[Information] [27/07/2015 14:29:36] avs [info]: 640x480p 1:1 @ 60000/1001 fps (cfr)
---[Information] [27/07/2015 14:29:36] x264 [info]: using SAR=1/1
---[Information] [27/07/2015 14:29:36] x264 [info]: using cpu capabilities: MMX2 SSE2Fast SSSE3 Cache64
---[Information] [27/07/2015 14:29:36] x264 [info]: profile High, level 4.1
---[Information] [28/07/2015 04:20:19] x264 [info]: frame I:750 Avg QP:21.00 size: 69350
---[Information] [28/07/2015 04:20:19] x264 [info]: frame P:182304 Avg QP:24.00 size: 7854
---[Information] [28/07/2015 04:20:19] x264 [info]: frame B:201968 Avg QP:25.68 size: 2838
---[Information] [28/07/2015 04:20:19] x264 [info]: consecutive B-frames: 26.2% 9.0% 7.5% 57.2%
---[Information] [28/07/2015 04:20:19] x264 [info]: mb I I16..4: 6.4% 58.5% 35.1%
---[Information] [28/07/2015 04:20:19] x264 [info]: mb P I16..4: 0.2% 0.9% 0.3% P16..4: 32.5% 11.6% 10.3% 0.0% 0.0% skip:44.3%
---[Information] [28/07/2015 04:20:19] x264 [info]: mb B I16..4: 0.0% 0.1% 0.1% B16..8: 28.5% 6.6% 2.3% direct: 3.2% skip:59.1% L0:24.3% L1:54.8% BI:21.0%
---[Information] [28/07/2015 04:20:19] x264 [info]: 8x8 transform intra:60.4% inter:61.4%
---[Information] [28/07/2015 04:20:19] x264 [info]: direct mvs spatial:99.6% temporal:0.4%
---[Information] [28/07/2015 04:20:19] x264 [info]: coded y,uvDC,uvAC intra: 82.8% 93.6% 77.9% inter: 17.9% 19.8% 5.7%
---[Information] [28/07/2015 04:20:19] x264 [info]: i16 v,h,dc,p: 34% 16% 11% 39%
---[Information] [28/07/2015 04:20:19] x264 [info]: i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 11% 11% 4% 8% 13% 13% 14% 12% 13%
---[Information] [28/07/2015 04:20:19] x264 [info]: i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 14% 12% 4% 7% 13% 12% 14% 10% 12%
---[Information] [28/07/2015 04:20:19] x264 [info]: i8c dc,h,v,p: 36% 24% 19% 21%
---[Information] [28/07/2015 04:20:19] x264 [info]: Weighted P-Frames: Y:1.8% UV:1.3%
---[Information] [28/07/2015 04:20:19] x264 [info]: ref P L0: 70.7% 13.5% 9.8% 2.5% 2.2% 1.2% 0.0%
---[Information] [28/07/2015 04:20:19] x264 [info]: ref B L0: 89.2% 8.0% 2.1% 0.7%
---[Information] [28/07/2015 04:20:19] x264 [info]: ref B L1: 96.6% 3.4%
---[Information] [28/07/2015 04:20:19] x264 [info]: kb/s:2562.01
---[Information] [28/07/2015 04:20:19] encoded 385022 frames, 7.72 fps, 2562.01 kb/s
--[Information] [28/07/2015 04:20:20] Final statistics
---[Information] [28/07/2015 04:20:20] Constant Quantizer Mode: Quantizer 24 computed...
---[Information] [28/07/2015 04:20:20] Video Bitrate Obtained (approximate): 2562 kbit/s
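Incidentally, the per-frame-type statistics in that log are enough to re-derive the reported bitrate: frame counts times average frame sizes give the total bytes, and the frame count at 60000/1001 fps gives the duration. A quick check (numbers copied from the log above; the arithmetic, not the tool, is the point):

```python
# Re-derive x264's reported bitrate from the per-frame-type stats in the log above.
frames = {"I": (750, 69350), "P": (182304, 7854), "B": (201968, 2838)}  # count, avg bytes

total_bytes = sum(count * avg for count, avg in frames.values())
total_frames = sum(count for count, _ in frames.values())   # 385022, as the log says
seconds = total_frames * 1001 / 60000                       # source runs at 60000/1001 fps

kbps = total_bytes * 8 / 1000 / seconds
print(round(kbps))  # ~2562, matching the log's "kb/s:2562.01"
```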
When the last number is zero you're wasting time; some people set it absurdly high like 16 and just waste time for nothing.
https://ericolon.wordpress.com/2013/01/06/the-secrets-of-yify-and-high-quality-and-sma...or-any-device/
Recommended command line: "ref=16:bframes=16:b-adapt=2:direct=auto:me=tesa:merange=24:subq=11:rc-lookahead=60:analyse=all:trellis=2:no-fast-pskip=1:threads=1". The author comments: "These settings basically emulate the placebo preset in x264: This setting pushes h264 to its very limit. Its a general purpose set of options that ensure maximum compression happens."
So this guy doesn't know what he's talking about?
Last edited by abolibibelot; 7th Oct 2015 at 02:03.
What he says is true. It's just that the tiny improvement isn't worth the additional encoding time. Just because you specify b-frames doesn't mean x264 will ever use that many. And many players can't handle video encoded with such high values, so you're giving up a lot of compatibility.
Here's an example. I filtered your video with:
Code:
Mpeg2Source("Kurosawa Ikiru extrait 47min40.demuxed.d2v", CPU2="xxxxxx", Info=3)
ColorYUV(cont_u=-256, cont_v=-256) # greyscale
QTGMC(preset="medium")
SRestore() # back to 23.976 fps
dehalo_alpha(rx=3, ry=3, highsens=90, lowsens=40) # reduce oversharpening halos
TemporalDegrain(SAD1=100, SAD2=75, sigma=4) # light noise reduction
I mean it's not used in practice, because 2pass or CRF are better depending on your goals. A (slightly flawed) analogy would be CBR vs VBR. CBR doesn't distribute bitrate as effectively. Similarly qp doesn't distribute bitrate as effectively
So, if I understand correctly, it went up to 3 consecutive B-frames in this case ("consecutive B-frames: 26.2% 9.0% 7.5% 57.2%" - by the way, I wonder what "zero consecutive" means here), and since the last number is high, it could have used more and obtained better encoding efficiency. Normally it was set with preset=slow and no customized settings, so it should indeed have been 3 (what's surprising is that it's the same number for all presets up to "veryfast", but then it gets bumped to 8 for "veryslow" and 16 for "placebo" - shouldn't it increase more gradually if it significantly affects encoding time?). But now, how can I determine an optimal value from that result, which obviously isn't?
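For reference, the per-preset defaults being described can be tabulated. The exact values below are an assumption based on 2015-era x264 preset definitions - verify against `x264 --fullhelp` for your build:

```python
# Default --bframes per x264 preset (assumed from 2015-era builds; check `x264 --fullhelp`).
# Only the extremes deviate from 3, which is the jump the poster found surprising.
PRESET_BFRAMES = {
    "ultrafast": 0,
    "superfast": 3, "veryfast": 3, "faster": 3, "fast": 3,
    "medium": 3, "slow": 3, "slower": 3,
    "veryslow": 8,
    "placebo": 16,
}

print(PRESET_BFRAMES["slow"], PRESET_BFRAMES["veryslow"], PRESET_BFRAMES["placebo"])
```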
That's what the "zero" means - the number that you enter will always have "placeholders" in the log file. If you set it to, say, 6, it might say something like
10% 12% 11% 10% 0% 0% 0%
That means strings of 4, 5, and 6 consecutive b-frames were never even used in that encode, and you're wasting your time.
But that 57.2% means you should have raised it higher, at least in my opinion. The cutoff where people say "good enough" varies.
On your 1st drum clip I tested with QTGMC "faster" and SMDegrain(tr=2); for default slow vs. slow with --bframes 6, I got the following, and the file size difference was 3.8%:
x264 [info]: consecutive B-frames: 24.7% 28.5% 7.3% 39.5%
x264 [info]: consecutive B-frames: 23.9% 26.9% 5.7% 18.6% 4.9% 18.6% 1.4%
(But the file size change is only a rough estimate of "similar quality", because you're using CRF and not even the same settings - technically it's not really comparable.)
@ jagabo
What he says is true. It's just that the additional encoding time isn't worth the tiny improvement. [...]
Here's an example. I filtered your video with:
Code:
Mpeg2Source("Kurosawa Ikiru extrait 47min40.demuxed.d2v", CPU2="xxxxxx", Info=3)
ColorYUV(cont_u=-256, cont_v=-256) # greyscale
QTGMC(preset="medium")
SRestore() # back to 23.976 fps
dehalo_alpha(rx=3, ry=3, highsens=90, lowsens=40) # reduce oversharpening halos
TemporalDegrain(SAD1=100, SAD2=75, sigma=4) # light noise reduction
and encoded with the slow preset, then the slow preset with the addition of --bframes=16. The most consecutive b-frames with the latter was 5 (the slow preset uses 3). The difference in file size was a little over 1 percent. Adding --ref=16 shaved another 0.1 percent off the file size. Going full placebo reduced the file size by about 5 percent; most of that was from --trellis=2.
@ poisondeathray
I mean it's not used in practice, because 2pass or CRF are better depending on your goals. [...]
jagabo answered this, but to add - it's really about diminishing returns. If you have time to waste, by all means set it to 16, but 99.999% of sources won't be able to use 16 b-frames. You'll see a bunch of zeros at the end of the consecutive-b-frames entry in the log file (yet x264 still does the computational work even if the source doesn't use them, so encoding takes longer for zero benefit).
That's what the "zero" means - the number that you enter will always have "placeholders" in the log file. [...]
I was more wondering about the first number, which poisondeathray described as "zero consecutive B-frames" ("This means 3.5% used zero"); in fact "1 consecutive" is also confusing - or does "0" mean there's only one at a given point, and "1" actually mean there are two consecutive? I'll have to read a thorough article about the frame types and their distribution.
On your 1st drum clip I tested with QTGMC faster, and smdegrain(tr=2), I got this for default slow, and slow with --bframes 6, the filesize difference was 3.8% [...]
x264 [info]: consecutive B-frames: 23.9% 26.9% 5.7% 18.6% 4.9% 18.6% 1.4%
So in this case the "good enough" threshold would likely be --bframes 5, with the last number being only 1.4% (and even then the benefit seems quite moderate over the default value).
And so, as a general rule, a higher number of b-frames is beneficial for videos with many static scenes?
Last edited by abolibibelot; 7th Oct 2015 at 11:50.
Keep in mind what you decide to do is subjective - only you can decide what works for you. The next person might decide to do something completely different. You have to draw the line somewhere - diminishing returns. So I can't answer what you should do...
But you posted this thread for a reason. You wanted better compression. So I'm thinking it matters to you.
It's beginning to make sense. So the best way to proceed, to get the optimal value for a given video, is to make a short test encode (hoping it's roughly representative of the whole thing) with a high setting like 8, watch MeGUI's log file, and then decide on the actual value?
Any reason you used the "faster" preset, other than speeding things up? Is the "medium" preset generally considered good enough? (Again, I'd prefer not to go any slower, until I get a new CPU anyway.) And for this footage, would you recommend SMDegrain over QTGMC's EZDenoise?
So in this case the "good enough" threshold would likely be --bframes 5 [...] And so, as a general rule, a higher number of b-frames is beneficial for videos with many static scenes?
In terms of "bang for buck", increasing b-frames will have a larger effect than other settings like reference frames, mv range, etc. Again, we're talking a few percent here and there. But all the adjustments you make, a few percent here and there, might add up. Important for some people in some situations, not important for others. You decide if it's worth it for you.
Besides, is there some correlation between the two methods, i.e. qp=x being roughly equivalent to crf=y? Or none at all, with it varying wildly according to each video's characteristics?
You can develop your own strategy for what to do. A small sample often doesn't correlate with the full video. Remember, diminishing returns... (plus the time wasted performing little tests). But on the plus side, you get to learn how the encoder behaves with various sources...
Zero for the first entry means that percentage didn't have a b-frame in between. For example, IPPP wouldn't have any. IPBPBPB would be 1 consecutive. IPBBPBBPBB would have 2 consecutive, and so forth. Those numbers in the log file are the percentages of "strings" of consecutive b-frames over the entire encode. As you can guess, sections vary (some might use none - for example a whip pan or explosion sequence - while a completely static shot might use strings of 16).
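That bucketing is easy to mimic mechanically. A toy sketch (the frame-type strings are invented for illustration, not real x264 output):

```python
from collections import Counter

def consecutive_b_runs(frame_types: str) -> Counter:
    """Count runs of consecutive 'B' frames between anchor (I/P) frames.

    Mirrors how the x264 log buckets them: an anchor followed directly by
    another anchor contributes a run of length 0.
    """
    runs = Counter()
    current = 0
    for ft in frame_types[1:]:   # skip the leading I frame
        if ft == "B":
            current += 1
        else:                    # an anchor frame closes the current run
            runs[current] += 1
            current = 0
    if current:                  # trailing Bs at the end of the string
        runs[current] += 1
    return runs

print(consecutive_b_runs("IPPP"))        # only 0-length runs
print(consecutive_b_runs("IPBBPBBPBB"))  # runs of 2, plus the leading 0-run
```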
Yes, SMDegrain is better for denoising, but slower.
No. It's going to vary between sources.
Then how do you proceed? Can you now rely on your experience to select an appropriate value for this parameter (and others) just by looking at the footage you have to encode, or do you still run tests to get an estimate, however imperfect?
OK, so these % values apply to individual "groups of pictures", or whatever unit the encoder treats at a given time while looking for similarities between adjacent frames? And the length of these groups (IPPP = 4, IPBPBPB = 7...) varies according to the way key frames are distributed, correct?
In this case there's no obvious noise, so the deciding factor is less the visual denoising quality (removing noise artifacts while preserving picture detail) than how positively it affects compressibility (without adverse effects on visual quality), and, yes, speed. Anyway, I'll run a test with both suggestions on each sample and see which one gives the most "bang for the buck" in terms of compression / speed.
The last post in this thread (from 2006):
http://forum.doom9.org/archive/index.php/t-111978.html
says:
Ranked in terms of sharpness:
Bilinear, Bicubic, Spline16, Lanczos, Lanczos4, Spline36.
No, Spline36 is about the same as Lanczos3 (3-tap) - a tiny bit less sharp, but with a tiny bit less ringing too. Lanczos4 is definitely sharper, with more artifacts.
Here are some graphs of the popular resize kernels:
http://svn.int64.org/viewvc/int64/resamplehq/doc/kernels.html
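Those graphs come down to the shape of each kernel. Here's a sketch of two of them using the standard textbook formulas (an illustration of the principle, not the exact code any actual resizer runs): the bilinear "triangle" kernel is non-negative everywhere, while Lanczos3 has negative side lobes - and those negative lobes are what sharpen edges and ring around them.

```python
import math

def bilinear_kernel(x: float) -> float:
    """Triangle kernel: non-negative everywhere, hence soft and ring-free."""
    x = abs(x)
    return max(0.0, 1.0 - x)

def lanczos_kernel(x: float, a: int = 3) -> float:
    """Windowed sinc: the negative side lobes add sharpening and ringing."""
    if x == 0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

# Sample both kernels between taps: bilinear never dips below zero, Lanczos3 does.
print(round(bilinear_kernel(0.5), 3), round(lanczos_kernel(1.5), 3))  # 0.5 vs a negative lobe
```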
Relatively speaking, compared with other DVD footage, I can see little grain-like noise; however I do see quite strong shaky artifacts near the edges of the frame. Is there a "magical" filter for that, or might cropping a little help?
I made some tests with the first sample, and if I made no mistake:
- using the QTGMC "medium" preset instead of "fast", or SMDegrain instead of EZDenoise, or deblocking (CPU2="xxxxxx") instead of DCT ringing removal only, has little influence on the resulting size (though... see below);
- however, using Spline36 instead of Bilinear significantly increases the file size, all else being equal (5123KB, when all the other test encodes are between 4679 and 4769);
- indeed the deblocking setting gives a slightly softer picture, but in spite of that it has no benefit on file size, which is slightly higher (4740 vs. 4706);
- the quality benefit of the QTGMC "medium" preset over "fast" is quite obvious (at least on a still capture), while the impact on encoding time is very moderate (1-3%);
- SMDegrain(tr=2) slows the encode down by almost half compared with EZDenoise (3.72 fps vs. 6.94 fps with QTGMC "medium"), for no compression benefit as stated above (the resulting file is actually a tiny bit larger: 4769KB vs. 4718KB), and it seems to produce artifacts in moving parts;
- I made the first SMDegrain test with QTGMC "medium", and, strangely enough, a second test with QTGMC "fast" + SMDegrain produced a significantly smaller file (4513 vs. 4769KB) and less artifacting, go figure... I double-checked those two, and this time I got 4600KB (QTGMC "medium" + SMDegrain) and 4513KB (QTGMC "fast" + SMDegrain, exact same size, video streams strictly identical). Isn't it supposed to consistently produce exactly the same result with exactly the same settings? Can other activities and the overall load on the machine somehow affect the filtering or encoding process (I wouldn't think so), or what else can explain such a discrepancy? Oh, right: checking with MediaInfo, I realize I changed the --bframes setting from the default 3 to 5 in between, so that totally makes sense, and the above results are possibly flawed...
Second sample, this time all tests with the same encoding settings (crf = 24, preset = slow, bframes = 5, High@L3.1) (comparisons made by looking at a single still capture of the exact same frame - not the ideal method of comparing video encodes from what I gather, but still the best / most convenient I know):
DCT ringing removal + QTGMC "fast" + EZDenoise 1.0 + BilinearResize ........... 8902KB
DCT ringing removal + QTGMC "medium" + EZDenoise + BilinearResize ........... 8860KB
> seems more accurate, but less obviously so than with the first sample; very slight compression benefit
DCT ringing removal + QTGMC "fast" + EZDenoise + Spline36Resize ........... 9503KB
> sharper, especially in moving areas; significant increase in file size
DCT ringing removal + deblocking + QTGMC "fast" + EZDenoise + BilinearResize ........... 8813KB
> huge loss of detail in moving areas; very slight compression benefit
DCT ringing removal + QTGMC "medium" + SMDegrain(tr=2) + BilinearResize ........... 8775KB
> seems a little sharper than EZDenoise, but again with artifacts (hard to say for sure which is the more accurate); slight compression benefit
DCT ringing removal + QTGMC "fast" + SMDegrain(tr=2) + BilinearResize ........... 8742KB
> slightly softer / less accurate than "medium"; 5-6% faster
QTGMC "medium" + EZDenoise + BilinearResize ........... 9109KB
> significant size increase without DCT ringing removal, but sharper / more accurate
DCT ringing removal + QTGMC "medium" + BilinearResize ........... 8958KB
> slight size increase without EZDenoise; slightly sharper
QTGMC "medium" + LanczosResize ........... 9937KB
> [control] very sharp; very significant increase in file size
http://share.pho.to/9lom6
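When reading a list like that, relative differences are easier to judge than raw kilobyte counts. A trivial helper (sizes copied from the list above, labels abbreviated, with the Lanczos encode as the control):

```python
def pct_vs(size_kb: float, baseline_kb: float) -> float:
    """Size difference in percent relative to a baseline encode."""
    return (size_kb - baseline_kb) / baseline_kb * 100

control = 9937  # QTGMC "medium" + LanczosResize, from the list above
for label, kb in [("fast + EZDenoise + Bilinear", 8902),
                  ("fast + EZDenoise + Spline36", 9503),
                  ("medium + SMDegrain + Bilinear", 8775)]:
    print(f"{label}: {pct_vs(kb, control):+.1f}%")
```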