And just for the record...
Just out of curiosity, is there some study which confirms the claim that around 50% of all content is porn? With the rise of videos from mobile phones, tablets, GoPros, YouTube, ... I doubt that there's really that much porn.
As for your examples: mobile phones can record low-quality video, but few people record home-made porn with them; tablets display content and YouTube hosts it; they are not really in the same market place.
a. a 'wild' statement without a source isn't really reliable
b. my tablets all also make videos
c. I would agree that the porn industry produces the most sold content, but I don't think that porn really is stored that much nowadays
... what? ... You found a firstname.lastname@example.org sample which contains dark noise, and now you are trying to convert it to 1500 kbit/s and are unsure whether 8-bit or 10-bit will result in better quality?
If that is the case, 10bit encoding should produce a better end result unless you messed something up.
If you keep the bit rate at 1500 for the HD source and do no additional filtering, x265 is bound to look better due to its smoothing (detail killing); also, both files won't look nice.
Evaluator's Guide ... feedback is welcome ... Well, in general: it would be nice to have some more insight into which options have which amount of effect and side effects. Something like: "Little speed-up, severe quality loss; recommended to keep disabled until further notice", and possibly even a little more detail on what they technically mean, for options like --early-skip, --tskip (with details on the difference to --tskip-fast, e.g. why or why not to use both at the same time), or --fast-cbf (abbreviation glossary needed; what is cbf at all?). Of course, that may mean the guide has to be updated more regularly to keep the recommendations up to date.
to get started, here's what I came up with.
To get a small hint at what these options are meant for, you have to differentiate where they belong.
"--[no-]early-skip" -> When enabled a heuristic will be used to determine if a coding unit can be skipped or not.
"--fast-cbf " = fast coding block 'fail'(?) = when enabled a fast heuristic is used to determine if a block should be coded or if it's a 'zero-block'.
I think the problem I see here is that a skipped block is not a zero block; also, there should be no problem enabling/disabling these separately.
"--tskip" = transform skip ->If a TU (= tree unit) has size 4x4, the encoder has the option to signal a so-called “transform skip” flag, where the transform is simply bypassed all together, and the transmitted coefficients are really just spatial residual samples. This can help code crisp small text for example.
"--tskip-fast" -> when kskip is enabled fast 'tskip', the encoder uses some faster heuristic to decide whether or not to set the 'transform skip' flag
-> --tskip-fast requires --tskip
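The dependency described above can be sketched in a few lines; this is my own toy Python model of which heuristics end up active, not x265 source code:

```python
def effective_tskip_options(tskip: bool, tskip_fast: bool) -> dict:
    """Toy model (not x265 code) of the rule described above:
    --tskip-fast only has an effect when --tskip itself is enabled."""
    return {
        "transform_skip_evaluated": tskip,            # --tskip turns the flag evaluation on
        "fast_heuristic_used": tskip and tskip_fast,  # --tskip-fast alone does nothing
    }
```

So passing --tskip-fast without --tskip should simply leave the fast heuristic unused.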
That said, I agree a more frequently updated guide would be good, and some additional info on what the options are for and what side effects they should have would be nice.
@DmitryV: just checked, and you really should add some notification like 'parsing' or similar to the CU and TU tabs, since it takes quite a while for them to update (especially CU), and unless you know that this is the case, a normal user will assume they do not work. (All in all it looks okay, but at least from here it's way too slow to be usable.)
Last edited by Selur; 7th Feb 2014 at 08:14.
For the most recent builds I could get:
- v0.6+282-385560ac328d [MSVC 1800] (MeGUI): --preset placebo crashes at the beginning of the encoding
- v0.6+295-ce41ee0f5c8c [GCC 4.8.2] (Hybrid): --preset placebo crashes at the beginning of the encoding
Now I heard from JEEB via IRC that v0.7 development has already started, with many fixes. So I tried to set up an MSYS build environment according to the brief guide by El Heggunte (where a few steps might be explained more verbosely or details added for clarification).
- v0.7+95-fa9f7b56d4d8 [GCC 4.8.2]: --preset placebo works; and v0.7 introduces adaptive scene cuts (which redefines -i vs. -I)
Now I wonder how and where I can "simply" configure the build process for e.g. an AMD64 CPU target, maybe even for 16 bpp.
Last edited by LigH.de; 9th Feb 2014 at 07:54.
-i/--min-keyint   Minimum GOP size [auto]
--no-scenecut     Disable adaptive I-frame decision
--scenecut        How aggressively to insert extra I-frames. Default 40
a. What range of values is allowed for '--min-keyint'? 'auto' kind of indicates that it might not be int values.
b. What does 'auto' for '--min-keyint' really mean? Is it a recommended value?
c. What range of values is allowed for '--scenecut'? Is there some science behind the 40, or is it still rooted in a mystic number that once fell from the sky when folks were looking for a threshold for Xvid's scene change detection?
Remembering a few discussions about keyint && min-keyint && scenecut over at Doom9, here are links to two I deem especially interesting:
http://forum.doom9.org/showthread.php?t=165197 and http://forum.doom9.org/showthread.php?t=121116
I was wondering whether any more thought has been invested in the whole scene change detection scheme
(especially whether the usual keyint && min-keyint && scenecut values are still valid for HD and higher-resolution content).
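For illustration, here is a toy sketch of how the three values are commonly understood to interact (my own simplification under the usual x264-style semantics, not x265's actual code): force a keyframe after keyint frames, and allow an earlier one only when a scene-change score exceeds the scenecut threshold and at least min-keyint frames have passed.

```python
def place_keyframes(scene_scores, keyint=250, min_keyint=25, scenecut=40):
    """Toy sketch (not x265 code) of the keyint/min-keyint/scenecut interplay.
    scene_scores: one scene-change score per frame (frame 0 is always a keyframe)."""
    keyframes = [0]
    last = 0
    for i, score in enumerate(scene_scores[1:], start=1):
        since = i - last
        # forced keyframe at keyint, or an earlier scene-cut keyframe
        # once min-keyint frames have passed and the score crosses scenecut
        if since >= keyint or (score > scenecut and since >= min_keyint):
            keyframes.append(i)
            last = i
    return keyframes
```

In this model, min-keyint caps how soon after the last keyframe a scene cut may trigger a new one, while keyint caps how long the encoder may wait regardless of scores.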
Last edited by Selur; 10th Feb 2014 at 01:21.
btw. it would be nice if copying content from the PDF wasn't disabled, ...
- '--min-keyint': When will x265 use I instead of IDR Frames?
- '--scenecut X': see post above
- '--[no-]cutree': is only mentioned in the 'NEW FEATURES' listing, but not later (not counting the profile overview table)
On 2013/11/29, filler56789 wrote:
Any GOOD reason why now the PDF is "secured" (i.e., Content Copying, Page Extraction and Commenting now are "not allowed")?
- it would be useful if, inside the "QUALITY PRESETS" table, things that changed compared to the last evaluator's guide could be highlighted.
- the "QUALITY PRESETS" table states '--max-merge 2' for both 'ultrafast' and 'medium', but it should be blank for 'ultrafast' (common.cpp also needs adjustment)
Last edited by Selur; 13th Feb 2014 at 13:15.
Just uploaded some updates encoded with x265 v0.7+ and x264 core:142:
a) tos_60s_hevc.crf24.mp4 – encoded with x265 --crf 24 returning a HEVC stream with ~1520 kbps; to match the size: tos_60s_avc.1520.mp4 – encoded with x264 --bitrate 1520 (2-pass)
b) tos_60s_hevc.crf18.mp4 – encoded with x265 --crf 18 returning a HEVC stream with ~3560 kbps; to match the size: tos_60s_avc.3560.mp4 – encoded with x264 --bitrate 3560 (2-pass)
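For reference, the ~1520/~3560 kbps figures above follow directly from file size and clip duration; a minimal sketch (the byte counts below are assumptions for illustration, not the actual upload sizes):

```python
def bitrate_kbps(file_size_bytes: int, duration_s: float) -> float:
    """Average bitrate: total bits divided by duration, in kbit/s."""
    return file_size_bytes * 8 / duration_s / 1000

# e.g. a 60 s clip of about 11.4 MB averages 1520 kbit/s
print(bitrate_kbps(11_400_000, 60.0))  # → 1520.0
```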
The quality differences are interesting...
a: low bitrate + high detail/action: x265 is the clear winner; it handles the bitrate starvation in a much more pleasant way
b: more bitrate: mbtree helps x264 a lot, and x265 clearly wins only in dark regions (it smooths more in such regions, which looks better); in slow scenes x264 sometimes looks better, but all in all x265 wins here too
According to the 1st-pass statistics of x264, for this clip and the chosen parameters, when trying to reach the same output size (not a similar visual quality):
- x265 --crf 24 ~ x264 --crf 31.68
- x265 --crf 18 ~ x264 --crf 25.17
- x265 --crf 12 ~ x264 --crf 18.36
Not to be generalized!
so atm., size-based, it looks like: "x264 crf value" ~= "x265 crf value" + 7
note: this is a nice thing to know, but not that interesting, since crf aims for a constant rate factor, not a constant size
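The rule of thumb from the three measurements above can be written down as a tiny sketch (valid only for this clip and these settings, as noted):

```python
# Size-matched CRF pairs measured above for this one clip: (x265 crf, x264 crf)
PAIRS = [(24, 31.68), (18, 25.17), (12, 18.36)]

def approx_x264_crf(x265_crf: float) -> float:
    """Rough size-matching rule observed above: x264 needed a CRF
    about 7 higher than x265 for roughly the same output size."""
    return x265_crf + 7

# each prediction lands within about 0.7 of the measured value
for c265, c264 in PAIRS:
    print(c265, approx_x264_crf(c265), c264)
```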
Just some evidence that the efficiency of HEVC in x265 is probably often already better than that of AVC in x264. Just a tendency; that's mostly what I wanted to point out. Good job, developers!
At least whenever bit rates are sparse and the encoder has to drop some details, x265 wins.
The problem is that it always seems to drop some fine details, even when the bit rate should be high enough to keep them.
(This is why some folks like it for anime, which already lacks detail.)
@x265: Is there any info on when to expect 2-pass encoding support in x265? (The main feature I'm really missing in x265 atm.)