VideoHelp Forum
Page 3 of 4
Results 61 to 90 of 120
  1. Member
    Join Date: Jan 2014
    Location: Kazakhstan
    Hi!
    I understand that DS never found a sponsor and has been shut down?
  2. Originally Posted by sophisticles


    The first person accused me in PM, at first I thought it was some kind of death threat, an odd one. Now that I have been informed I did a forum search and can understand the confusion. This deadrats fellow did seem to have a posting style eerily similar to mine and we seem to share a number of viewpoints, though I can't find any reason for him being banned.


    That was me. But it is very similar, isn't it?

    I don't think he should have been banned, but I think he got on some people's nerves and they complained.




    Is this a forum where no one is allowed to criticize open source projects or point out any weaknesses? Does that somehow offend some open source advocates? I seem to be sensing a significant amount of hostility for pointing out things concerning x264 that are well established and obvious to anyone who has done any extensive encoding.

    Lastly, I was under the impression that cursing was not allowed on this forum, yet thus far I have seen the "F" word allowed as well as the "S" word. Just to be clear: is cursing allowed, but analysis or unflattering comments about x264 disallowed?

    There seems to be a very odd dynamic at play in this forum. It reminds me of the way Doom9 used to be; maybe it still is, I stopped going there years ago. That forum had a very similar dynamic: anything that was viewed as anything but glowing praise for various open source projects was instantly greeted with hostility and accusations of "trolling".

    If you asked a question about AviDemux, x264, Xvid, or any open source project, you immediately had to add something along the lines of "this is the best software" or "thank you for the great quality" as a qualifier, or else posters would descend on you to flame you mercilessly.

    I'll leave this discussion, it seems that anything other than glowing praise for x264 is negatively received in this forum.

    Where is the hostility?

    This is the same defensive style: a bunch of verbal diarrhea gets spewed forth, never backed by any real evidence or proof. Someone makes a bunch of claims, gets provided with evidence to the contrary, then goes running and cries "oh, I'm the victim, you guys are so mean."

    You can't have any meaningful discussion if you cry wolf all the time. I don't see any personal attacks, there are a few posts analyzing some of the unsupported assertions that you have made. Try to be a bit more objective and scientific. Put up some proof and observations, contribute to the discussion instead of getting all defensive.

    I know you're busy, but why don't you start by answering some of the other posts in this thread, instead of ignoring them?




    x264 gets trashed here all the time. It has big, glaring weaknesses. This gets discussed all the time. There are many areas that could be improved. This is common knowledge (at least to people who use it often).

    avidemux is buggy as hell and crashes all the time. This also gets mentioned all the time.

    Well, I just trashed some open source projects... I'm waiting to get flamed... How come I'm not burning? Because my burn-resistant suit is called evidence, proof, and facts. I try to back everything up instead of spewing a bunch of things and not providing any proof. When someone provides evidence to the contrary of what I said, I acknowledge that I was wrong and try to learn from that. Or at least investigate further for other possible explanations. Not run and hide.
  3. Member Cornucopia
    Join Date: Oct 2001
    Location: Deep in the Heart of Texas
    Free/Open-source is a great thing. But it's not the be-all, end-all. It has holes (in scope, intent/bias, documentation/support, and even - occasionally - in rigor). And it's not always necessarily the best.
    Commercial/Closed-source is EXACTLY the same way (though usually for different reasons).

    I'm glad we have a choice, as I use both kinds all the time. Use the right tool for the right job - they're (nearly) all different.
    But, using objective, scientific criticism, peer review & logical (even if sometimes heated) discourse is really the best way to get to the heart of which ones to use.

    BTW, Grammar rant: the word is comment. The word commentate is supposed to be reserved for running (live) commentary (such as by sports announcers).

    ...I stand by my first post in this thread.

    Scott
  4. I seem to be sensing a significant amount of hostility for pointing out things concerning x264 that are well established and obvious to anyone who has done any extensive encoding.
    I've done extensive encoding and it's well established that... CRF is inferior to 2pass? Evidence disagrees with you. You were vague enough to make no concrete claims in order to shield yourself from any refutation anyway so I don't see the point.

    It's well-established that long GOPs make x264 suck? Don't use long GOPs then, problem solved.

    I fail to see the point of your rantings. If you're trying to point out weaknesses as you now say then start making sense and stop pointing out unsubstantiated garbage. CRF is not inferior to 2pass quality-wise so stop pointing things out that are blatantly wrong.
  5. Originally Posted by sophisticles
    I'll leave this discussion, it seems that anything other than glowing praise for x264 is negatively received in this forum.
    I'm no expert, not even close, so I generally don't reject any opinion out of hand, even if I don't agree with it initially. I even said if you post your non-CRF x264 encoder settings I'd be happy to try them out. You didn't.

    What I receive negatively in any discussion is one-sidedness. For example, when a poster ignores questions asked by others but still asks questions himself. That sort of thing. It makes it impossible to have a "balanced" discussion, and sooner or later leads to the conclusion that questions are being ignored for a reason. Deadrats was probably the forum's question-ignoring champion when it came to h264 encoder discussions.

    I know you said you've been working a lot, but I'm pretty sure so far not one question I've asked has been answered directly. Aside from repeating your justification for claiming "cheating", you've not replied to virtually any other points others have made, which is pretty much the same thing.

    Why do you claim large gops could be cheating when you say you use small gops yourself as it's better quality?
    If smaller gops increase the bitrate substantially and lowering the CRF value can increase it by a similar amount, which method would increase the perceived quality the most?
    How is using CRF encoding cheating when you claim you don't use it as it's like throwing darts?
    What x264 encoder settings do you use?
    Is there a downside to scenecut in respect to quality?
    What's the distinction between CRF using the same algorithm as 2 pass, and CRF using the same rate control algorithm as 2 pass?

    I think when you find you have an opinion different to that of the majority, the burden of proof naturally shifts a little, but that shouldn't be interpreted as hostility on its own.
    Last edited by hello_hello; 17th Jan 2015 at 04:51.
  6. Originally Posted by hello_hello
    What's the distinction between CRF using the same algorithm as 2 pass, and CRF using the same rate control algorithm as 2 pass?
    I just got off work, it's 6am here, I've been working since 4pm yesterday, so I will only answer this question for now and after some sleep I will answer your other questions.

    The definition of an algorithm is the sequence of steps you take to achieve an objective. A 2 pass encode does an analysis of the entire video in order to determine how to distribute bits and what parts can be thrown out with minimal quality loss, and then applies what it has discovered to the second pass that does the job. This is why, when you run a benchmark with a 2 pass encode, the first pass is so much faster than the second pass: no actual compression is taking place, only analysis. In the case where MB-Tree is used there is also a large stats file created.

    By definition no single pass encoding scheme can use the same algorithm as a 2 pass scheme because the single pass scheme lacks a primary analysis pass followed by a secondary work pass.

    It may seem like splitting hairs to some, but it's misleading and dishonest to claim that CRF, which is a method of applying an average quantizer, is using the exact same algorithm as a 2 pass bit rate based encode.

    CRF is more properly analogous to a 1 pass abr encode, with the difference being that abr strives for an average bit rate and CRF strives for an average quantizer.

    If you read DS' statements in the link I provided earlier, he says quite clearly that the first choice should be 2 pass vbr and if that is not available a 1 pass constant quality, which would be cq mode, in other words constant quantizer mode, not CRF.

    Reread the encoding guidelines DS lays out for what he considers a "fair" encoding test; if that is what leads to a "fair" encoding test, then logic would dictate that it also leads to the best encodes.
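    The structural difference being argued here (a global analysis pass versus encode-as-you-go) can be sketched schematically. This is a toy illustration in Python, not x264 code; `analyze` and `encode` are hypothetical stand-ins for real encoder stages:

```python
# Toy schematic of the structural difference described above (not x264 code).
# `analyze` and `encode` are hypothetical stand-ins for real encoder stages.

def analyze(frame):
    # Stand-in for complexity analysis: here just the frame's "size".
    return len(frame)

def encode(frame, stats):
    # Stand-in encode step: record how much analysis data was available.
    return (frame, len(stats))

def two_pass(frames):
    stats = [analyze(f) for f in frames]        # pass 1: analyse the WHOLE video
    return [encode(f, stats) for f in frames]   # pass 2: every frame sees global stats

def one_pass(frames):
    out, stats = [], []
    for f in frames:
        stats.append(analyze(f))                # stats accumulate as we go
        out.append(encode(f, stats))            # each frame sees only the past
    return out

frames = ["IIIIIIIIII", "PPPP", "BB"]
print(two_pass(frames))  # every frame encoded knowing stats for all 3 frames
print(one_pass(frames))  # frame n encoded knowing stats for only n frames so far
```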
  7. Originally Posted by jagabo
    A CRF encode and a 2-pass encode in x264, slow preset, CRF=18, bitrate=3580:

    [Attachment 29652]


    As you can see the bitrate distribution is substantially similar. The videos themselves are substantially similar too.
    This. It's the end result that matters. There are minuscule differences, but there's no way one would be able to distinguish the two in normal viewing.

    Yes, crf encoding is like throwing darts at a dartboard, though only in the sense of not knowing the output size beforehand. But that's the point, isn't it? How can you know what bitrate to allocate a given video?

    I'm no expert either, but it seems a lot of this discussion is not relevant as to why one would use one or the other. If you want a specific target size (say for an optical disc), fine, use two pass.
    Pull! Bang! Darn!
  8. Originally Posted by sophisticles
    It may seem like splitting hairs to some but it's misleading and dishonest to claim that CRF, which is a method of applying an average quantizer is using the exact same algorithm as a 2 pass bit rate based encode.
    I thought the definition of CRF is that it's an encoding method that strives for the same perceived quality as you'd get from a constant quantizer encode, only at a lower bitrate. So in theory, CQ18 and CRF18 should produce the same "perceived" quality.
    http://mewiki.project357.com/wiki/X264_Settings#crf
    Okay, I'm honestly trying to get my head around this......

    "The other thing tha most believe is the notion that CRF uses the same algorithm as 2 pass, what CRF does is use the same rate control algorithm as 2 pass"
    If CRF aims for a certain perceived quality, where does rate control come into it (vbv restrictions aside)? I can understand 2 pass requiring rate control, but the way I understand it, CRF and 2 pass use the same algorithm for determining how to distribute the bits between the various types of frames, but 2 pass has to do so while using a specified number of bits in total.

    Logically, you might assume the first pass looks at the video and makes all the decisions in regard to bit distribution and no single pass method could therefore encode in a similar fashion, but.....

    I'm trying to understand as I type, so bear with me, but according to the info from the link below, each encoding method is divided into three steps.

    http://git.videolan.org/?p=x264.git;a=blob_plain;f=doc/ratecontrol.txt;hb=HEAD
    Given some data about each frame of a 1st pass (e.g. generated by 1pass ABR, below), we try to choose QPs to maximize quality while matching a specified total size. This is separated into 3 parts:

    Step 1, 2 pass encoding:
    (1) Before starting the 2nd pass, select the relative number of bits to allocate between frames. This pays no attention to the total size of the encode. The default formula, empirically selected to balance between the 1st 2 theoretical points, is "complexity ** 0.6", where complexity is defined to be the bit size of the frame at a constant QP (estimated from the 1st pass).

    Step 1, CRF and ABR encoding:
    (1) This is the same as in 2pass, except that instead of estimating complexity from a previous encode, we run a fast motion estimation algo over a half-resolution version of the frame, and use the SATD residuals (these are also used in the decision between P- and B-frames). Also, we don't know the size or complexity of the following GOP, so I-frame bonus is based on the past.

    The way I read it, step one is the same for all three encoding methods, in that it estimates complexity, only ABR and CRF kind of do it "on the fly", whereas 2 pass encoding estimates complexity using the "previous encode" (1st pass), which is apparently average bitrate. If CRF is the same as 2 pass as stated, they must use the same formula.
    Which did make me wonder why 2 pass encoding couldn't estimate complexity "on the fly" as CRF does, but thinking about it, that'd be the definition of ABR encoding, yet 2 pass apparently uses an ABR 1st pass for estimating complexity......
    That put my brain into some sort of step one loop so I had to abort thinking about it.
    I don't quite understand the 2 pass method in respect to the 1st pass being average bitrate, nor do I understand the formula used.

    Step 2 for all three encoding methods involves some sort of scaling.

    Step 2, CRF encoding:
    (2) The scaling factor is a constant based on the --crf argument.

    Step 2, ABR encoding:
    (2) We don't know the complexities of future frames, so we can only scale based on the past. The scaling factor is chosen to be the one that would have resulted in the desired bitrate if it had been applied to all frames so far.

    Step 2, 2 pass encoding:
    (2) Scale the results of step one to fill the requested total size.

    Finally to step 3, which to me seems to contradict your CRF rate control algorithm claim.

    Step 3, CRF encoding:
    (3) No overflow compensation is done.

    Step 3, 2 pass and ABR encoding.
    (3) Now start encoding. After each frame, update future QPs to compensate for mispredictions in size. If the 2nd pass is consistently off from the predicted size (usually because we use slower compression options than the 1st pass), then we multiply all future frames' qscales by the reciprocal of the error. Additionally, there is a short-term compensation to prevent us from deviating too far from the desired size near the beginning (when we don't have much data for the global compensation) and near the end (when global doesn't have time to react).

    I've seen the "short term" rate control compensation in action when comparing 2 pass and CRF encodes of the same average bitrate. On one occasion the 2 pass bitrate jumped quite considerably compared to CRF during the end credits even though they'd seemed almost identical for the majority of the video. I guess it'd saved too many bits and was busy spending them before time ran out.

    I thought by re-writing the info on that page and re-arranging it in logical steps for my brain to work with I'd get my head around it by the time I was done, but I'm not there yet. If someone would care to explain it in more detail that'd be great, but so far I'm assuming no matter what the encoding method, the following applies. I just don't understand the finer details:

    Step 1: Complexity estimation
    Step 2: Scaling
    Step 3: Rate Control (or not).
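    Putting rough numbers on those three steps (a toy sketch only; the 0.6 exponent is from ratecontrol.txt, but the complexities and rate factor below are invented):

```python
# Toy numbers for the three steps above (illustration only; the 0.6 exponent
# comes from ratecontrol.txt, the complexities and rate factor are invented).

QCOMP = 0.6  # the "complexity ** 0.6" exponent

def step1_allocate(complexities):
    # (1) Relative bit allocation; pays no attention to the total size.
    return [c ** QCOMP for c in complexities]

def two_pass_bits(complexities, total_size):
    rel = step1_allocate(complexities)
    scale = total_size / sum(rel)           # (2) scale to fill the requested size
    return [r * scale for r in rel]         # (3) overflow compensation not modelled

def crf_bits(complexities, rate_factor):
    rel = step1_allocate(complexities)
    return [r / rate_factor for r in rel]   # (2) constant factor; (3) none

complexities = [4000, 500, 8000, 1000]
bits_2p = two_pass_bits(complexities, total_size=10000)
bits_crf = crf_bits(complexities, rate_factor=0.1)

# The relative distribution between frames is identical in both modes;
# only the overall scale differs:
ratio = bits_2p[0] / bits_crf[0]
assert all(abs(a / b - ratio) < 1e-9 for a, b in zip(bits_2p, bits_crf))
```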

    Originally Posted by sophisticles
    If you read DS' statements in the link I provided earlier, he says quite clearly that the first choice should be 2 pass vbr and if that is not available a 1 pass constant quality, which would be cq mode, in other words constant quantizer mode, not CRF.
    I took it to mean CRF because CRF is meant to achieve constant "perceived" quality. Isn't CRF pretty much CQ with adaptive quantizers? CQ is a sort of "dumb" encoding mode in that respect. Xvid only has a single pass CQ mode..... no quality based method. It's sometimes referred to as Xvid's "dumb" encoding mode, even though you can still effectively pick a quality.

    He said (referring to cheating):
    "Another method is to use 1-pass bitrate mode for one encoder and 2-pass or constant quality for another."

    "2 pass or constant quality for another"
    Which method will give you the most similar result to 2 pass: CRF or CQ? Whichever one it is, I'm pretty sure that's what constant quality refers to.

    "A good general approach is that, for any given encoder, one should use 2-pass if available and constant quality if not"
    Last edited by hello_hello; 17th Jan 2015 at 10:55.
  9. Originally Posted by hello_hello

    I took it to mean CRF because CRF is meant to achieve constant "perceived" quality. Isn't CRF pretty much CQ with adaptive quantizers? CQ is a sort of "dumb" encoding mode in that respect.
    Yup, that's my (limited) understanding of CRF. Adaptive. In x264 constant quantizer mode you set a value (for P frames) and the I and B values are derived from the ipratio and pbratio parameters. For example, in MainConcept, if you select Constant quantizer as the rate control it greys out the other bitrate options and only allows you to manually set frame QPs, for example 19-21-23.
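    As a rough sketch of how those derived QPs fall out: in H.264 the quantizer step size doubles every 6 QP, so a qscale ratio r corresponds to a QP offset of about 6*log2(r). The 1.40/1.30 values below are x264's documented ipratio/pbratio defaults; the rest is illustration, not encoder code:

```python
# Rough sketch of how I/B QPs relate to the P-frame QP (illustration only).
# In H.264 the quantizer step size doubles every 6 QP, so a qscale ratio r
# corresponds to a QP offset of about 6*log2(r).
import math

IPRATIO = 1.40  # x264 default --ipratio
PBRATIO = 1.30  # x264 default --pbratio

def derived_qps(qp_p):
    qp_i = qp_p - 6 * math.log2(IPRATIO)  # I frames: lower QP, more bits
    qp_b = qp_p + 6 * math.log2(PBRATIO)  # B frames: higher QP, fewer bits
    return round(qp_i), round(qp_b)

print(derived_qps(21))  # -> (18, 23), i.e. roughly the 19-21-23 spread above
```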
  10. Originally Posted by hello_hello
    http://git.videolan.org/?p=x264.git;a=blob_plain;f=doc/ratecontrol.txt;hb=HEAD
    Given some data about each frame of a 1st pass (e.g. generated by 1pass ABR, below), we try to choose QPs to maximize quality while matching a specified total size. This is separated into 3 parts:

    Step 1, 2 pass encoding:
    (1) Before starting the 2nd pass, select the relative number of bits to allocate between frames. This pays no attention to the total size of the encode. The default formula, empirically selected to balance between the 1st 2 theoretical points, is "complexity ** 0.6", where complexity is defined to be the bit size of the frame at a constant QP (estimated from the 1st pass).

    Step 1, CRF and ABR encoding:
    (1) This is the same as in 2pass, except that instead of estimating complexity from a previous encode, we run a fast motion estimation algo over a half-resolution version of the frame, and use the SATD residuals (these are also used in the decision between P- and B-frames). Also, we don't know the size or complexity of the following GOP, so I-frame bonus is based on the past.
    I'm actually in the middle of a project so I can't reply to everything you said but allow me to start with this and get to the rest later. You basically proved my point that 2 pass and CRF do not in fact use the same algorithm, despite what the above quote claims.

    In 2 pass, prior to starting the second pass, they select the relative number of bits to allocate between frames, and the default formula between the first 2 theoretical points is complexity raised to the power 0.6, where complexity is defined to be the bit size of the frame at a constant QP (estimated from the 1st pass).

    So to make things easier, assume a video exactly 1 I frame long. With a two pass, the encoder analyzes the frame and then decides how many bits to use to encode it by how many bits it would take to encode the frame at a constant QP, raising that number to the power 0.6.

    With a CRF encode they take that same frame, scale it internally to half the resolution, calculate the Sum of Absolute Transformed Differences, use the residuals, in other words the remainders. They also admit that with CRF they don't know the complexity or size of the following GOP so they can only use the preceding GOPs.

    In what way are these the same steps? With 2 pass you analyze the video, can use both previous and future GOPs, and there is a simple formula for calculating how many bits to use for that frame; with CRF they reduce the frame to half size, run a more complex math formula, take the leftovers from those calculations, and there's no way to know anything about future GOPs.

    It's laid out right there in black and white: clearly 2 pass and CRF do not use the same algorithm, not even close. Based on what you linked to, they do not even use the same rate control algorithm, unless they claim a rather complex mathematical equivalence where performing a simple calculation on a full size frame somehow results in the same values as performing a complex calculus procedure on a frame at half that resolution and then only using the residuals, but I really don't see how such a relationship is possible.

    In fact, what they describe sounds a lot like spatial resampling, something that had been around for years before AVC was even on the horizon and for which a patent has existed since 1998:

    http://www.google.com.na/patents/US6483538

    Now, I'm not flat out accusing the x264 developers of stealing someone else's work and passing it off as their own, because the patent uses Fast Fourier Transforms and they use the Sum of Absolute Transformed Differences, but in both cases a math transform is used and the original image is resampled to a lower resolution.
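    For what it's worth, the SATD metric mentioned above is straightforward to sketch: difference the two blocks, apply a Hadamard transform, and sum the absolute values. A toy 4x4 version follows (illustration only; real x264 uses optimized 4x4/8x8 routines and scales the result differently):

```python
# Toy 4x4 SATD (sum of absolute transformed differences), the metric the
# ratecontrol doc says the 1-pass lookahead uses. Illustration only.

H = [[1, 1, 1, 1],
     [1, 1, -1, -1],
     [1, -1, -1, 1],
     [1, -1, 1, -1]]  # a 4x4 Hadamard matrix

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transpose(m):
    return [list(row) for row in zip(*m)]

def satd4x4(block_a, block_b):
    # Residual = element-wise difference between the two blocks.
    diff = [[block_a[i][j] - block_b[i][j] for j in range(4)] for i in range(4)]
    t = matmul(matmul(H, diff), transpose(H))  # 2-D Hadamard transform
    return sum(abs(v) for row in t for v in row)

flat = [[10] * 4 for _ in range(4)]
print(satd4x4(flat, flat))   # identical blocks: zero cost
edge = [[0, 0, 100, 100]] * 4
print(satd4x4(flat, edge))   # nonzero for a badly predicted edge
```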
  11. Originally Posted by sophisticles
    They also admit that with CRF they don't know the complexity or size of the following GOP so they can only use the preceding GOPs.
    Not quite, that's why there is rc-lookahead, which is used to analyze the complexity of future frames. You get better results with a longer lookahead setting.
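    A toy sketch of what a lookahead buffer buys (illustration only, not x264's actual lookahead):

```python
# Toy sketch of what rc-lookahead provides (illustration only, not x264).

def complexity_estimate(complexities, i, lookahead):
    # With a lookahead of N frames, frame i's rate control can average the
    # complexity of itself plus up to N future frames instead of only itself.
    window = complexities[i : i + 1 + lookahead]
    return sum(window) / len(window)

complexities = [100, 100, 100, 900, 900]  # a hard scene starts at frame 3
# Without lookahead, frame 2 sees nothing coming:
print(complexity_estimate(complexities, 2, 0))   # 100.0
# With lookahead=2, frame 2 already sees the expensive scene approaching:
print(complexity_estimate(complexities, 2, 2))   # (100+900+900)/3
```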
  12. Originally Posted by sophisticles
    The definition of an algorithm is the sequence of steps you take to achieve an objective. A 2 pass encode does an analysis of the entire video in order to determine how to distribute bits and what parts can be thrown out with minimal quality loss and then applies what it has discovered to the second pass that does the job. This is why when you run a benchmark with a 2 pass encode the first pass is so much faster than the second pass, no actual compression is taking place, only analysis. In the case where MB-Tree is used there is also a large stats file created.

    By definition no single pass encoding scheme can use the same algorithm as a 2 pass scheme because the single pass scheme lacks a primary analysis pass followed by a secondary work pass.

    It may seem like splitting hairs to some but it's misleading and dishonest to claim that CRF, which is a method of applying an average quantizer is using the exact same algorithm as a 2 pass bit rate based encode.

    CRF is more properly analogous to a 1 pass abr encode, with the difference being that abr strives for an average bit rate and CRF strives for an average quantizer.

    If you read DS' statements in the link I provided earlier, he says quite clearly that the first choice should be 2 pass vbr and if that is not available a 1 pass constant quality, which would be cq mode, in other words constant quantizer mode, not CRF.

    Reread the encoding guidelines DS lays out for what he considers a "fair" encoding test, if that is what leads to a "fair" encoding test then logic would dictate that it also leads to the best encodes.
    CRF and 2pass both output virtually identical quality at the same filesize. What's your point? You're tired from working a night shift (I work a night shift too and love it) and you waste all your time with so much verbosity just to say that CRF and 2pass are not identical algorithms? Yeah, there's a reason they are two different operations, Einstein.
  13. It's fine to share observations or some assertions, but please at least provide some support. I wouldn't go into a physics conference and claim "gravity doesn't exist" without at least some hard data.

    To be clear - the specific areas I'm having problems "digesting" are these statements from post 22:
    https://forum.videohelp.com/threads/369438-Is-x264-the-best?p=2367621&viewfull=1#post2367621

    These issues begin with CRF. Because CRF under-allocates bits in the areas I described above, the developers, evidently so in love with CRF mode that they were unwilling to abandon it, created a bunch of band-aids to alleviate the problems.

    The first was AQ, or adaptive quantization. If you read through the documentation you discover that AQ takes bits away from more detailed areas and tries to allocate them to flatter areas of the image, and the AQ strength varies how much it biases toward either end of the spectrum. If you don't use CRF there is no need for AQ.

    Same applies to fades and cross fades: x264 treats them with less importance and under-allocates bits. Weighted P helps, but again, with no CRF much of this effect doesn't occur.
    So this implies you would disable AQ when using 1 or 2 pass encoding? Because there is no "need"?

    Can you please provide some examples where 2pass avoids these issues, or performs substantially better than CRF?

    Or how did you come to these "conclusions"? Was it just "reasoning" or based on actual testing?

    The reason I'm asking these questions is to see if they are outliers or if this actually requires further investigation. That's how things get improved. That's how you contribute. You see - there is a back and forth dialog.
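    For reference, the AQ behaviour described in the quoted post (shifting bits away from detailed blocks toward flat ones, scaled by a strength setting) can be sketched as a toy model. This is not x264's actual AQ formula, just the general idea:

```python
# Toy sketch of adaptive quantization as described above (NOT x264's formula):
# lower the QP for flat blocks, raise it for detailed ones, scaled by strength.
import math

def aq_offsets(block_energies, strength=1.0):
    avg_log = sum(math.log2(e) for e in block_energies) / len(block_energies)
    # Blocks above average complexity get a positive offset (coarser quant,
    # fewer bits); flat blocks get a negative one (finer quant, more bits).
    return [strength * (math.log2(e) - avg_log) for e in block_energies]

energies = [16, 16, 1024, 16]   # one busy block among flat ones
offsets = aq_offsets(energies, strength=1.0)
print([round(o, 2) for o in offsets])  # busy block positive, flat blocks negative
```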



    Cheers
  14. Or, for reverse engineering, one has to go backwards: start from the facts in front of you, the real thing, then reverse it. So you go from the fact that those two results are identical (CRF and 2pass with the same average) and then let it flow into a suggestion. So, for example, the algorithm used to get video with CRF must be involved in 2pass as well; where, then, is the 1st pass calculating and saving a relative bitrate difference plot, or a relative quantizer difference plot... etc.
  15. One of the explanations DS gave about the CRF and 2-pass algorithm is that CRF varies the bitrate to obtain a certain quality, which results in some bitrate X. 2-pass mode for bitrate X varies CRF to obtain the given bitrate. Small variations aside, the same rate control method is used in both modes.

    Anyway, last year I performed some tests comparing CRF vs 2-pass. I encoded video with CRF 23 using the presets and noted the obtained bitrates. Then I used those average bitrates for 2-pass encoding. The addition to the command line was --tune ssim --ssim. In all cases I got a slightly higher SSIM with CRF mode.

    Visually I could not say which version looked better. I also noticed that CRF is not a fixed (absolute) measure of quality. The same CRF with different settings will give you different quality.

    Knowing that, I choose my settings first according to the encoding speed I can tolerate. Then I do a few test encodings to find what CRF value gives me the quality I need. That implies that x264 is a very good H.264 encoder if one knows how to use it.

    So, I draw the same conclusion as the x264 devs: if one needs a certain quality and does not care about file size, one should use CRF. If one needs a file with a certain file size, 2-pass mode is the way to go.


    About quality at higher bitrates: I didn't test that myself, but almost all movies that one can find on public or private torrent trackers are re-encoded using the x264 encoder. And those guys who know what they are doing are creating completely transparent re-encoded versions of Blu-ray sources at half or 1/3 of the original Blu-ray bitrates.

    Edit:

    I just compared CRF23 vs 2-pass encoding by calculating SSIM for every frame. Not much deviation, as you can see. Excel file with graph attached.
    Last edited by Detmek; 17th Jan 2015 at 18:41.
  16. Originally Posted by poisondeathray
    Or how did you come to these "conclusions" ? Was it just "reasoning" or based on actual testing ?
    It's primarily based on reasoning. You do agree with my assessment that x264, by default, under-allocates bits to areas where it thinks you won't notice so that it can use them in areas that will be more noticeable. This is a well established fact; I'm not claiming credit as the originator of this idea. It is well documented and is one of the hallmarks of x264 in general and CRF in particular.

    Once you have this baseline fact the rest of my conclusions follow.

    There is also this other fact, and I really hesitate to bring it up because people will again accuse me of being a member of a different species that is also expired or they will accuse me of "having an axe to grind" or both. However, I do think many will find it interesting.

    When it comes to x264, it seems to me that it works despite the x264 developers, not because of them. At times it almost seems like they got lucky. I have searched through the doom9 forums, where the developers maintained ongoing threads about the new features of x264 and the progress of their development, and at times I have run across posts by the two main developers that leave me scratching my head; they are so bizarre that I am left to wonder if they were just goofing around or if someone had hacked their accounts and made those posts in their names.

    Look at this thread started by Dark Shikari, entitled "What in the world is the inloop deblocker doing?"

    http://forum.doom9.org/showthread.php?t=129071

    Initially I chuckled because I honestly thought it was a joke - an x264 developer asking why the inloop deblocking filter is behaving this way and what is going on. The more I read, the more I realized that this guy was serious, which really made me question the wisdom of relying on any software written by any of them.

    Now cue the next person to say I am a non-living rodent, but I personally find some of the statements these guys make as part of the development process to be quite odd.

    As a side note, am I the only one that would like to see a highly simplified encoder, one without any encoding parameters other than profile, level, and GOP length and composition, that is coded from the ground up with one goal in mind, namely the highest possible achievable quality at any bit rate?

    If this means that behind the scenes it uses psy optimizations, trellis, weighted frames, whatever, so be it - but not have a potential 1000+ different combinations of settings.

    I long for someone to say this is the absolute best quality I can get with this source material and this is the absolute fastest I can get this encoder to run while achieving that quality and just be done with it.

    And for the record I'm talking about all encoders, vp9, x265, xvid, divx, x264, all of them.
  17. Originally Posted by Detmek
    About quality at higher bitrates: I didn't test that myself, but almost all movies that one can find on public or private torrent trackers are re-encoded using the x264 encoder. And those guys who know what they are doing are creating completely transparent re-encoded versions of Blu-ray sources at half or 1/3 of the original Blu-ray bitrates.

    Of course they are re-encoded with x264 - you don't think movie pirates are going to spend thousands on a high end encoder, do you?

    As for being "completely transparent" at 1/3 to 1/2 the original bit rate, that is a flat out lie. I have downloaded movies where the scene group claims the encode is transparent to the source, and in every case the results were substandard; the encodes are of good quality because the source Blu-ray they used was of very high quality, not because of the encoder they used.
    Quote Quote  
  18. Originally Posted by sophisticles View Post
    It's primarily based on reasoning. You do agree with my assessment that x264, by default, under-allocates bits to areas where it thinks you won't notice so that it can use them in areas that will be more noticeable. This is a well-established fact; I'm not claiming credit as the originator of this idea. It is well documented and is one of the hallmarks of x264 in general and CRF in particular.

    Once you have this baseline fact the rest of my conclusions follow.

    There is also this other fact, and I really hesitate to bring it up because people will again accuse me of being a member of a different species that is also expired or they will accuse me of "having an axe to grind" or both. However, I do think many will find it interesting.

    When it comes to x264, it seems to me that it works despite the x264 developers, not because of them. At times it almost seems like they got lucky. I have searched through the doom9 forums, where the developers maintained ongoing threads about the new features of x264 and the progress of their development, and at times I have run across posts by the two main developers that leave me scratching my head; they are so bizarre that I am left to wonder if they were just goofing around or if someone had hacked their accounts and made those posts in their names.

    OK thanks for clearing that up.

    That occurs with 2 pass as well. In fact, it tends to occur with every codec to an extent. A lot of it has to do with optimizing for PSNR in codec development. Codecs optimized for PSNR are typically prone to those sorts of problems, especially gradient banding.

    So unfortunately this is where deductive reasoning fails to predict what happens in real life. I can't find 1 outlier or a single case that exhibits that behaviour, where CRF is markedly worse in a fade, dark scene or gradient than 2-pass with equivalent settings. And believe me, I've looked across dozens of different scenarios and genres, probably a few hundred test sequences in total.

    You are absolutely correct that a lot of x264 is "lucky" or haphazard. The code is a patched-together mishmash. But for whatever reason it works.



    Look at this thread started by Dark Shikari, entitled "What in the world is the inloop deblocker doing?"

    http://forum.doom9.org/showthread.php?t=129071

    Initially I chuckled because I honestly thought it was a joke, an x264 developer asking why the inloop deblocking filter is behaving this way and what is going on. The more I read, the more I realized that this guy was serious, which really made me question the wisdom of relying on any software written by any of them.
    To be fair, this was before Fiona joined the team officially - or at least very early, before Fiona's first commits, and definitely before she became a main contributor. If you look, her first commit was in Oct 2007 as a junior contributor. That thread was posted in Aug 2007.


    Now cue the next person to say I am a non-living rodent, but I personally find some of the statements these guys make as part of the development process to be quite odd.

    Don't take the rodent comments so seriously. He was fun to have around but maybe went overboard a few times and stepped on a few toes.
    Quote Quote  
  19. Originally Posted by sophisticles View Post
    Originally Posted by Detmek View Post
    About quality at higher bitrates. I didn't test that myself but almost all movies that one can find on public or private torrent trackers are re-encoded using x264 encoder. And those guys who know what are they doing are creating completely transparent re-encoded versions of Blu-Ray sources at half or 1/3 of original Blu-Ray bitrates.

    Of course they are re-encoded with x264, you don't think movie pirates are going to spend thousands on a high end encoder, do you?
    It's actually the opposite. Because they are pirates, they have access to high-end encoders.

    And the "high end encoders" you're likely referring to are optimized for high bitrates. In fact the studio-level BD encoders cannot even encode non-compliant BD material. They won't even come close to x264 at half-BD bitrate ranges. None of the expensive encoders can do a good job at low to mid bitrates, because professional distribution formats like BD don't use low bitrates.


    As for being "completely transparent" at 1/3 to 1/2 the original bit rate, that is a flat out lie. I have downloaded movies where the scene group claims the encode is transparent to the source, and in every case the results were substandard; the encodes are of good quality because the source Blu-Ray they used was of very high quality, not because of the encoder they used.
    Agree with your assessment of many of the claims. Some movies, however, can compress to 1/2 the bitrate with almost no loss, visually or by objective metrics. It's very case-dependent.
    Quote Quote  
  20. Originally Posted by sophisticles View Post
    As a side note, am I the only one who would like to see a highly simplified encoder, one without any encoding parameters other than profile, level, and GOP length and composition, coded from the ground up with one goal in mind, namely the highest achievable quality at any bit rate?

    If this means that behind the scenes it uses psy optimizations, trellis, weighted frames, whatever, so be it, but without a potential 1000+ different combinations of settings.

    I long for someone to say this is the absolute best quality I can get with this source material and this is the absolute fastest I can get this encoder to run while achieving that quality and just be done with it.
    Then you're talking to the right guy. I specialize in efficiency, and we get endless praise from our fans about how our rips resemble the Blu-ray source at only 700 megs. I don't find x264's settings hard to work with at all; it took just one tour through the description of each setting, and I cranked up what I thought was worth the encoding time. Since then, I only find myself having to adjust at most 4 settings depending on the video I'm encoding, and they usually fall into two categories: movie or cartoon.

    But it still comes down to personal preference. For me, twice as much encoding time for 10% better quality is damn worth it. Others disagree, but then complain that other people's videos look better.

    You only have 10,000 different combinations of settings if you have no goddamn clue what you're doing. For example, is there a sensible reason to use diamond motion search (a low-quality option) while using a high-quality option for subpixel refinement? Anyone who knows what they're doing sees only the very limited set of options that it makes sense to change in certain scenarios.

    You don't wanna educate yourself about x264's encoding options; you don't want a technical background. That's cool. So is there a reason you aren't using the pre-configured profiles, yet are still demanding to be spoon-fed?

    What do you want, sophisticles? "The highest achievable quality at a given bitrate"? Crank everything up to the max then, and set the keyframe interval to infinite. Your video will take 10 times longer to encode for 1% better quality. Now you'll whine about inefficiency. And there we enter the abstract realm of which quality-to-encoding-time ratio is best, where a lot of people disagree. Are you willing to wait twice as long for 10% better quality? I am, but a lot aren't. Thank god I don't have to agree with them; thus I configure my settings according to what is efficient for me.

    Computers can't read your mind, they only do what you tell them.
    Quote Quote  
  21. Yes, the best quality at any bitrate makes no sense.
    How about x264.exe --qp 0? That gives you the same quality as the original. I fulfilled your specification, but perhaps it is not right because of the huge bitrate. So what bitrate is OK and what bitrate is too high? Where is the limit for hardware to actually be able to play it? What hardware: an old phone, a Blu-Ray player, or a computer?

    Quantizer (or bitrate), presets, tuning, buffers, profile and level, and then just defaults. Level actually defaults quite well, and if high profile is not desired it should be specified (main or baseline). I cannot help it, but it is very easy to set up x264 now, depending on what it is being encoded for.

    The expectation for encoding is never to get the best result, but rather: give me what you've got if... I give you this much time to encode, I do not have the latest $1000 machine to compute on, I do not have unlimited space, I need a steady stream for the web, I simply have a 32" TV so encode just enough for that..... a variety of demands.
    Quote Quote  
  22. Originally Posted by _Al_ View Post
    Yes, the best quality at any bitrate makes no sense.
    How about x264.exe --qp 0? That gives you the same quality as the original. I fulfilled your specification, but perhaps it is not right because of the huge bitrate, so what bitrate is OK and what bitrate is too high?
    Good point. I have to use 2-pass for 700MB rips for obvious reasons, so the bitrate is always easy to pick, but then I have to pick a suitable resolution. Some movies can be 720p and fit with a low bitrate if the runtime is short and the scenes aren't complex; some have too much action, can only be good quality at 400p at 700 megs, and thus probably shouldn't be squeezed to 700 megs at all.
    Luckily my friend automatically knows the perfect resolution to choose, which always works out, a talent I lack.

    This kind of human intervention can't be automated into a perfect one-click encode feature that will fit every circumstance.
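    The arithmetic behind picking a 2-pass bitrate for a fixed-size rip is simple enough to sketch, though. A minimal illustration (the 700 MiB target, 105-minute runtime and 128 kbps audio track are made-up example numbers, and container overhead is ignored):

```python
def bitrate_for_size(size_mib, duration_s, audio_kbps=128):
    # Kilobits available in the target file (1 MiB = 8 * 1024 kilobits),
    # minus what the audio track consumes, spread over the runtime.
    total_kbits = size_mib * 8 * 1024
    audio_kbits = audio_kbps * duration_s
    return (total_kbits - audio_kbits) / duration_s

# A 105-minute movie squeezed into a classic 700 MiB rip:
kbps = bitrate_for_size(700, 105 * 60)
print(round(kbps))  # -> 782
```

This is why the bitrate is "easy to pick" for a fixed file size: it falls straight out of the target size and runtime, and only the resolution is left to judgment.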
    Quote Quote  
  23. I see the conversation evolved a bit while I was away. I missed some fun.

    Originally Posted by sophisticles View Post
    You basically proved my point that 2 pass and CRF do not in fact use the same algorithm, despite what the above quote claims.

    In 2-pass, prior to starting the second pass they select the relative number of bits to allocate between frames, and the default formula is complexity raised to the power of 0.6 (the default qcomp value), where complexity is defined as the bit size of the frame at a constant QP (estimated from the 1st pass).

    So to make things easier, assume a video exactly 1 I-frame long. With two-pass, the encoder analyzes the frame, then decides how many bits to use to encode it by taking the number of bits it would take to encode the frame at a constant QP and raising that number to the power of 0.6.

    With a CRF encode they take that same frame, scale it internally to half resolution, calculate the Sum of Absolute Transformed Differences (SATD), and use the residuals, in other words the remainders. They also admit that with CRF they don't know the complexity or size of the following GOP, so they can only use the preceding GOPs.

    In what way are these the same steps? With 2-pass you analyze the video, can use both previous and future GOPs, and there is a simple formula for calculating how many bits to use for that frame; with CRF they reduce the frame to half size, run a more complex math formula, take the leftovers from those calculations, and there's no way to know anything about future GOPs.
    Maybe I'm missing the obvious, because I'm not sure I fully understand it yet, but the way it appears to me is that the differences are in how complexity is estimated. 2-pass has the 1st pass to work with, while CRF might do it through all sorts of voodoo and crystal-ball gazing, but once that's done, both methods use the same formula/algorithm to encode. So in that respect, it's exactly as it's always been described.
    If that's correct, then the differences between 2-pass and CRF come down to the differences in complexity estimation, not the method used to then encode the video.
    I'm not quite sure how rc-lookahead fits into all that, though. I'm still trying to get my head around that one too.

    So far, the 2-pass vs CRF "evidence" offered by other posters seems to prove 2-pass and CRF produce virtually identical results, despite any voodoo or crystal-ball gazing the latter might require. I'm not seeing anything that proves otherwise yet.
    I recall a Dark Shikari post at doom9 describing the difference between 2-pass and CRF quality at a given bitrate. He said quality-wise they could be considered identical, while his testing indicated CRF had a slight quality edge, but I'm pretty sure his conclusion was that it's so insignificant you'd need to have OCD to care, and based on my limited testing, I'd have to agree.
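    For reference, the two-pass allocation being argued about boils down to weighting frames by complexity raised to a fractional power (x264's qcomp, 0.6 by default). A toy sketch of that weighting, not x264's actual rate-control code (the complexity numbers are invented):

```python
def allocate_bits(complexities, total_bits, qcomp=0.6):
    # Each frame's share is proportional to complexity ** qcomp; raising
    # to a power below 1 compresses the spread, so complex frames get
    # more bits, but not linearly more. Shares are scaled to the target.
    weights = [c ** qcomp for c in complexities]
    scale = total_bits / sum(weights)
    return [w * scale for w in weights]

# Per-frame complexities as a first pass might estimate them:
frames = [1000, 8000, 500, 4000]
bits = allocate_bits(frames, 100_000)
```

Note the 16x complexity spread between the hardest and easiest frame turns into only about a 5x spread in allocated bits; that flattening is exactly what qcomp controls.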

    Originally Posted by sophisticles View Post
    It's primarily based on reasoning. You do agree with my assessment that x264, by default, under-allocates bits to areas where it thinks you won't notice so that it can use them in areas that will be more noticeable. This is a well-established fact; I'm not claiming credit as the originator of this idea. It is well documented and is one of the hallmarks of x264 in general and CRF in particular.
    Doesn't that depend on perspective?
    Obviously x264 allocates bits according to where they're going to be most effective with respect to perceived quality, but as the CRF value increases it's probable some areas are going to start showing a quality drop compared to others, while a further increase in CRF value might start to noticeably affect the quality of everything. Is it a bad thing, though, if the quality reduction is at first mainly in the areas where the viewer isn't likely to find it too objectionable?

    If you're claiming x264 always under allocates bits in certain areas regardless of CRF value that's one thing, but if at bitrate "x", encoder "A" looks bad 90% of the time, while at the same bitrate x264 only looks bad 20% of the time, that's another thing entirely.

    Originally Posted by sophisticles View Post
    As a side note, am I the only one who would like to see a highly simplified encoder, one without any encoding parameters other than profile, level, and GOP length and composition, coded from the ground up with one goal in mind, namely the highest achievable quality at any bit rate?
    Wouldn't "highest possible quality at any bitrate" be the primary goal for most encoders? Why do all the encoding parameters have to be removed for that to be the case?

    Originally Posted by sophisticles View Post
    I long for someone to say this is the absolute best quality I can get with this source material and this is the absolute fastest I can get this encoder to run while achieving that quality and just be done with it.
    Yet the way an encoder allocates bits for the same perceived quality can't ever be the same regardless of the source. Not unless quite high bitrates are always used. At lower bitrates there's always going to be some areas where a particular encoder has weaknesses.

    Originally Posted by sophisticles View Post
    Originally Posted by Detmek View Post
    About quality at higher bitrates. I didn't test that myself, but almost all movies that one can find on public or private torrent trackers are re-encoded using the x264 encoder. And those guys who know what they are doing are creating completely transparent re-encoded versions of Blu-Ray sources at half or a third of the original Blu-Ray bitrates.
    Of course they are re-encoded with x264, you don't think movie pirates are going to spend thousands on a high end encoder, do you?

    As for being "completely transparent" at 1/3 to 1/2 the original bit rate, that is a flat out lie. I have downloaded movies where the scene group claims the encode is transparent to the source, and in every case the results were substandard; the encodes are of good quality because the source Blu-Ray they used was of very high quality, not because of the encoder they used.
    I'd disagree. Not that I can compare encode to source all that often, but I'd consider the quality to be mostly quite high. At least when you're in the DVD5-for-720p and DVD9-for-1080p bitrate ballpark.
    For reasons I don't quite understand, ABR rather than CRF seems to be commonly used for movies, even though the encode file sizes vary considerably.
    Despite what Mephesto said (sorry), I'd disagree when it comes to 700MB encodes. Most of the ones I've seen are obviously compressed pretty hard. Even at 720p they can still generally look better than DVD quality, though; maybe on a scale of "DVD to Blu-ray" they'd be placed closer to the Blu-ray end, but for something "transparent" you'd need a much higher bitrate.

    Is there a particular CRF value at which even you'd agree x264 is pretty much transparent? I'd put it at around CRF 18, as seems to be the general consensus. For encodes where even the noise is encoded pretty accurately you'd need to go lower, and at CRF 18 there are occasional banding issues that might require some TLC and a lower CRF value..... that sort of thing..... but do you consider CRF to be bad at any value?
    Quote Quote  
  24. Originally Posted by sophisticles View Post
    Originally Posted by Detmek View Post
    About quality at higher bitrates. I didn't test that myself, but almost all movies that one can find on public or private torrent trackers are re-encoded using the x264 encoder. And those guys who know what they are doing are creating completely transparent re-encoded versions of Blu-Ray sources at half or a third of the original Blu-Ray bitrates.

    Of course they are re-encoded with x264, you don't think movie pirates are going to spend thousands on a high end encoder, do you?

    As for being "completely transparent" at 1/3 to 1/2 the original bit rate, that is a flat out lie. I have downloaded movies where the scene group claims the encode is transparent to the source, and in every case the results were substandard; the encodes are of good quality because the source Blu-Ray they used was of very high quality, not because of the encoder they used.
    What?! That does not make any sense. If you encode clean video with a crappy encoder, the output will be crap. But if you encode a clean video with a good encoder, it will look transparent.

    Here are a few screenshots from the movie Oblivion that I found on a local forum. It is a comparison of the source vs the EbP version.


    https://www.dropbox.com/sh/iobxsfo022is5gj/AABifcrPseu6qzTMIfXStvXfa?dl=0



    Originally Posted by sophisticles View Post
    Originally Posted by poisondeathray View Post
    Or how did you come to these "conclusions" ? Was it just "reasoning" or based on actual testing ?
    It's primarily based on reasoning.
    Well, this might be a problem, and the reason why we have such a long discussion and why some of your statements contradict what most people here claim.

    Your reasoning is just a starting point, a hypothesis that needs empirical verification. And you lack that, or at least you did not provide any evidence to us. According to my tests your conclusions are mostly wrong. I would like to see some of your tests, or at least how I can reproduce the results you got.

    Originally Posted by sophisticles View Post
    You do agree with my assessment that x264, by default, under-allocates bits to areas where it thinks you won't notice so that it can use them in areas that will be more noticeable. This is a well-established fact; I'm not claiming credit as the originator of this idea. It is well documented and is one of the hallmarks of x264 in general and CRF in particular.
    I agree with this. It is a known fact and it is a feature of the x264 encoder. MB-tree, q-comp and Adaptive Quantization work on that principle. Why is it a feature?

    The primary goal was to create an encoder that can encode at low bitrates and give you good quality. If you have limited bit resources you need to make compromises. One way to do that is to first take bits from areas where it will be the least noticeable. I don't see a problem with that.


    Originally Posted by sophisticles View Post
    There is also this other fact, and I really hesitate to bring it up because people will again accuse me of being a member of a different species that is also expired or they will accuse me of "having an axe to grind" or both. However, I do think many will find it interesting.

    When it comes to x264, it seems to me that it works despite the x264 developers, not because of them. At times it almost seems like they got lucky. I have searched through the doom9 forums, where the developers maintained ongoing threads about the new features of x264 and the progress of their development, and at times I have run across posts by the two main developers that leave me scratching my head; they are so bizarre that I am left to wonder if they were just goofing around or if someone had hacked their accounts and made those posts in their names.

    Look at this thread started by Dark Shikari, entitled "What in the world is the inloop deblocker doing?"

    http://forum.doom9.org/showthread.php?t=129071

    Initially I chuckled because I honestly thought it was a joke, an x264 developer asking why the inloop deblocking filter is behaving this way and what is going on. The more I read, the more I realized that this guy was serious, which really made me question the wisdom of relying on any software written by any of them.
    Those were early days for DS, when he was still learning about H.264 internals. I don't know why you are bringing it up now. It is irrelevant. Actually, it is completely irrelevant to talk about the developers and whether they achieved this through their knowledge or pure luck. The question is whether the encoder works as it should.


    Originally Posted by sophisticles View Post
    As a side note, am I the only one who would like to see a highly simplified encoder, one without any encoding parameters other than profile, level, and GOP length and composition, coded from the ground up with one goal in mind, namely the highest achievable quality at any bit rate?

    If this means that behind the scenes it uses psy optimizations, trellis, weighted frames, whatever, so be it, but without a potential 1000+ different combinations of settings.

    I long for someone to say this is the absolute best quality I can get with this source material and this is the absolute fastest I can get this encoder to run while achieving that quality and just be done with it.

    And for the record I'm talking about all encoders, vp9, x265, xvid, divx, x264, all of them.
    You want an encoder with Artificial Intelligence (AI) to make all the decisions instead of you? Sorry, you and all of us are out of luck for now and for the next few years/decades.

    Also, I would still want to see some evidence that CRF does a worse job compared to 2-pass encoding.
    Quote Quote  
  25. Originally Posted by Detmek View Post
    What?! That does not make any sense. If you encode clean video with a crappy encoder, the output will be crap. But if you encode a clean video with a good encoder, it will look transparent.
    I can conclusively prove you 100% wrong on this point, and quite easily at that. Are you familiar with the movie Tears of Steel? Download the mp4 version encoded with x264 and compare it with the webm version encoded with VP8; I defy anyone to find any differences between the two encodes. In fact the x264 version was encoded at a higher bit rate. Check it out for yourself:

    http://ftp.nluug.nl/pub/graphics/blender/demo/movies/ToS/

    Quality of video is influenced primarily by how it's shot: the quality of the camera, the quality of the lens, the settings used, lighting, film stock (if shot on film), scanning procedure (for film transfers), proper framing and grading, filters used, and the quality of the intermediate codec, often a lossless or high-quality mezzanine codec, usually ProRes or one of Avid's offerings.

    At this point we have the "master" which is used as source for creating the distribution format, that's where codecs like x264 come into play.

    If we have a crisp, clear master, it doesn't matter what codec you use, divx, xvid, vp8, x264, x265, whatever, you will get a clean encode.

    And I will prove it for you. I am downloading the 66GB y4m ToS 4K master used to create the above-linked webm and mov files. I will resize it down to 2K and encode it down to 500MB so I can upload the files to this forum, and I will use MPEG-2, VP8, VP9, x264+placebo, x265 and Xvid, and I guarantee that you will not be able to tell the difference between any of them.

    The two biggest factors in a high-quality encode are the quality of the master and the bit rate used. This is well known in production circles: garbage in, garbage out. Just because you use encoder "x" does not mean that your encode will somehow magically be improved.
    Quote Quote  
  26. Originally Posted by Detmek View Post
    I agree with this. It is a known fact and it is a feature of the x264 encoder. MB-tree, q-comp and Adaptive Quantization work on that principle. Why is it a feature?

    The primary goal was to create an encoder that can encode at low bitrates and give you good quality. If you have limited bit resources you need to make compromises. One way to do that is to first take bits from areas where it will be the least noticeable. I don't see a problem with that.
    MB-tree, q-comp and AQ act to counterbalance x264's tendency to under-allocate bits, which brings me back to my original statement: if x264 were not designed to do this, there would be no need for them, and since CRF mode is the biggest perpetrator of robbing Peter to pay Paul, not using CRF also reduces the need for AQ and MB-tree.

    As for why it's a problem, the reason is simple: while this method of improving quality may work for getting the best possible quality at low bit rates, when there is ample bit rate it starts to become detrimental to quality. Going back to x264's well-known banding issues: at low bit rates, where you're trying to maintain acceptable quality, the overall reduced quality means it's not really an issue, but once you start upping the bit rate and the overall quality improves, the banding issues start to stick out like a sore thumb.

    Similarly, psy-rd and psy-trellis, which are designed to keep detail at low bit rates, tend to cause artifacts in low bit rate scenarios, and AQ, because it takes bits away from edges to try to redistribute them, tends to cause artifacts on the border between a high-detail and a low-detail area. Similarly, the problems with fades and cross-fades arise because x264 under-allocates bits in those areas in order to put them into more static scenes.

    What x264 should have is a high bit rate encoding mode, tailored specifically for BD-type scenarios, where it doesn't under-allocate bits based on how fast a scene moves or how dark it is; a mode with no AQ, psy or MB-tree, where a different algorithm for the distribution of bits is employed.

    I would like to see an encoder that uses the following simple algorithm for encoding a frame:

    Check the PSNR and SSIM values of each source frame and encode each frame to a minimum PSNR and SSIM value, say 45 dB for PSNR and 20 dB for SSIM, and that's it; be done with it.

    This way you could be assured of a minimum level of quality.
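    A rough sketch of that PSNR-floor idea, purely as illustration (no shipping encoder works this way; a real implementation would have to re-encode each frame at successively lower QPs until the gate passed, and note x264 reports SSIM in dB as -10*log10(1 - SSIM), so "20 dB SSIM" means SSIM = 0.99):

```python
import math

def psnr(ref, enc, peak=255):
    # Peak signal-to-noise ratio between two equal-length pixel lists.
    mse = sum((a - b) ** 2 for a, b in zip(ref, enc)) / len(ref)
    return float("inf") if mse == 0 else 10 * math.log10(peak * peak / mse)

def meets_floor(ref, enc, floor_db=45.0):
    # The proposed quality gate: the encoded frame must clear the floor.
    return psnr(ref, enc) >= floor_db

ref = [100, 120, 130, 140]   # source pixels (toy 4-pixel "frame")
enc = [100, 121, 130, 139]   # mildly distorted encode
```

Here psnr(ref, enc) comes out around 51 dB, so this toy frame would pass a 45 dB floor.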
    Quote Quote  
    Then just find your QP (quantizer) and use it all the time, and disable what you want to disable... but a tool should be used as it was designed to be, because those values play like an orchestra (after some time tuning it, they are just found to click well), so for example what --keyint you choose, etc. Or you get rid of some instruments and it does not sound good...

    For this conversation some answers are needed. I'm not sure you get the point about playability and variability in encoding. So, for what player would that encoder of yours encode? BD players? Then you'd get some limits as well; bitrate is first in line, among others. I do not understand why you are not responding to those questions,

    because encoding is delivery for a device, not just what I want. For storage there is the original, or an intermediate.
    Quote Quote  
  28. Originally Posted by sophisticles View Post
    Originally Posted by Detmek View Post
    What?! That does not make any sense. If you encode clean video with a crappy encoder, the output will be crap. But if you encode a clean video with a good encoder, it will look transparent.
    I can conclusively prove you 100% wrong on this point, and quite easily at that. Are you familiar with the movie Tears of Steel? Download the mp4 version encoded with x264 and compare it with the webm version encoded with VP8; I defy anyone to find any differences between the two encodes. In fact the x264 version was encoded at a higher bit rate. Check it out for yourself:

    http://ftp.nluug.nl/pub/graphics/blender/demo/movies/ToS/

    Quality of video is influenced primarily by how it's shot: the quality of the camera, the quality of the lens, the settings used, lighting, film stock (if shot on film), scanning procedure (for film transfers), proper framing and grading, filters used, and the quality of the intermediate codec, often a lossless or high-quality mezzanine codec, usually ProRes or one of Avid's offerings.

    At this point we have the "master" which is used as source for creating the distribution format, that's where codecs like x264 come into play.

    If we have a crisp, clear master, it doesn't matter what codec you use, divx, xvid, vp8, x264, x265, whatever, you will get a clean encode.

    And I will prove it for you. I am downloading the 66GB y4m ToS 4K master used to create the above-linked webm and mov files. I will resize it down to 2K and encode it down to 500MB so I can upload the files to this forum, and I will use MPEG-2, VP8, VP9, x264+placebo, x265 and Xvid, and I guarantee that you will not be able to tell the difference between any of them.

    That's nice, but we're talking about lossy compression, compressing whatever the input is. You don't necessarily want to limit your testing scenarios to only pristine sources, because the conclusions you draw from those observations will only be applicable to pristine sources. In the real world, how often does that occur? If you want to learn how an encoder performs, you need to test a variety of different sources, different scenarios and setups.

    So you want to design the test so that you can see differences. If you arbitrarily pick 500MB, there might not be much difference for some sources; i.e. 1 data point obviously doesn't show much. So what you need to do is include more data points. Encode at 100, 200, 300, 400, 500, etc., or whatever bitrates or filesizes. There will be differences without a doubt.




    The two biggest factors in a high-quality encode are the quality of the master and the bit rate used. This is well known in production circles: garbage in, garbage out. Just because you use encoder "x" does not mean that your encode will somehow magically be improved.

    Yes, of course. But you are starting to confuse a few concepts. We are talking about lossy compression and compression efficiency. This thread started with a question about whether x264 is the best. You're stating obvious facts like "gravity exists". No, really?

    A technically good encoder doesn't care what the source quality is. It will try to reproduce the source, +/- some "fudge" with perceptual distortions.

    Source content matters. High motion and noise are very taxing for encoders. Recall what long-GOP encoding is: temporal compression. The differences are stored between frames. Therefore a clean, low-motion source will compress better than something noisy and shaky like a handheld home video, or even a pristine master of a feature with heavy grain. So if you take a low-motion, professionally shot, clean source with little to no grain, that should compress very well. The threshold where "transparency" (however you define that) is hit comes very early when examining your compression curves. So if you take 1 data point high up in bitrate relative to content complexity, everything will indeed look similar.

    With garbage in, a good encoder will reproduce that garbage better than a poor encoder; the latter will drop some details and add more garbage.

    Simply put, everything looks good at very high bitrates. The entire point of lossy compression is to reduce the file size. That's why we are discussing compression efficiency and lossy encoders.
  29. Originally Posted by sophisticles View Post

    MB-Tree, Q-comp and AQ act to counterbalance x264's tendency to under-allocate bits, which brings me back to my original statement: if x264 were not designed to do this, there would be no need for them, and since CRF mode is the biggest perpetrator of robbing Peter to pay Paul, not using CRF also reduces the need for AQ and MB-Tree.
    It's a nice theory, but it doesn't hold up in real testing. There is no evidence to back it up (and believe me, I have looked). Either way, x264 still needs AQ, without a doubt in 2-pass mode. Yes, there are minor differences when using CRF.


    As for why it's a problem, the reason is simple: while this method of improving quality may work for getting the best possible quality at low bit rates, when there is ample bit rate it starts to become detrimental to quality. Going back to x264's well known banding issues: at low bit rates, where you're trying to maintain acceptable quality, the overall reduced quality means it's not really noticeable, but once you start upping the bit rate and the overall quality improves, the banding issues start to stick out like a sore thumb.
    If what you say were true, you would expect your theory to be validated at high bitrates relative to content complexity, i.e. that there is a point at higher bitrates where the difference between CRF and 2-pass grows larger. That's not the case in real testing either.
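    A fair CRF-vs-2-pass comparison means matching the average bitrate. The sketch below builds the two x264 invocations being compared; `--crf`, `--pass`, `--bitrate`, `--stats` and `-o` are standard x264 CLI options, while the filenames and the 1500 kbps figure are placeholders for illustration:

    ```python
    # Sketch: comparing CRF against 2-pass at a matched average bitrate.
    # Run the CRF encode first, note the average bitrate it produced,
    # then feed that bitrate to the 2-pass encode of the same source.

    def crf_cmd(src: str, crf: float) -> list[str]:
        """Single-pass CRF encode of a hypothetical source file."""
        return ["x264", "--crf", str(crf), "-o", "crf.mkv", src]

    def twopass_cmds(src: str, bitrate_kbps: int) -> list[list[str]]:
        """First and second pass at a fixed target bitrate."""
        common = ["--bitrate", str(bitrate_kbps), "--stats", "x264.log"]
        return [
            ["x264", "--pass", "1", *common, "-o", "/dev/null", src],
            ["x264", "--pass", "2", *common, "-o", "2pass.mkv", src],
        ]

    # If the CRF encode lands at roughly 1500 kbps, re-encode 2-pass at
    # 1500 kbps and compare the two outputs frame by frame.
    cmds = twopass_cmds("source.y4m", 1500)
    ```

    With bitrates matched this way, any quality difference between the two outputs reflects the rate-control strategy rather than one encode simply getting more bits.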

    The banding issues are complex. There are different types and causes of "banding", and not all of them are necessarily attributable to x264 (or any encoder). Also, everyone talks about how bad the banding is here and there, but it's all relative: the "banding" can be just as poor or worse in other encoders.


    Similarly, psy-rd and psy-trellis, which are designed to keep detail at low bit rates, tend to cause artifacts in low bit rate scenarios, and AQ, because it takes bits away from edges to redistribute them, tends to cause artifacts on the border between high-detail and low-detail areas. Similarly, the problems with fades and cross-fades occur because x264 under-allocates bits in those areas in order to spend them on more static scenes.
    Agreed with the first, not necessarily so much with the second. Low bitrate encodes definitely suffer from higher psy-rd, psy-trellis and AQ values. But fades aren't necessarily that straightforward either; for example, you can have a fade with motion or a fade over static content.
  30. Originally Posted by sophisticles View Post
    MB-Tree, Q-comp and AQ act to counterbalance x264's tendency to under-allocate bits, which brings me back to my original statement: if x264 were not designed to do this, there would be no need for them, and since CRF mode is the biggest perpetrator of robbing Peter to pay Paul, not using CRF also reduces the need for AQ and MB-Tree.
    If you could compare two encodes, one a constant quantizer encode and the other a CRF encode, and both encodes resulted in the same average bitrate (obviously the CQ and CRF values would need to be different) with all other relevant settings identical, which would look better? Especially at low bitrates.

    If the answer is CQ then obviously CRF is the work of Satan, but I suspect the answer would be CRF, and therefore any complaints regarding the way CRF allocates bits would be somewhat negated, given its "robbing Peter to pay Paul" method does what it's supposed to do, even if it's not perfect. If CQ always looks as good as CRF at the same bitrate, I'll switch encoding methods. Otherwise.....

    since CRF mode is the biggest perpetrator of robbing Peter to pay Paul then not using CRF also reduces the need for AQ and MB-Tree
    Isn't CRF without AQ pretty much the definition of CQ? And isn't CQ by definition an MB-Tree-free zone? Maybe that's not quite right (I'd have to check), but I'm just trying to work out what constitutes the "non-CRF with a reduced need for AQ and MB-Tree" encoding method you're referring to.
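    For anyone wanting to test that question directly, the sketch below lays out the two settings combinations being compared. `--qp`, `--crf`, `--aq-mode` and `--no-mbtree` are real x264 options (and x264 itself disables MB-Tree in constant-quantizer mode); the filenames and the value 20 are placeholders:

    ```python
    # Constant-quantizer baseline: every frame type gets a fixed QP.
    cq_args = ["x264", "--qp", "20", "-o", "cq.mkv", "source.y4m"]

    # CRF with the adaptive tools switched off, for an apples-to-apples test:
    crf_flat_args = [
        "x264", "--crf", "20",
        "--aq-mode", "0",   # disable adaptive quantization
        "--no-mbtree",      # disable macroblock-tree rate control
        "-o", "crf_flat.mkv", "source.y4m",
    ]
    ```

    Even with AQ and MB-Tree off, CRF is not identical to CQ: CRF still scales the quantizer with motion and frame type, whereas `--qp` holds it fixed, so the two encodes will generally land at different bitrates for the same numeric value.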
    Last edited by hello_hello; 18th Jan 2015 at 14:34.