VideoHelp Forum
  1. I'm starting this thread because I don't want to hijack any other thread and to share some thoughts on doing a proper encoder test.

    Rule 1: No short clips! Video encoders have all sorts of analysis algorithms that look for redundancies. In the case of lossless encoders they look for redundancies that can be reversibly discarded, i.e. the data that was thrown out can be reconstructed; in the case of lossy encoders they look for redundancies that can be discarded with as little quality/detail loss as possible. A short clip doesn't allow these algorithms to be used to their fullest potential, and that can have an unfair impact on the test.

    Rule 2: No previously lossy-compressed clips! Every encoder is free to decide on its own what details to discard and what details to keep, within the parameters established by the settings used; the details one encoder decides to discard may not be the details a different encoder decides to discard. As such it's not fair to take a source compressed with encoder "A" and then run an encoder comparison where that same encoder is one of the participants, because that encoder is likely to make the same, or at least similar, decisions as when it compressed the source in the first place. This means encoder "A" will likely discard fewer additional details than the other competitors, making it look like encoder "A" is the superior encoder.

    So, with these 2 basic rules in mind, what do I consider a proper source for testing? A DVD? No! A Blu-ray? Nope! Something downloaded from insert-site-here? No way!

    The sad truth is that there are very few sources that I consider to be of sufficient quality for a proper test. Here are some of my favorites:

    http://media.xiph.org/tearsofsteel/tearsofsteel-4k.y4m.xz

    https://mango.blender.org/download/

    A few words about Tears of Steel: this is an excellent movie to have on your hard drive. It has close-ups, special effects, great lighting, and, more importantly, you can download the whole movie in y4m or DCP format. The live action footage was filmed with a Sony F65 CineAlta, and the source footage is 4 TB of OpenEXR half-float files. ToS was edited and produced with Blender, and the master was rendered as y4m at DCI 4K, cropped to CinemaScope; that version is the one linked above: a 66 GB .xz archive that expands to 185 GB.
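
    (For illustration: a minimal Python sketch that peeks at the Y4M stream header of the master without extracting the full 185 GB; the archive name/path is assumed to match the download linked above.)

    Code:
    # Read just the Y4M header out of the .xz archive, no full extraction needed.
    import lzma

    ARCHIVE = "tearsofsteel-4k.y4m.xz"  # assumed local path, as downloaded from media.xiph.org

    with lzma.open(ARCHIVE, "rb") as f:
        header = f.readline().decode("ascii").strip()  # "YUV4MPEG2 W... H... F... ..."

    params = {token[0]: token[1:] for token in header.split()[1:]}  # tag letter -> value
    print("width :", params.get("W"))
    print("height:", params.get("H"))
    print("fps   :", params.get("F"))          # rational, e.g. "24:1"
    print("chroma:", params.get("C", "420"))   # C420 is the Y4M default if absent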

    On my system (16 GB DDR3, a Xeon equivalent to the i7 4790, and a Radeon R7 265) it's impossible to play or edit this version smoothly if it's on a traditional rotating hard drive; only if I put it on my 960 GB SanDisk SSD, connected via a PCI-E card, can I play it smoothly or edit it in an NLE.

    Also, because it's uncompressed there is no decoding bottleneck, which helps show where bottlenecks exist in the video processing pipeline. For instance, with x264 at the medium preset and no resizing, this 4-core/8-thread CPU will not hit 100% utilization, though it will come close. If a resize down to 720p is performed, CPU utilization drops below 50%. Likewise with x265.
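
    (A rough Python sketch of that bottleneck test, assuming the x264 and ffmpeg command-line tools are on PATH and the y4m has been extracted: it times a straight medium-preset encode against one with a 720p downscale in front, and you watch CPU utilization with your OS tools while each runs.)

    Code:
    import subprocess, time

    SOURCE = "tearsofsteel-4k.y4m"  # assumed local path

    # 1) Full resolution: x264 reads the y4m directly.
    t0 = time.time()
    subprocess.run(["x264", "--preset", "medium", "-o", "full.264", SOURCE], check=True)
    print("no resize:", round(time.time() - t0, 1), "s")

    # 2) Downscale to 720p first, then encode: ffmpeg feeds x264 over a pipe.
    t0 = time.time()
    ff = subprocess.Popen(["ffmpeg", "-v", "error", "-i", SOURCE,
                           "-vf", "scale=-2:720", "-f", "yuv4mpegpipe", "-"],
                          stdout=subprocess.PIPE)
    subprocess.run(["x264", "--demuxer", "y4m", "--preset", "medium",
                    "-o", "720p.264", "-"], stdin=ff.stdout, check=True)
    ff.stdout.close()
    ff.wait()
    print("720p resize:", round(time.time() - t0, 1), "s")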

    There is one other source that ranks right up there with ToS, and that's Valkaama. What makes it a must-have is that, like ToS, they released all their sources: they shot the movie with HDV cameras and provide a torrent that downloads every scene in unprocessed condition. Each scene was filmed with multiple cameras for multiple angles, and not all scenes or angles made it into the movie, meaning you could take the source footage and produce several different movies from what they shot.

    Though the footage isn't as high quality as ToS, it's unprocessed and untainted: no post-production filtering, nothing, just the footage used in a movie before it's been tampered with.

    If anyone else knows of sources like these, that are at mastering quality or above, please share in this thread.
  2. This is true but not practical for a regular user. Back in the day I would use platformer game recordings as 'pure' sources since they were cost effective, but they are so far removed from what typical content looks like that they're useless. Other options are 1080p short movies encoded at high bitrates. Once you halve the resolution, the tiniest compression artifacts that might've existed are pretty much gone.
  3. Testing very high quality sources is only a subset of scenarios. Yes for sure it's important to do, but if that's all you do, you will only come to a rudimentary understanding of how an encoder performs. You need to test a variety of scenarios and sources.

    You can construct a test however you like, as long as details are provided and you can justify your choices and rationale. You just have to be transparent about the testing methodology, state your assumptions and what you're actually trying to test, and be careful about how you extrapolate from those tests. For example, you wouldn't necessarily be able to apply observations and interpretations from a super high quality, professionally shot and lit source to, say, a noisy, shaky consumer handycam video. If I'm a VOD company with broadcast sources (these are roughly worse than retail BD quality) asking about an encoder comparison, your test observations might not be completely applicable, etc.
  4. Originally Posted by sophisticles View Post
    share some thoughts on doing a proper encoder test.
    Why? Beauty (of an encode) is in the eye of the beholder.
  5. I don't think either of your rules are valid for real world testing.

    Rule 1: You're saying that if I encode 10,000 frames of a clip using x264 2 pass encoding, split off the first 1000 frames and determine the average bitrate used for those, then encode the first 1000 frames again while specifying that average bitrate, the second time they'll be encoded quite differently? If that was the case, would it be unreasonable to say something's wrong?
    What if I use CRF encoding? If I encode 10,000 frames at CRF18, will the first 1000 frames be encoded differently than if I'd only encoded them on their own? Rule 1 possibly applies to using average bitrate encoding, but that's about it, and if you're going to use average bitrate encoding, which I never have, you'd want to be throwing enough bits at the video to ensure a high quality, even if a large proportion of them are effectively wasted.

    Rule 2: Why would you want to test encoders using only the cleanest sources, and why wouldn't you test them using the same sources you're going to re-encode in the real world? Most of us re-encode lossy sources. I don't care too much how an encoder might re-encode a lossless source direct from a camera, whereas I care a lot more about how an encoder will encode my TV captures, DVD and Blu-ray video, etc.
    A source is a source. The encoder is simply fed uncompressed video. It shouldn't need to know it was uncompressed from a lossless source to encode it well.
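
    (A rough Python sketch of the Rule 1 experiment described above, assuming x264 and ffprobe are on PATH; "clip.y4m" is a placeholder for a source of at least 10,000 frames. It encodes the whole clip and just its first 1,000 frames at the same CRF, then compares how many bytes the first 1,000 coded pictures received in each case.)

    Code:
    import subprocess

    SOURCE = "clip.y4m"  # hypothetical uncompressed source

    def encode(out_name, frames=None):
        cmd = ["x264", "--preset", "medium", "--crf", "18"]
        if frames is not None:
            cmd += ["--frames", str(frames)]    # limit how many frames get encoded
        cmd += ["-o", out_name, SOURCE]
        subprocess.run(cmd, check=True)

    def first_n_packet_bytes(path, n):
        # Packet sizes in decode order; close enough for a size comparison.
        sizes = subprocess.run(["ffprobe", "-v", "error", "-show_entries",
                                "packet=size", "-of", "csv=p=0", path],
                               capture_output=True, text=True, check=True).stdout.split()
        return sum(int(s) for s in sizes[:n])

    encode("full.264")                    # all frames
    encode("first1000.264", frames=1000)  # first 1,000 frames only

    print("first 1000 frames, full encode :", first_n_packet_bytes("full.264", 1000), "bytes")
    print("first 1000 frames, on their own:", first_n_packet_bytes("first1000.264", 1000), "bytes")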
  6. Content selection is important - the mentioned sources are CG and as such may give biased results. Various sequences should be used, preferably lossless 4:4:4 and sufficiently oversampled.
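
    (One way to prepare such a clip, sketched in Python, assuming ffmpeg with FFV1 support is on PATH; the source path and target size are placeholders: downscale an oversampled master and store it losslessly as 4:4:4.)

    Code:
    import subprocess

    # Oversampled 4K master in, lossless 4:4:4 FFV1 test clip out.
    subprocess.run(["ffmpeg", "-v", "error", "-i", "tearsofsteel-4k.y4m",
                    "-vf", "scale=1920:-2:flags=lanczos",
                    "-pix_fmt", "yuv444p", "-c:v", "ffv1",
                    "tos-test-444-lossless.mkv"], check=True)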
  7. Originally Posted by poisondeathray View Post
    Testing very high quality sources is only a subset of scenarios.
    This. And I would add a very small subset as well that I would never extrapolate to real world scenarios. For those of us who shoot our own footage, how about testing the codec on the source footage, or better yet, the final intermediate just prior to rendering? As much as I wish I had a RED and that all my shots were perfectly lit, the reality is much different. Also, I find there are some things in my footage that no encoder can handle well (and I see it in professional broadcasts as well, so I know it is not just me). Therefore I account for that in my post-processing. So it is not just the codec, it is the entire workflow from acquisition to post to encode.
  8. Originally Posted by hello_hello View Post
    I don't think either of your rules are valid for real world testing.

    Rule 1: You're saying that if I encode 10,000 frames of a clip using x264 2 pass encoding, split off the first 1000 frames and determine the average bitrate used for those, then encode the first 1000 frames again while specifying that average bitrate, the second time they'll be encoded quite differently? If that was the case, would it be unreasonable to say something's wrong?
    What if I use CRF encoding? If I encode 10,000 frames at CRF18, will the first 1000 frames be encoded differently than if I'd only encoded them on their own? Rule 1 possibly applies to using average bitrate encoding, but that's about it, and if you're going to use average bitrate encoding, which I never have, you'd want to be throwing enough bits at the video to ensure a high quality, even if a large proportion of them are effectively wasted.

    Rule 2: Why would you want to test encoders using only the cleanest sources, and why wouldn't you test them using the same sources you're going to re-encode in the real world? Most of us re-encode lossy sources. I don't care too much how an encoder might re-encode a lossless source direct from a camera, whereas I care a lot more about how an encoder will encode my TV captures, DVD and Blu-ray video, etc.
    A source is a source. The encoder is simply fed uncompressed video. It shouldn't need to know it was uncompressed from a lossless source to encode it well.
    With regards to the first part, it depends on whether or not you have specified the non-deterministic option in x264, but what I was actually saying is that if you take an unprocessed source and encode it, you are throwing away certain information. It's unfair to then take the encoded output and ask a different encoder to process it again, because it will make different decisions about what to throw away. Example: you take a source encoded with x264, then encode it with MainConcept and with x264 (again), and conclude x264 is better; of course it's going to be better the second time around, because it's using the same algorithms to decide what to keep and what to throw away. Likewise, if you take an uncompressed source, encode it with MainConcept, and use that as the source for an encoder comparison, you have given MainConcept an unfair advantage, since both times that encoder will make similar decisions about what to throw away and what to keep.

    The only fair way to test is if the source is either uncompressed or encoded with a non-competing encoder.
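
    (A minimal Python sketch of the kind of level playing field argued for here: every competitor is fed the same reference and scored against that reference, not against another encoder's output. It assumes ffmpeg is on PATH; libx264/libx265 and CRF 20 are just stand-ins for whichever encoders and settings are actually being compared.)

    Code:
    import subprocess

    REFERENCE = "reference.y4m"  # uncompressed, or made with a non-competing encoder
    CANDIDATES = {"x264": "libx264", "x265": "libx265"}  # placeholder competitors

    for name, codec in CANDIDATES.items():
        out = name + ".mkv"
        subprocess.run(["ffmpeg", "-v", "error", "-y", "-i", REFERENCE,
                        "-c:v", codec, "-crf", "20", out], check=True)
        # Score each encode against the shared reference (SSIM is printed by ffmpeg).
        subprocess.run(["ffmpeg", "-i", out, "-i", REFERENCE,
                        "-lavfi", "ssim", "-f", "null", "-"], check=True)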
  9. Originally Posted by sophisticles View Post
    With regards to the first part, it depends on whether or not you have specified the non-deterministic option in x264....
    Are you saying you do?
    According to what I've read, it makes very little difference anyway.
    http://forum.doom9.org/showthread.php?p=1241291#post1241291

    Originally Posted by sophisticles View Post
    ....but what I was actually saying is that if you take an unprocessed source and encode it, you are throwing away certain information. It's unfair to then take the encoded output and ask a different encoder to process it again, because it will make different decisions about what to throw away. Example: you take a source encoded with x264, then encode it with MainConcept and with x264 (again), and conclude x264 is better; of course it's going to be better the second time around, because it's using the same algorithms to decide what to keep and what to throw away. Likewise, if you take an uncompressed source, encode it with MainConcept, and use that as the source for an encoder comparison, you have given MainConcept an unfair advantage, since both times that encoder will make similar decisions about what to throw away and what to keep.

    The only fair way to test is if the source is either uncompressed or encoded with a non-competing encoder.
    As you say, one way around it might be to use a source encoded by an encoder not taking part in the comparison - MPEG-2, for example. Or filter a source and save it as uncompressed. I often noise-filter film with QTGMC in progressive mode, or with SMDegrain or MCTemporalDenoiseMod, often crop and/or resize, there's sometimes a little sharpening involved, and I always add dithering at the end of the script. Some of those filters will also add back grain, or you can simulate it. Generally the idea is for the re-encoded version to look better than the source, or at least as good, and even though beauty is in the eye of the beholder, the result is probably far enough removed from the source that it wouldn't matter how it was encoded originally, as long as it was high bitrate.
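
    (A very rough VapourSynth (Python) sketch of that kind of clean-up chain - denoise, crop/resize, dither - just to show the shape of such a script. It assumes VapourSynth with the ffms2 source plugin and the havsfunc collection installed; the filter choices and parameters are placeholders, not the exact AviSynth settings described above.)

    Code:
    import vapoursynth as vs
    import havsfunc as haf

    core = vs.core
    clip = core.ffms2.Source("capture.avi")               # hypothetical source file

    clip = haf.QTGMC(clip, Preset="Slower", InputType=1)  # InputType=1 = progressive clean-up mode
    clip = core.std.Crop(clip, left=8, right=8)           # example crop
    clip = core.resize.Spline36(clip, 960, 540,           # example resize, dithered on output
                                format=vs.YUV420P8,
                                dither_type="error_diffusion")
    clip.set_output()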

    On one end of the scale I might have a noisy old VHS capture I need to encode, while on the other end there's a pristine, never before compressed source. The average video I encode probably sits somewhere in the middle, but if you want some lossless sources I can capture a few VHS tapes for you.


