VideoHelp Forum

  1. I'm working with MPEG-2 sources at the moment (DVD MPEG-2 VOB video streams, more specifically). One thing I know about MPEG-2 is that it's a lossy format, so I generally avoid it if at all possible. It seems rather apparent that the same codecs used for encoding are also used for decoding, which makes things a bit easier to understand. However, I have questions about decoding in this particular case.

    1. I'm already aware of some of the lossless codec options for video and audio streams; however, in the case of MPEG-2, there isn't a lot of information on what can inherently be used with the format. From what I understand at this moment, in order to encode into a specific format, the original stream has to be decoded first. If one uses an MPEG-2 codec as the encoder for a video stream, the end result is obviously always lossy, but this begs the question: if the same codec used for encoding is also used for decoding, does that mean decoding is not lossless in a case like this? Part of what brought this on was a particularly strange situation with DGIndex's iDCT algorithms. Using the different algorithm options yielded varying results when performing frame image comparisons; slower options obviously yielded more accuracy, but whether the result is lossless is hard to say. I can't really tell if the decoder in the video editing software plus DGIndex's algorithms together are causing damage of sorts, or if it's just one or the other. Either way, it's very apparent when comparing frame images between the algorithms, let alone when exporting a frame directly from the video editor without DGIndex's intervention: information is being destroyed by something.

    2. If it turns out that decoding is lossy for MPEG-2 sources, much like its encoding counterpart, is it possible for a codec to exist that can at least decode MPEG-2 sources losslessly? Theoretically, this would make it possible to convert into another lossless format of choice without having to worry about quality loss, thanks to the lossless decode. At face value it seems easy to assume that decoding is lossless, but if that were the case, why can't the same algorithm that decodes losslessly be repurposed to encode losslessly in particular cases like these?

    I know these questions kind of oversimplify the idea of decoding and encoding; this might seem excessive, and maybe I just sound nuts. I'm not trying to underestimate the complexity of these algorithms, let alone confuse anyone with these questions. I'm just concerned with how I'm handling my material, and whether there's a better path for preserving quality as much as possible, losslessly if I can help it. Speed and file size aren't a concern to me, so I try to avoid lossy methods if at all possible. I would rather only resort to lossy approaches under specific circumstances where that's the only thing that works, and in such cases I would use my lossless encodes as the direct sources. I'm fully aware that any editing I do to video streams could be classified as its own sort of "damage" if one wants to get technical, but to me that's a controlled situation and not equivalent to the questions asked. If something, if not everything, I've asked or said is wrong, please correct me so I can actually learn and understand more appropriately. Hopefully this provides some fruitful insight; my research so far has led mostly to dead ends on finding exact answers for things like this.

  2. Video Restorer lordsmurf
    Originally Posted by DeadSkullzJr View Post
    One thing I know about MPEG-2 is that it's a lossy format
    This is actually not true. It depends entirely on several factors.

    - If you refer to DVD-Video, then yes, lossy.
    - But broadcast use? Not so fast!

  3. But broadcast use? Not so fast!
    @lordsmurf: How come? Last I checked, there is no truly lossless mode in MPEG-2, thus it's a lossy format.


    @ DeadSkullzJr: Maybe this helps a bit:
    MPEG-2 decoder compliance: The MPEG-2 standard specifies certain requirements for decoders, but it allows for some flexibility in how certain operations, such as the iDCT, are implemented. The standard defines an acceptable error tolerance for iDCT implementations, meaning that as long as an iDCT algorithm stays within this tolerance, it can be considered MPEG-2 compliant.
    So yes, this means not all standard-conforming MPEG-2 decoders will produce exactly the same output, but this is by design.

    Side note about IEEE-1180 compliance: The IEEE-1180 standard provides stricter guidelines for the accuracy of the iDCT implementation. An iDCT that is IEEE-1180 compliant will meet higher precision requirements than those mandated by the MPEG-2 standard. Therefore, while IEEE-1180 compliance ensures high accuracy, it is not a requirement for MPEG-2 compliance.

    it's very apparent when comparing frame images between the algorithms, let alone exporting a frame directly from the video editor without DGIndex's intervention, information is being destroyed by something.
    This should not be due to the iDCT; those differences are minuscule and should not really be visible.
    Not sure what you mean by DGIndex's intervention.


    Cu Selur

  4. Captures & Restoration lollo
    Originally Posted by Selur View Post
    But broadcast use? Not so fast!
    @lordsmurf: How come? Last I checked, there is no truly lossless mode in MPEG-2, thus it's a lossy format.
    He’s joking.

  5. Ah... okay.

  6. Member Cornucopia
    A few points of clarification to the OP.

    1. Codecs are compressors/encoders and decompressors/decoders. They are reciprocal in their purpose, but they are not a "reversal" of each other. They are considered paired, but they each have unique functions. Note that compression & encoding are similar, but mean slightly different processes (reduction in bitrate/bandwidth/size, and assigning a code, respectively).

    2. Lossy codecs, as a universal rule, lose their info/quality during the encoding/compression stage. And once lost, it is never fully recoverable. Decoders/decompressors return the clip's code faithfully back to what quality/info remains in the code, so in that sense, their operation IS lossless.

    3. Mpeg2 doesn't have a defined mode that explicitly provides lossless functionality, but it can be equivalently lossless in quality at and above certain bitrates (which are still much below that of its uncompressed source). For example, at ~25 Mbps, SD Mpeg2 is virtually lossless.

    4. Many codec families, and the MPEG family in particular, work to provide "reference/archetypical" bitstreams and "reference/archetypical" decoding models. Meaning, they code their results so that a reference/model decoder would provide a given base output. But there is no restriction on how a decoder algorithmically arrives at that solution. This allows different groups to create their own form of the decoder, all of which should provide the intended output, but some might do so more quickly, efficiently, reliably, etc.


    Hope that helps,


    Scott

  7. Originally Posted by DeadSkullzJr View Post
    it's very apparent when comparing frame images between the algorithms, let alone exporting a frame directly from the video editor without DGIndex's intervention, information is being destroyed by something.
    "very apparent" as in normal viewing, zoomed in or using difference amplification ?

    For the video editor, if differences are "very apparent", it might be that other processes are being applied, such as deinterlacing, or mismatched settings causing the editor to apply other things such as resampling. If "exporting a frame" means an image format like PNG, then there can be differences between YUV-to-RGB conversion algorithms, and chroma upsampling differences.
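
    For example (hypothetical filenames, and assuming a libavcodec-based ffmpeg build), you can export the same frame both ways and see how much of the difference comes from the RGB conversion step alone rather than from the MPEG-2 decode itself:

    Code:
    # RGB path: writing a PNG forces a YUV->RGB conversion plus chroma upsampling
    ffmpeg -i VTS_01_1.VOB -vf "select=eq(n\,500)" -frames:v 1 frame_rgb.png
    # Native YUV path: stays 4:2:0, no RGB conversion involved
    ffmpeg -i VTS_01_1.VOB -vf "select=eq(n\,500)" -frames:v 1 -pix_fmt yuv420p frame_yuv.y4m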

    I've seen minor differences between the algorithms, but nothing I would call "very apparent". Do you have an example that demonstrates "very apparent information destruction"?

  8. Capturing Memories dellsam34
    I've always seen MPEG-2 compression as horrible, having been there when the transition from analog satellite or off-air to digital happened. It's not 100% MPEG-2's fault; broadcasters abused the codec. They would cram a dozen SD channels and a couple or more HD channels into one frequency that used to carry a single analog SD channel. Of course digital was a necessary evil, and it actually improved things a lot, considering the moves from VHS to DVD and then Blu-ray, and from SD analog TV to HD digital such as ATSC and HD cable/satellite.

    I have never been a big fan of MPEG-2 SD, particularly DVD, but the OP's concern is very important, as we have already seen countless threads about improving DVD quality. Just like with VHS, a lot of people amassed collections of DVD material, from titles that never made it to other formats or home videos that were converted to DVDs while the tapes were thrown away. I would really like to see a nice write-up on how to go about stripping MPEG-2 compression and improving its quality a bit, including deinterlacing and resizing to HD, and then encoding to a more efficient lossless or lossy modern codec.

  9. Originally Posted by Cornucopia View Post
    2. Lossy codecs, as a universal rule, lose their info/quality during the encoding/compression stage. And once lost, it is never fully recoverable. Decoders/decompressors return the clip's code faithfully back to what quality/info remains in the code, so in that sense, their operation IS lossless.
    [...]
    So basically, if my understanding is right, decoding is lossless, and encoding is where things can be either lossy or lossless depending on the format and algorithm used, among other factors (MPEG-2, in this case, always being lossy when encoding). With this in mind, there is no such thing as encoding losslessly with MPEG-2, but we can come close to perceptual losslessness by using higher bitrates. For about four or so years now in my ventures in videography, I assumed this was the case from prior research, but after dealing with my DVD backups recently, toying with DGIndex, and having decoding brought to my attention, I second-guessed myself, and thus were born the questions seen here (not to mention other factors that didn't want to cooperate at all). I'm extremely grateful for the clarifications, thank you! If you wouldn't mind reading on in this post, I actually have frame images demonstrating differences between the iDCT algorithms present in DGIndex, along with comparison images between the original source and the bundled algorithms.

    Originally Posted by poisondeathray View Post
    Do you have an example that demonstrates "very apparent information destruction"?
    Sure do! Now, with the naked eye you won't notice every blemish or artifact when viewing the end results; on occasion, however, because I know the quality of the source so well, I end up spotting things out of place here and there that I know shouldn't be there when testing the video results. To set the record straight, I DO NOT encode into MPEG-2 at all; as stated in the main post, I try to encode losslessly if I can help it. In this particular case, I encode into H.264 with lossless settings using FFmpeg. Based on Cornucopia's response, decoding is handled losslessly (assuming I understood correctly), which implies that my issue really comes down to DGIndex now. I put together a collection of frame images, all of the same frame from the film, each differing only in which iDCT algorithm was used in DGIndex. The frame chosen isn't one where differences will be noticeable with the naked eye, but it was chosen because it's a progressive frame; technically, when using DGIndex to remove pulldown, the original progressive frames aren't supposed to be touched at all. These differences occur on every frame, no matter the MPEG-2 source used.
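
    For reference, the lossless H.264 step with FFmpeg looks roughly like this (a simplified sketch with a hypothetical filename; my actual filtering and audio handling are omitted):

    Code:
    # x264 in lossless mode: -qp 0 means no quantization at all, so its decode is bit-identical to the encoder input
    ffmpeg -i input.vob -c:v libx264 -qp 0 -preset veryslow -an lossless.mkv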

    Source [Attachment 84342]
    DGIndex: 32-bit MMX [Attachment 84343]
    DGIndex: 32-bit SSE MMX [Attachment 84344]
    DGIndex: 32-bit SSE2 MMX [Attachment 84345]
    DGIndex: 64-bit Floating Point [Attachment 84346]
    DGIndex: IEEE-1180 Reference [Attachment 84347]
    DGIndex: Simple MMX [Attachment 84348]
    DGIndex: Skal SSE MMX [Attachment 84349]

    Difference maps (gray means no differences, red means differences):
    Source vs. DGIndex: 32-bit MMX [Attachment 84350]
    Source vs. DGIndex: 32-bit SSE MMX [Attachment 84351]
    Source vs. DGIndex: 32-bit SSE2 MMX [Attachment 84352]
    Source vs. DGIndex: 64-bit Floating Point [Attachment 84353]
    Source vs. DGIndex: IEEE-1180 Reference [Attachment 84354]
    Source vs. DGIndex: Simple MMX [Attachment 84355]
    Source vs. DGIndex: Skal SSE MMX [Attachment 84356]

  10. Originally Posted by DeadSkullzJr View Post
    In this particular case, I encode into H.264 with lossless settings using FFmpeg. Based on Cornucopia's response, decoding is handled losslessly (assuming I understood correctly), which implies that my issue really comes down to DGIndex now.
    Some issues with that comparison: the "source". How did you get the "source" to compare? And how exactly are you comparing to generate the "red" differences?

    E.g. if that was obtained with ffmpeg, then you're using ffmpeg's mpeg2 decoder and whatever idct algorithm it uses. That could be an additional source of "error" in the source.

    Problem #2a - are you converting to RGB and then comparing in RGB? Because that is technically lossy.

    Problem #2b - You could be measuring differences in upsampling and RGB conversion algorithms.


    These differences occur on every frame no matter the MPEG-2 source utilized.
    Yes, but these are the types of expected idct differences that Selur explained above.

    But the rough idea is what I was referring to earlier. The mpeg2 decoder algorithm differences are very minor. They are essentially rounding differences. E.g. instead of a pixel being YUV (138,97,100), it might be rounded to YUV (138,97,101). The actual Cr value in this hypothetical 101 might have been 100.35, but it gets rounded differently due to precision - and that value is lossy either way. It might have actually been 99 in the original, pre-MPEG-2 source.

    In theory, IEEE-1180 Reference decoding for mpeg2 is supposed to be the highest quality. One way you could test that is to take a source (A), encode it to mpeg2 (B), run B through the different idct decoders, and measure against the source (A) with whatever measurement you like (metrics such as PSNR, SSIM, VMAF, subjective assessment, etc.).
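
    With ffmpeg, that test looks roughly like this (hypothetical filenames; a sketch of the idea rather than a rigorous benchmark):

    Code:
    # Encode a known source (A) to MPEG-2 (B)
    ffmpeg -i source_A.mkv -c:v mpeg2video -b:v 8M mpeg2_B.m2v
    # Decode B with the decoder/idct under test and measure it against A
    ffmpeg -i mpeg2_B.m2v -i source_A.mkv -lavfi psnr -f null -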


    Originally Posted by DeadSkullzJr View Post

    Sure do! Now with the naked eye you won't notice every blemish or artifact when viewing the end results, on occasions however, because I know the quality of the source so well, I end up spotting some things out of place here and there that I know shouldn't be there or happening when testing the video results out.

    Can you post an example where you are "spotting some things out of place here and there that I know shouldn't be there or happening"? Because that's not due to MPEG-2 decoder idct differences; it's going to be from something else.

    I.e. can you point out a frame with a material difference that affects visible quality - something you can see that's noticeably out of place or definitely worse? Because right now you could argue some of the pixels in the "source" are "worse" - both subjectively and objectively - because it's not the real YUV source, but a lossy RGB-converted representation of the source.

    E.g. if I did an unlabelled blind test, could you pick out which of those screenshots are "worse", and explain why?

  11. Video Restorer lordsmurf
    Originally Posted by lollo View Post
    He’s joking.
    I am not.

    Originally Posted by Selur View Post
    @lordsmurf: How come? Last I checked, there is no truly lossless mode in MPEG-2, thus it's a lossy format.
    Correct, there is no labeled "lossless" mode. But it's just a label.

    There is also no labeled "lossless" mode for ProRes422 either, but you'll have a hard time making that argument. The key here is "visually lossless", not "mathematically lossless". When it comes to video, nobody (99%+ of users) cares about math. It's about visual loss.

    At a high enough bitrate (25-50 Mbps), with low/no GOP, 4:2:2, etc., you can easily make the case that MPEG-2 is visually lossless. In the 90s-00s, quite a few post houses used MPEG-2 editing specs.
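
    As a rough sketch of that kind of spec with today's tools (not the Sony/MXF toolchain those houses actually used, and ffmpeg's MPEG-2 encoder is not the strongest one around), an intra-only 4:2:2 high-bitrate encode would look something like:

    Code:
    # Intra-only (-g 1), 4:2:2, ~50 Mbps MPEG-2 - the "visually lossless" mezzanine style
    ffmpeg -i master.mov -c:v mpeg2video -g 1 -pix_fmt yuv422p -b:v 50M intermediate.m2v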

    The MXF wrapper was largely created for these files, using Sony's low/no GOP MPEG-2. MXF likely would have had much greater adoption at the consumer end if Panasonic hadn't f'd it all up by creating problems with MXF workflows. I often used MXF files during my studio days, where a lot of what was accepted was MPEG-2 MXF before expanding back out to the fuller MXF DNxHD/ProRes projects. I hated the MXF MPEG, because it often came with multiplexed metadata (which could confuse the NLE).

    A lot of this has to do with semantics, and the fact that "lossless" was really not a valued term at the time. Hence no label.

    It's also been pointed out before how lossless formats like Huffyuv, Lagarith, etc. are not 100% perfectly mathematically lossless either. For an easy example, Lagarith can have byte encode errors, which Avisynth trips over. Lossless vs. uncompressed is often achieved by rounding, which creates rounding errors. Those are usually tiny and undetectable, but must be pointed out to lossless purists (anal retentives).

    Originally Posted by Cornucopia View Post
    3. Mpeg2 doesn't have a defined mode that explicitly provides lossless functionality, but it can be equivalently lossless in quality at and above certain bitrates (which are still much below that of its uncompressed source). For example, at ~25Mbps SD Mpeg2 is virtually lossless.
    4. Many codec families, and mpeg's in particular, work to provide "reference/archetypical" bitstreams and "reference/archetypical" decoding models. Meaning, they code their results so that a reference/model decoder would provide a given base output. But there is no restriction on how it algorithmically arrives at that solution. This allows for different groups to uniquely create their own form of the decoder, which all should provide the intended output. But some might do that more quickly, or efficiently, reliably, etc.
    Hope that helps,
    Scott
    ^ This. You worked in the field, you know what's what.

  12. Captures & Restoration lollo
    At a high enough bitrate (25-50), with low/no GOP, 4:2:2, etc, you can easily make the case that MPEG is visually lossless. In the 90s-00s, quite a few post houses used MPEG-2 editing specs.
    You were talking about broadcast. No DVB-S or DVB-T channels were broadcast at that bitrate; many at just 1/10th of it.

    Definitely, you were joking.

    I've always seen MPEG-2 compression as horrible being there when the transitioning from analog satellite or off air to digital happened, not that it's 100% MPEG-2's fault but broadcasters abused the codec
    ^ This. You worked in the field, you know what's what.

  13. Video Restorer lordsmurf
    Originally Posted by lollo View Post
    You were talking about broadcast. No DVB-S or DVB-T channels
    Satellite transponders were notorious for overstuffing, crippling bitrates, and shrunk resolutions. DVB-S (and DSS) really isn't broadcast, but rather satellite delivery. It has more in common with internet streaming than terrestrial broadcast.

    I know more about ATSC, less about DVB-T.

    I've not messed with any of that stuff since the 2000s, and don't care to. (Although, if the right job came along, I'd change my mind. But I doubt that'll happen.)

    You worked in the field, you know what's what
    dellsam34 isn't from this field.

  14. Originally Posted by DeadSkullzJr View Post
    So basically, if my understanding is right, decoding is lossless,
    Not necessarily true between MPEG-2 decoder implementations. That is probably where the confusion comes from.

    Decoding - To clarify, MPEG-2 is different from other modern codecs in that the output is not necessarily the same between decoder implementations. I.e. it is not necessarily bit identical between them, therefore not necessarily lossless when the decoders are compared to each other. The decoded output is mathematically lossless with respect to that specific decoder implementation only.

    Different MPEG-2 decoder implementations (e.g. the Sony MPEG-2 decoder, vs. the ffmpeg/libavcodec MPEG-2 decoder, vs. the MainConcept MPEG-2 decoder, etc.) can yield slightly higher or lower quality when decoding that video, compared to the original master source (the source the DVD was made from). The differences are measurable, but mostly negligible.

    Conversely, other codecs like AVC/H.264 stipulate in their specifications that the decoded output has to be bit identical. So you should get mathematically lossless output from ANY spec-compliant decoder for "normal" streams (excluding edge cases, broken streams, etc.).
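
    One way to see that in practice (hypothetical filename): ffmpeg's framemd5 muxer writes a per-frame MD5 of the decoded output, and for AVC any spec-compliant decoder must reproduce exactly the same hashes. Run the same check on an MPEG-2 stream through two different decoders and the hashes will generally differ slightly, which is the point above.

    Code:
    # Per-frame MD5 of the decoded video; for AVC, all compliant decoders must match these hashes
    ffmpeg -i clip.h264 -an -f framemd5 clip_decode.framemd5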


    Originally Posted by lordsmurf View Post

    It's also been pointed out before how lossless formats like Huffyuv, Lagarith, etc. are not 100% perfectly mathematically lossless either. For an easy example, Lagarith can have byte encode errors, which Avisynth trips over. Lossless vs. uncompressed is often achieved by rounding, which creates rounding errors. Those are usually tiny and undetectable, but must be pointed out to lossless purists (anal retentives).
    They should be 100% mathematically lossless when handled properly. Lossless compression does not have rounding errors - that's the whole point of using lossless compression.

    Uncompressed can have errors too when not handled properly, or when there is a write error. E.g. you could make the claim that zip compression isn't lossless if a hard drive crashes. Byte errors can occur with data transfers from CPU<=>memory. These types of issues are not from the lossless compression, but from external factors.

    Purists will use FFV1's CRC checksums for video. Same idea for file compression like zip.
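
    For reference (a sketch with hypothetical filenames), FFV1 with per-slice CRCs looks like this; any corruption introduced after encoding then becomes detectable at decode time:

    Code:
    # FFV1 version 3 with per-slice CRCs stored in the bitstream
    ffmpeg -i capture.avi -c:v ffv1 -level 3 -slicecrc 1 -c:a copy archive.mkv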

  15. Originally Posted by poisondeathray View Post
    The decoded output is mathematically lossless with respect to that specific decoder implementation only.
    [...]
    Then what should I use to decode MPEG-2 streams in this case? Is DGIndex's approach, when needed, even viable or good, or as good as it could possibly get, or are there better approaches, algorithms, etc.? VirtualDub2 and FFmpeg seem to use the same decoder(s): when I exported frames from a few MPEG-2 sources they matched identically, implying the same decoding algorithm is being used, unless multiple exist that are capable of equivalent decoding accuracy. Is there any chance for a lossless decoder to exist? Is it even possible for one to exist? I have explored my options; more recently I checked all of FFmpeg's current decoders in the recent updates, and the only one that works for MPEG-2 streams is mpeg2video, which I half expected. I figured that, with how fleshed out FFmpeg is, maybe there were hidden options related to this that could be used. I don't think there is any real way for me to test, considering the format has no real guarantees, and even if I encode into a lossless format, the damage could potentially already be done just by decoding. This irks me because I feel like I'm not pushing the absolute best for my projects; I don't like settling for less if it can be helped. I tend to take my passion and work seriously, so when something doesn't go the way I want or as planned, I tend to really dig, no matter how microscopic or complicated the details. My understanding is improving, I think, but there's definitely still some confusion in the mix, and definitely some mixed information too...

  16. Originally Posted by DeadSkullzJr View Post
    Is DGIndex's approach when needed even viable or good, or as good as it could possibly get, or are there better approaches, algorithms, etc.?
    It's good in the sense of consistency and frame accuracy. You will not encounter some of the DVD issues people have when you have a 100% pure soft-telecine source - that's the "best case scenario". But just have a look around - every other post about DVD is about those sorts of problems. And you need that frame accuracy to perform some of the other operations. Problems such as mixed-up frames or wrong IVTC are MUCH worse, easily visible problems than a few pixels being slightly different.


    Then what should I use to decode MPEG-2 streams in this case?
    For 100% soft telecine, I'd use whatever gets the highest "quality"

    In terms of "quality", I was curious myself, so I tested a few sources, with different mpeg2 encoders, to test different mpeg2 decoders - and modern ffmpeg/libavcodec (also ffvideosource) consistently scores slightly higher in PSNR than DGDecode.dll's IEEE-1180 Reference (or its other idct options). PSNR as a measurement has a bunch of issues, but it's one type of "measuring stick". It might be that the old DGDecode.dll is still based on an older libavcodec implementation, and could be improved. Curiously, nvdec/purevideo (used in DGDecNV) consistently scores the lowest. Mind you, we are talking about < 0.1 dB PSNR differences between decoder implementations - it's like splitting hairs. I'll eventually test some other decoders, commercial ones like MainConcept and Sony, but 0.1 dB is the max difference. Usually it's more like 0.01 or 0.02 dB.

    VirtualDub2 and FFmpeg seem to use the same decoder(s) when I exported frames from a few MPEG-2 sources, they matched identically, implying the same decoding algorithm is being utilized, unless multiple exist and are capable of equivalent decoding accuracy.
    Yes, they will be using a current version of libavcodec. They should be perfectly fine for 100% soft-pulldown sources that do not have other issues.

    Is there a chance for a lossless decoder to exist, is it even possible for one to exist?
    Not really for MPEG-2, because there is no defined lossless output for a given lossy input. The outputs vary (very slightly) between decoders. You can measure "quality" if you have the master, but all quality measures have various issues - it's difficult to measure "quality".

    The AVC specs dictate that you have to have bit-identical output - so even a lossy stream has a lossless decoded output for that lossy stream. Not true for MPEG-2.

    even if I encode into a lossless format, the damage could potentially already be done just by decoding.
    For MPEG-2, yes. But the "damage" is very minor. The delta between MPEG-2 decoders is small according to PSNR. Can you tell in an unlabelled blind test which of those screenshots is the "best" or the "worst"?

    Still, I was surprised that IEEE-1180 wasn't the "best". Even if it was just a few tests, and PSNR is not a great "quality metric", the trends are pretty consistent.

  17. Captures & Restoration lollo
    Originally Posted by lordsmurf View Post
    Satellite transponders were notorious for overstuffing, crippling bitrates, and shrunk resolutions. DVB-S (and DSS) really isn't broadcast, but rather satellite delivery. It has more in common with internet streaming than terrestrial broadcast.
    I suspect you (or me) are just confusing broadcasting with production.

    But back to the original questions and your replies:
    1- No, it is not true that MPEG2 is lossless (it can be "visually" lossless at high bitrate, like other codecs).
    2- No, it is not true that lossless codecs are not "100% perfectly mathematically lossless" (as pdr just pointed out).
    All the rest is speculation.

  18. It's about visual loss.
    At least for you. Okay, so for you MPEG-2 can be lossless, but you actually mean 'visually lossless', like ProRes can be.
    Personally, I think the distinction between 'lossless' and 'visually lossless' is important, especially when using intermediates, but thanks for the explanation of what you actually meant to say.

  19. Originally Posted by poisondeathray View Post
    In terms of "quality", I was curious myself, so I tested a few sources, with different mpeg2 encoders, to test different mpeg2 decoders - and modern ffmpeg/libavcodec (also ffvideosource) consistently scores slightly higher in PSNR than DGDecode.dll's IEEE-1180 Reference (or its other idct options).
    [...]
    Alright, I have another question. I wasn't able to find which iDCT algorithm FFmpeg defaults to; the most I can find is that it uses mpeg2video as the decoder and encoder for the format. Is whatever algorithm is used in libavcodec by FFmpeg, or by something like VirtualDub2, any more accurate than IEEE-1180? If we had to pick the most accurate algorithm, with the least amount of damage, which one should be used?

  20. Originally Posted by DeadSkullzJr View Post
    I wasn't able to find what iDCT algorithm FFmpeg defaults to, the most I can find is that it uses mpeg2video as the decoder and encoder for the format. Is whatever algorithm being utilized in libavcodec for FFmpeg or something like VirtualDub2 any more accurate than IEEE-1180? If we had to pick and choose the most accurate algorithm, with the least amount of damage, which one should be utilized?
    The libavcodec/ffmpeg ENcoder is definitely worse than other options. It's probably one of the worst MPEG-2 encoders. There are threads discussing this, and many tests.

    For decoding, so far, based on a few PSNR tests - only live-action film sources, only progressive encoding - the current libavcodec is slightly more accurate than older libavcodec and IEEE-1180. That's not enough to say for certain. You'd have to do hundreds or thousands of tests, over a variety of sources (e.g. animation, anime, film, documentary, etc.), interlaced content, mixed content... Maybe the few that I tested just happened to be slightly better. Maybe the next hundred end up worse... PSNR is also a "can of worms" with many issues. PSNR is great for determining lossless vs. not lossless, but for everything in between it's not necessarily great. There are many ways to "trick" PSNR, same as for other metrics. An encoder can optimize for PSNR (make choices that artificially score higher for a given metric), so presumably a decoder that does not require bit-identical output (e.g. mpeg2) could too.
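
    If you want to poke at it yourself, libavcodec exposes an idct decoder option that can be set per run (option names from memory - check ffmpeg -h full on your build; "auto" is the default and picks a suitable implementation):

    Code:
    # Force a specific iDCT implementation for the MPEG-2 decode, then export the same frame for comparison
    ffmpeg -idct simple -i input.vob -frames:v 1 frame_idct_simple.png
    ffmpeg -idct int -i input.vob -frames:v 1 frame_idct_int.png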

    I don't know what your project is about, but generally you want to start with the best sources available. So I would avoid all this and start from the BD or UHD BD. Very rarely do BDs use MPEG-2 these days (it's mostly AVC), but in general, even a crappy low-bitrate MPEG-2 BD release would be a better starting place than the highest-quality DVD release. Only if it were some old release available only on DVD would I go down this route (and that's not the case for the example you posted in the other thread).

  21. Originally Posted by poisondeathray View Post
    I don't know what your project is about, but generally you want to start with the best sources available. So I would avoid all this and start from the BD or UHD BD.
    [...]
    For this particular case, part of the reason I'm using DVD sources is that I have older devices in the mix that I want to use. You are absolutely correct about using better sources, especially since the Blu-ray side of things has a lot more benefits to work with. The problem is that many of the older devices I'm using aren't capable of running high-resolution content past 480p, and considering I also want to burn some discs, I definitely can't toss these things around interchangeably. One of the biggest issues I struggle with is getting my hands on some movies on Blu-ray; some of the movies I want I simply can't afford, because some people are greedy jerks and charge $80-$100+ for certain movies, so I have no choice but to get DVD versions instead. Luckily I haven't run into a situation where the DVD version of a movie I want is that absurdly expensive, but I'm sure there will be a point when that happens too, resulting in the last-ditch effort of using VHS sources. My setup is part of a work-in-progress multimedia project that intertwines videography with some aspects of filmography. I've been trying to work it out for years, but I run into too many issues that throw wrenches into the equation, so it's a slow process. Basically, the setup consists of projects within projects. It's simple to understand in my head, but much harder to explain and execute in person.

  22. Originally Posted by DeadSkullzJr View Post

    For this particular case, part of the reason I'm using DVD sources is because I have older devices in the mix that I want to use. You are absolutely correct about using better sources, especially since the Blu-ray side of things has a lot more benefits to work with. Problem is many of the older devices that I'm using aren't capable of running high resolution contents past 480p,
    You can downscale a BD source and it will still be much higher quality than the retail DVD. The oversampling has huge benefits. There were many comparisons in the past. Here you are hesitating over <0.1 dB differences between decoders, when the differences for a downscaled BD will easily be 100x more.
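
    A rough sketch of that step (hypothetical filenames, progressive source assumed - interlaced material and exact aspect-ratio handling need more care):

    Code:
    # Lanczos downscale of a 1080p BD rip to DVD resolution, kept lossless for any later work
    ffmpeg -i bd_rip.mkv -vf "scale=720:480:flags=lanczos,setdar=16/9" -c:v libx264 -qp 0 -an downscaled.mkv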


    considering I also want to burn some discs and such as well, definitely can't toss these things interchangeably around. One of the biggest issues I struggle with is getting my hands on some movies in Blu-ray, some of the movies I want I simply can't afford because some people are greedy jerks and charge $80-$100+ for certain movies, as such I have no choice but to get DVD versions instead.
    Fair enough, and some might not have a BD release

    Other avenues to look at are fan forums. eg. Sometimes they get a hold of unreleased studio materials. Some have fundraisers to remaster and rescan lower quality releases

  23. Originally Posted by poisondeathray View Post
    You can downscale a BD source and it will still be much higher quality than the retail DVD. The oversampling has huge benefits.
    [...]
    Your first point is true, but I forgot to mention that I don't have a viable means to rip Blu-ray discs yet. So even if I wanted to rip any (I absolutely do), there's no way I can do that right now. I prepare ahead, though, by getting the movies I want in specific formats and biding my time until I can carry out the next steps. It might seem simpler on paper to just find sources online, but I tend not to trust a lot of that stuff, since I have no idea what people did to those sources. I would rather do it myself and have peace of mind that I took the best route possible, not only to preserve what I like, but to be confident the source is viable and usable. The only time I would trust anything outside of that is if untouched master / studio materials were provided, as you mentioned - stuff I doubt I will ever get my hands on in my lifetime.

  24. Originally Posted by DeadSkullzJr View Post
    I don't have a viable means to rip Blu-ray discs yet. So even if I wanted to rip any (I absolutely do), there's no way I can do that right now.
    If you mean the hardware, ask around friends/family/neighbors. Public libraries and universities/colleges often have AV equipment for public use. But a potential hurdle is the software: although MakeMKV supports BD, many public institutions will not have it installed, and you might not have permission to run even a portable version from a USB key - you might have to "befriend" someone working in the AV department and/or be nice to friends/family/neighbors.

    If "quality" is something important to you for this project, then you should do it properly.

  25. Originally Posted by poisondeathray View Post
    If you mean the hardware, ask around friends/family/neighbors. Public libraries and universities/colleges often have AV equipment for public use.
    [...]
    Can't say I know anyone into this stuff. At most I know someone who has dabbled with computers for over 20 years, but he doesn't know a whole lot about things like this, and he doesn't have the necessary equipment I could borrow. None of my other relatives dabble with this stuff at all. I'll be honest, befriending someone for stuff like this feels like using them for what they have; maybe I'm overreaching or overthinking the logic, but it rubs me the wrong way. I'm probably just going to have to explore what's publicly accessible to get something done; otherwise I'll have to cobble something together and eventually add a new piece of equipment to the setup.

  26. Video Restorer lordsmurf
    Originally Posted by Selur View Post
    It's about visual loss.
    At least for you. Okay, so for you MPEG-2 can be lossless, but you actually mean 'visually lossless' like ProRes can be.
    Personally, I think the distinction between 'lossless' and 'visually lossless' is important, especially when using intermediates, but thanks for the explanation of what you actually meant to say.
    You can't make that distinction. You can differentiate "mathematically lossless" from "visually lossless", but both are lossless.

    AV (audio/video) has a special history with loss incurred at each step (mastering, delivery, etc), and "lossless" is the prevention of it. Over time, computer programmers distorted it to be "mathematically lossless" (and even that can be highly misleading) when that was never really the intended use in AV. And that's why "lossless" is so often used to refer to both visual and mathematical loss prevention.

    This is why I specifically refer to ProRes422 as a "lossy" lossless (from a 4:2:2 source), with the term quoted due to misleading use of jargon. It's not necessarily incurring any visual loss. And no, I don't refer to "I can't see it" (with my eyes closed), but literally "pixel peeping" to look for flaws that don't visually exist.

    I'd actually argue that referring to any H.264 as lossless is silly.

    Because these terms have never really been fully defined, you also end up with weasel use of the terms, such as referring to something as "almost lossless", or even misusing "visually lossless" when it's obviously visually compressed. Cue all the Chinese junk software peddlers.

    This argument is similar to JPEG vs. raw, where raw purists are like math purists. Whether or not the raw/math is "better" heavily depends on many factors, including source, software, hardware, etc. There's no single "best" answer, but rather multiple "best for __" answers.

    Originally Posted by poisondeathray View Post
    can have errors too when not handled properly
    Purists will use FFV1 CRC checksum for video. Same for file compression like zip
    That's my point. The errors are a feature, not a bug.

  27. Originally Posted by lordsmurf View Post
    [...]
    You can't make that distinction. You can differentiate "mathematically lossless" from "visually lossless", but both are lossless.
    Why not? He just made that distinction

    You can't claim that he can't make that distinction, because you don't know his usage case


    AV (audio/video) has a special history with loss incurred at each step (mastering, delivery, etc), and "lossless" is the prevention of it. Over time, computer programmers distorted it to be "mathematically lossless" (and even that can be highly misleading) when that was never really the intended use in AV. And that's why "lossless" is so often used to refer to both visual and mathematical loss prevention.
    It's better to be clear than ambiguous. "Mathematically lossless" and "visually lossless" are 2 different things; why use 1 blanket term to describe 2 different things?

    An analogy would be 2 people John Doe , and David Doe. You call them both "Doe", but that's less precise than calling them John Doe and David Doe

    You can disagree with using the term "visually lossless" (I would these days), but don't lump 2 different things together, because you just confuse people.


    This is why I specifically refer to ProRes422 as a "lossy" lossless (from a 4:2:2 source), with the term quoted due to misleading use of jargon. It's not necessarily incurring any visual loss. And no, I don't refer to "I can't see it" (with my eyes closed), but literally "pixel peeping" to look for flaws that don't visually exist.
    Yes, the ' "lossy" lossless ' oxymoron is destined to confuse people even more.

    "Not necessarily incurring any visual loss" implies that sometimes it is incurring visual loss. That's a wishy-washy description with a lot of vague leeway, as bad as "visually lossless".


    I'd actually argue that referring to any H.264 as lossless is silly.
    Why? It can be mathematically lossless.


    Because these terms have never really been fully defined

    I'm only quoting Apple because ProRes was mentioned:

    From the Apple ProRes whitepaper
    ...This kind of exact encoding is called "lossless" (or sometimes "mathematically lossless") compression.
    lossless - A type of codec for which putting an image frame through encoding followed by decoding results in an image that is mathematically guaranteed to have exactly the same pixel values as the original.
    visually lossless - A type of codec for which putting an image frame through encoding followed by decoding results in an image that is not mathematically lossless, but is visually indistinguishable from the original when viewed alongside the original on identical displays.


    , you also end up with weasel use of the terms, such as referring something as "almost lossless", or even misusing "visually lossless", when it's obviously visually compressed.
    I agree, there is massive abuse of the "visually lossless" description. E.g. some people call YouTube "visually lossless" when you can clearly see artifacts and problems. "Visually lossless" originally implied one thing, but today who knows what it means. "Visually lossless" is a subjective, ambiguous, non-quantifiable term that can mean very different things to different people (people have different perceptions) - someone with significant video and film experience will likely see visual loss more easily than some random guy with zero video/film experience. There is a vast range of quality that people might truthfully call "visually lossless". For those reasons, "visually lossless" shouldn't be used at all. I suspect the origin of the term "visually lossless" was Apple's marketing team.

    In contrast, "Mathematically lossless" is a precise unambiguous term

    Originally Posted by poisondeathray View Post
    can have errors too when not handled properly
    Purists will use FFV1 CRC checksum for video. Same for file compression like zip
    That's my point. The errors are a feature, not a bug.

    There are different types and causes of errors - external, hardware-related errors are a "feature" of all operations on computers, not just lossless compression or archival software like 7zip/zip/rar. Hardware errors can and do happen, but rarely, and they are not attributed directly to the lossless codec - i.e. those types of errors are external and not considered a main feature of a lossless codec.

    The distinction is that encoding imprecision and rounding errors are intended and expected with "visually lossless" encoding - thus you could say that imprecision and rounding errors are a "main feature" of something like ProRes 422 HQ, because 100% of the time there are imprecision and rounding errors (I guess Apple Marketing isn't going to hire me).

    Since imprecision and rounding errors are not intended or expected for "mathematically lossless" encoding/decoding, those kinds of errors are not considered a feature of the lossless codec itself. But uncommon external factors can occur, hence the "insurance" policy of a CRC checksum to identify whether there are any issues.
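    To make that "insurance policy" concrete - a minimal sketch in plain Python (the path and the stored reference value are made up) of the same CRC32-style check that FFV1 slices and zip archives carry internally:
    Code:
    import zlib

    def file_crc32(path, chunk_size=1 << 20):
        # CRC32 of a file, computed in chunks so large videos don't need to fit in RAM
        crc = 0
        with open(path, "rb") as f:
            while True:
                chunk = f.read(chunk_size)
                if not chunk:
                    break
                crc = zlib.crc32(chunk, crc)
        return crc & 0xFFFFFFFF

    # Compare the checksum recorded at encode time against the copy on disk today (values are hypothetical)
    reference_crc = 0x1A2B3C4D
    current_crc = file_crc32("archive/master_ffv1.mkv")
    print("intact" if current_crc == reference_crc else "external corruption detected")
    A mismatch here points at storage or transfer problems, not at the codec - which is exactly why it isn't counted as a "feature" of lossless encoding itself.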
    Last edited by poisondeathray; 28th Dec 2024 at 22:27.

  28. I was always under the impression that lossy and lossless were rather simple concepts to understand.

    Lossy - The concept of loss of information, destruction of information. In the case of a codec, video and audio streams are compressed, altered, edited, etc. in a way where loss and destruction are brought to the original streams. Meaning the information of the respective streams keeps being destroyed more and more if one decodes and encodes repeatedly under these conditions, to the point that said streams reach an unrecoverable and unrecognizable result; go even further than that and you probably won't have any data left that's viewable or audible in any capacity.

    Lossless - The concept of preserving information, nothing is lost. In the case of a codec, video and audio streams are not compressed, altered, edited, etc. in a way where loss and/or destruction are brought to the original streams. Meaning no matter how many times you decode and encode the exact same streams - 10, 20, 50, 100 times, and so on - the end result is always the exact same down to the letter; in the end it is always truly 1:1.
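    A toy illustration of those two definitions (a minimal sketch, not any real codec - the "lossy codec" here is just coarse quantization of made-up pixel values):
    Code:
    import numpy as np

    rng = np.random.default_rng(0)
    frame = rng.random((4, 4)) * 255.0        # a made-up 4x4 "frame" of pixel values

    def lossy_round_trip(x, step=8.0):
        # stand-in for a lossy codec: quantize to a coarse step, then reconstruct
        return np.round(x / step) * step

    def lossless_round_trip(x):
        # stand-in for a lossless codec: reconstruction is exact, however it is stored
        return x.copy()

    lossy, lossless = frame.copy(), frame.copy()
    for _ in range(100):
        lossy = lossy_round_trip(lossy)
        lossless = lossless_round_trip(lossless)

    print(np.max(np.abs(lossy - frame)))      # nonzero: information was destroyed
    print(np.max(np.abs(lossless - frame)))   # 0.0: still 1:1 after 100 generations
    Real lossy codecs transform, quantize and round on every generation, so unlike this toy the damage can keep accumulating across generations, while a true lossless round trip stays bit-identical no matter how many times it is chained.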

    I've seen the terms "visually lossless" and "perceptually lossless" for years. To be honest, neither of them makes any sense, at least to me; both are just counterintuitive ways of saying "lossy, just hard to tell." It would be like saying a problem isn't a problem at all just because you don't see the problem, or that you aren't wrong just because you don't see that what you are doing is wrong. By this logic, plenty of codecs classified as lossy could be considered "perceptually lossless", since the average consumer likely won't spot or audibly tell apart what's inconsistent and what isn't. That doesn't mean they are lossless in any capacity; it just means the algorithm is lossy and very good at hiding the damage it does to the information. I feel this nasty linguistic inconsistency is a regression that should be corrected, and quickly; twisting meanings is already a problem in modern language, and one can't keep up with all the wacky changes anymore. Just saying the two words out loud, their common-sense meaning should be obvious.

  29. You can differentiate "mathematically lossless" from "visually lossless", but both are lossless.
    Only to your eye after one iteration.

    How I would describe it:
    Lossless video compression is a class of compression that allows the original video data to be perfectly reconstructed from the compressed data, with no loss of the actual video information (not counting some metadata as video data!).
    This holds true even if xy iterations of lossless compression are chained.
    Thus, lossless compression must be mathematically lossless.

    Lossy video compression, on the other hand, uses inexact approximations and partial data discarding to represent the content and reduce data size.

    Visually lossless is a subclass of lossy compression where the data that is lost after the file is compressed and decompressed is not detectable to the eye, but the original video data can't be reproduced (1:1) from the compressed file.

    With visually lossless compression, enough iterations can cause visible loss.


    To visualize what I mean:
    Taking Pgm1_HD_SQ_25p_0_tc_12_10_43_16.mov as the source

    and the following batch script:
    Code:
    @echo off
    setlocal enabledelayedexpansion
    
    set input=C:\Users\Selur\Desktop\Pgm1_HD_SQ_25p_0_tc_12_10_43_16.mov
    set outputDir=G:\Output
    set ffmpegPath=F:\Hybrid\64bit\ffmpeg.exe
    
    for /L %%i in (1,1,1000) do (
        if %%i==1 (
            set inputFile=%input%
        ) else (
            set /A prev=%%i-1
            set inputFile=%outputDir%\Pgm1_HD_SQ_25p_iteration_!prev!.mov
        )
    
        set outputFile=%outputDir%\Pgm1_HD_SQ_25p_iteration_%%i.mov
    
        echo Processing iteration %%i...
        call :processFile "!inputFile!" "!outputFile!"
        if errorlevel 1 (
            echo Error occurred during processing iteration %%i. Aborting.
            exit /b 1
        )
    
    )
    
    exit /b
    
    :processFile
    setlocal
    rem %~1 / %~2 strip the quotes added by 'call', so the paths below aren't double-quoted
    set input=%~1
    set output=%~2
    echo Running ffmpeg with input: %input% and output: %output%
    %ffmpegPath% -y -noautorotate -nostdin -threads 8 -color_primaries bt709 -color_trc bt709 -colorspace bt709 -color_range tv -ignore_editlist true -i "%input%" -map 0:0 -an -sn -color_primaries bt709 -color_trc bt709 -colorspace bt709 -color_range tv -pix_fmt yuv422p10le -strict -1 -fps_mode passthrough -vcodec prores_ks -profile:v 3 -vtag apch -aspect 1920:1080 -f mov "%output%"
    rem return ffmpeg's exit code; don't mask the dynamic ERRORLEVEL with a variable of the same name
    endlocal & exit /b %errorlevel%
    One can see that the file size shrinks but stagnates after 700 iterations.
    Comparing the clips with:
    Code:
    # Imports
    import vapoursynth as vs
    # getting Vapoursynth core
    import sys
    import os
    core = vs.core
    # Import scripts folder
    scriptPath = 'F:/Hybrid/64bit/vsscripts'
    sys.path.insert(0, os.path.abspath(scriptPath))
    # loading plugins
    core.std.LoadPlugin(path="F:/Hybrid/64bit/vsfilters/SourceFilter/LSmashSource/LSMASHSource.dll")
    # Import scripts
    import validate
    # Source: 'G:\Output\Pgm1_HD_SQ_25p_iteration_1.mov'
    # Current color space: YUV422P10, bit depth: 10, resolution: 1888x1062, frame rate: 25fps, scanorder: progressive, yuv luminance scale: limited, matrix: 709, transfer: bt.709, primaries: bt.709, format: prores
    # Loading G:\Output\Pgm1_HD_SQ_25p_iteration_1.mov using LibavSMASHSource
    clip1 = core.lsmas.LibavSMASHSource(source="G:/Output/Pgm1_HD_SQ_25p_iteration_1.mov")
    clip1000 = core.lsmas.LibavSMASHSource(source="G:/Output/Pgm1_HD_SQ_25p_iteration_1000.mov")
    # MakeDiff shows the per-pixel difference between the two clips (neutral grey where they match)
    diff = core.std.MakeDiff(clip1, clip1000)
    
    clip = core.std.StackHorizontal([clip1, clip1000, diff])
    
    # output
    clip.set_output()
    you can see the difference in the attached image.
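    To put a number on the drift instead of only eyeballing it, the same script could be extended with PlaneStats (a sketch reusing the clip1/clip1000 variables above; PlaneStatsDiff is the normalized mean absolute difference of the compared plane, so 0.0 would mean the frames are identical):
    Code:
    # attach difference statistics comparing iteration 1 against iteration 1000
    stats = core.std.PlaneStats(clip1, clip1000)

    # print the per-frame difference for the first few frames
    for n in range(3):
        print(stats.get_frame(n).props['PlaneStatsDiff'])
    For a truly lossless chain this would print 0.0 for every frame.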

    Depending on the source and the 'visually lossless' compression used, some degradation will occur earlier or later.
    In my book this counts as 'visually lossless' and not 'lossless'.

    ..., I don't refer to "I can't see it" (with my eyes closed), but literally "pixel peeping" to look for flaws that don't visually exist.
    then you should agree that this is not lossless.

    Cu Selur
    [Attached image: iteration 1, iteration 1000 and their difference stacked side by side]

  30. Originally Posted by Selur View Post
    You can differentiate "mathematically lossless" from "visually lossless", but both are lossless.
    Only to your eye after one iteration.
    And it seems this is what counts most - the eye as a capture device has its own limitations; some of them are known and can be objectively (mathematically) described, some are still not covered by objective mathematical models.

    But whenever we say lossless (without any adjective, as it is redundant from a math perspective) we mean objective lossless (mathematically lossless); the term visually lossless covers a limited set of applications where eye limitations are exploited in the data path to gain some compression.

    With mathematically lossless you need a special mathematical apparatus to detect data changes - most people don't use such an apparatus when perceiving the surrounding world through the eye (visual system).



