VideoHelp Forum
  1. Originally Posted by poisondeathray View Post
    "render" (verb) in the video context refers to applying calculations and transforms. It does not have to be 3D CG.
    I completely agree with poisondeathray, and will add that this is a totally pointless discussion because it doesn't help anyone. Here is the key reason it doesn't help: every NLE has a button labeled "Render," and clicking on that button takes your existing video, the edits to that video, the transformations you've applied, the compositing and, yes, video created from scratch within the computer (like titles), and creates new video.

    That is how the industry defines "render."

    Yes, the word is also used to describe the process of taking a wireframe drawing and converting it to 2D surfaces and 3D objects, but that is not its only meaning!!!

    To narrow its meaning to apply only to video that is entirely created within a computer, like CGI, is the prerogative of whoever is speaking, but using the word in such a restrictive manner actually confuses everyone, because it flies in the face of how everyone else in the industry uses it, including the companies that make the software we all use.

    For those who remember their literature, Lewis Carroll wrote a follow-up to "Alice in Wonderland" called "Through the Looking-Glass." The most famous quote from that book is when Humpty Dumpty confuses Alice by deciding to make words mean what HE wants them to mean, rather than their commonly accepted meaning. That quote applies perfectly to this discussion.
    "I don't know what you mean by 'glory,' " Alice said.
    Humpty Dumpty smiled contemptuously. "Of course you don't—till I tell you. I meant 'there's a nice knock-down argument for you!'"
    "But 'glory' doesn't mean 'a nice knock-down argument'," Alice objected.
    "When I use a word," Humpty Dumpty said, in rather a scornful tone, "it means just what I choose it to mean—neither more nor less."
    "The question is," said Alice, "whether you can make words mean so many different things."
    "The question is," said Humpty Dumpty, "which is to be master—that's all."
  2. I understand not all laptops have the ability to add a second drive; if these are of that variety, then yes, swapping in a much faster drive should help significantly.
    It should be noted that some so-called “laptop” computers (which one would be foolish to actually put on top of their lap, for various reasons ranging from coffee spillage to testicle overheating) sold with a single storage device do have a second 2.5" tray, which can't be used right away because of a stupid extra piece of plastic padding that has to be cut out first (probably because it's more economically viable to use the same overall design for a whole range of computers, even though some options are only supposed to be available on the higher-end models; my Toshiba P300 is like that).

    And material like this (big 10x10 flat pixels) might compress much more with UT Video Codec. I don't know how big your CamStudio version is but an upscaled Lagarith version was 3.25 GB. A UT version was only 70.8 MB.
    Can the difference be that dramatic? How come? I figured that lossless compressors should rank roughly similarly in efficiency regardless of the type of material, and since in my few tests Lagarith outperformed its main contenders UT Video and MagicYUV by a significant margin, I have preferred it ever since. So in which situations can other encoders showcase such outstanding performance?
  3. Originally Posted by abolibibelot View Post
    So in which situations can other encoders showcase such outstanding performance?
    I am not sure there are any blanket answers to that question. Each encoder can exhibit wildly different performance depending on settings. Chief among those are the various "quality" settings. For interframe codecs these usually involve looking at more adjacent frames, but I think other additional calculations are also made in order to improve quality. You can see multiple-integer increases or decreases (i.e., 2x, 3x, maybe 10x) in performance depending on these settings. Since each codec provides different controls and dials, it is really difficult to get any sort of apples-to-apples comparison, at least with any codec where compression is involved.

    Then you have the actual coding of the codec. There are lots of h.264 codecs, as one example, and some programmers do a better job than others, including those who are totally brilliant and can figure out all sorts of beyond-clever shortcuts. Thus, the actual implementation of the exact same calculation can take vastly different amounts of time.

    The best illustration of this is the Fast Fourier Transform (FFT) where Cooley and Tukey took the Fourier transform that is used to convert time to frequency (and which most students of calculus can easily understand), and figured out a totally different way to calculate it that takes a small fraction of the computational time and power. I first saw it almost fifty years ago in 1973, when I actually was pretty decent with calculus, but even when I was younger and sharper I couldn't understand one single line of it. It is a work of real genius (and I ain't no genius).
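    To put rough numbers on that, here is a minimal sketch (mine, not from the thread; it assumes numpy is installed) contrasting the textbook O(N^2) DFT with the O(N log N) Cooley-Tukey approach that numpy implements:
    Code:
    import time
    import numpy as np

    def naive_dft(x):
        """Direct evaluation of the DFT definition: O(N^2) operations."""
        N = len(x)
        n = np.arange(N)
        k = n.reshape((N, 1))
        W = np.exp(-2j * np.pi * k * n / N)  # N x N matrix of twiddle factors
        return W @ x

    x = np.random.rand(2048)

    t0 = time.perf_counter(); X_naive = naive_dft(x);  t1 = time.perf_counter()
    t2 = time.perf_counter(); X_fft = np.fft.fft(x);   t3 = time.perf_counter()

    print(f"naive DFT: {t1 - t0:.4f} s   FFT: {t3 - t2:.6f} s")
    print("results match:", np.allclose(X_naive, X_fft))  # same numbers, far less work
    The gap widens as N grows, which is exactly the "same calculation, vastly different amounts of time" point above.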
  4. Originally Posted by abolibibelot View Post
    And material like this (big 10x10 flat pixels) might compress much more with UT Video Codec. I don't know how big your CamStudio version is but an upscaled Lagarith version was 3.25 GB. A UT version was only 70.8 MB.
    Can the difference be that dramatic? How come? I figured that lossless compressors should rank roughly similarly in efficiency regardless of the type of material, and since in my few tests Lagarith outperformed its main contenders UT Video and MagicYUV by a significant margin, I have preferred it ever since. So in which situations can other encoders showcase such outstanding performance?
    I was pretty sure I double-checked the numbers, but it looks like I made a mistake. I repeated the test today with the same source video and got 3.26 GB for the Lagarith AVI file but 70.8 GB for the UT file. So it looks like I typed MB instead of GB for the UT file. Sorry about that. For reference, an uncompressed RGB file was 516 GB.

    That still leaves the question of how one lossless encoder can compress much better than another. In this case Lagarith compressed about 20-fold better than UT. Aside from the fact that some programmers may be better or more knowledgeable than others, they may also have different assumptions and goals. A real movie would almost never have 10x10 blocks of exactly duplicated pixels, or exact duplicate frames. A programmer who was focused on compressing movies might not consider those properties. Another programmer might reason that a computer-generated presentation full of flat-shaded bar graphs and largely static content would have lots of those features, and add separate algorithms for those cases. One programmer may treat a frame of video as a 1D string of pixels. Another may consider that a video is a two-dimensional array of pixels and deal with it as 2D blocks rather than a 1D string. One can usually optimize compression algorithms if one knows something about the properties of the data beforehand, but those optimizations may be useless on data with different properties. (See the sketch below.)
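    To make that concrete, here is a toy sketch (my own illustration, assuming numpy; it is not either codec's actual algorithm) of how much a design assumption matters on this kind of material: a run-length coder collapses flat 10x10 blocks to a handful of symbols, while a per-pixel left-neighbor predictor still emits one symbol per pixel and leaves all the work to the entropy coder:
    Code:
    import numpy as np

    # Fake frame: 10x nearest-neighbor upscale of a small random image,
    # similar in structure to the OP's capture of a pixel-art game.
    small = np.random.randint(0, 256, (24, 32), dtype=np.uint8)
    frame = small.repeat(10, axis=0).repeat(10, axis=1)  # 240x320, flat 10x10 blocks

    def rle_symbols(row):
        """Run-length encode one row -> list of (value, run_length) pairs."""
        pairs, run = [], 1
        for prev, cur in zip(row, row[1:]):
            if cur == prev:
                run += 1
            else:
                pairs.append((prev, run))
                run = 1
        pairs.append((row[-1], run))
        return pairs

    rle_count = sum(len(rle_symbols(row)) for row in frame)
    print("RLE symbols:     ", rle_count)   # roughly one pair per 10-pixel run
    print("residual symbols:", frame.size)  # per-pixel prediction: one per pixel
    The run-length view needs about a tenth as many symbols here; on ordinary footage, with no exact runs, it would gain nothing.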
  5. Chief among those are the various "quality" settings.
    That's why I specifically wrote “lossless” codecs, which do not have quality settings or anything else that could produce variations in the output.

    Then you have the actual coding of the codec. There are lots of h.264 codecs, as one example, and some programmers do a better job than others, including those who are totally brilliant and can figure out all sorts of beyond-clever shortcuts. Thus, the actual implementation of the exact same calculation can take vastly different amounts of time.
    This is irrelevant to the question, which was specifically: how come a certain codec generally has inferior compression efficiency compared with another, yet vastly outperforms that same codec in particular conditions with a particular kind of footage -- as seems to be the case with UT Video vs. Lagarith when encoding material with “big 10x10 flat pixels”, according to the statement quoted above.

    The best illustration of this is the Fast Fourier Transform (FFT) where Cooley and Tukey took the Fourier transform that is used to convert time to frequency (and which most students of calculus can easily understand), and figured out a totally different way to calculate it that takes a small fraction of the computational time and power. I first saw it almost fifty years ago in 1973, when I actually was pretty decent with calculus, but even when I was younger and sharper I couldn't understand one single line of it. It is a work of real genius (and I ain't no genius).
    This is certainly interesting but also beside the point.
  6. Originally Posted by abolibibelot View Post
    Originally Posted by johnmeyer View Post
    Chief among those are the various "quality" settings.
    That's why I specifically wrote “lossless” codecs, which do not have quality settings or anything else that could produce variations in the output. <snip>
    Originally Posted by johnmeyer View Post
    Then you have the actual coding of the codec. There are lots of h.264 codecs, as one example, and some programmers do a better job than others ...
    This is irrelevant to the question, which was specifically: how come a certain codec generally has inferior compression efficiency compared with another, yet vastly outperforms that same codec in particular conditions with a particular kind of footage -- <snip>
    Originally Posted by johnmeyer View Post
    The best illustration of this is the Fast Fourier Transform (FFT) where Cooley and Tukey took the Fourier transform that is used to convert time to frequency (and which most students of calculus can easily understand), and figured out a totally different way to calculate it that takes a small fraction of the computational time and power.
    This is certainly interesting but also beside the point.
    If you had spent as much time actually trying to understand what I wrote as you spent making nasty remarks to a stranger (me) who sincerely tried to help you, you might realize that I answered your questions.

    I will try one more time, despite your insults.

    Whether the encoder is lossless (Lagarith, HuffYUV, UT Video Codec etc.) or lossy (h.264) is completely irrelevant to the point I was making, so your eagerness to criticize me means that you missed the point. That point, since you missed it, is that the difference in programming algorithm can make a massive difference in encoding speed.

    This applies equally to lossless as well as lossy codecs.

    How much difference?

    Well, that's why I provided the example of the FFT. It was not in any way "beside the point," but is in fact probably the answer to your question. In case you didn't know, the discovery in the mid-1960s of the FFT approach to calculating the Fourier transform provides an astounding 100x (i.e., two orders of magnitude) speed improvement under many circumstances.

    So clever programming can indeed be the entire answer.

    As for getting massive increases in compression while still being lossless, there is no magic to be found here, so there is really only one explanation: your benchmark is wrong. I've seen many such benchmarks and have never seen anything other than minor differences in file sizes between the three lossless codecs I mentioned. Here is one such comparison test by a respected member of the doom9.org forum:

    Comparison of Lossless Realtime Codecs

    I can link to half a dozen similar comparisons, and they never show any substantial difference in file size.

    Finally, anticipating that you might still want to comment on the style of my post rather than its substance: is my post above "snippy"? Yes, it is. You ticked me off. But despite that, I once again answered your question and provided what I am quite sure are the answers you asked for.
  7. Originally Posted by johnmeyer View Post
    I've seen many such benchmarks and have never seen anything other than minor differences in file sizes between the three lossless codecs I mentioned.
    That's true for "normal" video. But the OP's case is unusual. Recompress the attached Lagarith video with UT Video Codec. I got a ~20x larger file. I think that qualifies as more than a "minor difference". I verified that the decompressed output of the two files is identical.
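    For anyone who wants to reproduce that check, here is one way to do it with ffmpeg (a sketch of mine; the thread doesn't say which tools were actually used, and the filenames are hypothetical):
    Code:
    import subprocess

    SRC = "source_lagarith.avi"  # hypothetical input name

    # Re-encode with ffmpeg's UT Video encoder.
    subprocess.run(["ffmpeg", "-y", "-i", SRC, "-c:v", "utvideo", "ut.avi"], check=True)

    def frame_md5s(path):
        """Per-frame MD5 hashes of the decoded video, via the framemd5 muxer."""
        out = subprocess.run(
            ["ffmpeg", "-i", path, "-pix_fmt", "rgb24", "-f", "framemd5", "-"],
            capture_output=True, text=True, check=True)
        return [line for line in out.stdout.splitlines() if not line.startswith("#")]

    # If every decoded frame hashes identically, the recompression was lossless.
    print("identical:", frame_md5s(SRC) == frame_md5s("ut.avi"))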
    [Attached file]
  8. Posted by lordsmurf
    Originally Posted by abolibibelot View Post
    how come a certain codec generally has inferior compression efficiency compared with another, yet vastly outperforms that same codec in particular conditions with a particular kind of footage --
    At the core of it all? Math rounding errors.

    More compression = more complex math, more rounding. That's all it is.

    Lossless -- truly lossless -- mostly uses different math for data compression, not image compression. But rounding is still involved. I'm not a fan of UT Video. There were discussions here many years ago about how Huffyuv was not 100% visually the same as uncompressed, but 99.99+%. I have seen odd things in the past, but rarely, a few times per decade.

    Originally Posted by poisondeathray View Post
    "render" (verb) in the video context refers to applying calculations and transforms. It does not have to be 3D CG.
    Originally Posted by johnmeyer View Post
    I completely agree with poisondeathray
    I think the origins of the terms have been lost.

    Consider still photos. Back in the 90s, anything that could be done in a darkroom was considered fair game for Photoshop. Yes, you could be more precise, but the same fundamentals were applied: contrast, burn, dodge, color correction, etc. Content alterations were no longer fair game. Something modern like HDR is not a photo, but artwork.

    With video, anything standard, especially anything in hardware, was just editing and filtering: NR, color correction, cut/splice, added effects, deinterlacing, resizing, etc. It's only when it went beyond that that it was considered rendered. There are some areas where the lines get fuzzy, with some (mostly still-terrible, beta-grade next-gen) upscales, or fps increases (where fill-in data is rendered), but the intention is still the basics.

    It's really a retcon to consider anything done in software to be a "render". That's not how it was when I started in video in the 90s.

    But so it goes. People still want to "rip" VHS, after all. (Another term that is wrong.)

    Render was a term used exclusively for CG in the 90s. The term "render" did indeed mean content was being fabricated. I still remember SGI render farms. I so wanted to play with one, but didn't have clearance for it. At the time, it was being used for astronomical recreations. The closest I ever got was an SGI workstation running MediaBase MPEG T1-streaming encoding.

    Originally Posted by johnmeyer View Post
    totally pointless discussion
    Not at all pointless, but we can table it.
  9. Originally Posted by lordsmurf View Post
    I think the origins of the terms have been lost. <snip>

    Render was a term used exclusively for CG in the 90s. The term "render" did indeed mean content was being fabricated. <snip>
    2D programs in the '90s referred to applying transforms as "render" as well. For example, in 1993 After Effects used the term "render", and it added a "render queue" in 1995. This was long before AE introduced any 3D capabilities.

    That is how professionals have used the term "render" for the last 20-30 years, whether you like it or not. It's entrenched in most pro software, 2D and 3D.
  10. RE: lossless compression


    Normally, the largest contributing factor to more efficient lossless compression is long GOP. It's less suitable for realtime capture scenarios because of the increased chance of frame drops; it's used for offline, slower, high compression. Lagarith, Huffyuv, and UT Video are I-frame only.

    Normally, for YUV (4:2:0, 4:2:2, 4:4:4) or RGB video, x264 lossless will give the highest compression ratio on most types of content: CG, live action, video games, pixel-art games. Lagarith has a "null frame" option, which can in rare situations (duplicate frames) improve compression.

    Attached below is a pixel-art game sample, probably similar to what the OP is using. The 256x224 resolution suggests SNES or a similar emulator.

    original size 256x224 RGB, 600 frames

    camstudio rgb 31.2MB
    ut video rgb 25.5MB
    lagarith rgb 23.5MB
    x265rgb 4.76MB
    x264rgb 4.05MB
    x264rgb g600 placebo 3.09MB
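    For reference, the long-GOP x264 results above can be produced with something like the following (my guess at the ffmpeg equivalents; the post doesn't give the exact command lines, and the filenames are hypothetical):
    Code:
    import subprocess

    SRC = "pixel_art_sample.avi"  # hypothetical source name

    # -qp 0 makes x264 mathematically lossless; libx264rgb avoids a YUV conversion.
    # -g 600 allows GOPs up to 600 frames (long GOP); -preset placebo spends
    # maximum effort on prediction at the cost of very slow encoding.
    subprocess.run([
        "ffmpeg", "-y", "-i", SRC,
        "-c:v", "libx264rgb", "-qp", "0",
        "-g", "600", "-preset", "placebo",
        "x264rgb_lossless.mkv",
    ], check=True)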


    But on a 10x nearest-neighbor upscale, almost all the lossless encoders increase the filesize substantially, sometimes 2x, 4x, 10x or more. The reason even long GOP is no longer effective is that the 10x nearest-neighbor upscale messes up the macroblock/CTU scale, so prediction is not as effective. The algorithms cannot distinguish very well how a large cluster of 10x10 pixel "blocks" moves. Only x265, with its 64x64 CTUs, can achieve better compression than Lagarith.

    lagarith 10x NN 56.3MB
    libx265rgb 10x NN 41.9MB
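    A quick way to see how badly the two grids disagree (my own illustration, using 16x16 macroblocks and the 2560x2240 frame implied above) is to count how many flat 10x10 cells each macroblock touches:
    Code:
    from collections import Counter

    def cells_touched(mb_x, mb_y, mb=16, cell=10):
        """Number of flat 10x10 cells a 16x16 macroblock at (mb_x, mb_y) overlaps."""
        xs = (mb_x + mb - 1) // cell - mb_x // cell + 1
        ys = (mb_y + mb - 1) // cell - mb_y // cell + 1
        return xs * ys

    counts = Counter(
        cells_touched(x, y)
        for y in range(0, 2240, 16)
        for x in range(0, 2560, 16)
    )
    print(counts)  # every macroblock straddles 4, 6, or 9 cells -- never just 1
    Since no 16x16 macroblock ever sits inside a single flat cell, every block's prediction has to model edges, which is where the advantage of larger CTUs comes from.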
    [Attached files]
  11. Originally Posted by jagabo View Post
    Originally Posted by johnmeyer View Post
    I've seen many such benchmarks and have never seen anything other than minor differences in file sizes between the three lossless codecs I mentioned.
    That's true for "normal" video. But the OP's case is unusual. Recompress the attached Lagarith video with UT Video Codec. I got a ~20x larger file. I think that qualifies as more than a "minor difference". I verified that the decompressed output of the two files is identical.
    I just re-read the entire thread twice and cannot find any video posted by the OP. I'm downloading your clip to see if I can provide some insight into what would cause such a massive difference. If I find anything, I'll post later today.
  12. Originally Posted by johnmeyer View Post
    Originally Posted by jagabo View Post
    Originally Posted by johnmeyer View Post
    I've seen many such benchmarks and have never seen anything other than minor differences in file sizes between the three lossless codecs I mentioned.
    That's true for "normal" video. But the OP's case is unusual. Recompress the attached Lagarith video with UT Video Codec. I got a ~20x larger file. I think that qualifies as more than a "minor difference". I verified that the decompressed output of the two files is identical.
    I just re-read the entire thread twice and cannot find any video posted by the OP. I'm downloading your clip to see if I can provide some insight into what would cause such a massive difference. If I find anything, I'll post later today.
    I repeated the test with poisondeathray's video (point resize to 10x width and 10x height, encoded with Lagarith and UT, RGB) and got the same relative size difference. The UT file was about 20 times larger than the Lagarith file.
  13. I just looked at jagabo's video, and it is what is often called a "pathological case." This means that its characteristics are extreme in one or more dimensions, and the extreme characteristics invalidate any tests done with it. There is so much black that of course the compression will be phenomenal. If you were to compress a video consisting entirely of pure black, it would probably compress down to a few bytes.
  14. Originally Posted by johnmeyer View Post
    I just looked at jagabo's video, and it is what is often called a "pathological case."
    Of course. And it's the OP's case, not mine. I just duplicated the OP's procedure.

    Originally Posted by johnmeyer View Post
    This means that its characteristics are extreme in one or more dimensions, and the extreme characteristics invalidate any tests done with it.
    Not at all. You just misunderstood abolibibelot's question. He wasn't talking about general compression performance, but about performance with one particular video, even though the different encoders give similar results with more general material.

    Originally Posted by johnmeyer View Post
    There is so much black that of course the compression will be phenomenal.
    The darkness of the video isn't the main cause of the difference between Lagarith and UT. It's because the video consists solely of large square blocks of identical colors. You can perform the test with a brighter video and you will get similar results: UT will give a much bigger file than Lagarith. I already speculated as to why this might be.
  15. Originally Posted by jagabo View Post
    Originally Posted by johnmeyer View Post
    There is so much black that of course the compression will be phenomenal.
    The darkness of the video isn't the main cause of the difference between Lagarith and UT. It's because the video consists solely of large square blocks of identical colors. You can perform the test with a brighter video and you will get similar results: UT will give a much bigger file than Lagarith. I already speculated as to why this might be.
    Yes, I totally agree.

    The specific color or luma value makes zero difference; it is the similarity from one pixel to the next, or from one block of pixels to the next, that makes it "pathological." If, for instance, the entire video were RGB 87,155,23 (or any other valid set of numbers), you'd end up with a file size probably measured in KB, not MB or GB.

    Sorry I wasn't more clear.
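    That intuition is easy to sanity-check with a general-purpose compressor (a toy sketch of mine, not a codec benchmark):
    Code:
    import zlib

    # One solid-color 1080p RGB frame: pure redundancy.
    frame = bytes([87, 155, 23]) * (1920 * 1080)
    packed = zlib.compress(frame, 9)
    print(len(frame), "->", len(packed))  # ~6.2 MB shrinks by orders of magnitude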