-
I completely agree with poisondeathray and will add that this is a totally pointless discussion because it doesn't help anyone. Here is the key reason it doesn't help: every NLE has a button labeled "Render," and clicking that button takes your existing video, the edits to that video, the transformations you've applied, the compositing, and, yes, video created from scratch within the computer (like titles), and creates new video.
That is how the industry defines "render."
Yes, the word is also used to define the process of taking a wireframe drawing and converting that to 2D surfaces, and 3D objects, but that is not its only meaning!!!
To narrow its meaning to apply only to video that is entirely created within a computer, like CGI, is the prerogative of whoever is speaking, but using the word in such a restrictive manner confuses everyone, because it flies in the face of how everyone else in the industry uses it, including the companies that make the software we all use.
For those who remember their literature, Lewis Carroll wrote a follow-up to "Alice in Wonderland" called "Through the Looking-Glass." The most famous quote from that book is when Humpty Dumpty confuses Alice by deciding to make words mean what HE wants them to mean, rather than their commonly accepted meaning. That quote applies perfectly to this discussion.
"I don't know what you mean by 'glory,' " Alice said.
Humpty Dumpty smiled contemptuously. "Of course you don't—till I tell you. I meant 'there's a nice knock-down argument for you!'"
"But 'glory' doesn't mean 'a nice knock-down argument'," Alice objected.
"When I use a word," Humpty Dumpty said, in rather a scornful tone, "it means just what I choose it to mean—neither more nor less."
"The question is," said Alice, "whether you can make words mean so many different things."
"The question is," said Humpty Dumpty, "which is to be master—that's all."Last edited by johnmeyer; 28th Sep 2020 at 15:04. Reason: typo
-
I understand not all laptops have the ability to add a second drive; if these are of that variety, then yes, swapping in a much faster drive should help significantly.
And material like this (big 10x10 blocks of flat pixels) might compress much more with UT Video Codec. I don't know how big your CamStudio version is, but an upscaled Lagarith version was 3.25 GB. A UT version was only 70.8 MB.
-
I am not sure there are any blanket answers to that question. Each encoder can exhibit wildly different performance depending on settings. Chief among those are the various "quality" settings. For interframe codecs these usually involve looking at more adjacent frames, but I think there are other additional calculations made to improve quality. You can see performance change by integer multiples (2x, 3x, even 10x) depending on these settings. Since each codec provides different controls and dials, it is really difficult to get any sort of apples-to-apples comparison, at least with any codec where compression is involved.
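To put rough numbers on that "quality dial" point, here is a back-of-envelope sketch in Python. The parameter names (`search_range`, `ref_frames`) are made up for illustration; they are not any particular codec's settings, but they mirror the kind of controls most interframe encoders expose:

```python
# Why a single "quality" setting can change encode time by integer factors:
# a block-matching motion search compares each block against every candidate
# position within +/- r pixels, in every reference frame searched.
def motion_search_cost(search_range, ref_frames):
    """Relative work per block: (2r+1)^2 candidate positions per reference frame."""
    return (2 * search_range + 1) ** 2 * ref_frames

fast = motion_search_cost(search_range=8, ref_frames=1)    # a "fast"-style preset
slow = motion_search_cost(search_range=24, ref_frames=3)   # a "slow"-style preset
print(f"fast: {fast} candidates/block, slow: {slow} ({slow / fast:.0f}x the work)")
```

Doubling the search radius alone roughly quadruples the candidate count, which is why one slider can swing encode times by multiples rather than percentages.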
Then you have the actual coding of the codec. There are lots of h.264 codecs, as one example, and some programmers do a better job than others, including those who are totally brilliant and can figure out all sorts of beyond-clever shortcuts. Thus, the actual implementation of the exact same calculation can take vastly different amounts of time.
The best illustration of this is the Fast Fourier Transform (FFT), where Cooley and Tukey took the Fourier transform that is used to convert time to frequency (and which most students of calculus can easily understand) and figured out a totally different way to calculate it that takes a small fraction of the computational time and power. I first saw it almost fifty years ago, in 1973, when I actually was pretty decent with calculus, but even when I was younger and sharper I couldn't understand one single line of it. It is a work of real genius (and I ain't no genius).
-
I was pretty sure I had double-checked the numbers, but it looks like I made a mistake. I repeated the test today with the same source video and got 3.26 GB for the Lagarith AVI file but 70.8 GB for the UT file. So it looks like I wrote MB when I meant GB for the UT file. Sorry about that. As a reference, an uncompressed RGB file was 516 GB.
That still leaves the question of how one lossless encoder can compress much better than another. In this case Lagarith compressed about 20-fold better than UT. Aside from the fact that some programmers may be better or more knowledgeable than others, they may also have different assumptions and goals. A real movie would almost never have 10x10 blocks of exact duplicate pixels, or exact duplicate frames. A programmer who was looking at compressing movies might not consider those properties. Another programmer might think that a computer-generated presentation full of flat-shaded bar graphs and largely static content would have lots of those types of features and add separate algorithms for those cases. One programmer may deal with a frame of video as a 1D string of pixels. Another may consider that a video is a two-dimensional array of pixels and deal with it as 2D blocks rather than a 1D string. One can usually optimize compression algorithms if one knows something about the properties of the data beforehand. But those types of optimizations may be useless with data of other properties.
-
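The point in the post above about programmer assumptions can be made concrete with a toy experiment. This is not what Lagarith or UT Video actually do internally; it just shows how an algorithm tuned for one kind of data (long runs of identical pixels) can win big on one source and lose badly on another:

```python
import random
import zlib

def rle(data: bytes) -> bytes:
    """Toy run-length encoder: (run_length, value) byte pairs, runs capped at 255."""
    out, i = bytearray(), 0
    while i < len(data):
        run = 1
        while run < 255 and i + run < len(data) and data[i + run] == data[i]:
            run += 1
        out += bytes([run, data[i]])
        i += run
    return bytes(out)

random.seed(0)
W = H = 100
# "Presentation" frame: 10x10 flat blocks, so every row is 10 runs of 10 pixels.
flat = bytes((x // 10) * 10 + (y // 10) for y in range(H) for x in range(W))
# "Camera" frame: per-pixel noise, so almost every run has length 1.
noisy = bytes(random.randrange(256) for _ in range(W * H))

print(len(rle(flat)), len(rle(noisy)))                      # 5x smaller vs ~2x BIGGER
print(len(zlib.compress(flat)), len(zlib.compress(noisy)))  # zlib: huge win vs ~no gain
```

On the block-filled frame the toy RLE shrinks the data 5x (and zlib, which also exploits repeated rows, does far better still); on the noisy frame the very same RLE roughly doubles the size. Same encoders, opposite results, purely because of the data.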
If you spent as much time actually trying to understand what I wrote instead of wasting your entire post making nasty remarks to a stranger (me) who sincerely tried to help you, you might realize that I answered your questions.
I will try one more time, despite your insults.
Whether the encoder is lossless (Lagarith, HuffYUV, UT Video Codec etc.) or lossy (h.264) is completely irrelevant to the point I was making, so your eagerness to criticize me means that you missed the point. That point, since you missed it, is that the difference in programming algorithm can make a massive difference in encoding speed.
This applies equally to lossless as well as lossy codecs.
How much difference?
Well, that's why I provided the example of the FFT. It was not in any way "beside the point"; in fact, it is probably the answer to your question. In case you didn't know, the discovery in the mid-1960s of the FFT approach to calculating the Fourier transform provides an astounding 100x (i.e., two orders of magnitude) speed improvement under many circumstances.
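For anyone who wants to see where the two-orders-of-magnitude figure comes from, here is a minimal pure-Python sketch: a naive DFT straight from the definition next to a radix-2 Cooley-Tukey FFT, plus the operation-count ratio. This is a textbook toy, not production DSP code:

```python
import cmath
import math

def dft(x):
    """Naive O(N^2) discrete Fourier transform, straight from the definition."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def fft(x):
    """Cooley-Tukey radix-2 FFT, O(N log N); len(x) must be a power of two."""
    N = len(x)
    if N == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    tw = [cmath.exp(-2j * cmath.pi * k / N) * odd[k] for k in range(N // 2)]
    return ([even[k] + tw[k] for k in range(N // 2)] +
            [even[k] - tw[k] for k in range(N // 2)])

x = [complex(n % 7) for n in range(64)]
assert all(abs(a - b) < 1e-6 for a, b in zip(dft(x), fft(x)))  # identical answers

# The speedup is roughly N^2 / (N * log2 N) = N / log2 N:
for N in (1024, 4096):
    print(f"N={N}: ~{N / math.log2(N):.0f}x fewer operations with the FFT")
```

At N = 1024 the ratio is already about 102x, which is exactly the "two orders of magnitude" figure, and it only grows with longer transforms.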
So clever programming can indeed be the entire answer.
As for getting massive increases in compression, while still being lossless, there is no magic to be found here so there is really only one explanation: your benchmark is wrong. I've seen many such benchmarks and have never seen anything other than minor differences in file sizes between the three lossless codecs I mentioned. Here is one such comparison test by a respected member of the doom9.org forum:
Comparison of Lossless Realtime Codecs
I can link to half a dozen similar comparisons, and they never show any substantial difference in file size.
Finally, anticipating that you might still want to comment on the style of my post rather than its substance: is my post above "snippy"? Yes, it is. You ticked me off. But despite that, I once again answered your question and provided what I am quite sure are the answers you asked for.
-
That's true for "normal" video. But the OP's case is unusual. Recompress the attached Lagarith video with UT Video Codec. I got a ~20x larger file. I think that qualifies as more than a "minor difference". I verified that the decompressed output of the two files is identical.
-
At the core of it all? Math rounding errors.
More compression = more complex math, more rounding. That's all it is.
Lossless -- truly lossless -- mostly uses different math for data compression, not image compression. But rounding is still involved. I'm not a fan of UT Video. There were discussions here many years ago about how Huffyuv was not 100% visually the same as uncompressed, but 99.99+%. I have seen odd things in the past, but rarely, maybe a few times per decade.
I think the origins of the terms have been lost.
Consider still photos. Back in the 90s, anything that could be done in a darkroom was considered fair game for Photoshop. Yes, you could be more precise, but the same fundamentals were applied: contrast, burn, dodge, color correction, etc. Content alterations were no longer fair game. Something modern like HDR is not a photo, but artwork.
With video, anything standard, especially anything in hardware, was just editing and filtering: NR, color correction, cut/splice, added effects, deinterlacing, resizing, etc. It's only when it went beyond that that it was considered rendered. There are some areas where the lines get fuzzy, with some (mostly still terrible beta-grade next-gen method) upscales, or fps increases (fill-in data rendered), but the intention is still the basics.
It's really a retcon to consider anything done in software to be a "render". That's not how it was when I started in video in the 90s.
But so it goes. People still want to "rip" VHS, after all. (Another term that is wrong.)
Render was a term used exclusively for CG in the 90s. The term "render" did indeed mean content was being fabricated. I still remember SGI render farms. I so wanted to play with one, but didn't have clearance for it. At the time, it was being used for astronomical recreations. The closest that I ever got was to an SGI workstation using MediaBase MPEG T1-streaming encoding.
Not at all pointless, but we can table it.
2D programs in the 90s referred to applying transforms as "render" as well. E.g., After Effects referred to "render" in 1993 and added a "render queue" in 1995. This was long before AE introduced any 3D capabilities.
It's the way professionals have used the term "render" for the last 20-30 years, whether you like it or not. It's entrenched in most pro software, 2D and 3D.
-
RE: lossless compression
Normally, the largest contributing factor to more efficient lossless compression is long GOP. It's less suitable for realtime capture scenarios because of the increased chance of frame drops; it's used for offline, slower, high compression. Lagarith, Huffyuv, and UT Video are I-frame only.
Normally, for YUV (4:2:0, 4:2:2, 4:4:4) or RGB video, x264 lossless will give the highest compression ratio on most types of content: CG, live action, video games, pixel art games. Lagarith has a "null frame" option, which can improve compression in rare situations (duplicate frames).
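The "null frame" idea is simple enough to sketch. The following is a guess at the general shape, not Lagarith's actual format: a frame that is byte-identical to its predecessor is replaced by a one-byte marker, with zlib standing in for the real entropy coder:

```python
import zlib

def encode(frames):
    """Sketch of a "null frame" pass: a frame byte-identical to the previous one
    is stored as a one-byte marker instead of being compressed again."""
    packets, prev = [], None
    for f in frames:
        if f == prev:
            packets.append(b"\x00")                      # null frame: repeat previous
        else:
            packets.append(b"\x01" + zlib.compress(f))   # stand-in for the real coder
        prev = f
    return packets

slide_a = bytes(10) * 10_000            # two "slides" of a static presentation,
slide_b = bytes([1] * 10) * 10_000      # 100,000 bytes each
stream = [slide_a] * 30 + [slide_b] * 30  # 60 frames, only 2 unique
packets = encode(stream)
print(sum(len(p) for p in packets), "bytes for", len(stream), "frames")
```

For a static presentation where most frames repeat exactly, almost the entire stream collapses to markers, which is exactly the rare situation where the option pays off.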
Attached below is a pixel art game sample, probably similar to what the OP is using. The 256x224 resolution suggests SNES or a similar emulator.
original size 256x224 RGB, 600 frames
camstudio rgb 31.2MB
ut video rgb 25.5MB
lagarith rgb 23.5MB
x265rgb 4.76MB
x264rgb 4.05MB
x264rgb g600 placebo 3.09MB
But on a 10x nearest neighbor upscale, almost all the lossless encoders increase the filesize substantially, sometimes 2x, 4x, 10x or more. The explanation for why even long GOP is no longer effective is that the 10x nearest neighbor upscale messes up the macroblock / CTU scale, so prediction is not as effective. The algorithms cannot distinguish very well how a large cluster of 10x10 pixel "blocks" moves. Only x265, with 64x64-sized CTUs, can achieve better compression than Lagarith.
lagarith 10x NN 56.3MB
libx265rgb 10x NN 41.9MB
-
I just looked at jagabo's video, and it is what is often called a "pathological case." This means that its characteristics are extreme in one or more dimensions, and the extreme characteristics invalidate any tests done with it. There is so much black that of course the compression will be phenomenal. If you were to compress a video consisting entirely of pure black, it would probably compress down to a few bytes.
-
Of course. And it's the OP's case, not mine. I just duplicated the OP's procedure.
Not at all. You just misunderstood abolibibelot's question. He wasn't talking about general compression performance but performance with a particular video even though the different encoders give similar results with more general material.
The darkness of the video isn't the main cause of the difference between Lagarith and UT. It's because the video consists solely of large square blocks of identical colors. You can perform the test with a brighter video and you will get similar results: UT will give a much bigger file than Lagarith. I already speculated as to why this might be.
-
Yes, I totally agree.
The specific color or luma value makes zero difference; it is the similarity from one pixel to the next, or one block of pixels to the next, that makes it "pathological." If, for instance, in RGB space the entire video was (87, 155, 23) (or any other valid set of numbers), then you'd end up with a file size probably measured in KB, not MB or GB.
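That KB-scale claim is easy to sanity-check with a general-purpose compressor (zlib here, standing in for a real lossless codec):

```python
import zlib

# One solid RGB color, (87, 155, 23), over a full 1920x1080 frame: ~6.2 MB raw.
frame = bytes((87, 155, 23)) * (1920 * 1080)
packed = zlib.compress(frame, level=9)
print(f"raw {len(frame):,} bytes -> {len(packed):,} bytes compressed")
```

zlib's deflate format tops out near a 1000:1 ratio, so the 6.2 MB frame lands in the single-digit-KB range; a coder with explicit run-length handling could go smaller still.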
Sorry I wasn't more clear.