Yes, that's what he's saying.
You still don't seem to understand that, since the running time is the same, file size and bitrate are synonymous.
Say you have two vehicles: both have 10 gallon gas tanks (running time) and both get 40 miles per gallon (bitrate). Then both get 400 miles per tank (file size).
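In video terms the same multiplication applies. A purely illustrative example (the numbers here are made up, not taken from your files):
Code:
file size = bitrate x running time
40 Mbit/s x 60 s = 2400 Mbit = 300 MB    (8 bits per byte)
So for a fixed running time, quoting a bitrate and quoting a file size are just two ways of saying the same thing.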
For a still frame I'd open the video in VirtualDub and use Video -> Copy Source Frame To Clipboard. Then paste the clipboard as a new image in an image editor and save as PNG (lossless).
For Lagarith AVI clips you can use VirtualDub in Video -> Direct Stream Copy mode. Mark in, mark out, Save as AVI.
For other types of videos you'll need an editor that can losslessly trim. The trimmed section must start on an I-frame. VideoRedo can handle most h.264 videos.
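If you don't have VideoRedo, ffmpeg's stream copy mode can do a similar lossless, keyframe-aligned trim (a sketch only -- filenames and times are placeholders, and the cut point will snap to the nearest I-frame):
Code:
ffmpeg -ss 00:00:10 -i input.mp4 -t 00:00:05 -c copy trimmed.mp4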
Last edited by jagabo; 4th Sep 2013 at 07:54.
-
No, I understand, but how does this apply to the organized vs. non-organized debate?
So, 2.5Gbps/1.7GB = unorganized and 181Mbps/170MB = organized?
If this is true, then why would anyone ever use unorganized? I mean if they are both lossless and have the same quality?
So, you want a video sample or a PNG sample? Not sure what an I frame is.. -
Let's do a quick recap.
You made a timelapse video from canon dslr stills.
You used Virtualdub to assemble the stills into a sequence.
The stills are a non-standard video frame size (but close enough to 4k that it looked like a possible mistype.)
From the available output codecs you chose Lagarith lossless.
The result was a huge file.
You used Handbrake to compress the lossless Lagarith to h.264.
You got another huge file (by many standards -- more on that in a moment) but not huge enough because you feel you must have lost some quality. (Although you also say you see no difference.)
Your h.264 data rate is much too high for web transmission, too high for TV transmission, and approaching the max data rate for theatrical projection, so I'll assume you're simply creating a personal archive.
There are other codecs with other options, but they are not available to VirtualDub or Handbrake. Things like DPX, PNG image sequences, JPEG image sequences, DNxHD, Red RAW, Cineform, ProRes. Some of these formats are also "restricted" to standard video sizes. No doubt with something like After Effects and some systematic testing you could hit your data rate target.
The question is why? As noted earlier your files are too big for "normal" sharing. Outputting to an image sequence format makes no sense really because you've already GOT the images on your hard drive (and can re-assemble them in VirtualDub at a moment's notice if you need to.)
Generally when people are trying to hit filesize or bitrate targets it is to conform to some standard -- it has to fit on a dvd, it has to be broadcast on BBC, it has to go to YouTube. In other words, people usually target down.
You've maxed out your bitrate for the tools and codecs you used. Be happy.
I also encourage you to google any of the terms I've mentioned above to see if any of them more closely suit your purpose. -
I want to delete the original JPEGs and keep one lossless video that I could (if I need to) extract individual frames from - kind of like a JPEG sequence, but compressed just enough to not be noticeable.
Like you said, 1.7GB is the equivalent of throwing all the bricks in the truck. It's too big, unnecessary.
On the other hand, I thought that 170MB was too small because, as I've said, I DO see a slight difference in quality between the two.
Does that mean I prefer the 1.7GB over the 170MB? Of course not. But if I can almost not tell the difference at 170MB, then (logically) a 340MB file should show 0% difference while still being 1/5 the size of the original.
That is what I want, that is the reason for this thread. If someone could just explain to me how to achieve that or why that is not possible, I'll get out of your collective hair. -
Feels like there's been some progress here. Two points come to mind.
1. Rule of thumb: NEVER throw away your originals. Someday a better codec or transmission scheme will come along and you'll want to go back to the source. Think of all the folks that threw away their 8mm home movies after transferring them to VHS. Now they're stuck with VHS quality copies forever even though 8mm is capable of HD transfers today.
2. You still don't seem to have your head wrapped around the idea that different codecs encode differently and are built for different purposes. You can't directly compare bitrates (filesizes.) -
-
1. Two things:
a.) I don't have the room to store my originals. I would need to continuously purchase more storage and I don't want to do that.
b.) I am trying to create a video that I can then extract individual frames from that will look nearly identical to the original files. So, essentially, I would be transferring the 8mm to something of equal quality so that I could at some point transfer it back to the 8mm.
2. No, but you can compare quality. The quality in the H.264 encode is less than the Lagarith. That is unacceptable. -
-
Yes. Assuming that the x264 compressed file is really lossless.
Lagarith may be compressing each frame as a separate image. That means to decompress any particular frame all the decoder has to do is locate the compressed data for that frame and decompress it. With inter-frame codecs like x264 most frames only contain the differences between that frame and earlier frames (or later frames, but let's keep this simple). That means to decompress any particular frame requires that the codec go back to an I-frame (a key frame, a frame that includes an entire image) decompress that frame, then reconstruct every frame from that frame to the requested frame. That can take a long time if the key frame interval is large.
The default key frame interval in x264 is 250 frames. So if your editor is sitting at frame 250 (an I-frame), and you seek to frame 249: frame 249 cannot be reconstructed until frame 248 is reconstructed. But frame 248 can't be reconstructed until frame 247 is reconstructed. Etc. So the decoder has to go all the way back to frame 0 (the previous i-frame), decompress that frame, add the changes for frame 1, then for frame 2, then frame 3... until it reaches frame 249. Finally it can display frame 249. This might take several seconds.
So editing long GOP material (GOP = Group of Pictures, the distance between i-frames) can be very tedious. Long GOPs are meant for your final product where the video is going to be played one frame at a time, in order. You can mitigate this slow seeking behavior by using shorter GOPs. In x264 you set the "keyint" to the size you want. If you set keyint to 5 there will be an i-frame every 5 frames. You'll still benefit to some extent from inter-frame compression, but dealing with the file later on will be faster.
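For example, a sketch of an x264 command line that forces an I-frame every 5 frames (the CRF value and filenames are just placeholders):
Code:
x264.exe --crf 18 --keyint 5 --min-keyint 5 --output edit_friendly.mkv source.avs
The file will be somewhat larger than with the default keyint of 250, but seeking to an arbitrary frame in an editor becomes much faster.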
I suggest you go back to your Lagarith source and trim out a short representative sequence with VirtualDub in Direct Stream Copy mode. That will extract a segment of your source without reencoding it. Pick a size that's appropriate for upload (this site accepts up to 500 MB). Then compress that short segment with Handbrake as you did before (RF=0). Upload those two videos.
Last edited by jagabo; 4th Sep 2013 at 09:24.
-
Lossless is lossless, regardless of file size. Lossless simply means there's no information thrown away. The video is encoded losslessly and when it's decoded it's exactly the same as it was before it was encoded. The bitrate required to achieve that is irrelevant.
Lagarith is lossless, and x264 using CRF=0 is lossless. Forget the bitrates. Chances are one of the reasons x264 produces a much lower bitrate is that the encoding process is much slower and it's compressing the video more.
Think of it like taking a bunch of files and compressing them into a zip archive. Then taking the same bunch of files and creating a rar archive. The two archives will probably be different sizes but both can be decompressed to give you an exact copy of the files you started with.
I haven't read through the whole thread but I'd agree with some of the other posters. Some of your claims seem impossible in reality, such as CRF20 and CRF0 (or whatever they were) outputting the same file size. I'm not saying you're lying, but something odd is going on.
You can argue around in circles for days but nobody else knows exactly what you're doing. The simplest solution is to upload a small sample of the lossless video so there's no argument about what you're dealing with and what should be happening. If you don't get the same results as others, then it's a matter of trying to work out why, but you need a sample for others to work with as a starting point.
If it's an AVI, use VirtualDub to edit out a section of the original and save it as a sample using DirectStreamCopy for the video, then upload it here. -
Except when it's not. (I know you, hello_hello, know this, but Track may not.) Although the codec itself may be lossless (each pixel that goes into the encoder comes out of the decoder unchanged), there are often colorspace and chroma subsampling changes between the source and what was given to the encoder.
For example, say you have pristine RGB images as your source. Each pixel has three values: red, green, blue. Before being compressed by x264 it has to be converted to YUV (which results in precision losses because not every RGB value has a unique YUV value, and vice versa) and reduced to 4:2:0 chroma sub-sampling (the greyscale, Y, data is at the source's resolution, but the color information, U and V, is at half the resolution in each dimension), resulting in a loss of color resolution. Every 4 pixels in the RGB source, 12 RGB values, is reduced to 6 YUV values.
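Counting the values for a 2x2 block of pixels makes the reduction easy to see:
Code:
2x2 block, RGB 4:4:4 : 4 x (R,G,B)           = 12 values
2x2 block, YUV 4:2:0 : 4 x Y + 1 x U + 1 x V =  6 values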
In your case, I believe you are starting with JPG images. These are likely YUV 4:2:0 internally but converted to RGB (quality loss) by the JPG decoder. Then that RGB is converted back to YUV 4:2:0 (quality loss again) for h.264 compression. So even if x264 is lossless, you've already lost quality twice before giving the frames to x264.
This is why it's recommended you keep your JPG files as a permanent archive. -
True. I actually started to add a bit about that to my post, but then thought that'd involve reading the whole thread more thoroughly to see if it'd been covered or was relevant, for which I didn't have the motivation.
Isn't the problem the size/quality of the Handbrake output when re-encoding the lossless Lagarith source, or are they both being encoded from "original" jpg source? If so, I guess I should have read the thread more thoroughly. -
What the OP is arguing about is whether half as much compression will produce a result half as good as the original after it underwent conversion to "uncompressed". And I say that's setting up a false hope.
My understanding is that decompressing with the same codec can "work pretty well", but that it's not good for transcoding. So I'm wondering if recoding to a higher quality JPEG sequence would work better than Lagarith.
Last edited by budwzr; 4th Sep 2013 at 10:56.
-
That will depend on your source. PNG works in RGB. Lagarith can also work in RGB. So an RGB 4:4:4 source can be compressed losslessly with either. But if your source is YUV 4:2:0 it has to be converted to RGB 4:4:4 before compression as PNG. That gets you some losses. And more losses converting back to YUV 4:2:0 if necessary later. Lagarith can be set to compress the YUV 4:2:0 losslessly as YUV 4:2:0. When it decompresses, the 4:2:0 that comes out will be exactly the same as the 4:2:0 that went in.
-
Regarding lossless compression: It's always possible when someone says they've found a difference with lossless compression that they've found a bug in the codec. Such a bug may only manifest with some particular arrangement of pixels that occurs very rarely.
But I've never seen anyone who claims losses with a lossless codec show proof of those losses. It's always "I compared such-and-such lossless codec and I saw subtle differences after decompression, so lossless isn't really lossless." They won't provide a source video and a compressed video that shows those losses for others to verify. They won't even say exactly what programs and procedures they used.
Here's an example of losses due to chroma subsampling. An RGB 4:4:4 source was converted to YUV 4:2:0 and back to RGB:
Original RGB: [image attachment]
Converted to YUV 4:2:0 (YV12) and back to RGB: [image attachment]
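For anyone who wants to reproduce that kind of comparison, a rough AviSynth sketch (the PNG filename is a placeholder, and the image dimensions need to be even for YV12):
Code:
src = ImageSource("original_rgb.png", start=0, end=0).ConvertToRGB32()
rt  = src.ConvertToYV12().ConvertToRGB32()   # round trip through 4:2:0 and back
StackHorizontal(src, rt)                     # original and round-tripped side by side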
Last edited by jagabo; 4th Sep 2013 at 16:17.
-
While we're on the subject of conversions.... I was playing with VirtualDub a little today. I don't use it much, but I thought it'd work in colour spaces other than RGB these days? I have a few questions if you don't mind.
Fast Recompress: What goes in is what comes out, no conversion in between?
Normal and Full Recompress: I always thought the same applied as long as you didn't use any filtering but does VirtualDub always convert to RGB "internally" using either of those modes?
I noticed when looking at the Color Depth option, by default it's set to RGB for output. Would that be because it's always going to be converted to RGB internally anyway (except maybe Fast Recompress)?
About the only time I use VirtualDub is for remuxing or for opening an AviSynth script to output a lossless file, and I always use Fast Recompress, so I assume I've not been unnecessarily converting between color spaces each time, but not having ever used VirtualDub for much else I've never worked out exactly what it does.
My apologies to the OP for sidetracking the thread a little. -
Yes. Though the codec itself may convert colorspace. For example, the Panasonic DV codec always decodes to RGB 4:4:4 even though DV works internally in YUV 4:1:1 (NTSC) or 4:2:0 (PAL).
What happens depends on the Color Depth settings. If the input color depth is set to Autoselect, and output depth to Same As Decompression Format there will be no conversion unless you filter (again, codecs may perform conversions themselves). If you specify a colorspace for input or for output a conversion will take place if necessary.
The whole situation is very complicated now that VirtualDub can work in different colorspaces internally. Some filters can work in YUV, others only in RGB. Enable the Show Image Format option in the Filters dialog and you'll see what colorspace each filter is working in.
I use VirtualDub all the time -- but mainly as a viewer for AviSynth scripts now. I hardly ever encode or filter with it anymore. -
Thanks for the info.
I used to use VirtualDub a bit for viewing scripts myself, but these days I just use MeGUI's preview. There's probably no huge advantage to using one over the other, but I'm a "clicker". I like using the mouse where I can, so for me MeGUI's preview window having a button to click on to reload the video and buttons to jump back and forward 25 frames at a time (as well as single frame buttons) gives it the edge.
I tried to learn to like AvsPMod but haven't warmed to it yet. Do you use it much?
A while back I was trying to apply some clever filtering using lots of Trim()s in a script but found AvsPMod would slow to a crawl. Every time I changed something I'd have to wait and wait while it re-loaded the script. The same script would reload in MeGUI's preview in a flash so it was a lot faster to edit with Notepad or MeGUI's script creator and preview with MeGUI. -
So, you're saying that H.264 creates a smaller file but is more difficult to play back?
Why does that matter? The 1.7GB takes more out of my CPU than the 170MB..
Okay, I will.
EDIT: Okay, I am sick of VirtualDub. I try encoding with the exact same settings as before and it's just stuck on the first frame - nothing happening.
http://i.imgbox.com/abp1JqC8.png
EDIT2: I've created a new time-lapse (Lagarith.avi), then encoded it with Handbrake at 1Gbps (Bitrate = 1Gbps) and finally tried using the Constant Quality RF and chose lossless (CRF = 0).
The first file is 74MB. The second is 40MB, and the third is 2MB.
I hope this finally settles my question.
I don't understand.
You're saying that I should keep my original JPG files, but that JPG files always go through a YUV>RGB>YUV conversion.
So, why should I keep them? If this conversion is inevitable, then it's the best quality I will ever get from my JPGs and if it's not, then why don't I just use that and then delete my JPGs?
Seems either way, the JPGs are not necessary.
So, what do you use for encoding? -
Track, I just did a test.
Used 100 4928x3264 JPEG files from a Nikon, total size = 393MB
Created a timelapse in VirtualDub with the Lagarith codec at 29.97fps, size = 718MB
Created the same timelapse in VirtualDub with the x264 codec (I-frame only), size = 341MB
Check the total file size of all the images you are using to make your timelapse. If your results are similar to mine then your original files are less than half the size of your lagarith and only slightly larger than your h.264 -- meaning your h.264 is maxed out for your material.
If this is true, keep your originals. This is the intermediate you seek, using less disc space than you're aiming for -- and no loss of quality.
Then make yourself a nice 1920x1080 mp4 for "previewing" on your less than 4k screen. (Total size maybe 20MB.)
Then you can always pop them back into VirtualDub (or Vegas, or Premiere, or Avid, or Lightworks...) if you need a 4k copy for broadcast, or a 720p version for YouTube, or whatever technology comes along in the future.
Last edited by smrpix; 5th Sep 2013 at 06:09.
-
You have an 8 second sequence at 5fps. That's 40 frames. 40 original jpeg images is too much to keep? You're silly.
-
No, my results couldn't be more different.
2.6GB of photos translate into a 1.7GB Lagarith and then to a 170MB H.264.
Even if it were true, I'd still not have my goal of only keeping one video file.
This might be the most hilarious comment I've ever read on the internet. -
I use the x264 command line encoder, usually with an AviSynth script to open the source file. When I encode your 76 MB Lagarith.AVI with the CLI encoder at CRF=0 I get a 59 MB file. With CRF=0 and --keyint=1 I get a 65 MB file. At CRF=12, keyint=250, I get a 34 MB file. Analysis later...
Edit 1:
The main culprits in your x264 encoded video are the --vbv-maxrate=20000 and --vbv-bufsize=25000 options. Those are keeping the bitrate from rising as expected. It also appears to have been encoded at CRF=1, not CRF=0. The vbv settings would have been ignored at CRF=0.
Edit 2:
The reason Handbrake used CRF=1 and those vbv settings is because you chose the Normal preset. That uses Main profile at level 4.0. Change to High profile at 5.1 or higher to get lossless encoding.
Note that your source is YUV 4:2:2 and that x264 works in YV12 by default so you can't get truly lossless encoding. But Handbrake at the Normal preset, profile High, level 5.1 produced a 41 MB file. I recommend you just go with the High Profile preset, profile High, level 5.2 (44 MB).
Edit 3 (well, more really):
This command line gave a truly lossless encode from a YV12 starting point:
Code:
x264.exe --preset=slow --crf=0 --sar=1:1 --output %1.mkv %1
Code:
# Lagarith.avs -- the script passed to the command above as %1
AviSource("Lagarith.avi").ConvertToYV12()
Code:
# compare the x264 encode against the YV12 source
v1 = AviSource("Lagarith.avi").ConvertToYV12()
v2 = ffVideoSource("Lagarith.avs.mkv")
Subtract(v2, v1)                # identical frames come out as flat grey
Levels(112, 1, 144, 0, 255)     # stretch the range so any differences become visible
Last edited by jagabo; 5th Sep 2013 at 08:35.
-
It depends on what the original JPEGs were.
High quality JPEGs are usually yuvj444p (the "j" denoting full range YUV). Low quality "for web" JPEGs are usually subsampled yuvj420p.
It's true that most JPEG decoders will output RGB. So you are technically losing quality by using Lagarith, from the YUV to RGB and back to YUV conversion; although it will be difficult to see the loss with human eyes, it can be measured by various tests. Also, you said you were using the "YUV option" in Lagarith - in that case you are causing subsampling, because VirtualDub's JPEG decoder outputs RGB. If you picked "YV12", that's 4:2:0; if you picked "YUY2", that's 4:2:2. If you used RGB, that would be what the decoder is outputting (least quality loss, larger filesizes). Try it, you will notice Lagarith RGB will be larger than the YUV options.
Since Lagarith doesn't support YUV 4:4:4, the truly lossless route (assuming the JPEGs really are the high quality yuvj444p versions) would be to decode with ffmpeg (which can output yuvj444p and stay in YUV to prevent unnecessary quality loss), feed that into x264, and encode lossless full-range 4:4:4 by using --input-range pc --range pc --input-csp i444 --output-csp i444. Many decoders and players will not be configured to handle those streams properly (they might play at the wrong levels, or not do the RGB conversion for display properly), so that's another reason to keep the original JPEGs. A rough sketch of that approach follows.
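One way to wire that up, assuming the JPEGs really are high quality 4:4:4 files (a sketch only -- the filenames and frame rate are placeholders, and it uses ffmpeg's built-in libx264 rather than piping into the standalone x264 encoder):
Code:
ffmpeg -framerate 5 -i IMG_%04d.JPG -c:v libx264 -crf 0 timelapse.mkv
CRF 0 keeps the encode lossless, and because nothing forces a conversion away from the JPEGs' native colorspace, the full-range 4:4:4 data should pass through untouched. -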
But I can't keep the original JPGs, as I've said.
I need to find a way to encode them into a video file that would truly be lossless, without subsampling or whatnot, and then to organize them into a lossless H.264 to save space.
If someone could just give me a to-do list in how to accomplish this, I can finally continue my projects instead of wasting time on here.
I'm really keen to get back to my work, thank you. -