Difficult to make an objective comment on the files... as they have, of course, all been re-encoded to create the overall MP4! But I'm not seeing any really horrible changes in any of the files... They probably do need de-shaking though, IMHO!
DV, HQX and Grass Valley Lossless are all intraframe formats suitable for editing. DV is interlaced and uses non-square pixels, so you need to deinterlace and resize to make life a bit easier in the editing environment.
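As a rough illustration, the deinterlace-and-resize prep for PAL DV might look like this in AviSynth. This is a sketch only — it assumes the QTGMC plugin is installed, a bottom-field-first DV source, and a square-pixel 768x576 target; the filename is a placeholder:

```avisynth
# Sketch: prep PAL DV for editing (assumptions noted above)
AviSource("dv_capture.avi")   # 720x576 interlaced DV
AssumeBFF()                   # DV is bottom field first
QTGMC(Preset="Slower")        # deinterlace to double-rate progressive
SelectEven()                  # back to single rate (25p)
Spline36Resize(768, 576)      # square-pixel 4:3 frame
```

Any deinterlacer could stand in for QTGMC here; the point is just that deinterlace and resize happen once, before the intermediate is written out.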
You can save that working file as lossless or HQX. I personally don't think there'll be much (if any) visual difference between the two formats - especially when dealing with standard DV resolution footage.
You can use these intraframe files to do further processing - de-shaking, colour corrections, etc.
Even though those processes might need more than one edit procedure, you will not lose any appreciable quality by re-encoding to another HQX working file (or lossless if you insist)
Only your final output file would be encoded as x264. That one you don't want to edit and re-encode - that's when you start losing quality... but not if you keep your working files as HQX (or lossless) throughout the project....
Yep ... Deshaking is the next step - hence the intermediate format.
I went ahead and installed the GV codecs to see what's going on. Here's the deal:
If you give the GV Lossless codec (hereinafter referred to as GVL) RGB data to compress, it appears to encode that internally as RGB. If you give it YUY2 data it encodes it internally as YUY2. It will not accept YV12 as input. The RGB compressed files are about twice as big as the YUY2 compressed files. On decompression, for both RGB and YUY2 compressed video, GVL gives you YUY2 data unless you specifically ask for RGB*.
So if you give GVL RGB data and then decompress the resulting video asking for RGB output the codec is lossless. But if you don't ask for RGB it will give you YUY2 and the losses associated with RGB to YUY2 conversion.
VirtualDub in full processing mode will give GVL RGB data and it will be encoded as RGB resulting in a "big" file. If you then open that file in VirtualDub in Fast Recompress mode VirtualDub will receive YUY2 video. That YUY2 is then compressed by GVL as YUY2 and you get a "small" file. If instead you use Full Processing mode VirtualDub converts that YUY2 back to RGB and passes that to GVL which encodes it in RGB mode, generating a big file again -- with losses from the RGB to YUY2 to RGB conversion. So I'd be wary of using GVL in VirtualDub.
* You can force GVL to output RGB with AviSource("gvl.avi", pixel_type="RGB24") in AviSynth. Or in VirtualDub using Video -> Color Depth -> Decompress Format RGB24.
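If you want to check the lossless claim yourself, one way (a sketch only — filenames are placeholders, and it assumes the original and the GVL-encoded copy are both decoded to the same RGB colorspace as described above) is to subtract the two clips in AviSynth:

```avisynth
# Sketch: verify a GVL round trip is lossless (assumptions noted above)
a = AviSource("original.avi", pixel_type="RGB24")
b = AviSource("gvl.avi", pixel_type="RGB24")   # force RGB decode from GVL
Subtract(a, b)            # identical frames come out as flat mid-grey
ColorYUV(analyze=true)    # overlay min/max stats; a truly lossless pair shows no spread
```

If the subtracted frames are uniform grey with no variation in the analysis overlay, the round trip was bit-exact.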
On the 2nd point... when I carried out my test:
once compressed as GV Lossless it showed as YUV
if opened and compressed again with GVL it was again YUV (and the same size)
if opened and fast recompressed again it was YUV (but half the size)
at no time did I get RGB output?
If the outcome here is don't use GVL .. then I'll go back to using Lagarith.
My HD camcorder allows you to record colour bars and - being in PAL land - those are recordings of PAL EBU colour bars the same as these:
Playing back a recorded colour-bar file using WMP or MPC shows the colours exactly as recorded...
(I'll use green as a reference level, but the other colours are all relatively correct)
So, green is displayed as 0,191,0
However, open that same original camera file in Vdub - or indeed in VLC player - and the colours display differently.....
The green colour bar, for example, is now displayed as 15,223,5.
Now comes the good bit....
If I re-encode the original file in Vdub, as Grass Valley HQX (CHQX) or Lossless (CLLC) - using fast recompress - then the resultant file will display in Vdub with the original levels (i.e. green as 0,191,0)
If I re-encode using Lagarith - or as uncompressed - then the new file displays with green at the Vdub display level (i.e. 15,235,5)
You cannot use fast recompress for those options because of the internal Vdub colorspace requirements (the experts will - hopefully - be able to explain exactly why that is?)
Slightly odd observations on the file sizes. For about 15 seconds of colour bar...The uncompressed file is 1.2GB. The CLLC is about 250MB and the CHQX about 60MB.
The Lagarith file however is tiny.... only about 1.67MB using the YUY2 setting, and about 6MB when using the default RGB setting.
I have no idea why that is..... I'm sure there is a perfectly logical explanation?
Of course, these observations are not directly related to DV files, which have their own colourspace requirements, as Jagabo has already explained in an earlier post.
All I'm pointing out is that if I need to see my AVCHD videos displayed in Vdub - in their intraframe intermediate 'working' format - with the original colour and level settings, then I need to convert to Grass Valley, and not to Lagarith or uncompressed.
So it's probably best to check exactly how the intermediates from your DV files are displaying in Vdub, or you could well be making colour, brightness and contrast adjustments based on 'false' visual information...
Last edited by jagabo; 15th Feb 2016 at 08:24.
An obvious question is: what color space does Youtube/Vimeo use? I'm trying to find that out, but so far no luck (now trying the Youtube forum).
As my end goal will be to take my DV captures and put onto a sharing site - this will be key ..........
I know the ideal file format is 720p or 1080p - H264 and AAC audio in MP4 container ... but no reference so far on what color space ...........
I think I shall stick to converting my .mts files to Grass Valley HQX, using fast re-compress. No colourspace conversion, and Vdub displays my intermediate HQX file as I expect it to be. ....(I've found no need to use lossless myself)
Quite what the best approach is for creating a progressive intermediate working file for DV footage, I'm still not sure...
Regarding your post #36:
The difference in colors that you are seeing is the difference between rec.601 and rec.709. It's a matter of which matrix was used to convert RGB to YUV (hence different colors from different codecs/editors) and which matrix was used to convert back to RGB for display (hence different colors from different players).
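To illustrate the point (a sketch only — the filename is a placeholder), decoding the same YUV clip through the two matrices in AviSynth produces visibly different RGB values:

```avisynth
# Sketch: same YUV source, two different YUV->RGB conversion matrices
v = AviSource("bars.avi")              # YUY2 colour-bar clip
a = v.ConvertToRGB(matrix="Rec601")    # decoded with the rec.601 (SD) matrix
b = v.ConvertToRGB(matrix="Rec709")    # decoded with the rec.709 (HD) matrix
StackHorizontal(a, b)                  # compare the two decodes side by side
```

Whichever matrix matches the one used at encoding time recovers the original RGB values; the other produces the shifted colours described above.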
The difference in file size between Lagarith and GV is due to the nature of your video. The algorithm used by Lagarith works well with large flat areas. It also recognizes identical frames and, instead of encoding each frame, uses "repeat the last frame" flags (unless you disable that feature). GV is designed to work well with "real" video, where lossless algorithms don't work well.
Last edited by jagabo; 15th Feb 2016 at 08:45.
Ah... thanks, good to have YV12 confirmed.
I came across a really detailed & long workflow for creating files for Vimeo/Youtube
This is often referenced as the way to handle video files for streaming sites.
A bit too complex and long for me (given my level of videography & PC skills), and I don't edit in Vegas as a first step... mine are DV captures.
I do want to get the deinterlacing, resizing, stabilization (if needed), video compression and audio compression correct; then I will put it in Vegas for combining with a bundle of other similar files.
This is where I found the tip to use ColorYUV(levels="PC->TV") which allows for Youtube expanding color range.
So at the end of my work in VD I would be best to add a ConvertToYV12() step prior to the final output file. Will that be OK... and still keep the levels set earlier at [16,235]?
I played around with GV HQX a bit. If you compress an RGB source with it the output from the decompressor is always RGB.
If you compress a YUY2 source the output is YUY2 or RGB depending on what program you open it with. By default AviSynth's AviSource() gets YUY2. VirtualDub gets RGB.
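A quick way to confirm which colorspace the decompressor is actually handing to AviSynth (a sketch; the filename is a placeholder):

```avisynth
# Sketch: check what colorspace AviSource() received from the codec
AviSource("hqx.avi")
Info()    # overlays clip properties, including colorspace (e.g. YUY2), on the frame
```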
Last edited by jagabo; 15th Feb 2016 at 10:01.
Related to the end goal...
I have, as part of earlier steps, set levels to 16-235.
If I change colour space at the end of the workflow to YV12,
using the VD filter convertformat set to 4:2:0 YV12,
will that leave the levels untouched?
Or do I need to reapply levels?
Also, this may be a daft question... how can I tell what the levels are set to? Is there a simple way to 'read' what is set?
Last edited by Tafflad; 15th Feb 2016 at 16:32.
YUY2 to YV12 with ConvertToYV12() gives no change in levels.
RGB to YV12 with ConvertToYV12() compresses RGB 0-255 to YV12 16-235.
Check YUV levels with Histogram() or VideoScope(). I usually prefer the traditional horizontal view and use TurnRight().Histogram().TurnLeft(). That gives a horizontal waveform monitor above the main video.
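As a sketch of how those pieces fit together at the end of a script (the filename is a placeholder, and the commented-out PC->TV line is only needed if your levels are still full range):

```avisynth
# Sketch: final conversion plus a levels check (assumptions noted above)
AviSource("intermediate.avi")         # YUY2 working file
# ColorYUV(levels="PC->TV")           # uncomment only if levels are still 0-255
ConvertToYV12()                       # YUY2 -> YV12 is chroma subsampling only; luma levels unchanged
TurnRight().Histogram().TurnLeft()    # horizontal waveform monitor above the frame
```

Remove the waveform line before encoding the final output, of course; it's only there so you can read the levels off the scope.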
Hey. I tried to process this video (11 seconds, 7 MB) https://drive.google.com/open?id=1eOEqGOVd7WdPPC9RLXAvRoHIaxB3xGH3 with these parameters:
[Attachment 48358]
but it is impossible to stabilize (red lines):
[Attachment 48359]
Can I hope for help?
The problem with a shot like that is there's not much detail in the background for the deshaker to lock on to. So it locks onto the person, who's moving around. Try using the "Ignore pixels inside" feature to eliminate most of the person. AviSynth's Stab() works pretty well with that shot.
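For reference, a minimal script using Stab() might look like this (a sketch — it assumes the Stab.avs helper script and its DePan/DePanEstimate plugin dependencies are installed, and the filename is a placeholder):

```avisynth
# Sketch: AviSynth stabilization via Stab() (assumptions noted above)
AviSource("shaky.avi")
Stab()    # default settings; increase dxmax/dymax for stronger correction
```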
+1 jagabo. I was going to send you a link to the Deshaker guide I wrote years ago, but it seems to have disappeared from all the sites that used to post it. The "ignore pixels" was one of the settings I highlighted. It works great for things like this, where you want to use only the background in order to remove the gate weave which is the source of the motion in this clip.
[Edit] I've attached a copy of my guide, in Word format. Scroll way down to the "Advanced" section to get some more answers to your question.
Last edited by johnmeyer; 13th Mar 2019 at 08:44.
Thanks. I can also tell you I adjusted brightness/contrast before the Deshaker. The "ignore pixels" setting does not work.
Does that mean such stabilization is impossible to perform with Deshaker?
I could go through all the other settings, but the simplest advice is to start again by using the defaults and then make only the change that jagabo suggested. When you do that, make sure you are forcing the program to look at the edges. It is easy to get confused and get the setting backwards. It will be obvious during pass 1, so make sure to check. Also, when you limit what part of the image will be used, keep plenty of pixels. I mentioned this at the end of my guide, if you had a chance to skim through it. You need at least the "block size" number of pixels around the border, and preferably 2x or 3x that size. If you really think you need a narrow border, then use a smaller block size, although that will make pass 1 take more time.
I tried all the settings for pass 1, so I asked about approximate settings for pass 1. APPROXIMATE - not absolute - values around which I should search for the right pass-1 settings.
With great respect.
@doctorkhv - what were your expectations ?
Did you want to smooth it out a bit ? or did you want it stable, simulating a locked off tripod shot ?
Deshaker is not designed for the latter, nor is it ideal for this type of shot.
Thank you all. I had to stabilize in other programs: https://drive.google.com/open?id=1hgCZHK2ig1YuvKOYdyVhuS2L0gLgwoDS. Now I know the limits of Deshaker. Comparison with the stabilized version: https://youtu.be/Ny7bdSzHJ2Y
You should be able to get rid of the small instability. I do it all the time with film where the problem is gate weave.