I am trying to put together an Avisynth script for noise filtration.
After I render my Vegas project (DV PAL SD, Type 2) via the Frameserver (RGB24) and open the created AVI in AvsP with either of the following lines:
DirectShowSource("k:\signpost1.avi", fps=25.000, convertfps=true) #RGB24 color space
or
AviSource("k:\signpost1.avi", audio=false) #RGB24 color space
the colors in the video preview look dull compared to when I simply open the captured DV AVI from the camera directly (bypassing Vegas and frameserving):
DirectShowSource("k:\clip1.avi", fps=25.000, convertfps=true) # apparently it is in YUY2 color space
or
AviSource("k:\clip1.avi", audio=false) # for some reason it is in YV12 color space
In that case the colors are more saturated and have more contrast. Am I doing something wrong? How do I get consistent colors between the two?
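For what it's worth, here is a quick sketch to put the two side by side in one script (same paths as above; the ConvertToYV12 is only there so both clips share a colorspace for stacking):
src_vegas = AviSource("k:\signpost1.avi", audio=false)   # frameserved from Vegas, RGB24
src_dv    = AviSource("k:\clip1.avi", audio=false)       # captured DV straight from the camera, YV12
# bring the Vegas clip to YV12 so the two clips can be stacked for comparison
a = src_vegas.ConvertToYV12(interlaced=true).Subtitle("Vegas / Frameserver")
b = src_dv.Subtitle("direct DV")
return StackHorizontal(a, b)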
-
Probably a levels issue: TV vs. PC luma expansion and your Vegas project settings (e.g. using 32-bit settings with linear gamma instead of 2.222). But Debugmode only supports 8-bit projects in Vegas. See:
http://www.glennchan.info/articles/vegas/v8color/vegas-9-levels.htm
Whether you get YUY2 or YV12 depends on your DirectShow vs. VFW decoder. For example, if you use Cedocida for VFW you can set the output colorspace, but you are probably using ffdshow to decode for DirectShow, so you get YV12 forced.
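If you want to check what AviSynth is actually receiving, you can request a specific format and overlay the clip info; a rough sketch (pixel_type is only a request, the decoder still has to be able to deliver it):
v = DirectShowSource("k:\clip1.avi", fps=25.000, convertfps=true, pixel_type="YUY2")
return v.Info()   # overlays the colorspace, frame rate and size AviSynth actually got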
Can you upload a small segment? -
I uploaded the pictures and a small part of the DV source:
http://www.mediafire.com/?sharekey=beff8ceaec0e6413aaca48175a79d1c3e04e75f6e8ebb871
I use Vegas Pro 8.0b, but I also installed Vegas Pro 9.0b and the same color shift occurs. The project is PAL, best quality, 8-bit pixel format. If I create the project with 32-bit pixel format, the frameserver cannot render the project properly (I know it only works with 8-bit).
I tried to frameserve from VirtualDubMod and everything works fine there. The colors are identical to what I get by simply opening the video in AvsP with AviSource("k:\clip1.avi", audio=false).
I found a similar post on the internet:
http://groups.google.com/group/frameserver-discuss/browse_thread/thread/a7e1f4e17650de60
---------------------------------
Debugmode Frameserver has been indispensable for me in Vegas 8.0, as
none of Vegas' internal rendering engines give me quality-based output
that x.264/MeGUI or TMPEGenc does. However, it looks like when a
project is set to 8-bit pixel video, Vegas re-adjusts the brightness
and contrast of the input video clips, such that rendered output looks
flat unless explicitly tweaked with a Vegas brightness/contrast
filter. (Oddly it doesn't seem to do this on imported still images
though.) I just figured out that setting the video to 32-bit floating
pixels leaves the contrast alone. But I gather DMFS won't work in
this mode, yes? Too bad.
--------------------------------- -
The clip you uploaded is uncompressed RGB, not the native DV (you probably forgot to use direct stream copy in VirtualDub).
There is no direct workaround for that limitation with Debugmode and 8-bit projects, AFAIK.
When you do your RGB => YUY2 conversion on the frameserved video, you could specify a PC matrix, e.g.:
ConvertToYUY2(interlaced=true, matrix="PC.601")
But that is not ideal, because you will get clipping on a regular TV (it will look better on a PC in terms of contrast). You should test on your intended target (you never mentioned what that is, but I doubt you will leave it as RGB24 for distribution). PC monitors are usually calibrated differently than regular TVs.
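To see the difference side by side on the frameserved clip, a quick sketch (same source, only the matrix changes):
src = AviSource("k:\signpost1.avi", audio=false)           # frameserved Vegas output, RGB24
tv = src.ConvertToYUY2(interlaced=true)                    # default Rec.601 matrix: levels stay in the TV-safe range for DVD
pc = src.ConvertToYUY2(interlaced=true, matrix="PC.601")   # more contrast on a PC monitor, but clips/crushes on a TV
return StackHorizontal(tv.Subtitle("default"), pc.Subtitle("PC.601"))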
Another option is to use a lossless intermediate instead of frameserving (e.g. Lagarith), and use 32-bit mode (linear gamma 1.0).
[Screenshot 1: Debugmode 8-bit, ConvertToYUY2(interlaced=true)]
[Screenshot 2: Debugmode 8-bit, ConvertToYUY2(interlaced=true, matrix="PC.601")]
Notice how the blacks are crushed when you use the full 0-255 levels; you are losing detail. This will look bad on a DVD or TV.
You haven't mentioned your decoder details (VFW or DirectShow) when opening the DV directly, but it might be decompressing to full range 0-255 (full luma expansion) when you view it on the PC monitor, hence the difference. In Vegas, the 8-bit project setting is what you are supposed to use if your end goal is DVD or TV, because it uses Studio RGB, i.e. TV levels (16-235). Remember, camcorders (DV, HDV, AVCHD) shoot TV levels (16-235) and the intended target is usually a TV.
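If you want to check that, a quick sketch to look at the actual luma levels of the directly opened DV (the ConvertToYV12 is only there because the levels histogram needs a planar clip):
clip = AviSource("k:\clip1.avi", audio=false)
clip = clip.ConvertToYV12(interlaced=true)
# if the luma stretches all the way to 0 and 255, the decoder is doing full luma expansion;
# a straight DV decode should sit inside the 16-235 range
return clip.Histogram(mode="levels") -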
Thank you very much for your input!
My goal is to create a good quality video DVD for TV (standard TV, LCD TV or plasma) playback. PC playback is not very important.
I imagined the workflow like this: edit in Vegas, frameserve (to save rendering time and to avoid a big intermediate file) into an Avisynth script for noise reduction (using fft3dfilter or fft3dgpu), open the script in CCE or the Canopus coder to encode it to MPEG-2 (VBR, 2-pass, mastering quality), and finally author in DVD Architect 4.5. Ideally, I want to avoid clipping and quality loss during color conversions, and remove the CCD noise.
Do you think this workflow makes sense, or does it need to be improved? I will try the Lagarith codec instead of frameserving, but which color mode should be picked (RGB default, YUY2, YV12)? And should the Vegas project be in 32-bit color with 1.0 gamma? -
No, I think you should leave it as an 8-bit project with TV levels (16-235), i.e. the way you had it first. It will end up looking normal on a DVD/TV, but with less contrast on a normal PC monitor (unless your PC monitor is calibrated for TV). As you can see from the waveform in the 1st picture I posted, the levels are in the correct range for TV.
There is an unavoidable colorspace conversion if you use Vegas, since you go DV => Vegas => MPEG-2 for DVD (YUY2 => RGB in Vegas => YUY2 in the Avisynth script fed to CCE), but the levels should be 16-235 all the way through. If you did everything, including the editing, in Avisynth, you could avoid the colorspace conversion (i.e. stay in YUY2 the whole time).
When you take the Debugmode AVI export from Vegas, make sure you include ConvertToYUY2(interlaced=true) before feeding it into CCE, and encode as interlaced.
When using fft3dfilter, don't forget to specify interlaced mode in the arguments so it works on fields (see the sketch below).
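Putting those two points together, a minimal sketch of the script you would feed to CCE (the path and the sigma value are just placeholders, and it assumes your FFT3DFilter build accepts YUY2):
src = AviSource("k:\signpost1.avi", audio=false)     # Debugmode / frameserved export from Vegas, RGB24
yuy = src.ConvertToYUY2(interlaced=true)             # default Rec.601 matrix, keeps TV levels for DVD
den = yuy.FFT3DFilter(sigma=2.0, interlaced=true)    # denoise on fields, since DV PAL is interlaced
return den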
CCE has some default low-pass filtering and line shifting, so don't forget to disable or account for those settings.
I don't use Canopus so I can't help there
FYI, you should always leave the Lagarith settings in RGB mode. If your source is YV12, it will still encode a YV12 Lagarith output even in RGB mode (i.e. it detects the input colorspace and encodes it correctly, for example if you fed it from a YV12 Avisynth script).
The one benefit of using Lagarith in your case is that the effects only need to be rendered once for a 2-pass encode. With heavy filtering it is often much faster to use a lossless intermediate, because all the Vegas rendering is done once instead of twice. Likewise, if you have heavy Avisynth filtering, it might make sense to do another Lagarith intermediate before the MPEG-2 2-pass encode, otherwise the Avisynth filters have to run twice, once for each pass. Does that make sense? You'll have to make the call, and it depends on how CPU-intensive the filters are in Vegas and Avisynth. If your filters are fast, you might not care and can skip the lossless intermediate stage.
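To illustrate that second intermediate: render the filtering script above once to a Lagarith AVI (in VirtualDub, for example), and then the script you feed to CCE for both passes is just this (the file name is a made-up example):
# the heavy filtering is already baked into the Lagarith intermediate,
# so each encoder pass only pays the cost of decoding this lossless AVI
return AviSource("k:\signpost1_filtered.avi", audio=false)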