It's something I'm a little confused about even though I've spent a lot of time investigating. I usually capture from VHS/8mm tapes in Lagarith, interlaced YUY2, PAL 720x576 25 fps, or from 8mm/Super 8 films using a Canon photo camera in MJPEG 640x480 30 fps progressive (I think YUY2 but I'm not sure; yes, I know that's not the best method, but that's not the point). I usually use VirtualDub for RGB and AviSynth for YUV, and I use a lot of noise reduction: Neat Video in VirtualDub, Convolution3D, degrain filters and others in AviSynth.
So if, for example, I put a Lagarith YUY2 file in VirtualDub and use a filter with full processing mode, it doesn't seem to matter what
color depth I set in the VirtualDub menu (one of the YCbCr or RGB types for decompression input and compressor output). If I choose Lagarith for "save as" (with YUY2 selected for compression in the Lagarith properties), doesn't that mean I have two conversions along the way (Lagarith YUY2 input > VirtualDub internal RGB for Neat Video > Lagarith YUY2 output)? And in that case I end up with a Lagarith YUY2 file again, right? So if I then put that file in Sony Vegas, TMPGEnc or VirtualDub, I get yet another conversion (Lagarith YUY2 > Vegas/TMPGEnc/VirtualDub RGB > DVD MPEG-2 YUV file), right? In that case, isn't it better to choose RGB in the Lagarith properties when I encode out of VirtualDub, instead of YUY2, so I preserve the RGB coming out of VirtualDub and have Lagarith RGB all the way until the MPEG-2 compression? Even more so because if I put that Lagarith RGB file in VirtualDub again (and again and again), each time doing a different type of filtering (I sometimes run the Deshaker filter after Neat Video), I should end up with no additional loss whatsoever, right (colorspace-wise)?
Another thing that's confusing is the color depth menu in VirtualDub. What does this actually do? As I mentioned before, if I have Lagarith YUY2 and set YUY2 for both decompression and compression, does that mean I get yet another conversion? (I don't believe I preserve the YUY2 format that way, because all VirtualDub filters work in RGB, right? So four conversions, or what?)
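To make the worry about repeated conversions concrete, here is a small sketch of one YUY2-to-RGB-and-back round trip. The coefficients are the common BT.601 studio-range integer approximations; they are my assumption for illustration, not necessarily the exact math Lagarith or VirtualDub use internally.

```python
# Rough sketch (my own illustration, not VirtualDub's actual code) of why a
# YUY2 <-> RGB round trip can nudge pixel values: the BT.601 conversion plus
# 8-bit rounding and clamping is not perfectly reversible.

def yuv_to_rgb(y, u, v):
    # BT.601 studio-range YUV -> full-range RGB, rounded to 8-bit integers
    c, d, e = y - 16, u - 128, v - 128
    r = round(1.164 * c + 1.596 * e)
    g = round(1.164 * c - 0.392 * d - 0.813 * e)
    b = round(1.164 * c + 2.017 * d)
    clamp = lambda x: max(0, min(255, x))
    return clamp(r), clamp(g), clamp(b)

def rgb_to_yuv(r, g, b):
    # full-range RGB -> BT.601 studio-range YUV
    y = round(0.257 * r + 0.504 * g + 0.098 * b) + 16
    u = round(-0.148 * r - 0.291 * g + 0.439 * b) + 128
    v = round(0.439 * r - 0.368 * g - 0.071 * b) + 128
    return y, u, v

# Round-trip one pixel a few times and watch for drift
pixel = (60, 100, 150)          # arbitrary Y, U, V values
for i in range(4):
    pixel = rgb_to_yuv(*yuv_to_rgb(*pixel))
    print(i + 1, pixel)
```

For this pixel the blue channel clips during the YUV-to-RGB step, so the chroma shifts on the first round trip and then stays put. That matches the usual intuition: the first conversion can cost you something, and further identical round trips tend to stabilize rather than keep degrading, though every extra filter pass re-rounds the values.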
I know some will suggest AviSynth all the way, but sometimes if you use VirtualDub or whatever NLE program for cuts, transitions and other fancy things (most of which work in RGB), you will have to do a conversion somewhere in the chain. My question is: what is the least lossy way? Is the method Lagarith in (YUY2 in properties), Lagarith out (RGB in properties) a good choice?
Can someone elaborate on this?
-
Yes.
Yes.
Yes.
Yes.
It depends. VirtualDub first asks the decompression codec for the chosen format. If the codec can't supply that format it will negotiate some other format then convert to the chosen format. So if you set the decompression colorspace to YUY2, and the codec can only output RGB, VirtualDub will get RGB from the decoder and convert to YUY2.
I believe most of the color depth options are preparation for a more robust filter path later on. So someday VirtualDub will have internal filters that work directly in all the different color formats. Hopefully external filters will be updated too. Eventually you may be able to perform all your filtering in YUY2 or YV24 or whatever.
A few of VirtualDub's internal filters will work in YUV colorspace. Brightness/Contrast for instance. On the main filters dialog enable the "Show image formats" option and you'll see what format was chosen for each filter. -
full processing = conversion to RGB
copy stream, fast recompress = as is (so with YUV support)
As far as I know. -
That has changed. In full processing mode VirtualDub now only converts to RGB if you use a filter that requires RGB.
-
Thanks guys
A few of VirtualDub's internal filters will work in YUV colorspace. Brightness/Contrast for instance. On the main filters dialog enable the "Show image formats" option and you'll see what format was chosen for each filter
I think if you use fast recompress no filtering is actually done inside VirtualDub; it's used solely for AviSynth scripts.
It depends. VirtualDub first asks the decompression codec for the chosen format. If the codec can't supply that format it will negotiate some other format then convert to the chosen format. So if you set the decompression colorspace to YUY2, and the codec can only output RGB, VirtualDub will get RGB from the decoder and convert to YUY2.
I think I read somewhere from you, jagabo, that it's better to capture in RGB or uncompressed so the conversion is done once inside the TV capture card, because it naturally works in YUV and the signal from analog (VCR or camera) is YUV as well.
So the question is which is better: doing the RGB conversion at the start inside the TV card, so you have RGB to work with all the way to the MPEG-2 (or whatever) compression. You can then do filtering in VirtualDub or Sony Vegas (color correction, transitions, fancy intros, CG animations and so on) and stay in RGB land, and when you decide to compress all that material you let MainConcept Reference or TMPGEnc do its conversion, then finally put that file in authoring software and burn the DVD without recompression.
Am I correct? I know the best thing is not to do any conversion, but sometimes you must.
So someday VirtualDub will have internal filters that work directly in all the different color formats. Hopefully external filters will be updated too. Eventually you may be able to perform all your filtering in YUY2 or YV24 or whatever. -
RGB32 is RGB24 with an 8 bit alpha channel (the alpha channel is ignored by VirtualDub).
That is correct.
If you specify YUY2 as the input colorspace then Lagarith and HuffYUV should provide YUY2 video to VirtualDub.
No, you should capture YUV because that's what the analog video signal is. YUY2 is the closest to what's in analog video.
The capture card will crush super blacks and super whites if you let it convert to RGB. If your source has details in those areas you will lose them.
In the case where all the tools work in RGB, if the video has super blacks and super brights, I would capture as YUY2, adjust levels, then convert to RGB for the rest of the filter chain. If the source doesn't have super blacks and super brights you can let the capture driver/software convert to RGB.
Attached is a HuffYUV YUY2 video that's all in super blacks. See if you can get any detail out of it.
-
if the video has super blacks and super brights
I tried to open the file in Media Player Classic and Windows Media Player and it shows an error. I have HuffYUV installed, I'm sure.
It opened in VirtualDub OK.
Attached is a HuffYUV YUY2 video that's all in super blacks. See if you can get any detail out of it.
-
In digital YUV video the brightness of the picture is encoded in the Y channel. With 8 bit per channel YUV the Y values can range from 0 to 255. Full black is designated as Y=16, full bright as Y=235. You should not routinely have image data in the ranges from 0 to 15 (super blacks) and 236 to 255 (super brights). When displayed on a properly calibrated TV you won't see any difference between Y=0 and Y=16 -- everything in that range will be the same shade of black. The same happens at the bright end, Y=235 and Y=255 will be the same shade of bright.
On computer monitors black is defined as RGB=0,0,0, and full white as RGB=255,255,255. So most programs stretch the contrast range from Y=16-235 to RGB=0-255 when they convert YUV to RGB. All pixels whose Y values range from 0 to 16 will end up the same RGB value, 0,0,0. All pixels whose Y values range from 235 to 255 will end up the same RGB value, 255,255,255. So any details in those regions will be gone, unrecoverable.
Conversely, when programs convert RGB to YUV they squeeze the contrast range of RGB 0-255 down to YUV 16-235. In this case, since there can be no super blacks or super brights in the RGB data (ie, there can be no RGB values less than 0 or greater than 255), there will be none in the YUV data.
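The stretch and squeeze described in the last two paragraphs can be written out as a quick sketch (luma channel only; a real converter handles chroma too). It shows directly why every super black collapses to the same RGB value:

```python
# Sketch of the Y=16-235 <-> RGB 0-255 contrast stretch/squeeze described
# above, applied to the luma channel only.

def y_to_rgb_level(y):
    # stretch 16..235 to 0..255, clamping anything outside the legal range
    v = round((y - 16) * 255 / 219)
    return max(0, min(255, v))

def rgb_to_y_level(v):
    # squeeze 0..255 back into 16..235 (no super blacks/brights can result)
    return round(v * 219 / 255) + 16

# Every super-black Y value collapses to the same RGB value:
print([y_to_rgb_level(y) for y in (0, 8, 15, 16)])   # all map to 0
print([y_to_rgb_level(y) for y in (235, 245, 255)])  # all map to 255
```

So once the conversion to RGB has happened, Y=0 and Y=16 are indistinguishable, which is exactly why levels have to be fixed while the video is still YUV.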
This contrast stretch is fine with properly captured video. There will be no significant part of the picture with super blacks and super brights. But if your capture card or VHS deck aren't well calibrated there often will be super blacks and super brights in the caps. So it's important to fix the levels while the video is still in YUV.
Which histogram are you referring to? AviSynth's Histogram() shows the Y channel and marks the super black and super bright areas in yellow. The ColorTools filter in VirtualDub works after conversion to RGB so it can't tell you if there are any super blacks or super brights.
Here's the darks video with AviSynth's Histogram():
You can see that all the Y values are in the 0-15 range (the left yellow bar). Note that this isn't really a histogram; it's a waveform monitor, the default mode for Histogram(). You can force a true histogram with Histogram(mode="levels"):
In this mode you get true histograms of the Y, U, and V channels.
With the ColorTools filter in VirtualDub:
The top graph shows a true histogram of the luma channel. You can't tell if the values were all 16 or if they ranged from 0 to 16. This is because VirtualDub converted YUV to RGB (losing the super blacks and super brights), then ColorTools converted back to YUV to show a histogram of the luma channel. So all you have is a big peak at Y=16.
Yes.
Media players require DirectShow codecs. HuffYUV is a VFW codec. VirtualDub uses VFW codecs. That explains why you can open the video in VirtualDub but not in a media player. If you have ffdshow installed, enable its DirectShow HuffYUV decoder.
How do I know if I have detail in the super blacks and super brights?!
https://forum.videohelp.com/threads/340808-Capturing-from-old-VHS-and-improving-quality...=1#post2121148
VideoScope() doesn't mark Y=16 and Y=235. I added the yellow lines to mark them manually in the first image.
-
Thanks jagabo for the thorough and understandable explanation.
Which histogram are you referring to? AviSynth's Histogram() shows the Y channel and marks the super black and super bright areas in yellow. The ColorTools filter in VirtualDub works after conversion to RGB so it can't tell you if there are any super blacks or super brights.
I see now that I have signal in the 0-16 and 235-255 ranges, which means I will lose it.
I tried the script from the other post: AviSource("filename.avi")
ColorYUV(cont_y=-36, off_y=-2), and it didn't help to take the signal out of the super blacks and super brights.
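For what it's worth, the arithmetic behind those ColorYUV parameters can be checked numerically. My reading of the AviSynth documentation is that cont_y scales luma around 128 by (cont_y+256)/256 and off_y adds a flat offset; the exact formula is an assumption here, not a quote of ColorYUV's source code:

```python
# Sketch of what ColorYUV(cont_y=-36, off_y=-2) should do to each luma value,
# assuming cont_y scales around 128 and off_y is a flat offset (my reading of
# the AviSynth docs, not the actual plugin source).

def coloryuv_luma(y, cont_y=-36, off_y=-2):
    v = (y - 128) * (cont_y + 256) / 256 + 128 + off_y
    return max(0, min(255, round(v)))

# Full-range 0-255 luma gets squeezed into roughly the legal 16-235 range:
print(coloryuv_luma(0), coloryuv_luma(255))
```

Under that assumption the call squeezes the whole 0-255 range into about 16-235. So it compresses everything, including the picture; it can't selectively "remove" signal from the illegal ranges, which may be why it didn't look like it helped.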
I will upload short clips so you can see how it works.
Thanks for the help. -
Don't use Info() before Histogram(). Otherwise the histogram will include the text that Info() writes into the frame. I didn't see significant image data in the yellow bars of the histograms in your sample images.
It's more traditional for waveform monitors to be horizontal, not vertical. You can get that in AviSynth with:
TurnRight().Histogram().TurnLeft() -
I uploaded 2 files, 10 seconds each, night and day variants. Can you help, jagabo?
http://www.mediafire.com/?c6qcql7ykqt3ucx
http://www.mediafire.com/?j212ig41q568ym3
And a picture with your suggestion. Should this white smog be there or not? I'm confused.
Are the captures OK? They are raw Lagarith YUV files without any intervention. I planned to put them through Neat Video in VirtualDub, maybe some noise reduction in AviSynth plus colormil and acwob, edit them in Sony Vegas (the output file should be Lagarith RGB), compress them in MainConcept or TMPGEnc and author them to DVD.
I didn't see significant image data in the yellow bars of the histograms in your sample images.
-
I think you don't really understand what a waveform monitor is. It's a graph of the brightness of the picture across the frame. A simple picture will make it more obvious:
On the bottom is the video (640x36 frame size), on the top the graph. Every scan line of the video is the same in this image so think of it as a single scan line of pixels. The vertical axis of the graph represents the brightness of the pixels on the scanline below it. The bottom of the graph indicates dark pixels, the top of the graph bright pixels. The height of the graph is 256 lines -- one for each possible intensity of the pixels (0-255). So the brightness of the pixels has been transformed into the height of the line on the graph.
The waveform monitor graph you get from Histogram() is the sum of the graphs of every scan line from the image. The brightness of each spot of the graph represents how many of the pixels of the picture below it had that brightness. If a spot of the graph is very dim few pixels had that brightness. If the spot is very bright many pixels had that brightness.
The yellow bar at the bottom of the graph represents pixels that have brightnesses of 0 to 15. The yellow bar at the top of the graph represents pixels that have brightnesses from 236 to 255. You don't want the graph to extend into those areas. In the graph above you can see that the leftmost part of the image had a brightness of 0 -- the bright yellow line at the bottom left of the bottom yellow bar. The next block over has a brightness of 16, just above the yellow bar. In the actual video you can't see the difference between those two blocks -- they are both rendered as RGB=0. At the far right two lines fall above the 235 level. You can't see any difference in brightness of those blocks because they are both rendered as RGB=255.
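The construction described above (one graph column per picture column, summing all scan lines) can be sketched as a toy program. Histogram()'s real graph also draws the yellow 0-15 and 236-255 marker bars; this sketch only builds the underlying counts:

```python
# Toy version of the waveform monitor described above: for each column of the
# picture, count how many pixels in that column had each brightness (0-255).

def waveform(frame):
    # frame: list of scan lines, each a list of 0-255 luma values
    width = len(frame[0])
    graph = [[0] * 256 for _ in range(width)]  # per-column brightness counts
    for line in frame:
        for x, y in enumerate(line):
            graph[x][y] += 1
    return graph

# Three identical scan lines: a black (Y=16) half and a bright (Y=235) half
frame = [[16, 16, 235, 235]] * 3
g = waveform(frame)
print(g[0][16], g[2][235])  # 3 pixels at Y=16 in column 0, 3 at Y=235 in column 2
```

A bright spot on the real monitor corresponds to a large count here; anything landing in rows 0-15 or 236-255 would be a pixel inside the yellow bars.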
In the sample image you posted the black background has generated the fuzzy white line just above the bottom yellow bar of the graph. Just about where you want your black level to be. The bright face of the clock has generated the dim white bar a little below the top yellow bar. That is fine. None of your image has resulted in dots in the yellow bars -- none of the pixels in your image are below Y=16 and none are above Y=235.
-
I think you don't really understand what a waveform monitor is. It's a graph of the brightness of the picture across the frame. A simple picture will make it more obvious:
The yellow bar at the bottom of the graph represents pixels that have brightnesses of 0 to 15. The yellow bar at the top of the graph represents pixels that have brightnesses from 236 to 255. You don't want the graph to extend into those areas. In the graph above you can see that the leftmost part of the image had a brightness of 0 -- the bright yellow line at the bottom left of the bottom yellow bar. The next block over has a brightness of 16, just above the yellow bar. In the actual video you can't see the difference between those two blocks -- they are both rendered as RGB=0. At the far right two lines fall above the 235 level. You can't see any difference in brightness of those blocks because they are both rendered as RGB=255. I'm not very proficient in AviSynth; I only do simple scripts copied from others.
Thanks again, your explanations are very simple and easy to understand. You should make some tutorials on the matter for others to benefit.