VideoHelp Forum
  1. It's something I'm a little confused about, even though I've spent a lot of time investigating. I usually capture from VHS/8mm tapes to Lagarith, interlaced YUY2, PAL 720x576, 25 fps, or from 8mm/Super 8 film using a Canon photo camera in MJPEG, 640x480, 30 fps progressive (I think YUY2 but I'm not sure; yes, I know that's not the best method, but that's not the point). I usually use VirtualDub in RGB and AviSynth in YUV, and I use a lot of noise reduction: Neat Video in VirtualDub, Convolution3D, Degrain and the others in AviSynth.



    So if, for example, I put a Lagarith YUY2 file in VirtualDub and use a filter with full processing mode, it doesn't matter what color depth I pick in the VirtualDub menu (one of the YCbCr or RGB types for decompression input and compressor output). If I choose Lagarith in "save as" (with YUY2 selected for compression in the Lagarith properties), doesn't that mean I have two conversions along the way (Lagarith YUY2 input > VirtualDub RGB internal / Neat Video > Lagarith YUY2 output)? And in that case I end up with a Lagarith YUY2 file again, right? So if I then put that file in Sony Vegas, TMPGEnc or VirtualDub, I get another conversion (Lagarith YUY2 > Vegas/TMPGEnc/VirtualDub RGB > DVD MPEG-2 YUV file), right? So in that case, isn't it better to choose RGB instead of YUY2 in the Lagarith properties when I encode from VirtualDub, so I preserve the RGB coming out of VirtualDub and have Lagarith RGB all the way to the MPEG-2 compression? More so, if I put that Lagarith RGB back into VirtualDub (again and again), doing a different type of filtering each time (I sometimes run Deshaker after Neat Video), I should end up with no further loss whatsoever, right (colorspace-wise)?
    Another thing that's confusing is the color depth menu in VirtualDub -- what does it actually do? As I mentioned before, if I have Lagarith YUY2 and choose YUY2 for both decompression and the compressor, does that mean I get yet another conversion? (I don't believe I preserve YUY2 that way, because all VirtualDub filters work in RGB, right? So four conversions, or what?)
    I know that some will suggest AviSynth all the way, but sometimes, if you use VirtualDub or whatever NLE program for cuts, transitions and other fancy things (most of which work in RGB), you will have to do a conversion somewhere in the chain. My question is: what is the least lossy way? Is the method Lagarith in (YUY2 in its properties), Lagarith out (RGB in its properties) a good choice?
    Can someone elaborate on this?
  2. Originally Posted by mammo1789 View Post
    If I choose Lagarith in "save as" (with YUY2 selected for compression in the Lagarith properties), doesn't that mean I have two conversions along the way (Lagarith YUY2 input > VirtualDub RGB internal / Neat Video > Lagarith YUY2 output)? And in that case I end up with a Lagarith YUY2 file again, right?
    Yes.

    Originally Posted by mammo1789 View Post
    So if I then put that file in Sony Vegas, TMPGEnc or VirtualDub, I get another conversion (Lagarith YUY2 > Vegas/TMPGEnc/VirtualDub RGB > DVD MPEG-2 YUV file), right?
    Yes.

    Originally Posted by mammo1789 View Post
    So in that case, isn't it better to choose RGB instead of YUY2 in the Lagarith properties when I encode from VirtualDub, so I preserve the RGB coming out of VirtualDub and have Lagarith RGB all the way to the MPEG-2 compression?
    Yes.

    Originally Posted by mammo1789 View Post
    More so, if I put that Lagarith RGB back into VirtualDub (again and again), doing a different type of filtering each time (I sometimes run Deshaker after Neat Video), I should end up with no further loss whatsoever, right (colorspace-wise)?
    Yes.

    Originally Posted by mammo1789 View Post
    Another thing that's confusing is the color depth menu in VirtualDub -- what does it actually do? As I mentioned before, if I have Lagarith YUY2 and choose YUY2 for both decompression and the compressor, does that mean I get yet another conversion?
    It depends. VirtualDub first asks the decompression codec for the chosen format. If the codec can't supply that format it will negotiate some other format then convert to the chosen format. So if you set the decompression colorspace to YUY2, and the codec can only output RGB, VirtualDub will get RGB from the decoder and convert to YUY2.

    I believe most of the color depth options are preparation for a more robust filter path later on. So someday VirtualDub will have internal filters that work directly in all the different color formats. Hopefully external filters will be updated too. Eventually you may be able to perform all your filtering in YUY2 or YV24 or whatever.

    A few of VirtualDub's internal filters will work in YUV colorspace. Brightness/Contrast for instance. On the main filters dialog enable the "Show image formats" option and you'll see what format was chosen for each filter.
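
    If you ever want to skip the RGB round trips entirely, the same kind of denoising can be kept in YUV with an AviSynth script along these lines (just a sketch -- the file name and the choice of denoiser are placeholders, not a recommendation):

    AviSource("capture.avi")          # Lagarith decodes straight to YUY2, no RGB round trip
    ConvertToYV12(interlaced=true)    # only needed if a planar-only YUV filter requires it
    # ... run a YUV-capable denoiser here instead of an RGB-only VirtualDub filter ...
    ConvertToYUY2(interlaced=true)    # back to YUY2 before saving to Lagarith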
  3. full processing = conversion to RGB
    copy stream, fast recompress = as is (so with YUV support)

    As far as I know.
    *** DIGITIZING VHS / ANALOG VIDEOS SINCE 2001**** GEAR: JVC HR-S7700MS, TOSHIBA V733EF AND MORE
  4. Originally Posted by themaster1 View Post
    full processing = conversion to RGB
    copy stream, fast recompress = as is (so with YUV support)
    That has changed. In full processing mode VirtualDub now only converts to RGB if you use a filter that requires RGB.
  5. Thanks guys

    A few of VirtualDub's internal filters will work in YUV colorspace. Brightness/Contrast for instance. On the main filters dialog enable the "Show image formats" option and you'll see what format was chosen for each filter
    jagabo, I checked and it says RGB32, which is even more confusing for me now. I thought VirtualDub works in RGB24 and the other 8 bits are a dummy alpha channel, but maybe I'm wrong.

    I think if you use fast recompress no filtering is actually done inside VirtualDub; it's used solely for AviSynth scripts.

    It depends. VirtualDub first asks the decompression codec for the chosen format. If the codec can't supply that format it will negotiate some other format then convert to the chosen format. So if you set the decompression colorspace to YUY2, and the codec can only output RGB, VirtualDub will get RGB from the decoder and convert to YUY2.
    Does that mean that with Lagarith or HuffYUV no conversion will occur, because both can work in YUY2 for decompression and compression?
    I think I read somewhere from you, jagabo, that it's better to capture in RGB or uncompressed so the conversion is done once inside the TV capture card, because it naturally works in YUV and the signal from analog (VCR or camera) is YUV as well.

    So the question is which is better: doing the RGB conversion at the start, inside the TV card, so you then have RGB to work with all the way to the MPEG-2 (or whatever) compression -- you can do filtering in VirtualDub or Sony Vegas, color correction, transitions, fancy intros, CG animations and so on, and stay in RGB land -- and when you decide to compress all that material you let MainConcept Reference or TMPGEnc do its conversion, then finally put that file in authoring software and burn the DVD without recompression.
    Am I correct? I know the best thing is not to do any conversion at all, but sometimes you must.

    So someday VirtualDub will have internal filters that work directly in all the different color formats. Hopefully external filters will be updated too. Eventually you may be able to perform all your filtering in YUY2 or YV24 or whatever.
    You are right. I don't understand why professional video editing packages don't work natively in YUV or RGB (I think Adobe Premiere has some filters and transitions in YUV, but I'm not sure -- can someone correct me?) so you can choose which one you want. It's a shame that only AviSynth got that right, and although it's a great program it has its own limitations.
  6. Originally Posted by mammo1789 View Post
    I checked and it says RGB32, which is even more confusing for me now. I thought VirtualDub works in RGB24 and the other 8 bits are a dummy alpha channel, but maybe I'm wrong.
    RGB32 is RGB24 with an 8 bit alpha channel (the alpha channel is ignored by VirtualDub).

    Originally Posted by mammo1789 View Post
    I think if you use fast recompress no filtering is actually done inside VirtualDub
    That is correct.

    Originally Posted by mammo1789 View Post
    It depends. VirtualDub first asks the decompression codec for the chosen format. If the codec can't supply that format it will negotiate some other format then convert to the chosen format. So if you set the decompression colorspace to YUY2, and the codec can only output RGB, VirtualDub will get RGB from the decoder and convert to YUY2.
    Does that mean that with Lagarith or HuffYUV no conversion will occur, because both can work in YUY2 for decompression and compression?
    If you specify YUY2 as the input colorspace then Lagarith and HuffYUV should provide YUY2 video to VirtualDub.


    Originally Posted by mammo1789 View Post
    I think I read somewhere from you, jagabo, that it's better to capture in RGB or uncompressed so the conversion is done once inside the TV capture card, because it naturally works in YUV and the signal from analog (VCR or camera) is YUV as well.
    No, you should capture YUV because that's what the analog video signal is. YUY2 is the closest to what's in analog video.

    Originally Posted by mammo1789 View Post
    So the question is which is better: doing the RGB conversion at the start, inside the TV card
    The capture card will crush super blacks and super whites if you let it convert to RGB. If your source has details in those areas you will lose them.

    Originally Posted by mammo1789 View Post
    so you then have RGB to work with all the way to the MPEG-2 (or whatever) compression -- you can do filtering in VirtualDub or Sony Vegas, color correction, transitions, fancy intros, CG animations and so on, and stay in RGB land -- and when you decide to compress all that material you let MainConcept Reference or TMPGEnc do its conversion, then finally put that file in authoring software and burn the DVD without recompression.
    Am I correct?
    In the case where all the tools work in RGB: if the video has super blacks and super brights, I would capture as YUY2, adjust the levels, then convert to RGB for the rest of the filter chain. If the source doesn't have super blacks and super brights you can let the capture driver/software convert to RGB.
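
    For the first case, a minimal AviSynth sketch of that order of operations could look like this (the file name and the Levels() numbers are only placeholders -- the right values depend on your capture):

    AviSource("capture.avi")                    # still YUY2 here, levels untouched
    Histogram()                                 # check the yellow bands first, then remove this line
    Levels(0, 1.0, 255, 16, 235, coring=false)  # example only: squeeze 0-255 back into 16-235
    ConvertToRGB32()                            # now hand it to the RGB-only filters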

    Attached is a HuffYUV YUY2 video that's all in super blacks. See if you can get any detail out of it.
    Last edited by jagabo; 17th Nov 2011 at 22:23.
  7. if the video has super blacks and super brights
    I never seem to understand levels completely. As I understand it, the histogram should normally look like a wide coast, and the far left and far right -- the most extreme black and white -- should be at zero, right? So you are saying that if there is signal in those areas that should be zero, then when I convert I will lose it, because the conversion will "delete" those peaks and the information will be lost? And if I'm understanding correctly, if there is no signal in those areas then nothing will be lost. Is that correct?

    I tried to open the file in Media Player Classic and Windows Media Player and it shows an error. I have HuffYUV, I'm sure.

    [Attached images: ScreenShot002.jpg, ScreenShot001.jpg]

    I opened it in VirtualDub OK.
    Attached is a HuffYUV YUY2 video that's all in super blacks. See if you can get any detail out of it.
    How do I know if I have detail in super blacks and super brights?!
    Last edited by mammo1789; 18th Nov 2011 at 06:02.
  8. Originally Posted by mammo1789 View Post
    if the video has super blacks and super brights
    I never seem to understand levels completely
    In digital YUV video the brightness of the picture is encoded in the Y channel. With 8 bits per channel YUV the Y values can range from 0 to 255. Full black is designated as Y=16, full bright as Y=235. You should not routinely have image data in the ranges from 0 to 15 (super blacks) and 236 to 255 (super brights). When displayed on a properly calibrated TV you won't see any difference between Y=0 and Y=16 -- everything in that range will be the same shade of black. The same happens at the bright end: Y=235 and Y=255 will be the same shade of bright.

    On computer monitors black is defined as RGB=0,0,0, and full white as RGB=255,255,255. So most programs stretch the contrast range from Y=16-235 to RGB=0-255 when they convert YUV to RGB. All pixels whose Y values range from 0 to 16 will end up with the same RGB value, 0,0,0. All pixels whose Y values range from 235 to 255 will end up with the same RGB value, 255,255,255. So any details in those regions will be gone, unrecoverable.

    Conversely, when programs convert RGB to YUV they squeeze the contrast range of RGB 0-255 down to YUV 16-235. In this case, since there can be no super blacks or super brights in the RGB data (ie, there can be no RGB values less than 0 or greater than 255), there will be none in the YUV data.
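
    To put rough numbers on it, the stretch is just a linear remap of the luma (chroma is rescaled similarly):

    RGB = (Y - 16) * 255 / 219    (so Y=16 -> 0, Y=235 -> 255, and Y=8 would come out negative, clipped to 0)
    Y = RGB * 219 / 255 + 16      (so RGB=0 -> 16 and RGB=255 -> 235, which is why no super blacks or brights come back)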

    This contrast stretch is fine with properly captured video. There will be no significant part of the picture with super blacks and super brights. But if your capture card or VHS deck aren't well calibrated there often will be super blacks and super brights in the caps. So it's important to fix the levels while the video is still in YUV.

    Originally Posted by mammo1789 View Post
    As I understand it, the histogram should normally look like a wide coast, and the far left and far right -- the most extreme black and white -- should be at zero, right?
    Which histogram are you referring to? AviSynth's Histogram() shows the Y channel and marks the super black and super bright areas in yellow. The ColorTools filter in VirtualDub works after conversion to RGB so it can't tell you if there are any super blacks or super brights.

    Here's the darks video with AviSynth's Histogram():

    [Attached image: avisynth.jpg]

    You can see that all the Y values are in the 0-15 range (the left yellow bar). Note that this isn't really a histogram, it's a waveform monitor -- the default mode for Histogram(). You can force a true histogram with Histogram(mode="levels"):

    [Attached image: mode.jpg]

    In this mode you get true histograms of the Y, U, and V channels.

    With the ColorTools filter in VirtualDub:

    [Attached image: vdub.jpg]

    The top graph shows a true histogram of the luma channel. You can't tell if the values were all 16 or if they ranged from 0 to 16. This is because VirtualDub converted YUV to RGB (losing the super blacks and super brights) and then ColorTools converted back to YUV to show a histogram of the luma channel. So all you have is a big peak at Y=16.

    Originally Posted by mammo1789 View Post
    So you are saying that if there is signal in those areas that should be zero, then when I convert I will lose it, because the conversion will "delete" those peaks and the information will be lost? And if I'm understanding correctly, if there is no signal in those areas then nothing will be lost. Is that correct?
    Yes.

    Originally Posted by mammo1789 View Post
    I tried to open the file in Media Player Classic and Windows Media Player and it shows an error. I have HuffYUV, I'm sure...

    I opened it in VirtualDub OK.
    Media players require DirectShow codecs. HuffYUV is a VFW codec. VirtualDub uses VFW codecs. That explains why you can open the video in VirtualDub but not in a media player. If you have ffdshow installed, enable its DirectShow HuffYUV decoder.

    How do I know if I have detail in super blacks and super brights?!
    Use AviSynth's Histogram() while the video is still in YUV. There is another filter called VideoScope() in AviSynth; it can show the Y levels too. The following post shows an example of a video captured with super darks and super brights, and the improved picture after adjusting the levels:

    https://forum.videohelp.com/threads/340808-Capturing-from-old-VHS-and-improving-quality...=1#post2121148

    VideoScope() doesn't mark Y=16 and Y=235. I added the yellow lines to mark them manually in the first image.
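
    If it helps, the whole check is only a couple of lines in AviSynth, run before anything converts to RGB (the file name here is just a placeholder):

    AviSource("capture.avi")        # Lagarith/HuffYUV capture, still YUY2 at this point
    Histogram()                     # waveform view; the yellow bands mark Y<16 and Y>235
    # Histogram(mode="levels")      # or use this line instead for true Y/U/V histograms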
    Last edited by jagabo; 18th Nov 2011 at 07:51.
  9. Thanks, jagabo, for the thorough and understandable explanation.

    Which histogram are you referring to? AviSynth's Histogram() shows the Y channel and marks the super black and super bright areas in yellow. The ColorTools filter in VirtualDub works after conversion to RGB so it can't tell you if there are any super blacks or super brights.
    You are right -- I probably always used the histogram in VirtualDub, which is why it didn't show the color anomalies. I thought the captures recorded under Windows 7 (due to the lack of proc amp controls in VirtualDub capture levels) were fine, but I was wrong. First I opened a video recorded in the dark in London and I got this, then I opened a video recorded in Brighton during the day and it was almost the same deal.

    [Attached images: ScreenShot003.png, ScreenShot002.png, ScreenShot001.png]

    I see now that I have signal in the 0-16 and 235-255 ranges, which means I will lose it.
    I tried the script from the other post:
    AviSource("filename.avi")
    ColorYUV(cont_y=-36, off_y=-2)
    and it didn't help to take the signal out of the super blacks and super brights.
    I will upload short clips to see how it works.

    Thanks for the help.
  10. Don't use Info() before Histogram(). Otherwise the histogram will include the text that Info() writes into the frame. I didn't see significant image data in the yellow bars of the histograms in your sample images.

    It's more traditional for waveform monitors to be horizontal, not vertical. You can get that in AviSynth with:

    TurnRight().Histogram().TurnLeft()
    You'll get the graph at the top.
  11. I uploaded two files, 10 seconds each, night and day variants. Can you help, jagabo?
    http://www.mediafire.com/?c6qcql7ykqt3ucx,j212ig41q568ym3

    http://www.mediafire.com/?c6qcql7ykqt3ucx

    http://www.mediafire.com/?j212ig41q568ym3

    And here is a picture with your suggestion. Should this white smog be there or not? I'm confused.
    [Attached image: ScreenShot004.png]
    Are the captures OK? They are raw Lagarith YUV files without any intervention. I planned to put them through Neat Video in VirtualDub, maybe some noise reduction in AviSynth plus ColorMill and acwob, edit them in Sony Vegas (the output file should be Lagarith RGB), compress them in MainConcept or TMPGEnc and author them to DVD.

    I didn't see significant image data in the yellow bars of the histograms in your sample images.
    In the other post the yellow lines are thin, but here they are bold. Is it possible that I can't see the white fog crossing into the super blacks and super brights because it's hidden behind the bold yellow bars, or not?
    Last edited by mammo1789; 18th Nov 2011 at 11:45.
  12. I think you don't really understand what a waveform monitor is. It's a graph of the brightness of the picture across the frame. A simple picture will make it more obvious:

    [Attached image: steps.jpg]

    On the bottom is the video (640x36 frame size), on the top the graph. Every scan line of the video is the same in this image so think of it as a single scan line of pixels. The vertical axis of the graph represents the brightness of the pixels on the scanline below it. The bottom of the graph indicates dark pixels, the top of the graph bright pixels. The height of the graph is 256 lines -- one for each possible intensity of the pixels (0-255). So the brightness of the pixels has been transformed into the height of the line on the graph.

    [Attached image: steps2.jpg]

    The waveform monitor graph you get from Histogram() is the sum of the graphs of every scan line from the image. The brightness of each spot of the graph represents how many of the pixels of the picture below it had that brightness. If a spot of the graph is very dim few pixels had that brightness. If the spot is very bright many pixels had that brightness.

    The yellow bar at the bottom of the graph represents pixels that have brightnesses of 0 to 15. The yellow bar at the top of the graph represents pixels that have brightnesses from 236 to 255. You don't want the graph to extend into those areas. In the graph above you can see that the leftmost part of the image had a brightness of 0 -- the bright yellow line at the bottom left of the bottom yellow bar. The next block over has a brightness of 16, just above the yellow bar. In the actual video you can't see the difference between those two blocks -- they are both rendered as RGB=0. At the far right two lines fall above the 235 level. You can't see any difference in brightness of those blocks because they are both rendered as RGB=255.

    In the sample image you posted the black background has generated the fuzzy white line just above the bottom yellow bar of the graph. Just about where you want your black level to be. The bright face of the clock has generated the dim white bar a little below the top yellow bar. That is fine. None of your image has resulted in dots in the yellow bars -- none of the pixels in your image are below Y=16 and none are above Y=235.

    [Attached image: ben.jpg]
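
    If you want to see how the waveform reacts without using a capture, you can build a little step wedge like the one above right in AviSynth (just a sketch; the sizes and the chosen levels are arbitrary):

    a = BlankClip(length=100, width=64, height=64, pixel_type="YUY2", color_yuv=$008080)  # Y=0, super black
    b = BlankClip(length=100, width=64, height=64, pixel_type="YUY2", color_yuv=$108080)  # Y=16, legal black
    c = BlankClip(length=100, width=64, height=64, pixel_type="YUY2", color_yuv=$808080)  # Y=128, mid grey
    d = BlankClip(length=100, width=64, height=64, pixel_type="YUY2", color_yuv=$EB8080)  # Y=235, legal white
    e = BlankClip(length=100, width=64, height=64, pixel_type="YUY2", color_yuv=$FF8080)  # Y=255, super bright
    StackHorizontal(a, b, c, d, e)
    TurnRight().Histogram().TurnLeft()   # horizontal waveform stacked on top of the wedge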
    Last edited by jagabo; 18th Nov 2011 at 12:54.
  13. I think you don't really understand what a waveform monitor is. It's a graph of the brightness of the picture across the frame. A simple picture will make it more obvious:

    The yellow bar at the bottom of the graph represents pixels that have brightnesses of 0 to 15. The yellow bar at the top of the graph represents pixels that have brightnesses from 236 to 255. You don't want the graph to extend into those areas. In the graph above you can see that the leftmost part of the image had a brightness of 0 -- the bright yellow line at the bottom left of the bottom yellow bar. The next block over has a brightness of 16, just above the yellow bar. In the actual video you can't see the difference between those two blocks -- they are both rendered as RGB=0. At the far right two lines fall above the 235 level. You can't see any difference in brightness of those blocks because they are both rendered as RGB=255.
    I got it now. I saw the line in the other sample (post), and there the line was thin while here it was bold, so I thought that even if I had scan lines in there they wouldn't be visible over the bold yellow bar. I'm not very proficient in AviSynth; I only do simple scripts copied from others.

    Thanks again. Your explanations are very simple and easy to understand; you should make some tutorials on the matter so others can benefit.


