VideoHelp Forum
  1. While some TBC circuits can make the video look worse, that is the exception. In general, a good TBC can make a massive improvement in the quality of your captures. By comparison, the difference in compression artifacts between codecs, or the ability to change the analog gain by using a proc amp (which, as you are finding out, almost always leads to problems) is roundoff error.

    Simply put: TBC = Better Result.

    Hopefully most of your tapes are not badly exposed. If so, you just want to capture them in the most efficient way possible. If you make the workflow too complicated, and you constantly have to be adjusting knobs and dials, you will ALWAYS eventually forget some setting, probably won't catch your mistake for a long time, and will either have to live with the lousy capture or go back and re-do hours and hours of work.

    KISS.
  2. Should I use the ES10 between 8mm camcorder and capture device? Or is it mainly just helpful for VHS captures?
  3. I am quite sure that I read somewhere not to use something like an ES-10 unless you try first without it and experience issues. I’m trying my first full 2 hour capture right now with the 8mm camcorder. It’s been about 20 minutes so far and VirtualDub isn’t reporting any dropped frames yet. I am not quite sure what other types of issues I should be looking for. If it looks good upon playback afterwards, I assume the ES10 wouldn’t be of much use, and it could also do some negative things to the video quality (posterization), if what I’ve read is correct.
  4. It's interesting that you mention dropped frames. The only capture system I've ever used that never, even once, gave me a dropped frame, was DV.

    As I remember, VirtualDub is not the greatest client to use when capturing, specifically because it was tough to tune to eliminate dropped frames. Others can chime in with advice, should you find you have dropped frames.

    I would definitely recommend that you take the middle of one of your first captures, before you get too deep into the project, and put it on the timeline in your NLE. Then walk through the video frame by frame, for a few hundred frames, to look for duplicate frames. Dups are easy to spot (I have an AviSynth script that finds them automatically). Even though you are looking for drops, almost all capture software, when it does drop a frame, will insert a duplicate a few frames later in order to keep the audio in sync. Since looking for the dups is easy, whereas spotting a drop can be tough even when there is a lot of movement, and near-impossible in low-motion scenes, I always look for dups.
  5. As for whether to use a capture system with TBC somewhere in the chain, I already answered that: it is almost always better. The only reason I insert "almost" is that some TBC circuits are actually quite poor. Lordsmurf has some details about this over at DigitalFAQ.com.

    Rather than capturing hours of video before you get your capture chain perfected, if I were you, I'd be doing all sorts of 30-second capture tests, using various permutations and combinations of the hardware and software you have.
  6. Originally Posted by jagabo View Post
    Here's an example batch file for ffmpeg:

    Code:
    "G:\program files\ffmpeg64\bin\ffmpeg.exe" ^
        -i %1 ^
        -c:v libx264 -preset slow -crf 18 -g 50 ^
        -profile:v high -level:v 4.2 -colorspace bt709 -color_range tv ^
        -acodec ac3 ^
        "%~dpn1.avc.mkv"
    pause
    Jagabo, in another thread you provided me with the above sample ffmpeg code for drag and drop. It was someone else's thread and was off topic so I'm bringing it here to ask-

    Using the code above with my input being an AVS file that runs QTGMC in "slower" mode (with the source file inside the AVS script being my HuffYUV original capture), what kind of speeds should I expect to see in the cmd window as ffmpeg processes all of this? I'm showing 0.217x and fps=13. This seems slow, but I have no point of comparison so it might be normal - I suspect this will take a long time for a long video. Is there a way to interpret the 0.217x to know roughly how long the encode will take compared to the length of the video? (Is 1.0x basically real time?) I'm on a Core i5 Windows 7 desktop and I included some code in my AVS script to tell it how many cores, processors, threads to use etc, and I want to make sure I didn't screw that up. My CPU usage is only at 65-75%, so I'm suspecting that I did. I was expecting CPU usage to be near 100%.
  7. Originally Posted by Christina View Post
    I was under the impression I should keep interlaced video interlaced as long as possible, including in any intermediate/archival copies
    Certainly keep copies of your interlaced source. Some day something better than QTGMC will come along and you may want to convert again. But in most cases QTGMC has better deinterlacing than any software or hardware deinterlacer. Some of the old arguments against deinterlacing no longer apply. For example, most older deinterlacers converted 30i to 30p, losing half the motion; using a 30i-to-60p deinterlacer overcomes that issue. A long time ago many devices couldn't handle 60p playback, but that's no longer an issue for SD video (with the exception of DVD players). And since the visual quality of QTGMC is superior to realtime playback deinterlacers, that's no longer an issue either. And, as mentioned earlier, filtering works better with progressive video than it does with interlaced video.
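
    As a rough illustration, a minimal AviSynth sketch of that kind of 30i-to-60p QTGMC deinterlace (the file name, field order and preset here are placeholders; match them to your own capture):

    Code:
    # Hypothetical example: double-rate deinterlace with QTGMC
    AVISource("e:\capture.avi")        # interlaced 29.97i capture
    AssumeTFF()                        # or AssumeBFF(), depending on the capture device
    ConvertToYV12(interlaced=true)     # QTGMC wants planar YUV input
    QTGMC(Preset="Slower")             # outputs 59.94p, one frame per field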

    Regarding interlaced filtering, most interlace aware filters do something like AviSynth's

    Code:
    SeparateFields()
    Filter()
    Weave()
    SeparateFields() turns each field of the source into a half-height frame. So interlaced scan lines:

    Code:
    0
    1
    2
    3
    4
    ...
    476
    477
    478
    479
    become:

    Code:
    0
    2
    4
    ...
    476
    478
    and
    Code:
    1
    3
    ...
    477
    479
    Notice how scan lines which previously were not next to each other now are (0,2,4...). Any vertical spatial filtering will have information crossing two previously non-adjacent lines rather than just the adjacent line. The bad effects of this can be seen with a simple example using Blur():

    [Attachment 54765]


    Be sure to view the image full size. On the left is the original video, in the middle is the result of Blur(1.0), on the right is the result of SeparateFields().Blur(1.0).Weave(). Notice the ugly artifacts of the latter?
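
    To reproduce that comparison yourself, a rough AviSynth sketch (the source file name is just a placeholder):

    Code:
    # Hypothetical example: original vs. full-frame blur vs. field-separated blur
    src = AVISource("e:\capture.avi")
    a = src                                       # original
    b = src.Blur(1.0)                             # blur applied to the whole frame
    c = src.SeparateFields().Blur(1.0).Weave()    # blur applied across separated fields
    StackHorizontal(a, b, c)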

    A similar problem occurs with temporal filters. Notice how scan line 0 is now in the same vertical position as scan line 1, 2 is in the same position as 3, etc. Any temporal filter now also has information crossing from even to odd scan lines.

    Another way interlace aware filters can work is by separating the fields, filtering all the even fields separately from the odd fields, then weaving them back together:

    Code:
    SeparateFields()
    even = SelectEven().Filter()
    odd = SelectOdd().Filter()
    Interleave(even, odd)
    Weave()
    This has the same problem with spatial filters. It has a different problem with temporal filters: instead of filtering at 1/60 second intervals it filters at 1/30 second intervals -- fields which were previously two fields apart (1/30 second) are treated as if they are temporally adjacent (1/60 second).

    Originally Posted by Christina View Post
    in case I want (for some reason) to output as interlaced at some point in the future. And then as the last step, deinterlace for final output file only.
    I would just use two different scripts. One for interlaced output, one for progressive output.

    Originally Posted by Christina View Post
    I also would need to do a bit of research to understand which filters work "better" on progressive (should be fairly easy to figure out which don't work on interlaced at all). So deinterlacing early on would solve that problem...
    Pretty much any filter with a spatial or temporal component will work better with progressive material. Those without a spatial or temporal component are agnostic -- they work just as well with progressive and interlaced material. Something like ColorYUV(), for example, only deals with individual pixels, so it works fine either way. Cropping can be done with either (with interlaced YV12 vertical cropping must be done on mod4 boundaries, mod2 with progressive YV12).
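
    A small illustration of that cropping restriction (the crop amounts here are arbitrary examples):

    Code:
    # Interlaced YV12: top/bottom crop amounts must be multiples of 4
    Crop(8, 4, -8, -4)      # fine for interlaced YV12
    # Crop(8, 2, -8, -2)    # only fine for progressive YV12 (mod2)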

    Originally Posted by Christina View Post
    I have an S-VHS deck and I believe it has some type of LTBC built in.
    Because of the dark, noisy, shaky video it's a little hard to tell for sure, but the caps you have been posting show scan lines bent and wiggling left and right. That's an indication the line TBC isn't working. See if you can find a still shot with sharp vertical lines/edges (dark telephone poles against the sky, for example) and see how straight the lines are. The issue is more obvious if you SeparateFields() or Bob(). I think you'll see the vertical lines are wavy:

    https://forum.videohelp.com/threads/319420-Who-uses-a-DVD-recorder-as-a-line-TBC-and-w...er#post1983288

    Originally Posted by Christina View Post
    I have an ES10. I haven't tried using it yet because I was working on some 8mm and not VHS. I've heard not to use it unless you see problems first. Is that true or should I just automatically use it if I'm capturing VHS? Can it do anything bad if it's not needed or can it only help?
    8mm tape has the same problem as VHS: the helical scan head cannot spin at a perfectly constant speed, resulting in individual scan lines moving left/right and having different lengths. A line TBC reduces those problems. As with all processing devices you have to weigh the benefits against the drawbacks and decide for yourself. I don't have an ES10 but the setup is probably similar to the ES15 (which I do have, sitting in the closet because I don't capture analog sources anymore). Be sure the input and output levels are set up for best operation. That will reduce posterization problems (which aren't really a problem with noisy caps anyway). See this post and the one following it:

    https://forum.videohelp.com/threads/380285-Where-did-I-go-wrong-What-am-I-missing#post2460874
  8. Originally Posted by johnmeyer View Post
    It's interesting that you mention dropped frames. The only capture system I've ever used that never, even once, gave me a dropped frame, was DV.

    As I remember, VirtualDub is not the greatest client to use when capturing, specifically because it was tough to tune to eliminate dropped frames. Others can chime in with advice, should you find you have dropped frames.

    I would definitely recommend that you take the middle of one of your first captures, before you get too deep into the project, and put it on the timeline in your NLE. Then walk through the video frame by frame, for a few hundred frames, to look for duplicate frames. Dups are easy to spot (I have an AviSynth script that finds them automatically). Even though you are looking for drops, almost all capture software, when it does drop a frame, will insert a duplicate a few frames later in order to keep the audio in sync. Since looking for the dups is easy, whereas spotting a drop can be tough even when there is a lot of movement, and near-impossible in low-motion scenes, I always look for dups.
    I didn’t drop any frames in about a 90 minute capture but it did insert 9.. I was watching it as it was capturing and it seemed to do that in the dead space when it was switching between events. If you care to share your script to find duplicate frames I can run it on my capture and see where exactly they are, if any. Thanks.
  9. Jagabo- thank you again for the very detailed reply and the links to the older threads. I read through all and it definitely helps.

    I did my first capture without the ES10 and to my untrained eye I don’t see any glaring issues. No wavy lines or edges, no dropped frames and no audio sync issues. I can’t right now but maybe early next week I’ll post a short sample from that capture and you can tell me if you see any of the telltale signs that a TBC is needed or would help.

    For now, the main issue I seem to be having is that the video just looks dull. It lacks depth and pop, even for an old home movie, and even when compared with the old DV capture which seems more vibrant in comparison with zero color correction or levels processing.

    I tried playing around with levels(), coloryuv() and tweak() but can’t seem to get it to look how I want. I don’t know that I would think it was that bad if I didn’t have the DV capture to compare it to, which in theory has a worse color space, so I would expect the opposite result. So it must be something I am doing. (I will say that the finished processed deinterlaced file of the new capture, when viewed on my tv, shows a MASSIVE improvement in other areas when compared to the DV capture. Much cleaner, clearer, better quality. It’s just the colors and contrast I can’t seem to get right.)

    I never took my video out of the YUV color space - all edits were done in AviSynth and the only conversion I did was to YV12 for QTGMC. Never brought it into VirtualDub or converted to RGB as far as I know. And then I used Jagabo’s sample ffmpeg x264 batch script which specifies color range tv and bt709 (I think, from memory). Do you think my issue is with the proc amp settings during capture that would make the colors look washed out? Again, I will post a sample as soon as I can, as I’m sure you can’t answer that question without seeing it.
  10. Originally Posted by Christina View Post
    I didn’t drop any frames in about a 90 minute capture but it did insert 9.. I was watching it as it was capturing and it seemed to do that in the dead space when it was switching between events. If you care to share your script to find duplicate frames I can run it on my capture and see where exactly they are, if any. Thanks.
    Here is a script that will write to a text file the frame number of each duplicate frame. There are various options you can choose by uncommenting code, such as outputting ALL duplicate frame numbers, or only writing the first dup frame number when there is a string of duplicates in a row.

    Most capture cards create perfect duplicates, so you can leave the "blankthreshold" value between 0 (perfect dups) and 1. The YDifference values between frames are usually quite high (>10 for sure), so any number between 0 and 1 will require that the adjacent frames be virtually identical.

    Once you find duplicates, go back and forth on the timeline from each duplicate and I'll bet you detect an abnormal jump in the motion, indicating a dropped frame. As I said in an earlier post, jumps are sometimes very difficult to detect (although you'll sense them when you watch the video), but dups are easy to spot.

    Code:
    #This script finds duplicate frames and outputs the frame numbers
    loadPlugin("c:\Program Files\AviSynth 2.5\plugins\dgdecode.dll")
    
    #Set this number higher to find more "duplicates". Zero finds only perfect dups.
    global blankthreshold=1
    
    filename = "e:\output_duplicate_frames.txt"
    AVISource("e:\fs.avi").killaudio()
    i=AssumeBFF.ConvertToYV12
    
    #This line below will output EVERY frame that is below threshold, which results in LOTS of frames
    #Normally you don't do this, but it is included for those who want this capability.
    #WriteFileIf(last, filename,  "(YDifferenceFromPrevious(i)<=blankthreshold)", "current_frame+1", append = false)
    
    #The line below writes the FIRST frame that falls below the threshold
    WriteFileIf(last, filename,  "(YDifferenceFromPrevious(i)>blankthreshold)&&YDifferenceToNext(i)<=blankthreshold", "current_frame", append = false)
    
    #Use this instead of WriteFile in order to determine blankthreshold
    #ScriptClip("Subtitle(String(YDifferenceFromPrevious(i)))")
  11. Originally Posted by Christina View Post
    For now, the main issue I seem to be having is that the video just looks dull. It lacks depth and pop, even for an old home movie, and even when compared with the old DV capture which seems more vibrant in comparison with zero color correction or levels processing.
    That shouldn't be the case. Here's a field (Bob()) of "DV sample capture.mov" and "check levels.avi" side by side with no adjustments (dv left, huffyuv right):

    [Attachment 54779]


    The huffyuv cap has slightly higher black levels and slightly lower white levels. A small ColorYUV(cont_y=20, off_y=2) adjustment to the huffyuv cap makes the levels nearly identical:

    [Attachment 54778]


    There are some slight color differences, but those could be adjusted too.
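
    A rough sketch of how this kind of side-by-side levels check can be set up in AviSynth (the source handling and Trim range are placeholders, and the DV clip may need an explicit AssumeBFF()):

    Code:
    # Hypothetical side-by-side comparison of the two captures
    dv  = DirectShowSource("DV sample capture.mov")
    dv  = dv.ConvertToYV12(interlaced=true).Bob().Trim(0, 299)
    huf = AVISource("check levels.avi")
    huf = huf.ConvertToYV12(interlaced=true).Bob().Trim(0, 299)
    huf = huf.ColorYUV(cont_y=20, off_y=2)    # the small contrast/offset lift described above
    StackHorizontal(dv, huf)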
  12. Originally Posted by jagabo View Post
    Originally Posted by Christina View Post
    For now, the main issue I seem to be having is that the video just looks dull. It lacks depth and pop, even for an old home movie, and even when compared with the old DV capture which seems more vibrant in comparison with zero color correction or levels processing.
    That shouldn't be the case.
    That’s what I was thinking. I did a different capture of a different tape that had better lighting than the one you’re referencing, and that’s what I’m referring to. It was Christmas and we had someone dressed as Santa.. the reds in the outfit were a bit muted compared with the old DV capture which surprised me. I can’t post til Monday or Tuesday but I’ll share a part of it when I can and maybe you can figure out where I went wrong.
  13. Originally Posted by Christina View Post
    I did a different capture of a different tape that had better lighting than the one you’re referencing, and that’s what I’m referring to. It was Christmas and we had someone dressed as Santa.. the reds in the outfit were a bit muted compared with the old DV capture which surprised me. I can’t post til Monday or Tuesday but I’ll share a part of it when I can and maybe you can figure out where I went wrong.
    My guess is it's just a hue/saturation difference, also easily adjusted with the capture device's proc amp, or in software later.
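
    In AviSynth that kind of touch-up is typically a one-liner; the values below are arbitrary placeholders, not a recommendation for this particular capture:

    Code:
    # Hypothetical saturation/hue touch-up
    Tweak(sat=1.15, hue=0.0, coring=false)    # ~15% more saturation, hue unchanged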
  14. Originally Posted by johnmeyer View Post
    Here is a script that will write to a text file...
    Finally getting around to look at this. Thanks for providing. Maybe this is a dumb question but how do you "run" an AVS script that outputs to a text file? I've got it set up for my video. Now what?

    Originally Posted by jagabo
    My guess is it's just a hue/saturation difference, also easily adjusted with the capture device's proc amp, or in software later.
    I attached a part of my last capture that looked washed out to me ("8mm_27_1995_03_sample.avi"). With Histogram() I can see that the black level looks too high but when I was setting up the histogram in Virtual Dub, I raised it juuust enough so that there was no red on the left end (and yes I temporarily cropped off my black borders first, so not sure how they ended up being so high). Also this was the actual part of the tape playing when I set up my capture.

    I would really like to get the capture as good (i.e. close to source) as possible so I'm not having to try to correct everything afterwards, but I've read not to mess too much with other proc amp settings besides brightness and contrast (and maybe turning sharpening down). Any other tips (besides increasing saturation in proc amp)? Is my too-high black level the cause of the wishy washy colors? PS. The colors actually don't look bad on my monitor, but they look dull on my tv (especially when compared to my original DV conversion, as I mentioned, when viewed on same TV).

    A sample from the final converted file I am viewing on my tv is also attached ("8mm_27_1995_03a sample.avc.mp4"). This is after my attempt at corrections on the attached AVI capture with AviSynth and then converting using your sample ffmpeg batch script, which I left basically as-is - copied below.

    Side note, any idea why the converted h264/mp4 file after using this batch file will not open in MPEG Streamclip? I was able to use Avidemux to extract the sample, but I tried MPEG Streamclip first and it said it's an unrecognized file type. MPEG Streamclip has no problem opening the mp4 sample I uploaded here, after extracting it with Avidemux, but it won't open the original full-length file. Is there something weird in the header? I attached a media info export of the full-length file that won't open in MPEG Streamclip.

    Side note 2, I used your sample ffmpeg batch file almost exactly as you provided it (code I used is below) because I don't know all the necessary settings and flags enough at this point for h264, as I was using Handbrake before. I know you were just providing a sample to show me how to convert via drag and drop, but is this OK to actually use for my conversions? Or are there some settings I should look into in order to better understand my options? I did look up each of the flags you used so I knew what they were doing and it all seemed fine to me, but not sure if there are other flags I should use as well.

    Code:
    "C:\Program Files (x86)\ffmpeg32\bin\ffmpeg.exe" ^
        -i %1 ^
        -c:v libx264 -preset slow -crf 16 -g 50 ^
        -profile:v high -level:v 4.2 -colorspace bt709 -color_range tv ^
        -acodec ac3 ^
        "%~dpn1.avc.mp4"
    pause
    THANK YOUUUUUUU!
  15. Originally Posted by Christina View Post
    how do you "run" an AVS script that outputs to a text file? I've got it set up for my video. Now what?
    I usually open the script in VirtualDub(2) and select File -> Run Video Analysis Pass. That will execute the script frame by frame as fast as possible.

    Originally Posted by Christina View Post
    I attached a part of my last capture that looked washed out to me ("8mm_27_1995_03_sample.avi"). With Histogram() I can see that the black level looks too high but when I was setting up the histogram in Virtual Dub, I raised it juuust enough so that there was no red on the left end (and yes I temporarily cropped off my black borders first, so not sure how they ended up being so high). Also this was the actual part of the tape playing when I set up my capture.
    Are you sure you cropped all the black borders? There's more than the usual amount on the right edge. There's also a little black at the bottom of the frame in the head switching noise. Also, I believe the capture mode histogram display is logarithmic. So small peaks appear much larger than they really are. And I'm not absolutely sure VirtualDub calculates the histogram after cropping. I don't have any analog capture devices set up to test anymore. As a quick test, try cropping large amounts (say, 32 from the left and right, 16 from the top and bottom) from all four sides and comparing the histogram with and without the cropping.

    But in general, this is why I don't like histograms for checking levels -- you don't know what parts of the picture are out of spec. Looking at a waveform monitor of your capture in AviSynth:

    [Attachment 54820]


    You can see that only the black overscan bars are near y=16 (that's why I was thinking you may not have cropped enough, or the histogram is pre-cropping). The part of the picture you care about doesn't have full blacks (and it's not just this frame; the entire clip never approaches full black).
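
    One way to get this kind of waveform display yourself in AviSynth (the file name and crop amounts are placeholders):

    Code:
    # Hypothetical waveform check
    AVISource("e:\capture.avi")
    ConvertToYV12(interlaced=true)
    # Crop(16, 8, -16, -8)                 # optionally exclude the black overscan borders first
    TurnRight().Histogram().TurnLeft()     # per-column luma waveform appended to the frame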

    Originally Posted by Christina View Post
    I would really like to get the capture as good (i.e. close to source) as possible so I'm not having to try to correct everything afterwards, but I've read not to mess too much with other proc amp settings besides brightness and contrast (and maybe turning sharpening down).
    Many capture devices actually capture at higher resolution and with more bit depth than the final 720x480 8 bit output (for example 1440x480, 10 bit). With those it makes a lot of sense to adjust using the device's proc amp -- you get better precision in the final output. In any case, getting a cap that's reasonably close to the right levels, saturation, and hue reduces the need for further adjustments in post. The biggest thing you need to worry about is not crushing the lightest and darkest parts of the picture. Once they are crushed they can't be restored.

    Originally Posted by Christina View Post
    Any other tips (besides increasing saturation in proc amp)? Is my too-high black level the cause of the wishy washy colors? PS. The colors actually don't look bad on my monitor, but they look dull on my tv (especially when compared to my original DV conversion, as I mentioned, when viewed on same TV).
    I haven't seen the DV conversion so I don't know exactly what's different about it.

    Originally Posted by Christina View Post
    A sample from the final converted file I am viewing on my tv is also attached ("8mm_27_1995_03a sample.avc.mp4"). This is after my attempt at corrections on the attached AVI capture with AviSynth and then converting using your sample ffmpeg batch script, which I left basically as-is - coped below.
    The video looks reasonable. There's one obvious problem though, the video is encoded as rec.709 (bt709 in the ffmpeg command line) which is wrong for your SD video. It should be rec.601 (smpte170m on the ffmpeg command line). That will cause the reds to be dark and undersaturated on a TV (assuming a player is following the flagged matrix).

    Originally Posted by Christina View Post
    Side note, any idea why the converted h264/mp4 file after using this batch file will not open in MPEG Streamclip?
    I suspect it's the AC3 audio. That wasn't part of the mp4 spec until a few years ago and the last Mpeg Streamclip appears to be about 8 years old. Try changing the audio to aac. You may also want to specify the audio bitrate:

    Code:
    "C:\Program Files (x86)\ffmpeg32\bin\ffmpeg.exe" ^
        -i %1 ^
        -c:v libx264 -preset slow -crf 16 -g 50 ^
        -profile:v high -level:v 4.2 -colorspace smpte170m -color_range tv ^
        -acodec aac -b:a 160k ^
        "%~dpn1.avc.mp4"
    pause
    I don't have time to fully address it now but I saw you asked about the speed of QTGMC. What you are seeing isn't unusual. AviSynth(+) runs single threaded by default. In AviSynth+ you can force multithreading by adding Prefetch(N) to the end of your script. N should be about 1 to 1.5x the number of threads your CPU supports. For example a quad core CPU should usually have N set to 4 to 6. If you're using an older version of AviSynth (pre-"plus") you have to find a multithreaded build and set the number of threads with SetMtMode().
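
    A minimal sketch of what the end of such a script might look like in AviSynth+ (the file name and thread count are placeholders):

    Code:
    # Hypothetical AviSynth+ script with multithreading
    AVISource("e:\capture.avi")
    ConvertToYV12(interlaced=true)
    QTGMC(Preset="Slower")
    Prefetch(4)        # last line of the script; roughly 1x to 1.5x your CPU's thread count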
  16. Originally Posted by jagabo View Post

    Originally Posted by Christina View Post
    Any other tips (besides increasing saturation in proc amp)? Is my too-high black level the cause of the wishy washy colors? PS. The colors actually don't look bad on my monitor, but they look dull on my tv (especially when compared to my original DV conversion, as I mentioned, when viewed on same TV).
    I haven't seen the DV conversion so I don't know exactly what's different about it.
    Thanks! I'm going to try all your suggestions. In the meantime, I just hopped on the mac to upload the DV capture of the same footage.

    I'll touch base again after I try reencoding to see if the wrong colorspace in ffmpeg script was the primary issue with the desaturated colors.
  17. lordsmurf (Video Restorer):
    Analog domain proc amps tend to be far better than digital "proc amps" (not really) in capture cards, or post-capture software corrections. Noting that BOTH analog + digital/software are almost always required. Sometimes even capture card proc amp for good measure.

    All of the sample images on pg1 are pretty crappy, obvious non-analog correction attempts.

    You also need lossless. Starting from DV is like running a race with a broken leg. Won't work, or miserable experience, or both.

    ES10/15 adds posterization effects, ie screws with color more than normal.

    DV color loss presents as dull, sometimes fuzzy. It is what it is.
  18. Hi LS,
    Thanks for chiming in. I'm just showing the [old] DV capture as a point of reference, because the colors in that capture actually looked better to me on my tv than my lossless capture, which we know shouldn't be the case - I'm now capturing lossless with the ATI 600 USB (purchased from you last month). I was having trouble getting the colors and levels right on the lossless capture using the only proc amp I have, the one built into the capture card, adjusted in VirtualDub - Jagabo has been helping me understand the histograms and waveforms and I'm experimenting to get the best results possible. We think the tv color issue on the lossless capture might have been in the ffmpeg script on final conversion but I still have yet to test that out. I have an ES10 in case but haven't even tried it yet since so far I haven't had any wobbly video.

    Jagabo/johnmeyer -
    I ran the AVS script in VirtualDub2 to find duplicate frames. It found about 14 or so. I scrubbed to each of them and in 100% of the cases, it was when scenes were switching, i.e. the recording was stopped and started again. So it looks like I had a pretty good result from my capture with no actual dropped frames during normal scenes, as far as I can tell and as far as the script is reporting.

    Jagabo-
    I tested the histogram during capture in VirtualDub as you suggested, cropping off 32 from the sides and 16 from top and bottom and comparing that with the histogram with no cropping, and they're definitely different, so it does seem to take the cropping into account. I attached 4 screenshots here showing the differences. The filenames should be self-explanatory but you should see:
    1. histogram with no crop - using proc amp settings from original capture
    2. histogram with large crops - using proc amp settings from original capture
    3. histogram with large crops - adjusting brightness down by 2 (from original capture settings)
    4. histogram with large crops - adjusting brightness down by 4 (from original capture settings)

    I tried to keep using the same part of the video by rewinding each time but certain parts of the video showed blacks just starting to clip while others seemed ok with the lower brightness setting at 118.

    With brightness lowered by 4 to 118 (which does show some slight black clipping in the VirtualDub histogram with 32/16 cropping, as in the screenshots here) I recaptured the same part of the video we've been looking at, disabling the cropping for the actual capture. I checked the waveform in AviSynth and it still doesn't look like it changed all that much - still a pretty big gap of space at the bottom. I attached the waveform here as well. I don't know how much lower I should go, being that I'm already seeing some red at this level. Do I make it even lower and increase the contrast even more? At what point is this just going to make my overall video too dark at the expense of trying to make the blacks truly black?
    [Attached thumbnails: Vdub histogram cropping 32 and 16 off all edges.jpg; Vdub historgram no cropping.jpg; Vdub histogram cropping 32 off all edges adjusting brightness down from 122 to 120.jpg; Vdub histogram cropping 32 off all edges adjusting brightness down to 118.jpg; waveform- newcap lower brightness 118 vs 122.PNG]
  19. lordsmurf (Video Restorer):
    Part of this issue may also be illegal blacks: what is referred to as "blacker than black" ("crushed blacks"), which falls in the 0-15 range, below legal YUV. Some refer to this as "clipping black", but that term has a negative connotation that seemingly blames the card. But the card is correct; the video input is not. (I think "clipped black" is a photo term that has leaked into video.) Certain cards, namely the ATI AIW, can capture those sub-blacks. The ATI 600 USB is an excellent card, but it's "normal" in this regard, capturing only the legal 16-235 values.

    The solution for this is to properly adjust the video with a proc amp.

    Understand that this mostly happens on underexposure, and the data tends to be lost whether you capture 0-255 RGB or 16-235 YUV. VHS was analog, not digital, so it didn't always precisely record in the equivalent 16-235 range. Some tapes are too dark, others too light.
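
    If you want to see whether a capture is actually sitting at (or clipped to) Y=16, AviSynth can report the measured luma range for you; a minimal sketch, with the file name as a placeholder:

    Code:
    # Hypothetical levels check: overlays min/max/average Y (and U/V) on each frame
    AVISource("e:\capture.avi")
    ColorYUV(analyze=true)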


