VideoHelp Forum
  1. Member
    Join Date
    May 2014
    Location
    Memphis TN, US
    ...And as we saw in the posted sample, some of those structurally different parts are joined with dissolves.
    - My sister Ann's brother
  2. OK. I decided to practice with a different clip.

    Here's what I did:

    1. Captured to HuffYUV video and PCM audio with Amarectv.

    2. Used Color Tools to see how much the brightness & contrast needed to be adjusted.

    3. Made adjustments to brightness & contrast in the device settings and captured again.

    4. Repeated 2 & 3 a few times until it was pretty close to being right according to Color Tools in VirtualDub.

    5. Used VirtualDub Color Tools to make one last adjustment to brightness & contrast. Saved result as HuffYUV.

    The first sample is the result of that.

    I then used QTGMC with this script:

    AviSource("Path to file")
    ConvertToYV12(interlaced=true)
    AssumeTFF()
    QTGMC( Preset="Slow" )

    The second sample is the result of that.

    The plan is to encode with x264 or a high-bitrate MPEG-2. In the process I will mask the edges that would normally be covered by overscan. I'll also encode anamorphic and make sure it is set to display at the original 4:3.
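    For the masking step, something like this AviSynth sketch is the idea (the crop amounts are placeholders, not values measured from the capture):

    AviSource("Path to file")
    Crop(8, 4, -8, -6)        # placeholder border sizes - measure the actual overscan junk
    AddBorders(8, 4, 8, 6)    # pad back to 720x480 so the anamorphic 4:3 flag covers the whole frame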

    Again, I know it isn't ideal and I know some would say not to deinterlace. However, I think I like how QTGMC does it. Did I at least get the basics right? Is there any kind of filter to fix the double image that is especially noticeable by the dancing hanky?

    Edit:

    I also added a sample where I used this script:

    AviSource("E:\VHS\Cap1.avi")
    ConvertToYV12(interlaced=true)
    AssumeTFF()
    QTGMC( Preset="Slower", EZDenoise=2.5, NoisePreset="Slow" )
    [Attached sample files]
    Last edited by Micheal81; 12th Nov 2015 at 03:39.
  3. The brights are too bright in all three caps. The black levels look about right.

    You can't really use Color Tools in VirtualDub to judge levels. Since it works in RGB (after a standard rec.601 conversion from YUV) any super brights (Y > 235) will be crushed to RGB=255. The best you can do with Color Tools is look for a peak at the top and bottom that implies crushed blacks and brights.

    Color Tools:
    [Attached screenshot: ct.jpg]

    The big peak at Y=235 is a little hard to see because it coincides with the orange line.

    AviSynth HistogramOnBottom() (really a waveform monitor):
    [Attached screenshot: hist.jpg]

    The super brights are from the table, the handkerchief, and the bright lights near the left edge of the frame. Those brights are crushed to Y=235 when converted to RGB (255). That's not highly consequential in this shot but it could be in others.

    The waveform shows superblacks too, but they are mostly in the front/back porch, which doesn't matter (though you would want to check a wider range of video since the porches are supposed to be at Y=16), plus a few oversharpening halos.
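    If you want to check and pull the brights back down while still in YUV, a minimal sketch (the numbers are only an example, and fixing the levels with the proc amp before capture is still the better route):

    AviSource("Cap1.avi")
    ConvertToYV12(interlaced=true)               # Histogram's Levels mode needs YV12
    Levels(16, 1.0, 255, 16, 235, coring=false)  # example: squeeze 16-255 down into 16-235, blacks stay put
    Histogram(mode="Levels")                     # recheck the luma peak in YUV, not RGB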
    Last edited by jagabo; 12th Nov 2015 at 08:18.
  4. Originally Posted by manono View Post
    By 'telecined' I'll assume you mean hard telecine. I'd do it (and have done it) differently - encode the progressive parts progressively, applying soft pulldown afterwards, and encode the interlaced parts as interlaced 29.97fps. Then rejoin the different parts during authoring.
    Hmmm ... I didn't realize that you could have the soft pulldown flag turn on and off within the same stream. I thought it was set for the duration of the clip.

    I'll have to remember that one.

    Also, as already pointed out, this is 16 fps material, and when going to 29.97, there are definite limits to how far the soft pulldown can take you. And, if you play back 16 fps at 19 fps (or higher), to get it within the range that the soft telecine trick can handle, the resulting motion will be pretty fast.

    The other issue is that, AFAIK, soft telecine is only available for MPEG-2, so if you want to do an H.264 MP4 encode, as one example, the soft telecine technique won't work. I don't think the OP mentioned what final delivery format he is considering.
    Last edited by johnmeyer; 12th Nov 2015 at 10:12. Reason: changed the soft telecine wording
  5. Member
    Join Date
    May 2014
    Location
    Memphis TN, US
    Originally Posted by Micheal81 View Post
    OK. I decided to practice with a different clip. ... Did I at least get the basics right? Is there any kind of filter to fix the double image that is especially noticeable by the dancing hanky?
    To add to jagabo's notes about your levels and samples:

    [edit]RGB histograms are used for color work and to check how invalid levels affect RGB display. You can't fix clipping in RGB. Once invalid levels go to RGB, the clipping is there for good. Fix it in YUV. As it is, my ColorTools RGB histograms showed luma and everything else climbing the walls at the bright end.

    Cap2 and Denoised avi's are uncompressed YV12. You can save some space -- and some upload time -- by compressing YV12 with a lossless compressor. Lagarith works well and is in wide use: http://lags.leetcode.net/codec.html. There are other lossless codecs such as UT Video Codec, but some media players can't read UT. Those two avi's uncompressed take up ~330MB each. Compressed with Lagarith, they'd be about 115 MB each, or 1/3 the uncompressed file space.

    QTGMC strictly for denoising doesn't handle some types of noise as well as other filters. Denoised.avi still has plenty of noise. I got cleaner results with QTGMC at "Very Fast" just to deinterlace to double frame rate, then denoised with plain vanilla MDegrain2 and TemporalSoften, with a touch of Checkmate to kill some of the obvious dot crawl. There's nothing that any filter can do with that edge ghosting.
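    Roughly the kind of chain I mean, as a sketch (it assumes the MVTools2 and Checkmate plugins are installed, and the settings are generic starting points, not the exact values used for the attached sample):

    AviSource("Cap1.avi")
    ConvertToYV12(interlaced=true)
    AssumeTFF()
    Checkmate(thr=12, max=12, tthr2=0)        # knock back the obvious dot crawl
    QTGMC(Preset="Very Fast")                 # deinterlace only, 59.94fps out
    super = MSuper(pel=2)
    bv1 = MAnalyse(super, isb=true,  delta=1)
    fv1 = MAnalyse(super, isb=false, delta=1)
    bv2 = MAnalyse(super, isb=true,  delta=2)
    fv2 = MAnalyse(super, isb=false, delta=2)
    MDegrain2(super, bv1, fv1, bv2, fv2, thSAD=300)
    TemporalSoften(2, 3, 3, 15, 2)            # light temporal cleanup on what's left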

    The two progressive avi's run at 59.94 fps. If that's what you want, fine, but keep in mind that frame rate can't be used for regulation DVD/BluRay/AVCHD. Because the captures are clipped at the high end anyway, the only levels work I did was to reduce contrast. The denoised 480p I made from Cap1.avi is attached.

    I know you don't want to get better playback gear, but it's a shame to accept really bad playback with this colorful and interesting video. You have serious playback problems that could be avoided. Here's hoping you'll give it a second thought. But it's your video.
    [Attached sample files]
    Last edited by LMotlow; 12th Nov 2015 at 11:14.
    - My sister Ann's brother
  6. Member PuzZLeR
    Join Date
    Oct 2006
    Location
    Toronto Canada
    Originally Posted by johnmeyer
    That "2,2,2,7,2" example you gave does a great job of making the point. I understand completely why you don't want to give any weight to "outliers." While there might be a more sophisticated algorithm or function, I can see where median would work well in many situations, and it is extremely easy and fast to calculate. Also, it can completely eliminate the noise pixel. As a result, it would seem to me that the increment in noise reduction that you would achieve by doing more captures might decline much faster than the square root function that someone mentioned in that Vegas thread. Once you have enough captures to "vote off the island" those pixels that don't conform, additional captures might not change anything.
    I did try to keep it simple with the "2,2,2,7,2" example, and I'm glad it made the point.

    As for subsequent captures: when you average, in theory you will only ever eliminate the 7 here with an infinite number of captures; otherwise its weight will always be there. Since we're working with a finite bit depth in our color spaces, you will always have some footprint of the 7 (albeit fading capture by capture). It will always be there, even if your eyes notice it less and less. (Again, talking theory mostly.) You can only really bring that pixel down to something like 2.1, or 2.01, or 2.00001, and you would need an awful lot of capturing to get even that far, and STILL never get it down to the 2 it's meant to be.

    With median methods, it's more a function of probability and discrete mathematics. (Yes, more statistics centered.) It's not continuous at all, only integer values, and the 7, with a low enough probability of occurrence - as most random errors or outliers have - can be completely eliminated.

    They use such methods in astronomy as well.

    AJK's plugin also allows averaging in the mix, as you may have noticed. That can solve some problems too, and can also eliminate the 7 entirely if done right.
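    For anyone curious what that looks like in a script, stacking three captures with the plugin is roughly this (a sketch from memory - check the plugin's documentation for the exact function name and options):

    a = AviSource("capture1.avi")    # three captures of the same segment, trimmed to the same start frame
    b = AviSource("capture2.avi")
    c = AviSource("capture3.avi")
    Median(a, b, c)                  # per-pixel median across the captures - the stray "7" gets voted out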

    Originally Posted by johnmeyer
    BTW, I read the thread you linked to, as well as the parallel thread AJK started over in doom9.org, and it looks like his median function works really well for this situation.
    Yes indeed, awesome plugin. I've been using it since and it works great, and the new update to handle the out-of-sync captures will be awesome.

    For anyone interested, it's been updated; just check the link I posted earlier in the thread. Now it's less work than ever to put together multiple captures.

    @Micheal81: If you want a method to improve your captures without any new hardware, drivers, tools, or other expenses, and with very little extra learning - on top of the great advice you've already received - median methods are about the best way to drastically increase the quality of your captures with the equipment and software you already have.

    Now I'm off to test this new update on that thread.
    I hate VHS. I always did.
  7. Originally Posted by johnmeyer View Post
    I didn't realize that you could have the soft pulldown flag turn on and off within the same stream. I thought it was set for the duration of the clip.
    You see it most commonly in certain anime television episode DVDs where, if you run the preview in DGIndex or study the resulting D2V, you can see it flip back and forth from Film to Video all the time. Documentaries, too, sometimes. It's more common than you might think, or want. Criterion film DVD releases, for example, are notorious for dropping to video at chapter breaks. You might get a D2V that's 99.8% Film, but you're irritated at seeing some interlacing at a chapter change because your DVD player didn't adjust to deinterlace it quickly enough.

    Anyway, if there are too many of these flips, it becomes too much work to break it up into pieces like that so you may as well encode the whole thing as interlaced 29.97fps. But if you do reencode the different sections differently (progressive as progressive with soft pulldown and interlaced as interlaced) it's a simple matter to rejoin the pieces afterwards using the 'Add' button in Muxman.

    Originally Posted by johnmeyer View Post
    And, if you play back 16 fps at 19 fps (or higher), to get it within the range that the soft telecine trick can handle, the resulting motion will be pretty fast.
    My suggestion wasn't to speed it up, but to interpolate frames to bring it up to 19.98fps to be soft telecined for NTSC DVD. Yes, if the end result isn't to be DVD, then this doesn't apply. I was under the (mistaken?) impression the output format was to be DVD.
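    If anyone wants to try that route, the interpolation step could look something like this MVTools2 sketch (illustrative only; it assumes a hypothetical progressive clip already restored to its native 16 fps, with the 29.97 pulldown added afterwards by a tool such as DGPulldown):

    AviSource("film_segment.avi")                  # hypothetical 16fps progressive section
    AssumeFPS(16)
    super = MSuper(pel=2)
    bv = MAnalyse(super, isb=true)
    fv = MAnalyse(super, isb=false)
    MFlowFps(super, bv, fv, num=20000, den=1001)   # motion-interpolate up to 19.98fps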
  8. Thanks, manono. Your explanation is very useful.
  9. Originally Posted by jagabo View Post
    The brights are too bright in all three caps. ... You can't really use Color Tools in VirtualDub to judge levels. ... AviSynth HistogramOnBottom() (really a waveform monitor) ... The super brights are from the table, the handkerchief, and the bright lights near the left edge of the frame. ...
    So are you saying instead of using Color Tools in VirtualDub, use AviSynth's HistogramOnBottom() filter? Create an avs script with that filter and run it through VirtualDub? Sorry if I'm a little slow.
  10. Originally Posted by LMotlow View Post
    ... You can't fix clipping in RGB. Once invalid levels go to RGB, the clipping is there for good. Fix it in YUV. ... Cap2 and Denoised avi's are uncompressed YV12. You can save some space -- and some upload time -- by compressing YV12 with a lossless compressor. ... The two progressive avi's run at 59.94 fps. ... I know you don't want to get better playback gear, but it's a shame to accept really bad playback with this colorful and interesting video. ...
    You said fix it in YUV. Is that what Avisynth's HistogramOnBottom filter is for?

    Cap2 and Denoised were supposed to be HuffYUV, but I made a mistake and saved as uncompressed. I thought I was supposed to use direct stream, but apparently fast recompress is what is needed.

    The 59.94 fps is fine. I'm not going to be using it with DVD/BD/AVCHD.

    There is no way I can spend any money on better gear. Sorry, but that is just how it is. Maybe one day I will and hopefully the tapes will still be good by then. I just want to go ahead and digitize them the best I can with what I have in case something happens to the tapes before I can get better gear.
  11. Originally Posted by Micheal81 View Post
    So are you saying instead of using Color Tools in VirtualDub, use AviSynth's HistogramOnBottom() filter?
    Yes. Or just Histogram(), or TurnRight().Histogram().TurnLeft().

    Originally Posted by Micheal81 View Post
    Create an avs script with that filter and run it through VirtualDub?
    Yes. I always use VirtualDub to view the results of my scripts.
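    For example, a two-line script like this is enough to eyeball the levels (adjust the filename, of course):

    AviSource("Cap1.avi")
    TurnRight().Histogram().TurnLeft()   # luma waveform drawn along the top edge of the frame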

    VirtualDub's capture module also has a (true) histogram that you can view while previewing. It's much easier to use that since you get live feedback. So you don't have to capture, check, make proc amp adjustments, capture again, check again, etc.

    With most capture devices you can change the proc amp settings while previewing. To do so you need to use GraphEdit or GraphStudio, add the capture filter, go to the filter's settings dialog, and adjust the proc amp settings. You'll see the changes live in VirtualDub.
  12. Member
    Join Date
    May 2014
    Location
    Memphis TN, US
    Originally Posted by Micheal81 View Post
    Cap2 and Denoised were supposed to be HuffYUV, but I made a mistake and saved as uncompressed. I thought I was supposed to use direct stream, but apparently fast recompress is what is needed.
    Most versions of huffyuv don't work with YV12. If your output is YV12, don't change colorspaces again just to use huffyuv. Use a compressor that can handle YV12 and leave it that way. Lagarith works with YUY2, YV12, and RGB.
    - My sister Ann's brother
  13. Originally Posted by LMotlow View Post
    Most versions of huffyuv don't work with YV12. If your output is YV12, don't change colorspaces again just to use huffyuv. Use a compressor that can handle YV12 and leave it that way. Lagarith works with YUY2, YV12, and RGB.
    I captured with HuffYUV 2.2.0 which is YUY2 and MediaInfo says YUV 4:2:2. When I use QTGMC and VDub's Fast Recompress, I use HuffYUV 2.2.0 and MediaInfo says YUV 4:2:2. Am I misunderstanding something?
  14. Member
    Join Date
    May 2014
    Location
    Memphis TN, US
    Originally Posted by Micheal81 View Post
    I captured with HuffYUV 2.2.0 which is YUY2 and MediaInfo says YUV 4:2:2. When I use QTGMC and VDub's Fast Recompress, I use HuffYUV 2.2.0 and MediaInfo says YUV 4:2:2. Am I misunderstanding something?
    Oh. I guess I'd better explain, as it seems that you're unaware of the colorspace you're working with at any given time.

    Cap1.avi used a YUY2 colorspace and was compressed with huffyuv. In this case the huff version doesn't matter.

    You posted a script that you said you used on that YUY2 capture. The script you said you used to create the resulting Cap2.avi was:
    AviSource("Path to file")
    ConvertToYV12(interlaced=true)
    AssumeTFF()
    QTGMC( Preset="Slow" )
    Was that your entire script? Were there any other lines in the script where you changed the YV12 colorspace to something else? If not, then the output of your script was YV12 when VirtualDub saw it. The Cap2.avi you posted used a YV12 colorspace and was uncompressed. I get that colorspace information by using this short code:
    AviSource("Cap2.avi")
    info()
    Shown below is a part of the information I see overlaid on the frames when I run that script in VirtualDub:
    [Attached screenshot: Cap2_Info.jpg]

    Below is part of the information I see when I run MediaInfo on Cap2.avi:

    Code:
    Complete name                            : E:\forum\micheal81\vhelp2\Cap2.avi
    Format                                   : AVI
    Format/Info                              : Audio Video Interleave
    File size                                : 323 MiB
    Duration                                 : 10s 844ms
    Overall bit rate                         : 250 Mbps
    Writing library                          : VirtualDub build 35491/release
    
    Video
    ID                                       : 0
    Format                                   : YUV
    Codec ID                                 : YV12
    Codec ID/Info                            : ATI YVU12 4:2:0 Planar
    Duration                                 : 10s 844ms
    Bit rate                                 : 249 Mbps
    Width                                    : 720 pixels
    Height                                   : 480 pixels
    Display aspect ratio                     : 3:2
    Frame rate                               : 59.940 fps
    Color space                              : YUV
    Chroma subsampling                       : 4:2:0
    Compression mode                         : Lossless
    Because your script for Cap2 entered VirtualDub as YV12, and because you saved that output using direct stream copy, VirtualDub output a YV12 video. But huffyuv does not work with YV12. So your YV12 video was output as uncompressed YV12.

    You also posted the script you used to create the sample for Denoised.avi. Here is the script you posted:

    AviSource("E:\VHS\Cap1.avi")
    ConvertToYV12(interlaced=true)
    AssumeTFF()
    QTGMC( Preset="Slower", EZDenoise=2.5, NoisePreset="Slow" )

    Again, was that your entire script? Were there any other lines in the script where you changed the YV12 colorspace to something else? If not, then the output of your script was YV12 when VirtualDub saw it. The Denoised.avi you posted used a YV12 colorspace and was uncompressed. I get that colorspace information by using this short code:
    AviSource("Denoised.avi")
    info()
    Below is a section of the information that Avisynth overlays on the frames using that script:

    [Attached screenshot: Denoised_Info.jpg]


    Below is part of the information I see when I run MediaInfo on Denoised.avi:

    Code:
    Complete name                            : E:\forum\micheal81\vhelp2\Denoised.avi
    Format                                   : AVI
    Format/Info                              : Audio Video Interleave
    File size                                : 323 MiB
    Duration                                 : 10s 844ms
    Overall bit rate                         : 250 Mbps
    Writing library                          : VirtualDub build 35491/release
    
    Video
    ID                                       : 0
    Format                                   : YUV
    Codec ID                                 : YV12
    Codec ID/Info                            : ATI YVU12 4:2:0 Planar
    Duration                                 : 10s 844ms
    Bit rate                                 : 249 Mbps
    Width                                    : 720 pixels
    Height                                   : 480 pixels
    Display aspect ratio                     : 3:2
    Frame rate                               : 59.940 fps
    Color space                              : YUV
    Chroma subsampling                       : 4:2:0
    Compression mode                         : Lossless
    Bits/(Pixel*Frame)                       : 12.000
    Stream size                              : 321 MiB (99%)
    The input/output colorspace notes apply equally to both sample files.

    Your version of huffyuv doesn't work with YV12. Most versions of huffyuv can't compress YV12. That's why I included a link to a popular lossless compressor that does work with YV12. I believe the ffdshow version of huffyuv works with YV12, but I'd stay away from it, as ffdshow is fond of changing and/or removing its different versions of huffyuv from time to time. Often the replacement version is incompatible with the former -- at least, that's a complaint I've seen on the internet. You can take your chances if you want. Many people don't have ffdshow on their PCs. I'd suggest sticking with Lagarith for YV12. Others might suggest UT codec. Some media players don't recognize UT codec, and its umpteen settings can be confusing.
    Last edited by LMotlow; 12th Nov 2015 at 22:32.
    - My sister Ann's brother
  15. Originally Posted by LMotlow View Post
    ... Your version of huffyuv doesn't work with YV12. Most versions of huffyuv can't compress YV12. That's why I included a link to a popular lossless compressor that does work with YV12. ... I'd suggest sticking with Lagarith for YV12. ...

    Another stupid mistake on my part. I forgot that I have ConvertToYV12 in my script. I did it that way because I got an error about the colorspace when I tried to use QTGMC.

    So, in order to prevent unnecessary colorspace conversions, I'll use Lagarith for the capture and when using QTGMC. Is that correct?
  16. Member
    Join Date
    May 2014
    Location
    Memphis TN, US
    Either huffyuv or Lagarith can be used for tape capture. Capture with either to YUY2 because YUY2 is more similar to the way YUV info is stored on the tapes. Personally I use huffyuv for capture on my cantankerous ancient capture PCs, as Lagarith is just a tiny bit slower than huff. On a faster PC it won't make any speed difference. For color conversions stick with Avisynth, which makes these conversions correctly.

    It's not always necessary to use YV12 filters. It depends on the video, of course. If you go to YV12, stay there until you need something else. Many filters work in YUY2 and in YV12 (examples are ColorYUV, Tweak, and others). Then there are oddities like Histogram(mode="Levels"), which requires YV12, while other modes of Histogram work in both colorspaces. Avisynth's documentation and the docs that come with plugins give the details.

    If you have to go to VirtualDub or another editor that works in RGB, use Avisynth's ConvertToRGB (interlaced=true or false, whichever applies) for whatever colorspace you're working with. You need RGB only if you're applying VirtualDub filters -- RGB histograms used by themselves just for viewing/analysis won't make any difference.

    For any colorspace conversion, there's always a certain amount of rounding error. One or two conversions won't matter much if done correctly, but don't jockey back and forth between colorspaces randomly. The errors can add up with multiple conversions back and forth.

    My usual workflow for these godawful analog tapes is to cap to YUY2, then use any YUY2 filters that are needed (like correcting levels with ColorYUV). Getting into heavy denoising means converting to YV12 for most of them. I work in YV12 until I get what I want (well, as close as one can get, LOL!). Then those pesky color tweaks in RGB require ConvertToRGB. If you don't need further RGB work, just stay in YV12. If you do require RGB filters in VirtualDub you have to use "full processing mode". Then you'll need YV12 output again for your encoder, so set VDub's output color depth to YV12 and set your compressor to Lagarith. You have to make that setting if you want it because the default output for full processing mode is uncompressed RGB24. Do I forget to make those output settings sometimes? You bet. Bummer.
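    As a rough sketch of that workflow in script form (filenames and numbers are placeholders, and the RGB line only applies when VirtualDub filters are actually needed):

    AviSource("tape_cap_yuy2.avi")        # huffyuv or Lagarith YUY2 capture
    AssumeTFF()
    ColorYUV(off_y=-4)                    # example levels tweak while still YUY2
    ConvertToYV12(interlaced=true)        # only when the heavier YV12 denoisers come into play
    QTGMC(Preset="Slow")
    # ConvertToRGB32(interlaced=false)    # uncomment only if VirtualDub RGB filters are next;
    #                                       otherwise stay in YV12 and feed the encoder directly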

    Most h264 or MPEG encoders, if they don't see YV12 coming in, will either make the conversion for you (I'd rather do it with Avisynth or VirtualDub output) or, like HCenc, simply won't accept anything except YV12. There are plenty of times when RGB isn't necessary.
    Last edited by LMotlow; 13th Nov 2015 at 09:42.
    - My sister Ann's brother
  17. Dinosaur Supervisor KarMa
    Join Date
    Jul 2015
    Location
    US
    @LMotlow
    Why not just convert to YV12 right away with Lagarith or Huffyuv, and save some file space on huge jobs, if they are just going to end up YV12 in an H.264 file anyway?
    Last edited by KarMa; 14th Nov 2015 at 23:44. Reason: spelling
  18. Member
    Join Date
    May 2014
    Location
    Memphis TN, US
    Yes, the projects eventually end up as YV12 before encoding.
    I capture VHS in a colorspace that is close to the YPbPr used by VHS.
    Often a project goes directly from YUY2 to RGB, then YV12 last. When possible - and it sometimes is - I'd rather go straight from 4:2:2/YUY2 to RGB than YV12->RGB->YV12. I don't always use YV12 filters. VHS color correction is done in YUV first, then RGB. If no major color problems need correction in RGB, the video doesn't go to RGB.

    Space is neither an issue nor a priority.
    I don't always encode to h.264. I have too many relatives and friends who don't own Blu-ray players and probably never will. But of course MPEG requires YV12 as well.

    [EDIT] pardon the weird edits. Power keeps going on and off in our neighborhood. The battery supply on the PC stays on, but I have to see what the hell is going on elsewhere again and again. Damn it.
    Last edited by LMotlow; 14th Nov 2015 at 12:50.
    - My sister Ann's brother
  19. Originally Posted by KarMa View Post
    Why not just convert to YV12 right away with Lagarith or Huffyuv, and save some file space on huge jobs, if they are just going to end up YV12 in an H.264 file anyway?
    If you have interlaced YUV 4:2:2, Lagarith or huffyuv won't do the conversion to YV12 correctly (or any other colorspace conversion such as RGB, either to or from). It will convert using a progressive algorithm and you will have chroma issues.
  20. Dinosaur Supervisor KarMa
    Join Date
    Jul 2015
    Location
    US
    Originally Posted by poisondeathray View Post
    If you have interlaced YUV 4:2:2, Lagarith or huffyuv won't do the conversion to YV12 correctly (or any other colorspace conversion such as RGB, either to or from). It will convert using a progressive algorithm and you will have chroma issues.
    Does that include a standard VHS capture through a YUY2 card?
  21. Originally Posted by KarMa View Post
    Does that include a standard VHS capture through a YUY2 card?


    Yes it does - generally, whenever a codec is used to do the conversion (instead of ConvertToXX(interlaced=true) in AviSynth, or another interlace-aware method), it will scale the planes using a progressive algorithm unless otherwise specified (some codecs might have a tickbox option to change the method). Even though the chroma resolution of VHS is very low to begin with, you will still get chroma artifacts.

    "interlaced 4:2:0" is a big topic. By some definitions, it doesn't exist. For example vdub's author is one in that camp. You can read up on it; it's discussed thoroughly in other threads.
  22. Dinosaur Supervisor KarMa
    Join Date
    Jul 2015
    Location
    US
    Thanks for your input poisondeathray, I'll do some testing of my own. I've noticed some artifacts with red/orange fields, and not being able to detelecine them. As far as truly interlaced material goes, I have not seen any blatant problems yet.

    Originally Posted by poisondeathray View Post
    "interlaced 4:2:0" is a big topic. By some definitions, it doesn't exist. For example vdub's author is one in that camp. You can read up on it; it's discussed thoroughly in other threads.
    Not sure what to make of that when just about every ATSC 1080i broadcast is YV12.
  23. Originally Posted by KarMa View Post
    Not sure what to make of that when just about every ATSC 1080i broadcast is YV12.
    Exactly; I'm in the other camp. It exists, but the problem is handling it "properly" for chroma up/down sampling and colorspace conversions. There is an interlaced method and a progressive method of resizing (essentially that's what you are doing to the U and V planes when converting 4:2:2 to 4:2:0 - you're resizing them). When you do it the wrong way for your type of content, you get problems.
  24. Originally Posted by poisondeathray View Post
    "interlaced 4:2:0" is a big topic . By some definitions, it doesn't exist. For example vdub's author is one in that camp.
    Of course he knows it exists. His position is that it shouldn't use the same fourcc as progressive planar YUV 4:2:0, "YV12". So he refuses to update his code for it.
  25. Originally Posted by jagabo View Post
    Of course he knows it exists. His position is that it shouldn't use the same fourcc as progressive planar YUV 4:2:0, "YV12". So he refuses to update his code for it.
    Yes, that's what I meant - thanks for clarifying, sorry for the misquote.

    Either way, the problem could be solved by giving the "option" to choose the method used, instead of being so rigid.
  26. Dinosaur Supervisor KarMa
    Join Date
    Jul 2015
    Location
    US
    Ok poisondeathray, you have convinced me.

    I captured a fire dance scene twice on VHS, with my YUY2 card. Once with Lagarith YUY2 and then again with Lagarith YV12. The video required detelecining.



    I've seen this before with my previous captures, at least with material that required detelecining. Messing with TFF and BFF never did anything to help, so I just assumed it was a problem with the VHS format. Now I know.

    Truly interlaced material is less noticeable to me, besides maybe some ghosting on scene changes. But I'll still stick with YUY2 for capturing.

    Used AvsPmod to capture PNG images with the AviSynth scripts below.
    Code:
     AVISource("O:\YV12.avi")
    LoadPlugin("O:\Megui\Megui\tools\avisynth_plugin\TIVTC.dll")
    tfm(order=1).tdecimate()
    crop(12, 0, -6, -10)
    Spline64Resize(640,470) # Spline64 (Sharp)
    
    ---------
    
    AVISource("O:\YUY2.avi")
    LoadPlugin("O:\Megui\Megui\tools\avisynth_plugin\TIVTC.dll")
    tfm(order=1).tdecimate()
    crop(12, 0, -6, -10)
    Spline64Resize(640,470) # Spline64 (Sharp)
    converttoyv12()