VideoHelp Forum
  1. Hi to everyone,

    For several years now I've been trying to transfer my old VHS tapes to my computer, but I must admit I'm not satisfied with the results.
    Two years ago I finally bought the Neat Video VirtualDub filter, since other cleaning filters didn't satisfy me. But I'm still fighting with other filters, especially deinterlacing filters.

    For several months I have been using QTGMC, but I have a problem with subtitles, which are incorrectly deinterlaced.
    The original videos are captured at 768x576 MJPEG 24-bit with VirtualDub and an old Pinnacle DC10+.
    Usually I open the videos in VirtualDub and save them so that they can be opened with AviSynth scripts.
    (When I do that, the videos are converted from MJPEG 24-bit to RGB 24-bit, but then they can't be deinterlaced by software like VLC; I haven't managed to find out how to avoid that.)

    Subtitles look like this:
    [Image: Sans titre 01.png]

    I use the following AviSynth script:
    AviSource("xxxxx.avi").ConvertToYV12()
    QTGMC (Preset="Slow")
    SelectEven()
    Subtitles then look like this:
    [Image: Sans titre 02.png]

    Instead of:
    [Image: Sans titre 03.png]

    I tried another method, as mentioned on another site:
    - opening the videos with an AviSynth script using SeparateFields(),
    - using the Deinterlace Smooth filter,
    but the videos suffer from quality issues.
    [Image: Sans Titre 04.png]

    I don't know what to do.
    Any help would be greatly appreciated. It's driving me mad!
    Last edited by Pseudopode; 13th Aug 2015 at 13:45.
  2. Upload a sample of your source -- with no filtering or reencoding. But your first video looks like the fields are in the wrong position -- the top field is one scan line below the bottom field rather than one scan line above it. This is a problem with some capture devices and is usually fixed by a swap fields option in the decompression codec or in AviSynth with SwapFields().
  3. Why do you want to deinterlace? If you are going to watch on a TV set, deinterlacing is totally unnecessary.

    Remember that deinterlacing always degrades your video, and that degradation can never be undone. Also, remember that still image captures of interlaced video show the characteristic "teeth" because alternate scan lines come from different points in time, but these "teeth" never show up during video playback, and your eye is completely unaware that they exist.
  4. First of all, thanks to both of you for your answers.

    jagabo
    I uploaded a sample at the following link: https://1fichier.com/?mhweikgtni
    I tried SwapFields() and it works. Many thanks for this advice.

    johnmeyer
    In fact I've been hesitating between several formats for many years:
    - 25fps deinterlaced, at the cost of fluidity and quality, *
    - 50fps deinterlaced, at the cost of quality,
    - 25fps interlaced.
    But I have two problems :
    - When I save a video with VirtualDub, it seems it's no longer considered interlaced (maybe because of the wrong position of the fields?). When I activate VLC's deinterlace mode, it only works with my MJPEG video, not the RGB one saved by VirtualDub.
    - I would like to compress my videos to MP4 (with Handbrake), so I thought it was a good idea to deinterlace them, since they were no longer considered interlaced after being processed by VirtualDub.
    Before using VirtualDub, I used to capture my videos with Studio 8 or 10, keep them interlaced and convert them to MPEG-2, but without any cleanup or enhancement. That's why I took another approach with VirtualDub: Neat Video, Levels and Hue/Saturation/Intensity filters, and cropping to a 4:3 ratio.

    * I was thinking of this script in that case:
    LoadPlugin("D:\Applications\Multimedia\AviSynth 2.5\plugins\vaguedenoiser.dll")
    AviSource("xxxxx.avi").ConvertToYV12()
    SwapFields()
    VagueDenoiser(threshold=7, method=3, nsteps=6, chromaT=2.0, interlaced=true) # To remove aerial lines in areas of plain color
    QTGMC( Preset="Slow" )
    SelectEven()
    Completed with the VirtualDub filters Neat Video, Levels and Hue/Saturation/Intensity, and cropping.
    Last edited by Pseudopode; 13th Aug 2015 at 16:54. Reason: Completing my answer
  5. Originally Posted by Pseudopode
    LoadPlugin("D:\Applications\Multimedia\AviSynth 2.5\plugins\vaguedenoiser.dll")
    AviSource("xxxxx.avi").ConvertToYV12()
    SwapFields()
    VagueDenoiser(threshold=7, method=3, nsteps=6, chromaT=2.0, interlaced=true) # To remove aerial lines in areas of plain color
    QTGMC( Preset="Slow" )
    SelectEven()
    If the only reason for doing it that way is to get the subs deinterlaced, and if the rest of the video is progressive, then you might consider using a mask to make sure QTGMC is used only on the subs and not the rest of it as well. Of course, if you're taking advantage of QTGMC's cleaning properties also, then leave it as-is.
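    A minimal sketch of that masking idea, assuming the subtitles sit in a fixed band at the bottom of the 768x576 frame (the crop coordinates are hypothetical and would need adjusting to the actual subtitle position):

    Code:
    src = AviSource("xxxxx.avi").ConvertToYV12()
    subs = src.Crop(0, 480, 0, 0)                  # hypothetical subtitle band (bottom 96 lines)
    subs = subs.QTGMC(Preset="Slow").SelectEven()  # deinterlace only that band
    Overlay(src, subs, x=0, y=480)                 # paste the deinterlaced band back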
    Completed with VirtualDub filters : Neat Video, Levels and Hue/Saturation/Intensity and cropping.
    I'm not sure why you're using VDub filters at all, or why you're creating an RGB lossless intermediate AVI to begin with, since in the script you're immediately converting to YV12. AviSynth has Levels and Tweak for adjusting the levels, and hue and saturation (and the Crop filter for cropping). Not sure what Intensity is, and not sure what you're having Neat Video do (if anything) that AviSynth can't.
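    For reference, a rough AviSynth-only version of those adjustments might look like this (the values are placeholders to be tuned per tape, not recommendations):

    Code:
    AviSource("xxxxx.avi").ConvertToYV12()
    Levels(16, 1.0, 235, 16, 235, coring=false)  # black point, gamma, white point
    Tweak(hue=0.0, sat=1.1, coring=false)        # hue and saturation
    Crop(8, 0, -8, 0)                            # hypothetical crop to a 4:3 area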
  6. With SwapFields() the entire video is correctly deinterlaced, including subtitles.

    I use an intermediate RGB AVI because I can't open the MJPEG one with AviSynth; it's the only solution I found.

    Before I tried QTGMC, I only used VirtualDub filters, and since then QTGMC is the only exception in my VirtualDub filter chain.
    But if AviSynth can handle adjusting levels, saturation and cropping, I'll look into how to do it.
    Some VirtualDub filters are sometimes hard to understand, and some AviSynth filters are even harder to understand, which is also why I don't use them.

    I bought Neat Video to clean up videos (noise and color defects), because I wasn't satisfied with other VirtualDub filters and because I didn't understand how to use some of them.
  7. You can open the MJPEG AVI with ffVideoSource(). After SwapFields() I would add vInverse() to eliminate some mild horizontal lines where fields have different brightness. I'd forget about VirtualDub and try a motion compensated grain remover like TemporalDegrain() instead. Consider RemoveSpots() to get rid of a lot of those spots -- unless they're supposed to be there.

    I tried this:

    Code:
    ffVideoSource("Test_MJPEG24.avi") 
    SwapFields()
    ConvertToYV12()
    ColorYUV(cont_y=100, off_y=10, gamma_y=50, cont_u=50, cont_v=50)
    vInverse()
    BilinearResize(400,height)
    TemporalDegrain()
    RemoveSpots()
    MergeChroma(aWarpSharp(depth=5), aWarpSharp(depth=20))
    Sharpen(0.3, 0.0)
    nnedi3_rpow2(2, cshift="Spline36Resize", fwidth=768, fheight=height)
    It still needs work but that's something of a start. Be careful with RemoveSpots(), it can remove real detail too. Especially in panning shots.
  8. Yes, the spots are supposed to be there: this commercial comes from a broadcast dedicated to special effects in TV commercials, and it was the oldest one presented.

    I tried your script and I'm astonished... It's far, far better than what I obtained with my VirtualDub filters.

    Now I'll try to understand how each parameter works and adjust them if necessary for the entire broadcast, as well as for the TV movies and other broadcasts I plan to capture.

    Many, many thanks for your help.
  9. I tried with a sample from a broadcast dedicated to the Imagina festival of 1992. Wow.
    [Image: OnTheRun_After.png]
    [Image: OnTheRun_Original.png]
  10. The reason for the downsize and upsize was to sharpen the video a bit without introducing aliasing artifacts (nnedi3). PAL VHS has an inherent resolution of about 360x576. Also, the bilinear downscale gets a little natural noise reduction and reduces horizontal time base jitter a bit.

    Applying TemporalDegrain while the video is small makes it run faster. Play around with TD's SAD1, SAD2, and sigma parameters. The defaults are 400, 300, and 16 respectively. Smaller values remove less noise but are also less prone to ghosting when there is motion (especially panning shots). The aWarpSharps sharpen luma lightly and chroma more strongly (VHS has very low chroma resolution) and reduce the time base jitter a little more.

    I guess you want to remove the RemoveSpots() since you want to keep the spots. ColorYUV was used to increase the contrast, gamma and saturation. I think it's a bit overdone in the new image you posted. You have to tailor that to the particular video.

    If you haven't already, learn to use Histogram() and/or VideoScope() to check levels.
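    For example, a lighter-handed variant of that TemporalDegrain() line, using the parameters described above (these values are only an illustrative starting point), might be:

    Code:
    TemporalDegrain(SAD1=200, SAD2=150, sigma=8)  # roughly half the default strength; less prone to ghosting on pans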
  11. I was wondering why you downsize and upsize the video and was about to ask you. Thanks for this valuable tip. I've already started taking notes and I'll complete them with all your additional explanations.

    Yes, you're right, for the second video I have to adjust the colors.
    I'll try the Histogram() and VideoScope() functions.

    I removed the RemoveSpots() line, yes, but I'll keep it in mind for some family videos of friends of mine.

    I have to confess I'm impressed by the results. I've often seen VHS videos looking like that and thought it was thanks to heavyweight commercial software or dedicated hardware; now I know that's not necessarily the case.
  12. I would like to ask: if I want to produce an interlaced video, do I only have to remove the BilinearResize and nnedi3 lines?
  13. Most of those filters only work with progressive frames. One way to deal with that is to SeparateFields(), filter, then Weave() the fields back together. That isn't optimal because you end up treating scan lines that aren't next to each other as if they were. Another method is to convert fields to frames with QTGMC(), filter, then pull fields out of the frames to weave together: SeparateFields().SelectEvery(4,0,3).Weave().
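    Put together, that second method might look like this sketch, with TemporalDegrain() standing in for whatever progressive-only filtering is wanted:

    Code:
    AviSource("xxxxx.avi").ConvertToYV12()
    SwapFields()
    QTGMC(Preset="Slow")                          # fields -> double-rate progressive frames
    TemporalDegrain()                             # filter while frames are progressive
    SeparateFields().SelectEvery(4,0,3).Weave()   # re-interlace back to 25i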
  14. Hi,

    I found a French broadcast captured by the man who worked on its credits.
    The colors seem to be more natural and accurate.

    So I worked with a sample of my captured VHS and tried to approach that look.
    https://1fichier.com/?t1r3c1o3w3

    I kept the script unchanged, except for the ColorYUV line.

    SwapFields()
    ConvertToYV12()
    ColorYUV(cont_y=50, cont_u=25, cont_v=25, off_y=0, off_u=0, off_v=0, gain_y=0, gain_u=5, gain_v=5, gamma_y=50, gamma_u=0, gamma_v=0)
    vInverse()
    BilinearResize(400,height)
    TemporalDegrain()
    MergeChroma(aWarpSharp(depth=5), aWarpSharp(depth=20))
    Sharpen(0.3, 0.0)
    nnedi3_rpow2(2, cshift="Spline36Resize", fwidth=768, fheight=height)

    This is the closest I can get.
    [Image: OntheRunCloser1-2.png]
    [Image: OntheRunCloser2-2.png]

    I understood how to work with the off_y and gain_y parameters and the Histogram() function, to increase or decrease the black and white levels.
    But I don't clearly understand how to work with the off, gain and cont parameters, or with Histogram(Mode="Levels") for example.

    In this image, the wrong colors are clearly noticeable:
    [Image: OntheRunCloser3.png]

    But if I change a parameter, the results are worse.

    I was wondering whether using the Tweak and Levels functions to increase saturation and levels is necessary? It seems one can do everything with ColorYUV alone, or am I wrong?
  15. Yes, you can use Tweak() and Levels() instead of ColorYUV().

    In ColorYUV(), gain_y multiplies the Y values by (1.0 + N/256):

    Code:
    Y' = Y * (1.0 + N/256)
    cont_y is similar but it's centered around 126 (the middle between 16 and 235) instead of zero:

    Code:
    Y' = (Y-126) * (1.0 + N/256) + 126
    off_y just adds or subtracts:

    Code:
    Y' = Y + N
    Typically what I do is use gain_y or cont_y to get a good spread of values, then use off_y to move all of them up or down so that full black is near 16 and full white is near 235.
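    Following that recipe, a hypothetical luma adjustment might be:

    Code:
    ColorYUV(gain_y=40, off_y=-10)  # stretch Y by about 1.16x, then shift down so black lands near 16
    Histogram()                     # check the resulting levels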

    The U and V parameters work similarly. But with U and V perfect greyscale is 128 (halfway between 16 and 240) for both U and V. You get colors when U and V are not 128, and the farther away from 128 the more saturated the colors. You increase saturation by increasing cont_u and cont_v (positive values), reduce saturation by decreasing cont_u and cont_v (negative values). Off_u and off_v can be used to restore white balance when they are off in that way. You can view the U channel with:

    Code:
    VideoScope("both", true, "U", "U", "UV") # show the U channel
    or the V channel with:

    Code:
    VideoScope("both", true, "V", "V", "UV") # show the V channel
    Or both with:

    Code:
    VideoScope("both", true, "UV", "UV", "UV") # show both U and V channels
    Another way of viewing U and V channels:

    Code:
    StackHorizontal(UtoY(), VtoY()) # convert U and V channels to Y, then stack them horizontally
    VideoScope("both", true, "Y", "Y")
    Since that converts U and V to Y you can also use Histogram() to view the U and V channels. I usually prefer a horizontal trace (rather than Histogram()'s vertical trace) so I usually rotate the image, run Histogram, then unrotate the image:

    Code:
    StackHorizontal(UtoY(), VtoY()) # convert U and V channels to Y, then stack them horizontally
    TurnRight().Histogram().TurnLeft()
    That gives a horizontal waveform monitor above the original image.

    Note that cont in Tweak() is similar to gain_y in ColorYUV(), not cont_y. Bright in Tweak() is similar to off_y in ColorYUV(). For example, Tweak(cont=2.0, coring=false) is the same as ColorYUV(gain_y=256).
    Last edited by jagabo; 23rd Aug 2015 at 09:00.
  16. Many thanks for these explanations.

    I did some research through the forums and realised that there is no ideal solution for calibrating the colors of captured video.
    The ideal situation is having color bars, which essentially only happens with some commercial tapes.
    And the most important thing is that each captured video has different colors from the others: due to the VCR, the TV, the capture card, the age of the tape, the tape itself...
    Today it seems so obvious to me.


