VideoHelp Forum
  1. Hi everyone,

    I have a bunch of PAL VHS-captured cartoons (interlaced) that I'm slowly but surely converting from a lossless format into a format I can more easily watch. My understanding is that most cartoons should be IVTCed instead of de-interlaced (using TFM() instead of QTGMC()). What I do is look for a panning shot, where I get one movement per frame, and apply TFM().

    Now, if the 5th frame is identical to the 4th on the panning shot, I use SRestore(23.976) or TFM().TDecimate() and check which one looks better (which means no frames are being skipped during the panning shot, and the end result is not jagged). If the 6th frame is identical to the 5th, I use SRestore(25) or TFM().TDecimate(cycle=6, cycleR=1).
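For reference, the two candidate chains described above as a sketch (the filename is a placeholder; note SRestore is usually fed a bobbed, double-rate clip, so a bobber is assumed before it):

```avisynth
AviSource("capture.avi")           # hypothetical lossless capture

# Option 1: bob to double rate, then let SRestore recover the film rate
# QTGMC(Preset="Fast")
# SRestore(23.976)

# Option 2: field-match, then decimate the duplicate frame
TFM()
TDecimate(cycle=6, cycleR=1)       # drop 1 duplicate per cycle of 6 frames
```

Which branch is right for a given tape is exactly what the replies below work out.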

    Is that right? Just so I won't have to repeat a lot of work later. I'll admit I tried QTGMC(Preset="Fast", FPSDivisor=2) instead of TFM, and didn't notice a difference.
    Thanks!
  2. If every frame is interlaced, then the chances are good it's phase-shifted and TFM alone should work. The way you decide if that's true or not is to either separate the fields or bob the video and check if every field has a duplicate.

    If when bobbing a video you see a lot of blended/ghosted/double-imaged fields then the chances are good you have a video field-blended from a standards conversion (film to PAL, perhaps). In such cases your favorite bobber (QTGMC perhaps) followed by Srestore should work.

    It's near impossible to have a progressive PAL video with duplicate frames for every 6th frame. Maybe 1 out of 25.

    In almost all cases separating the fields or bobbing the video shows you what's going on. And, as usual, untouched samples are welcome if you'd like more specific advice.
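A minimal inspection script along those lines (a sketch; the filename is a placeholder, and any bobber will do):

```avisynth
AviSource("capture.avi")   # hypothetical untouched capture

# Either look at the raw fields...
# SeparateFields()         # half-height, 50 fields per second

# ...or bob and step through frame by frame:
Yadif(mode=1)              # double-rate bob; every field becomes a frame
# Phase-shifted film: every frame has a duplicate neighbour -> TFM territory
# Blended conversion: many ghosted/double-imaged frames -> bob + SRestore
```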
    Last edited by manono; 11th Jul 2021 at 14:32.
  3. Hello manono, thanks for the quick answer.

    I loaded one of the videos and tried Bob(). That specific video indeed had one moving frame followed by a duplicate frame. It's hard to say it's 100% duplicated, as noise or aliasing seems to shift slightly, but there's no actual movement.

    In such cases your favorite bobber (QTGMC perhaps) followed by Srestore should work.
    If I see blended fields, I mostly end up doing some guessing. I try TDecimate(), see if it ends up with frame skipping, and if it does, I try SRestore(23.976). That's assuming all cartoons are film. If something is still wrong, I'll try TDecimate(cycle=6, cycleR=1) or SRestore(25), just in case the post-capture processing was funky.

    I will try a couple of videos and see if what you mentioned above always checks out for me. And if not, I will indeed post an untouched sample of that video.
    Thank you!
  4. I never use the AviSynth filter Bob, but for testing use Yadif(Mode=1). Bob will show up things you don't care about, as maybe you noticed (like the aliasing and the bouncing up and down).

    If it's field-blended, you use neither TFM nor TDecimate.
  5. SeparateFields() always gives you a correct result. By contrast, Bob() has many settings, and some of them will not give you the same thing as SeparateFields().

    I will defer to others, but I believe that the default settings for the AviSynth built-in Bob() function will not give you just the original fields (doubled up, of course).
  6. Member Skiller
    Originally Posted by johnmeyer View Post
    I will defer to others, but I believe that the default settings for the AviSynth built-in Bob() function will not give you just the original fields (doubled up, of course).
    Correct, it does not.

    Code:
    Bob(0,1)
    Does give you the original fields (except for YV12 chroma). For testing I always use that. Of course if one minds the aliasing a lot, manono's suggestion of using Yadif is a good option as well for a very fast bob.
  7. If it's field-blended, you use neither TFM nor TDecimate.
    That's field blending, right? Based on previous threads, it seems like there is no real way to "fix" that, as it's already merged into each frame during the original conversion?

    Ermm, strange. I still get some small jump/aliasing/noise changes moving around, even with Yadif(1) (on multiple videos). But oh well, that's no biggie. It's enough for me to figure out what's wrong with the video.

    So this works for me on most videos I test:

    Code:
    AviSource("Z:\Videos\VHS\Children\Loseless\Roga.avi")
    
    Robocrop()
    
    TFM()
    
    MergeChroma(Levels(20, 1.0, 210, 16, 235, coring=false), last)
    ChromaShiftSP(y=2, x=-3)
    
    Prefetch(3)
    I play around with TFM if there's something funky with the video, and with the Levels/ChromaShift. If there's something you think could benefit most if not all cartoons, let me know and I'll check it out and see if I can apply it to most of the videos.

    Thanks!

    EDIT: By the way, isn't it better to mostly use TFM(QTGMC(FPSDivisor=2)) over just TFM(), as QTGMC is mostly considered better here?
    Last edited by Okiba; 12th Jul 2021 at 01:47.
  8. Field-blending can appear like that, yes. But you have to separate the fields to be sure. And sometimes they're doubly blended and there's no real help for that kind of damage.

    I already suggested not using TFM at all when your source is blended. But rather than give general advice, samples are much more useful to us all.
  9. I haven't actually had any experience with field blending yet; I will share an example when and if I encounter it.
    I sampled a quick section. I'm not sure I can detect a difference between TFM() and TFM(QTGMC(FPSDivisor=2)). I'm considering using just TFM in that case, because it's so much faster compared to QTGMC. But perhaps more testing is required on more videos. What's your take on that?
  10. OK, I tried other videos. I do see a difference (QTGMC(FPSDivisor=2) on the right, TFM() on the left):

    [Attachment 59824]


    Everything is much smoother with QTGMC.
  11. Maybe something like this: TFM image sample processed:
    [Attachment 59826]

    Noise reduction would be better with a video sample.
  12. Thank you jagabo.

    I will post the above example when I'm home. The question is: why would you work "harder" to achieve the above with TFM() and other tools, over just using TFM(QTGMC(FPSDivisor=2)) and letting QTGMC do the cleaning for you?
  13. I played around with TFM() vs TFM(QTGMC(FPSDivisor=2)) a bit more.
    While QTGMC looks better, it sort of makes panning shots a bit more jaggy. Not by much, but by an amount I can notice. I assume that's because a lot of other stuff is being applied. I find "Faster" looks smoother than anything above it, but pretty noisy. "Slow" is somehow a sweet spot for me.
    Last edited by Okiba; 13th Jul 2021 at 05:26.
  14. Originally Posted by Okiba View Post
    I played around with TFM() vs TFM(QTGMC(FPSDivisor=2)) a bit more.
    While QTGMC looks better, it sort of makes panning shots a bit more jaggy. Not by much, but by an amount I can notice. I assume that's because a lot of other stuff is being applied. I find "Faster" looks smoother than anything above it, but pretty noisy. "Slow" is somehow a sweet spot for me.
    For cleaning the example.avi up, try
    Code:
    ConvertToYV16()
    Levels(16,1.0,235,3,245,coring=false)
    Tweak(cont=1.18,coring=false)
    KNLMeansCL(h=3.5)
    SMDegrain()
    santiag()
  15. Your suggestion involves still using TFM for smoother movement, but cleaning the image manually (instead of letting QTGMC do that)?
    Only santiag() worked for me, and it indeed solved the aliasing. The SMDegrain entry on the AviSynth page leads to a dead link; can you maybe share the AVSI file?
    And KNLMeansCL needs a strong machine; the one I use for capturing doesn't meet the requirements.

    Thanks!
  16. Code:
    LWlibavVideoSource("Example.avi") 
    ConvertToYV12()
    Crop(8,0,-8,-0)
    
    # levels and saturation adjustments
    ColorYUV(gain_y=100, off_y=-38, cont_u=100, cont_v=100)
    
    # white balance
    ConvertToRGB()
    RGBAdjust(rb=-15, bb=-12)
    RGBAdjust(r=253.0/240.0, b=253.0/192.0)
    ConvertToYV12()
    
    # eliminate combing from time base errors
    # reduce buzzing/flickering edges, mild noise reduction
    QTGMC(InputType=2)
    
    #remove halos
    Spline36Resize(400,height)
    dehalo_alpha(rx=2.5, ry=1.5, brightstr=1.5, darkstr=1.5)
    MergeChroma(aWarpSharp2(depth=5), aWarpSharp2(depth=20))
    KNLMeansCL(d=2, a=2, h=2)
    U = UtoY().KNLMeansCL(d=2, a=3, h=4)
    V = Vtoy().KNLMeansCL(d=2, a=3, h=4)
    YtoUV(U, V, last)
    
    # sharpen, darken lines
    Sharpen(0.3, 0.0)
    Hysteria(strength=0.75)
    
    #restore frame size
    nnedi3_rpow2(2, cshift="Spline36Resize", fwidth=704, fheight=576)
    
    # align chroma
    ChromaShiftSP(x=-1.5, y=2)
    dehalo_alpha() can be pretty damaging to the picture. Small details (like those in the characters' faces in the background) get blurred away. You might want to skip it or reduce the strength.
  17. Thank you for the detailed script, jagabo. The results are interesting. It almost feels like a computer-animated cartoon now, rather than a drawn one. It's very "pastel"-like. I wasn't able to reproduce it here, but maybe that's because my PC doesn't support KNLMeansCL? (so I can't remove the halos)

    Here are a couple of questions:

    - Something here confuses me. This is a cartoon, and as far as I know cartoons should be IVTCed, not de-interlaced. Why are you de-interlacing this one? And how did you come to the conclusion that InputType=2 should be used ("badly deinterlaced material")? It indeed makes the picture look better, but it introduces blended fields in other sections of the same video (attached).

    - It's interesting that you restore the frame size using nnedi3_rpow2. What I mostly do is just set the SAR value to 12/11. I assume resizing with nnedi3_rpow2 is perhaps better than letting the player handle it?
    [Attachment 59834 - Test.jpg]

    Last edited by Okiba; 13th Jul 2021 at 09:36.
  18. Originally Posted by Okiba View Post
    I wasn't able to reproduce it here, but maybe that's because my PC doesn't support KNLMeansCL? (so I can't remove the halos)
    Dehalo_alpha() was used for halo reduction. KNLMeansCL() was placed there to speed up the processing. You can substitute some other noise reduction filter there.

    Originally Posted by Okiba View Post
    This is a cartoon, and as far as I know cartoons should be IVTCed, not de-interlaced. Why are you de-interlacing this one? And how did you come to the conclusion that InputType=2 should be used ("badly deinterlaced material")? It indeed makes the picture look better
    QTGMC isn't deinterlacing here. It's removing the small combing caused by horizontal time base errors in your cap (you can try Blur(0.0, 1.0).Sharpen(0.0, 0.6), or Santiag(), instead for that). It's also reducing the buzzing and flickering of some edges, and it reduces noise a bit.

    Originally Posted by Okiba View Post
    but introduce blended fields on other sections of the same video (attached).
    You'll need to provide that section of video from your source. But QTGMC does sometimes cause blending artifacts.

    Originally Posted by Okiba View Post
    It's interesting you restore frame size using nnedi3_rpow2. What I mostly do is just set SAR value 12/11. I assume resizing using nnedi3_rpow2 is perhaps better then let the player handle the it?
    nnedi3_rpow2 was used there to restore the 704x576 frame size (and 12:11 sampling aspect ratio). A lot of the processing was done at 400x576 -- dehalo_alpha works better at that smaller frame size and the other processing is faster (the aforementioned KNLMeans, for example).
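In isolation, that round trip is just the following (a sketch using the sizes from this thread's 704x576 PAL example; the dehalo settings are per-source, not a recommendation):

```avisynth
# shrink horizontally so dehalo works on VHS-scale detail (and runs faster)
Spline36Resize(400, 576)
dehalo_alpha(rx=2.5, ry=1.5, brightstr=1.5, darkstr=1.5)
# ...any other processing at the small size goes here...

# restore the 704x576 frame (12:11 SAR) with an edge-directed upscaler
nnedi3_rpow2(2, cshift="Spline36Resize", fwidth=704, fheight=576)
```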

    Oh, I noticed later that the levels adjustment was lowering the black level a little too much. I was playing with the levels and forgot that the white balance used ConvertToRGB, which crushed those over-dark areas.
  19. Originally Posted by Okiba View Post
    The SMDegrain entry on the AviSynth page leads to a dead link; can you maybe share the AVSI file?
    Try this
    https://github.com/Dogway/Avisynth-Scripts/blob/master/SMDegrain%20v.3.2.2d/SMDegrain%20v3.2.2d.avsi
  20. I have been trying that. It's missing a method called `ex_bs` that I couldn't find information about. Maybe it's something AviSynth+ can't run. EDIT: The first one jagabo linked works, cheers!

    I'm trying to put everything specific to this video to one side. Most of the specifics, like colors, white balance and such, I already know how to handle. What I'm trying to do is grasp the big concepts (as I have a lot of cartoons to process, and I don't want to bother you with each one):

    Dehalo_alpha() was used for halo reduction. KNLMeansCL() was placed there to speed up the processing. You can substitute some other noise reduction filter there.
    I've never used a noise reduction filter before. Anything you recommend that's simple to use?

    QTGMC isn't deinterlacing here.
    Oh, I see. So your script includes neither TFM nor a deinterlacing QTGMC call. So the video will be left interlaced? No de-interlacing/IVTC happening? When I'm using TFM(QTGMC(FPSDivisor=2)), does it already do what InputType=2 does?

    nnedi3_rpow2 was used there to restore the 704x576 frame size (and 12:11 sampling aspect ratio).
    As I mentioned, what I mostly do is crop, and just set the aspect ratio to 12/11 using the SAR flag in ffmpeg. Doesn't taking cropped content and upscaling it actually hurt the quality, as the resolution is so very low?

    And lastly, for a casual user like me, it feels like QTGMC does a lot of good things for the video quality. So wouldn't it be easier to just go with TFM(QTGMC(Preset="Slow", FPSDivisor=2))? That way QTGMC still does its magic, but not very aggressively, as the preset is just Slow?
    EDIT: With SMDegrain(), I was even able to set the preset to "Medium" and get the same results as Slow.

    Thanks!
    Last edited by Okiba; 13th Jul 2021 at 13:23.
  21. Originally Posted by Okiba View Post
    I have been trying that. It's missing a method called `ex_bs` that I couldn't find information about. Maybe it's something AviSynth+ can't run.
    https://github.com/Dogway/Avisynth-Scripts/blob/master/ExTools.avsi

    Originally Posted by Okiba View Post
    Dehalo_alpha() was used for halo reduction. KNLMeansCL() was placed there to speed up the processing. You can substitute some other noise reduction filter there.
    I never used noise reduction filter before. Anything you recommend that is simple to use?
    Sharc uses SMDegrain. Some other common ones are hqdn3d, fft3dfilter, mcTemporalDenoise, and TemporalDegrain. QTGMC has a built-in noise reducer too; it's OK for very light denoising. Just add EZDenoise=1.0, DenoiseMC=true to its list of arguments. You can use higher EZDenoise values, but it starts blurring away picture detail. Here's more:

    http://avisynth.nl/index.php/External_filters#Denoisers
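As a sketch, the built-in denoising described above would look like this (the Preset and FPSDivisor values just mirror the settings discussed earlier in the thread; EZDenoise=1.0 is the light starting point, not a tuned value):

```avisynth
# QTGMC with its internal, motion-compensated noise reduction enabled
QTGMC(Preset="Slow", FPSDivisor=2, EZDenoise=1.0, DenoiseMC=true)
```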

    Originally Posted by Okiba View Post
    QTGMC isn't deinterlacing here.
    Oh, I see. So your script doesn't include neither QTGMC or TFM. So the video will be left Interlaced? No De-interlacing/IVTCing happening? When I'm using TFM(QTGMC(FPSDivisor=2)), does it already do what InputType2 do?
    You mean TFM(clip2=QTGMC(FPSDivisor=2))? TFM is a field matcher, usually used to inverse telecine. After it pairs two fields together, it looks to see if there are any comb artifacts. If it finds any, it uses a deinterlacer to remove them. If you provide clip2, that clip is used to fix the comb artifacts rather than one of its internal deinterlacers.
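The distinction shows up clearly in script form (a sketch; the source line is a placeholder):

```avisynth
AviSource("capture.avi")            # hypothetical source

# Field-match the source; QTGMC output only patches frames TFM finds combed
TFM(clip2=QTGMC(FPSDivisor=2))

# Not the same thing: here QTGMC deinterlaces everything first,
# and TFM then field-matches already-deinterlaced frames
# TFM(QTGMC(FPSDivisor=2))
```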

    Originally Posted by Okiba View Post
    nnedi3_rpow2 was used there to restore the 704x576 frame size (and 12:11 sampling aspect ratio).
    As I mentioned, what I mostly do is crop, and just set the Aspect Ratio to 12/11 using the SAR flag in ffmpeg. Isn't taking cropped content - upscaling it, actually hurt the quality? as the resolution is so very low?
    The effective resolution of PAL VHS is about 300x576. Downscaling to 400x576, then back up to 704x576, doesn't hurt the image much.

    Originally Posted by Okiba View Post
    And lastly, for a casual user like me, it feels like QTGMC does a lot of good things for the video quality. So wouldn't it be easier to just go with TFM(QTGMC(Preset="Slow", FPSDivisor=2))? That way QTGMC still does its magic, but not very aggressively, as the preset is just Slow?
    Do whatever gets you the result you want.

    Dehalo_alpha() is one of the most damaging filters. I often use an edge mask to limit it to only the strongest halos. For example, try replacing the dehalo_alpha() line with:
    Code:
    edges = mt_edge(mode="cartoon", thy1=30, thy2=40).Blur(1.0).mt_expand()
    Overlay(last, dehalo_alpha(rx=2.5, ry=1.5, brightstr=1.5, darkstr=1.5), mask=edges)
  22. Levels(16,1.0,235,3,245,coring=false)
    Why 3-245? Isn't it supposed to be limited to legal levels, so 16-235?

    You mean TFM(clip2=QTGMC(FPSdivisor=2))?

    No way! That was my mistake all along. I knew you could change the default TFM de-interlacer, and I preferred QTGMC, but what I missed was a typo that had been copy-pasted from one AVS script to another: I didn't use clip2! So in reality, TFM was getting post-QTGMC modified frames (/facepalm). That explains a lot.

    The effective resolution of PAL VHS is about 300x576. Downscaling 400x576, then back up to 704x576 doesn't hurt the image much.
    Oh, OK. So what's the point of upscaling it back? nnedi3_rpow2 makes it sharper? That sounds like a good practice to follow for every video I work with, then. Up until now I cropped (sometimes up to 20 pixels from each side) and just set the SAR value. So it sounds like a better approach is to upscale it back with nnedi3 and not state a SAR value to ffmpeg (as the resolution is already 12/11)?

    Dehalo_alpha() is one of the most damaging filters. I often use an edge mask to limit to only the strongest halos.
    I agree; I didn't like what dehalo did to the quality of the frame. I do see slightly fewer halos, but it's not worth it in my opinion. I couldn't find any difference in the picture using the edge mask. Maybe I just didn't find a strong enough halo.

    I spent an hour or so playing around with everything you suggested. This worked best for me:

    Code:
    SMDegrain()
    Santiag()
    QTGMC(InputType=2)
    
    MergeChroma(Levels(20, 1.0, 210, 16, 235, coring=false), last)
    ChromaShiftSP(x=-1.5, y=2)
    
    TFM(clip2=QTGMC(FPSDivisor=2))
    
    Prefetch(3)
    Thanks!
  23. Originally Posted by Okiba View Post
    Levels(16,1.0,235,3,245,coring=false)
    Why 3-245? Isn't it supposed to be limited to legal levels, so 16-235?
    To expand the luma, exploit the 16...235 range better, adjust the black level, reduce the washed look ...... whatever.
    See the histogram waveforms. Left is the original, right is tweaked (and denoised).
    Code:
    Levels(16,1.0,235,3,245,coring=false)
    Tweak(hue=0.0,cont=1.18,sat=1.3,coring=false)
    [Attachment 59844]


    Edit:
    About Levels, Tweak, Colorspace etc. revisit your former threads:
    https://forum.videohelp.com/threads/399085-Color-Blending-Post-Encoding
    https://forum.videohelp.com/threads/399404-Level%28%29-And-Why-Does-It-Also-Modify-Chroma
    Last edited by Sharc; 14th Jul 2021 at 05:10.
  24. Originally Posted by Okiba View Post
    Code:
    SMDegrain()
    Santiag()
    QTGMC(InputType=2)
    
    MergeChroma(Levels(20, 1.0, 210, 16, 235, coring=false), last)
    ChromaShiftSP(x=-1.5, y=2)
    
    TFM(clip2=QTGMC(FPSDivisor=2))
    
    Prefetch(3)
    Why are you using TFM() after QTGMC()? Also Santiag() before QTGMC() is redundant. Does your source contain real interlaced frames elsewhere? Moving your levels adjustments before SMDegrain() will reduce posterization problems.
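Applied to the script quoted above, those suggestions might look something like this (a sketch, not a definitive ordering; whether TFM() is needed at all depends on whether the source actually contains interlaced frames):

```avisynth
# levels first, so the later denoising isn't stretching quantized values
MergeChroma(Levels(20, 1.0, 210, 16, 235, coring=false), last)

SMDegrain()
QTGMC(InputType=2)        # cleans residual combing; a prior Santiag() is redundant

ChromaShiftSP(x=-1.5, y=2)
Prefetch(3)
```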
    Last edited by jagabo; 14th Jul 2021 at 08:25.
  25. About Levels, Tweak, Colorspace etc. revisit your former threads
    Yeah, I understand the basic concepts. What I wasn't sure about is expanding the luma beyond 16-235. It's just that I was always sure I had to stick to 16-235; I didn't know I could use something like 2-245 to "stretch" it, so to speak. It makes sense, as while the luma is expanded, the video still doesn't clip. I didn't know that was a valid option.

    Does your source contain real interlaced frames elsewhere?
    Another "aha!" moment for me. This video is not interlaced. For some reason I was sure that ALL VHS captures were interlaced (and what got me was also the combing from time base errors). That's why I couldn't understand earlier why you didn't invoke TFM in your script early on.

    Why are you using TFM() after QTGMC()?
    Leaving aside the fact that this video doesn't seem to need TFM() at all, QTGMC(InputType=2) doesn't do actual de-interlacing, right? Just cleaning up?

    Also Santiag() before QTGMC() is redundant
    It is? Because I see a difference: there's less combing when using Santiag() + QTGMC(InputType=2) vs. just QTGMC(InputType=2).

    Thanks again!
  26. Originally Posted by Okiba View Post
    Another "aha!" moment for me. This video is not interlaced. For some reason I was sure that ALL VHS captures were interlaced (and what got me was also the combing from time base errors). That's why I couldn't understand earlier why you didn't invoke TFM in your script early on.
    The original analog VHS video recording is interlaced, and it is recommended that this be maintained when digitizing. Therefore, given the option of selecting interlaced or progressive for capturing, it is recommended that interlaced be selected, not the de-interlacing option.
    I refer to your file "Example.avi". Its format is PAL interlaced (720x576 25i), but with both fields taken from the same instant in time. Therefore it looks progressive, but exhibits some residual scanline or aliasing artefacts and time base errors. It was probably like this on the VHS tape, or your capturing process applied a deinterlacer at some stage.
    Last edited by Sharc; 14th Jul 2021 at 11:25.
  27. That's very strange!
    The capture was done with VirtualDub (lossless HuffYUV). No de-interlacing happened as far as I know. I used the same setup for around 200 tapes. I just randomly checked a couple of files. All the footage from the camcorder LOOKS interlaced. I'm not so sure now, as it's possible I was just taking residual combing artifacts for "interlacing". Correct me if I'm wrong, but the best way to know is to use Yadif(1) and check frame by frame. If there's movement every frame, it's interlaced. If there's movement once per two frames, it's progressive?

    So before we continue, is that right?
  28. Analog SD video was always interlaced. That means it was transmitted (and stored on tape) one field at a time. But those fields could come from different points in time (an interlaced video camera) or the same point in time (typical of 24p film). 24p film sources were generally sped up to 25 fps and transmitted as two fields from each film frame. (Here the numbers represent the film frame number, t and b refer to top and bottom fields from those frames, and the plus sign means those two fields are woven together into frames in digital form.) So film frames
    Code:
    1  2  3  4...  (25 film frames per second)
    were transmitted as:
    Code:
    1t  1b  2t  2b  3t  3b  4t  4b...  (25 film frames becomes 50 video fields per second)
    Sometimes they are captured and stored in phase, meaning pairs of fields from the same film frame are stored together:
    Code:
    1t+1b  2t+2b  3t+3b  4t+4b... (50 fields becomes 25 digital frames per second)
    When this happens the video frames look progressive and can be treated as progressive in your processing.

    Sometimes they are captured out of phase:
    Code:
    1b+2t  2b+3t  3b+4t... (50 fields becomes 25 digital frames per second)
    When this happens the digital frames look interlaced (when there is motion) but they can easily be matched with TFM() to restore the original progressive film frames.
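The repair for that out-of-phase pattern is just field matching (a sketch; the filename is a placeholder, and the field-order line is an assumption to be checked against the actual capture):

```avisynth
AviSource("capture.avi")   # hypothetical out-of-phase film-in-PAL capture
AssumeTFF()                # or AssumeBFF(), matching the tape's field order
TFM()                      # re-pairs 1b+2t, 2b+3t ... into whole film frames
```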
    Last edited by jagabo; 14th Jul 2021 at 11:37.
  29. So the TRANSMISSION of analog SD was always interlaced, but the fields can be stored in phase, so while interlaced, it LOOKS progressive. I see. Phew. I was worried I had made a mistake capturing 200 VHS tapes; in my mind, progressive was a DVD-era thing.

    So going back to the original question, I can determine the type of film by using Yadif(1) and finding a panning shot/section with a lot of movement:

    - Movement in every frame means I should de-interlace it.
    - Movement once per two frames means it's progressive, and it doesn't require anything else.

    Now the question is how the captured-out-of-phase scenario (the one to use TFM on) would look. You mentioned the frames would look interlaced when in motion. So after Yadif(1) I should still be getting one movement per two frames, but the unbobbed frames will look combed when there's motion? Maybe you have an example for me to check?
    Last edited by Okiba; 14th Jul 2021 at 12:36.


