Hi everyone,
I have a bunch of PAL VHS-captured cartoons (interlaced) that I'm slowly but surely converting from a lossless format into a format I can more easily watch. My understanding is that most cartoons should be IVTCed instead of de-interlaced (using TFM() instead of QTGMC()). What I do is look for a panning shot, where I get one movement per frame, and apply TFM().
Now, if the 5th frame is identical to the 4th on the panning shot, I try SRestore(23.976) or TFM().TDecimate() and check which one looks better (meaning no frames are being skipped during the panning shot, and the end results are not jagged). If the 6th frame is identical to the 5th, I try SRestore(25) or TFM().TDecimate(cycle=6, cycleR=1).
Is that right? Just so I won't have to repeat a lot of work later. I'll admit I tried QTGMC(Preset="Fast", FPSDivisor=2) instead of TFM and didn't notice a difference.
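In script form, the check I'm describing is roughly this (a minimal sketch; the filename and TFF field order are assumptions for the example):
Code:
# Sketch only: placeholder filename, assumed field order
AviSource("capture.avi")
AssumeTFF()            # swap for AssumeBFF() if the field order is wrong
Bob()                  # 50 fps; step through a panning shot frame by frame
# every other frame duplicated -> progressive, TFM() alone
# 1 duplicate per 5 frames     -> TFM().TDecimate(cycle=5, cycleR=1)
# 1 duplicate per 6 frames     -> TFM().TDecimate(cycle=6, cycleR=1)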
Thanks!
-
If every frame is interlaced, then the chances are good it's phase-shifted and TFM alone should work. The way you decide if that's true or not is to either separate the fields or bob the video and check if every field has a duplicate.
If when bobbing a video you see a lot of blended/ghosted/double-imaged fields then the chances are good you have a video field-blended from a standards conversion (film to PAL, perhaps). In such cases your favorite bobber (QTGMC perhaps) followed by Srestore should work.
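In script form, that bob-then-unblend approach is roughly (a minimal sketch; the field order is an assumption):
Code:
AssumeTFF()             # assumed field order
QTGMC(Preset="Slow")    # bob: one full frame per field, 50 fps
SRestore(frate=25)      # pick the unblended frames, back to 25 fps
SRestore runs on the bobbed stream because the blending lives in the individual fields; it selects the cleanest frames rather than decimating blindly.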
It's near impossible to have a progressive PAL video with a duplicate frame every 6th frame. Maybe 1 out of 25.
In almost all cases separating the fields or bobbing the video shows you what's going on. And, as usual, untouched samples are welcome if you'd like more specific advice.
Last edited by manono; 11th Jul 2021 at 14:32.
-
Hello manono, thanks for the quick answer.
I loaded one of the videos and tried Bob(). That specific video did indeed have one moving frame followed by a duplicate frame. It's hard to say it's 100% duplicated, as noise or aliasing seems to shift slightly, but there's no actual movement.
Quote: "In such cases your favorite bobber (QTGMC perhaps) followed by Srestore should work."
If I see blended fields, I mostly end up doing some guessing. I try TDecimate() and see if it ends with frame skipping; if it does, I try SRestore(23.976). That's assuming all cartoons are film. If something is still wrong, I try TDecimate(cycle=6, cycleR=1) or SRestore(25), just in case the post-capture processing was funky.
I will try a couple of videos and see if what you mentioned above always checks out for me. If not, I will indeed post an untouched sample of that video.
Thank you!
I tried TDecimate(cycle=6, cycleR=1), followed by SRestore(23.976). -
SeparateFields() always gives you a correct result. By contrast, Bob() has many settings, and some of them will not give you the same thing as SeparateFields().
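For reference, the two ways of inspecting the fields look like this (a sketch; use one line at a time):
Code:
SeparateFields()   # half-height fields, 50 per second, completely untouched
# Bob()            # full height, but the missing lines are interpolated
For judging duplicates and blends, SeparateFields() is the safer choice, since nothing is interpolated.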
I will defer to others, but I believe the default settings of the AviSynth built-in Bob() function will not give you just the original fields (doubled up, of course). -
-
Quote: "If it's field-blended, you use neither TFM nor TDecimate."
That's field-blending, right? Based on previous threads, it seems like there is no real way to "fix" that, as it's already merged into each frame during the original conversion?
Hmm, strange. I still get some small jumps/aliasing/noise moving around on changes, even with Yadif(1) (on multiple videos). But oh well, that's no biggie. It's enough for me to figure out what's wrong with the video.
So this works for me on most videos I test:
I'm playing around with TFM if there's something funky with the video, and with Levels/ChromaShift. If there is something you think can benefit most if not all cartoons, let me know and I'll check it out and see if I can apply it to most of the videos.
Code:
AviSource("Z:\Videos\VHS\Children\Loseless\Roga.avi")
RoboCrop()
TFM()
MergeChroma(Levels(20, 1.0, 210, 16, 235, coring=false), last)
ChromaShiftSP(y=2, x=-3)
Prefetch(3)
Thanks!
EDIT: By the way, isn't it better to mostly use TFM(QTGMC(FPSDivisor=2)) over just TFM(), as QTGMC is mostly considered better here?
Last edited by Okiba; 12th Jul 2021 at 01:47.
-
Field-blending can appear like that, yes. But you have to separate the fields to be sure. And sometimes they're doubly blended and there's no real help for that kind of damage.
I already suggested not using TFM at all when your source is blended. But rather than give general advice, samples are much more useful to us all. -
I haven't actually had any experience with field-blending yet; I will share an example when and if I encounter it.

I sampled a quick section. I'm not sure I can detect a difference between TFM() and TFM(QTGMC(FPSDivisor=2))? I'm considering using just TFM in that case, because it's so much faster compared to QTGMC. But perhaps more testing is required on more videos. What's your take on that? -
OK, I tried other videos. I do see a difference (QTGMC(FPSDivisor=2) on the right, TFM() on the left):
[Attachment 59824]
Everything is much smoother with QTGMC. -
Maybe something like this — the TFM image sample, processed:
[Attachment 59826]
Noise reduction would be better with a video sample. -
Thank you jagabo.
I will post the above example when I'm home. The question is: why would you work "harder" to achieve the above with TFM() and other tools, over just using TFM(QTGMC(FPSDivisor=2)) and letting QTGMC do the cleaning for you? -
I played around with TFM() vs TFM(QTGMC(FPSDivisor=2)) a bit more.
While QTGMC looks better, it sort of makes panning shots a bit more jaggy. Not by much, but by an amount I can notice. I assume that's because a lot of other stuff is being applied. I find "Faster" looks smoother than anything above, but pretty noisy. "Slow" is somehow a sweet spot for me.
Last edited by Okiba; 13th Jul 2021 at 05:26.
-
-
Your suggestion involves still using TFM for "smoother" movement, but cleaning the image manually (instead of QTGMC doing that)?
Only Santiag() worked for me, and it indeed solved the anti-aliasing. The SMDegrain entry on the AviSynth page leads to a dead link; can you maybe share the AVSI file?
And KNLMeansCL needs a strong machine; the one I use for capturing doesn't meet the requirements.
Thanks! -
dehalo_alpha() can be pretty damaging to the picture. Small details (like those in the characters' faces in the background) get blurred away. You might want to skip it or reduce the strength.
Code:
LWLibavVideoSource("Example.avi")
ConvertToYV12()
Crop(8,0,-8,-0)

# levels and saturation adjustments
ColorYUV(gain_y=100, off_y=-38, cont_u=100, cont_v=100)

# white balance
ConvertToRGB()
RGBAdjust(rb=-15, bb=-12)
RGBAdjust(r=253.0/240.0, b=253.0/192.0)
ConvertToYV12()

# eliminate combing from time base errors,
# reduce buzzing/flickering edges, mild noise reduction
QTGMC(InputType=2)

# remove halos
Spline36Resize(400,height)
dehalo_alpha(rx=2.5, ry=1.5, brightstr=1.5, darkstr=1.5)
MergeChroma(aWarpSharp2(depth=5), aWarpSharp2(depth=20))
KNLMeansCL(d=2, a=2, h=2)
U = UtoY().KNLMeansCL(d=2, a=3, h=4)
V = VtoY().KNLMeansCL(d=2, a=3, h=4)
YtoUV(U, V, last)

# sharpen, darken lines
Sharpen(0.3, 0.0)
Hysteria(strength=0.75)

# restore frame size
nnedi3_rpow2(2, cshift="Spline36Resize", fwidth=704, fheight=576)

# align chroma
ChromaShiftSP(x=-1.5, y=2)
-
Thank you for the detailed script, jagabo. The results are interesting. It almost feels like a computer-animated cartoon now, rather than a drawn one. It's very "pastel"-like. I wasn't able to reproduce it here, but maybe that's because my PC doesn't support the use of KNLMeansCL? (So I can't remove the halos.)
Here are a couple of questions:
- Something here confuses me. This is a cartoon, and as far as I know cartoons should be IVTCed, not de-interlaced. Why are you de-interlacing this one? And how did you come to the conclusion that InputType=2 ("badly deinterlaced material") should be used? It does indeed make the picture look better, but it introduces blended fields in other sections of the same video (attached).
- It's interesting that you restore the frame size using nnedi3_rpow2. What I mostly do is just set the SAR value to 12/11. I assume resizing with nnedi3_rpow2 is perhaps better than letting the player handle it?
Last edited by Okiba; 13th Jul 2021 at 09:36.
-
Dehalo_alpha() was used for halo reduction. KNLMeansCL() was placed there to speed up the processing. You can substitute some other noise reduction filter there.
QTGMC isn't deinterlacing here. It's removing the small combing caused by horizontal time base errors in your cap (you can try Blur(0.0).Sharpen(0.0, 0.6), or Santiag() instead for that). It's also reducing the buzzing and flickering of some edges, and reduces noise a bit.
You'll need to provide that section of video from your source. But QTGMC does sometimes cause blending artifacts.
nnedi3_rpow2 was used there to restore the 704x576 frame size (and 12:11 sampling aspect ratio). A lot of the processing was done at 400x576 -- dehalo_alpha works better at that smaller frame size and the other processing is faster (the aforementioned KNLMeans, for example).
Oh, I noticed later that the levels adjustment was lowering the black level a little too much. I was playing with the levels and forgot that the white balance used ConvertToRGB, which crushed those over-dark areas. -
-
I have been trying that. It's missing a function called ex_bs that I couldn't find any information about; maybe it's something AviSynth+ can't run. EDIT: The first one jagabo linked works, cheers!
I'm trying to put everything specific to this video aside. Most of the specifics, like colors, white balance and such, I already know how to handle. What I'm trying to do is grasp the big concepts (as I have a lot of cartoons to process, and I don't want to bother you with each one):
Quote: "Dehalo_alpha() was used for halo reduction. KNLMeansCL() was placed there to speed up the processing. You can substitute some other noise reduction filter there."
I never used a noise reduction filter before. Anything you recommend that is simple to use?
Quote: "QTGMC isn't deinterlacing here."
Oh, I see. So your script includes neither a real deinterlacer nor TFM? So the video will be left interlaced? No de-interlacing/IVTC happening? When I'm using TFM(QTGMC(FPSDivisor=2)), does it already do what InputType=2 does?
Quote: "nnedi3_rpow2 was used there to restore the 704x576 frame size (and 12:11 sampling aspect ratio)."
As I mentioned, what I mostly do is crop, and just set the aspect ratio to 12/11 using the SAR flag in ffmpeg. Doesn't taking cropped content and upscaling it actually hurt the quality, as the resolution is so very low?
And lastly, for a casual like me, it feels like QTGMC is doing a lot of good things for the video quality. So wouldn't it be easier to just go with TFM(QTGMC(Preset="Slow", FPSDivisor=2))? That way QTGMC is still doing its magic, but not very aggressively, as the preset is just Slow?
EDIT: With using SMDegrain(), I was even able to set Preset to "Medium" and get the same results as Slow.
Thanks!
Last edited by Okiba; 13th Jul 2021 at 13:23.
-
https://github.com/Dogway/Avisynth-Scripts/blob/master/ExTools.avsi
Quote: "I never used a noise reduction filter before. Anything you recommend that is simple to use?"
Sharc uses SMDegrain. Some other common ones are hqdn3d, fft3dfilter, McTemporalDenoise, and TemporalDegrain; QTGMC has a built-in noise reducer too. It's OK for very light denoising; just add EZDenoise=1.0, DenoiseMC=true to its list of arguments. You can use higher EZDenoise values, but it starts blurring away picture detail. Here's more:
http://avisynth.nl/index.php/External_filters#Denoisers
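As a sketch, the QTGMC built-in denoising mentioned above would be invoked like this (the preset choice is just an example):
Code:
QTGMC(Preset="Medium", FPSDivisor=2, EZDenoise=1.0, DenoiseMC=true)
EZDenoise sets the strength, and DenoiseMC=true makes the denoising motion-compensated, which preserves detail better at some speed cost.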
You mean TFM(clip2=QTGMC(FPSDivisor=2))? TFM is a field matcher, usually used to inverse telecine. After it pairs two fields together it looks to see if there are any comb artifacts. If it finds any, it uses a deinterlacer to remove them. If you provide clip2, that clip is used to fix the comb artifacts rather than one of its internal deinterlacers.
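Since the two spellings are easy to mix up, a sketch of the difference:
Code:
# Field-matches a clip that has already been deinterlaced by QTGMC:
# TFM(QTGMC(FPSDivisor=2))
# Field-matches the original source; uses the QTGMC clip only to patch
# frames where comb artifacts remain after matching:
TFM(clip2=QTGMC(FPSDivisor=2))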
The effective resolution of PAL VHS is about 300x576. Downscaling 400x576, then back up to 704x576 doesn't hurt the image much.
Do whatever gets you the result you want.
Dehalo_alpha() is one of the most damaging filters. I often use an edge mask to limit it to only the strongest halos. For example, try replacing the dehalo_alpha() line with:
Code:
edges = mt_edge(mode="cartoon", thy1=30, thy2=40).Blur(1.0).mt_expand()
Overlay(last, dehalo_alpha(rx=2.5, ry=1.5, brightstr=1.5, darkstr=1.5), mask=edges)
-
Quote: "Levels(16,1.0,235,3,245,coring=false)"
Why 3-245? Isn't it supposed to stay within limited levels, i.e. 16-235?
You mean TFM(clip2=QTGMC(FPSdivisor=2))?
No way! That was my mistake all along. I knew you could change the default TFM de-interlacer, and I preferred QTGMC, but what I missed was a typo that had been copy-pasted from one AVS script to another: I didn't use clip2! So in reality, TFM was getting post-QTGMC modified frames (/facepalm). That explains a lot.
Quote: "The effective resolution of PAL VHS is about 300x576. Downscaling 400x576, then back up to 704x576 doesn't hurt the image much."
Oh, OK. So what's the point of upscaling it back? nnedi3_rpow2 makes it sharper? That sounds like a good practice to follow in every video I work with, then. Up until now I cropped (sometimes up to 20 pixels from each side) and just set the SAR value. So it sounds like a better approach is to upscale back with nnedi3 and not pass a SAR value to ffmpeg (as the resolution is already 12/11)?
Quote: "Dehalo_alpha() is one of the most damaging filters. I often use an edge mask to limit it to only the strongest halos."
I agree; I didn't like what dehalo did to the quality of the frame. I do see slightly fewer halos, but it's not worth it in my opinion. I couldn't find any difference in the picture using the edge mask. Maybe I just didn't find a strong enough halo.
I spent an hour or so playing around with everything you suggested. This worked best for me:
Code:
SMDegrain()
Santiag()
QTGMC(InputType=2)
MergeChroma(Levels(20, 1.0, 210, 16, 235, coring=false), last)
ChromaShiftSP(x=-1.5, y=2)
TFM(clip2=QTGMC(FPSDivisor=2))
Prefetch(3)
Thanks!
-
To expand the luma, exploit the 16-235 range better, adjust the black level, reduce the washed-out look... whatever.
See the histogram waveforms. Left is the original, right is tweaked (and denoised).
Code:
Levels(16,1.0,235,3,245,coring=false)
Tweak(hue=0.0,cont=1.18,sat=1.3,coring=false)
[Attachment 59844]
Edit:
About Levels, Tweak, Colorspace etc. revisit your former threads:
https://forum.videohelp.com/threads/399085-Color-Blending-Post-Encoding
https://forum.videohelp.com/threads/399404-Level%28%29-And-Why-Does-It-Also-Modify-Chroma
Last edited by Sharc; 14th Jul 2021 at 05:10.
-
Last edited by jagabo; 14th Jul 2021 at 08:25.
-
Quote: "About Levels, Tweak, Colorspace etc. revisit your former threads"
Yeah, I understand the basic concepts. What I wasn't sure about is expanding the luma beyond 16-235. It's just that I was always sure I had to stick to 16-235; I didn't know I could use something like 3-245 to "stretch" it, so to speak. It makes sense, as while the luma was expanded, the video still doesn't clip. Didn't know that's a valid option.
Quote: "Does your source contain real interlaced frames elsewhere?"
Another "aha!" moment for me. This video is not interlaced. For some reason I was sure that ALL VHS captures were interlaced (and what got me was also the combing from time base errors). That's why I couldn't understand earlier why you didn't invoke TFM in your script early on.
Quote: "Why are you using TFM() after QTGMC()?"
Leaving aside the fact that this video doesn't seem to need TFM() at all: QTGMC(InputType=2) doesn't do actual de-interlacing, right? Just cleaning up?
Quote: "Also Santiag() before QTGMC() is redundant"
It is? Because I see a difference: there's less combing when using Santiag() + QTGMC(InputType=2) vs. just QTGMC(InputType=2).
Thanks again! -
The original analog VHS recording is interlaced, and it is recommended that this be maintained when digitizing. So, given the option of selecting interlaced or progressive for capturing, Interlaced should be selected, not the de-interlacing option.
I refer to your file "Example.avi". Its format is PAL interlaced (720x576 25i), but with both fields taken from the same instant in time. It therefore looks progressive, but exhibits some residual scanline or aliasing artefacts and time base errors. It was probably like this on the VHS tape, or your capturing process applied a deinterlacer at some stage.
Last edited by Sharc; 14th Jul 2021 at 11:25.
-
That's very strange!

The capture was done with VirtualDub (lossless HuffYUV). No de-interlacing happened as far as I know. I used the same setup for around 200 tapes. I just randomly checked a couple of files; all the footage from the camcorder LOOKS interlaced. I'm not so sure now, as it's possible I just took residual combing artifacts as "interlaced". Correct me if I'm wrong, but the best way to know is to use Yadif(1) and check frame by frame. If there's movement every frame, it's interlaced. If there's movement once per 2 frames, it's progressive?
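In script form, that check is simply (a sketch; placeholder filename, assumed field order):
Code:
AviSource("capture.avi")   # placeholder
AssumeTFF()                # assumed field order
Yadif(mode=1)              # double rate: one output frame per field, 50 fps
# new image on every frame  -> truly interlaced content
# new image every 2nd frame -> progressive content split across fields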
So before we continue, is that right? -
Analog SD video was always interlaced. That means it was transmitted (and stored on tape) one field at a time. But those fields could come from different points in time (an interlaced video camera) or the same point in time (typical of 24p film). 24p film sources were generally sped up to 25 fps and transmitted as two fields from each film frame. (Here the numbers represent the film frame number, t and b refer to top and bottom fields of those frames, and the plus sign means those two fields are woven together into frames in digital form.) So film frames
Code:
1 2 3 4... (25 film frames per second)
were transmitted as:
Code:
1t 1b 2t 2b 3t 3b 4t 4b... (25 film frames becomes 50 video fields per second)
Sometimes they are captured and stored in phase, meaning pairs of fields from the same film frame are stored together:
Code:
1t+1b 2t+2b 3t+3b 4t+4b... (50 fields becomes 25 digital frames per second)
When this happens the video frames look progressive and can be treated as progressive in your processing.
Sometimes they are captured out of phase:
Code:
1b+2t 2b+3t 3b+4t... (50 fields becomes 25 digital frames per second)
When this happens the digital frames look interlaced (when there is motion) but they can easily be matched with TFM() to restore the original progressive film frames.
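A sketch of handling that out-of-phase case (the field order is an assumption):
Code:
AssumeTFF()   # assumed; use AssumeBFF() for bottom-field-first captures
TFM()         # re-pairs 1b+2t, 2b+3t, ... back into whole film frames
# No decimation needed for PAL film: 25 frames in, 25 progressive frames out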
Last edited by jagabo; 14th Jul 2021 at 11:37.
-
So the TRANSMISSION of analog SD was always interlaced, but it can be stored in phase, so while interlaced, it LOOKS progressive. I see. Phew. I was worried I had made a mistake capturing 200 VHS tapes; in my mind progressive was a DVD-era thing.

So coming back to the original question, I can determine the type of film by using Yadif(1) and finding a panning shot/section with a lot of movement:
- Movement every frame means I should de-interlace it.
- Movement every two frames means it's progressive, and it doesn't require anything else.
Now the question is what the captured-out-of-phase scenario (the one to use TFM on) would look like. You mentioned the frames would look interlaced when in motion. So after Yadif(1), I should still be getting one movement per two frames, but those frames will look combed when there's motion? Maybe you have an example for me to check?
Last edited by Okiba; 14th Jul 2021 at 12:36.