Of course, that means you have to visually scan through the entire video to find the differences.

Code:
v1 = WhateverSource("filename1.ext")
v2 = WhateverSource("filename2.ext")
Subtract(v1, v2)
Levels(120, 1, 136, 0, 255) # amplified to make differences very obvious
Or you can subtract one image from the other then use AviSynth's conditional filters to detect the frames that differ and print out a list of frame numbers.
When the images are exactly the same, all Y values will be 126 after Subtract(). Any differences will show up as less or more than that. You may need to adjust the two thresholds depending on how much the output of QTGMC varies.

Code:
v1 = AviSource("file1.avi") # YV12 source
v2 = AviSource("file2.avi") # YV12 source
v1 = StackHorizontal(v1, StackVertical(v1.UtoY(), v1.VtoY())) # add U and V channels as luma
v2 = StackHorizontal(v2, StackVertical(v2.UtoY(), v2.VtoY())) # add U and V channels as luma
Subtract(v1, v2)
WriteFileIf(last, "differences.txt", "(YPlaneMax(last) > 126) || (YPlaneMin(last) < 126)", "current_frame", append = false)
Here's a script that uses one of the clips from the first post in this thread. I don't know if you'll find it to be of any use:
Code:
Mpeg2Source("VTS_05_1_clip.demuxed.d2v")
v1 = QTGMC(chromamotion=false)
v2 = QTGMC(chromamotion=true)
s1 = StackHorizontal(v1, StackVertical(v1.UtoY(), v1.VtoY())) # add U and V channels as luma
s2 = StackHorizontal(v2, StackVertical(v2.UtoY(), v2.VtoY())) # add U and V channels as luma
Subtract(s1, s2)
threshold = 50
WriteFileIf(last, "VTS_05_differences.txt", "(YPlaneMax(last) > (126+threshold)) || (YPlaneMin(last) < (126-threshold))", "current_frame", append = false)
StackHorizontal(last, v1, v2) # show diff, and both original videos
I've just discovered that the episodes on YouTube have none of the blends or artifacts of the DVD... however, every 4th frame is a duplicate, the bitrate is very low, and it seems to be 720p - who knows what was done to get it to that resolution. But it gives me hope there's a better source out there somewhere...
I downloaded one episode. It was badly inverse telecined to 23.976 fps. But each group of 4 frames included one duplicate and was missing one of the original frames. That is, one original frame was replaced by a duplicate of another, so instead of a sequence like 1,2,3,4 it has 1,3,4,4. The resolution is better than VHS. My guess is it was made from a hard telecined NTSC DVD.
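For what it's worth, the usual fix for that 1,3,4,4 pattern is to detect the duplicate and replace it with a motion-interpolated frame. A rough AviSynth sketch along the lines of the well-known "filldrops" approach, using MVTools; the 0.1 difference threshold is a guess you'd have to tune against this source:

```avisynth
function FillDrops(clip c)
{
  super  = MSuper(c, pel=2)
  vf     = MAnalyse(super, isb=false, delta=1)
  vb     = MAnalyse(super, isb=true, delta=1)
  interp = MFlowInter(c, super, vb, vf, time=50)

  # if the current frame is (nearly) identical to the previous one,
  # assume it's the duplicate and substitute the interpolated frame
  return ConditionalFilter(c, interp, c, "YDifferenceFromPrevious()", "lessthan", "0.1")
}
```

Note this gives you an interpolated guess at the missing frame, not the original frame the upload dropped.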
As far as I know, the only DVDs made were the Australian ones I've been working from (region 4) and the French ones (region 2), which seem harder to come by - I don't know whether the French video is any better.
So I got a French DVD, and while it's different, it's certainly not better - it might in fact be worse (the borders certainly are). I'll put it through QTGMC + srestore, compare it to the AU DVD, and see if it's any better in terms of blends.
Here are samples from similar scenes, let me know what you think...
I was able to get smoother motion from the YouTube video. I used DoubleFPS2(), but some of the other frame rate converters might deliver fewer distortions in the interpolated frames.
The French clips don't have the chroma blending problems of the clips in the first post. They still have some interlace luma ghosting after QTGMC().SRestore() though.
After doing some comparisons, the French DVD looks way better than the AU one. And srestore seems to have an easier time picking up scene change blends (or there are a lot fewer blends). So I'll definitely be using it over the AU disc.
Last edited by nacho; 11th Jul 2020 at 00:24.
I have a question regarding color: my first script uses dgindex as a source and just does qtgmc+sres and produces a frame like this:
[Attachment 54112]
I tried a lossless encode with both x264 and ffv1 (ffmpeg) and when I use ffms2 as a source filter for further processing, the red color changes:
[Attachment 54113]
What's happening? It doesn't look lighter/more orange like that when I just play the file in MPC-HC.
If I add: --transfer bt470bg --colormatrix bt470bg --colorprim bt470bg
to my x264 lossless encode, it displays like the first image when I use the ffms2 source. I'm not sure how to get the colors to be correct without re-encoding with those flags, though.
With the ffv1 or untagged input if I do: core.resize.Bicubic(src, matrix_in_s="470bg", transfer_in_s="470bg", primaries_in_s="470bg", matrix_s="709")
It seems to fix it. Why does the matrix need to be converted to 709 - is it because that's what my monitor/modern displays use?
Last edited by nacho; 12th Jul 2020 at 01:54.
It sounds to me like your player is displaying the video as rec.709 when it's not flagged. Just flag it as rec.601 every time you encode.
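To illustrate why the unflagged file drifts toward orange: PAL DVD content is rec.601 (bt470bg), and decoding its YUV with rec.709 coefficients pushes red up. A small standalone Python sketch of the standard luma-coefficient math (the sample pixel values are arbitrary, not taken from your clip):

```python
def yuv_to_rgb(y, u, v, kr, kb):
    """Convert normalized YUV (Y in 0..1, U/V in -0.5..0.5) to RGB
    using the matrix defined by the Kr/Kb luma coefficients."""
    kg = 1.0 - kr - kb
    r = y + 2.0 * (1.0 - kr) * v          # R' = Y + 2(1-Kr)V
    b = y + 2.0 * (1.0 - kb) * u          # B' = Y + 2(1-Kb)U
    g = (y - kr * r - kb * b) / kg        # solve Y = Kr*R + Kg*G + Kb*B for G
    return (r, g, b)

# a saturated reddish pixel
y, u, v = 0.3, -0.15, 0.45

rgb_601 = yuv_to_rgb(y, u, v, kr=0.299, kb=0.114)    # BT.601 / bt470bg
rgb_709 = yuv_to_rgb(y, u, v, kr=0.2126, kb=0.0722)  # BT.709

# decoding 601 material with 709 coefficients yields a brighter red
print(rgb_601)
print(rgb_709)
```

So the pixels themselves are fine; only the decoder's assumption differs, which is why flagging the stream (rather than converting it) is enough.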
Yep thanks. I think I've accepted I'll have to live with the luma ghosting/outlines. Here's some of the other cleanup I've done (after qtgmc+srestore)
What do you think? In 1-4 I've shown two good frames and two frames that have that ghosting. The artifacts are actually lessened a bit by MCTemporalDenoise.
I can see in some scenes like the 5th screenshot there's detail that's been blurred away. But I'm fairly happy with it in other places. Is there anything you think I could do better?
Deblock_QED is for some frames that pretty much look like a grid.
My VS script:
###Cleanup###
deblock = haf.Deblock_QED(clp=src, quant1=35)
denoise = haf.MCTemporalDenoise(i=deblock, settings="very low", useTTmpSm=True, chroma=True, edgeclean=True, stabilize=True)
warp = core.warp.AWarpSharp2(clip=denoise, depth=12, blur=1, thresh=56)

###Borders###
crop = core.std.Crop(warp, left=10, right=8, top=2, bottom=2)
border = asf.FixBrightnessProtect2(crop, column=[1,701], adj_column=[12,12])
fill = asf.FillBorders(border, left=1)
MCTD isn't doing very much. I don't know if it's worth using.
I've been playing around with one of the clips and found TNLMeans(ax=1, ay=3, az=0, h=20) almost completely removes the ghosting artifacts.
[Attachment 54135]
Plus TNLMeans(ax=1, ay=3, az=0, h=20):
[Attachment 54136]
The ghosting is almost completely gone, but some other parts of the picture are blurred. It would be great to come up with a mask that applies that result only to the parts of the frame that have the ghosting. I've been playing with that, but so far a solution has eluded me...
The artifacts are edges/lines from either the previous or next frame. Logically, then, it would be good to find edges in the previous/next frames, see if there's a line/edge in an identical place in the current frame, and use that edge as a mask. I have zero experience with masking, and I don't know if anything like that is possible. What do you think?
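Something along those lines might be doable with masktools2. An untested AviSynth sketch of the idea (the edge mode, expand/blur steps and all thresholds are guesses): take edge masks of the previous and next frames, keep only the edges that also coincide with an edge in the current frame, and merge the heavy TNLMeans result through that mask:

```avisynth
src   = last
clean = src.TNLMeans(ax=1, ay=3, az=0, h=20)

prev = src.DuplicateFrame(0).Trim(0, src.FrameCount()-1)  # source shifted one frame later
next = src.Trim(1, 0)                                     # source shifted one frame earlier

# edges present in either neighboring frame...
neigh = mt_logic(prev.mt_edge("sobel"), next.mt_edge("sobel"), "max")
# ...that also line up with an edge in the current frame
mask  = mt_logic(neigh, src.mt_edge("sobel"), "and").mt_expand().Blur(1.0)

# apply the strong denoising only where the mask is lit
mt_merge(src, clean, mask, luma=true)
```

The risk is that legitimate static edges also survive the "and", so real detail under the mask would get the same blurring; it would need per-scene tuning at best.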