Yes, but I played the tape directly to a TV from the original camcorder, and the banding showed up. I think that eliminates the player (the newer camcorder) and strongly suggests the tape is the source. I will try to get a tape from a friend to verify the workflow, but at this point I feel pretty sure it's my old tape.
So... I'm off studying Avisynth methods to correct this issue... I have been searching for ways to correct alternate fields, but haven't found anything.
-
-
By the way, if you want a bob deinterlacer that replaces the missing field with black you can use an AviSynth script like this:
Code:
AviSource("1st-vdub-svideo-Huffyuv-sample2.avi")
AssumeTFF()
SeparateFields() # since the video is top-field-first, the even fields are top fields, the odd fields are bottom fields
top = SelectEven() # only the top fields
bot = SelectOdd()  # only the bottom fields
tblack = top.ColorYUV(gain_y=-256, off_y=16, cont_u=-256, cont_v=-256) # a copy of the top field made black
bblack = bot.ColorYUV(gain_y=-256, off_y=16, cont_u=-256, cont_v=-256) # a copy of the bottom field made black
feven = Interleave(top, bblack).Weave() # combine the top field and the black bottom field to make a frame
fodd = Interleave(tblack, bot).Weave()  # combine the black top field and the bottom field to make a frame
Interleave(feven, fodd) # interleave the even and odd fields (now frames) back into a single video
-
Thanks!! At this point all the working code is helpful. It lets me see the syntax, etc., and gives me cut-and-paste sources. Speaking of AviSynth, I was trying to understand Brad's post to see if I could reproduce what he saw:
Running SeparateFields twice kind of blows my mind... there are no fields to separate in the second pass. The output is weird: "images" separated by semi-distorted, stuttering "images." I agree with his first conclusion - "varying brightness from scanline-to-scanline" after running once. However, I don't see his "flicker" conclusion after running twice.
Last edited by GrouseHiker; 2nd May 2020 at 12:15. Reason: Corrected terminology "fields" vs "images"
-
All SeparateFields() does is split the frame into two half-height frames (one after the other), putting all the even-numbered scan lines into the first and all the odd-numbered scan lines into the second (or vice versa, depending on field order). AviSynth keeps track of whether the video stream is a sequence of frames or fields. After the first SeparateFields() it knows the stream is now fields, and it will refuse to SeparateFields() again. To do so you have to tell it to assume those fields are really frames, hence the AssumeFrameBased() between the two SeparateFields() calls.
But that second SeparateFields() isn't really separating fields; it's just separating the now-progressive images into two separate images. In a real interlaced frame the second field is displayed 1/60 of a second later. But in this case that second "field" is displayed at the same time as the first (not literally at the same time - each scan line is displayed 1/15734th of a second after the previous one). So there is a 120 Hz flicker if you view the result of the double SeparateFields() stepping through frame by frame. (You don't see this in realtime playback of the script because your monitor isn't displaying at 120 Hz; you're only seeing every other frame at 60 Hz. Even if you are running a 120 Hz display you wouldn't see the flicker, because your eyes can't see 120 Hz flicker.) And this does not show up as a 120 Hz flicker when you watch the video on TV. It's a 15734 Hz "flicker" - which you don't see as flicker but rather as alternating light and dark horizontal lines.
So all the double SeparateFields() script shows us is that consecutive scanlines of each field alternate in brightness. But we already knew that from looking at the result of a single SeparateFields().
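As a sanity check on that line rate, a quick Python sketch using the standard NTSC numbers (525 lines per frame at 30000/1001 frames per second):

```python
# NTSC horizontal line rate: 525 lines per frame at ~29.97 frames/s.
lines_per_frame = 525
frame_rate = 30000 / 1001        # ~29.97 Hz (NTSC frame rate)
line_rate = lines_per_frame * frame_rate
print(round(line_rate))          # 15734 (Hz), the scan-line rate quoted above
```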
Now, that does hint at something. I just don't know what. -
Yeah, I've been racking my brain over that question... just don't know enough about how the cameras work, how images are captured, how data is laid on tape, etc. I was just thinking about the fact that the banding is not totally eliminated by the first SeparateFields() - not what I was expecting.
-
My brain won't let go of this:
First SeparateFields() on 1 Frame @ 1/30 sec = 2 Fields @ 1/60 sec
At this point, difference between Fields could be differences between information from the two tape heads; however, Weird Lines still exist - we have to go deeper.
Running SeparateFields() again on the 1/60 file:
Code:
AviSource("1986_1011 Pool Sample3 60fps.avi")
AssumeTFF().AssumeFrameBased().SeparateFields()
Weird Lines are GONE!!!
Timing is improper between the pairs... In reality, the first pair of "images" is taken from the same instant in time, and the second pair is from 1/60 sec later. However, these are displayed as evenly spaced in time - 1/120 sec apart.
The display sequence (stepping through) is (assuming Field1 line 0 start and TFF):
Image1A (even lines) - Image1B (even lines) - Image2A (odd lines) - Image2B (odd lines)***
Running this as either TFF or BFF: in BOTH CASES, Image1A and Image2B ("images" 1 & 4) are darker.
*** Field1 and Field2 after the first SeparateFields() don't start at the same line - if Field1 top line is even, Field2 top line is odd.
This doesn't make any sense... switching from TFF to BFF should change result (I think).
In any event, I don't believe this can be attributed to tape heads. dellsam34 was RIGHT!
-
I know that PAL video alternates the phase of the chroma carrier over successive scan lines (of a field). If this was PAL video I'd assume it had to do with that. But it's NTSC video. Maybe there's something about the NTSC chroma carrier that alternates between fields too. I don't know enough about it to say.
-
I decided to work this out graphically. Based on the observation that "images" 1 & 4 are darker after running SeparateFields() twice, the predicted Weird Lines should be pairs of dark scan lines separated by pairs of normal scan lines, and inspection of blown-up images confirms this.
UPDATED: TFF vs BFF produces the same pattern, but different pairs of scan lines.
EDIT: After correction, TFF and BFF produce the same sets of dark lines.
[Attachment 53168 - Click to enlarge]
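The predicted pattern can also be checked with a short Python sketch. The separate_fields helper is hypothetical - it just mimics AviSynth's even/odd scanline split - and it assumes TFF and that images 1A and 2B are the dark ones, as observed:

```python
# Which original frame scanlines land in each "image" after two
# SeparateFields() passes, and what dark-line pattern results.

def separate_fields(lines):
    """Split a list of scanlines into (even-line, odd-line) halves,
    mimicking SeparateFields() on a top-field-first source."""
    return lines[0::2], lines[1::2]

frame = list(range(12))                  # scanline numbers of one frame
field1, field2 = separate_fields(frame)  # first pass: two fields
img1a, img1b = separate_fields(field1)   # second pass on the top field
img2a, img2b = separate_fields(field2)   # second pass on the bottom field

print(img1a)  # [0, 4, 8]   original lines 0, 4, 8, ...
print(img1b)  # [2, 6, 10]
print(img2a)  # [1, 5, 9]
print(img2b)  # [3, 7, 11]  original lines 3, 7, 11, ...

# If images 1A and 2B are the darker ones, the dark scanlines in the full
# frame are 0, 3, 4, 7, 8, 11, ... i.e. pairs of dark lines separated by
# pairs of normal lines -- matching the blown-up images.
print(sorted(img1a + img2b))  # [0, 3, 4, 7, 8, 11]
```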
-
-
A general approach for this would be to apply a filter to the separated fields, then weave - something like a vertical blur, lowpass, or convolution. The side effect is a reduction in detail.
Code:
AssumeBFF()
SeparateFields()
Vinverse2(amnt=4, uv=2)
Weave()
orig
[Attachment 53032 - Click to enlarge]
filtered
[Attachment 53033 - Click to enlarge] -
Thanks, that looks good!
Is there a way in Avisynth to select specific "images" and apply corrections only to those "images"? I searched, but don't know if SelectEvery() would do this.
Also, I'm assuming this is a luma issue in the darker "images" (I believe your "uv=2" indicates this?). Is there a way to measure total, integrated luma in an "image"? I just got AlignExplode v1.2 from Brad working, and I'm learning how to read the histogram. However, I don't see any luma numbers.
-
This actually seems to be working after running SeparateFields() once.
Code:
AviSource("1986_1011 Pool Sample3 YUY2 60fps.avi")
LoadPlugin("C:\Program Files (x86)\AviSynth+\plugins64+\EquLines.dll")
EquLines(deltamax=10)
-
SelectEven and SelectOdd applied on SeparateFields()
E.g., you can apply a different filter, or different settings, selectively to even or odd fields. In this example, FilterA is applied to even fields and FilterB to odd fields:
Code:
orig = AviSource("1986_1011 Pool Sample3 YUY2 60fps.avi")
even = orig.AssumeBFF().SeparateFields().SelectEven().FilterA
odd  = orig.AssumeBFF().SeparateFields().SelectOdd().FilterB
Interleave(even, odd)
Weave()
But in your case, a single field has both darker/brighter lines, hence the vertical blur approach
Another approach would be to SeparateFields() again and attempt to do something, but it probably won't work very well. If you attempt to brighten dark lines (or darken bright lines), you will introduce new areas of bright/dark lines, because the original defect is non-uniform if you look at the histogram. That is, certain areas are affected more than others. If you look at the pool shot, in the bottom half only the brighter pixels have that wavy sawtooth pattern in the histogram. Darker pixels in the same area do not, or are not affected as severely. Those darker areas correspond to shadow areas, such as the shadows in front of the kids.
Also, I'm assuming this is a luma issue in the darker fields (I believe your "uv=2" indicates this?).
Is there a way to measure total, integrated luma in a field?
Even if you measure a single pixel line - 480 x 1 values - and then average them (or use some other measure like median or mode), it's not going to be useful.
The distribution is more important in your example. E.g., let's say a dark line was -5 relative to the bright line above or below it. Adding +5 to Y won't fix it, because the deficit is non-uniform across the row.
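A toy illustration of that point (the pixel values here are made up, just to show the shape of the problem): a flat +5 offset fixes only the pixels whose deficit happens to be exactly 5.

```python
# Hypothetical luma values for one bright row and the "dark" row below it.
bright = [100, 150, 200, 230]
dark   = [100, 145, 190, 215]   # deficit: 0, 5, 10, 15 -- not constant

# Uniform correction: add +5 to every pixel of the dark row.
fixed = [min(255, y + 5) for y in dark]

# Remaining difference from the bright row after "correction".
residual = [b - f for b, f in zip(bright, fixed)]
print(residual)   # [-5, 0, 5, 10] -- some pixels now too bright, others still dark
```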
If you look at that screenshot above, the "sawtooth" pattern does not affect the darker pixels. If you applied a uniform correction to a whole line, you would cause a sawtooth pattern in the darker pixels - you would introduce a defect. -
Once again, the problem isn't lighter and darker fields. Only lighter and darker scan lines in the fields. So measuring the average luma of a field won't help. Even if the problem was lighter and darker fields the difference is so small that it would likely be dwarfed by normal brightness changes from field to field.
You can visualize the chroma channels with UtoY() or VtoY().
Code:
AviSource("filename.avi")
SeparateFields()
StackHorizontal(UtoY(), VtoY())
You can visualize the luma alone with GreyScale() or ColorYUV(cont_u=-256, cont_v=-256).
Code:
AviSource("filename.avi")
SeparateFields()
GreyScale()
First just darkening one field to match the other.
Code:
AviSource("1986_1011 Pool Sample.avi")
SeparateFields()
AssumeFrameBased()
SeparateFields()
evenlines = SelectEven()
oddlines = SelectOdd().ColorYUV(gain_y=-6) # darken the bright lines of the field
Interleave(evenlines, oddlines)
Weave()
AssumeFieldBased()
Weave()
So I tried using a mask based on brightness to only apply it to brighter parts of the picture:
Code:
AviSource("1986_1011 Pool Sample.avi")
SeparateFields()
AssumeFrameBased()
SeparateFields()
evenlines = SelectEven()
oddlines = SelectOdd()
bmask = oddlines.ColorYUV(off_y=-120).ColorYUV(gain_y=512)
oddlines = Overlay(oddlines, oddlines.ColorYUV(gain_y=-6), mask=bmask)
Interleave(evenlines, oddlines)
Weave()
AssumeFieldBased()
Weave()
Looking at the chroma channels, it might be that areas where U is brighter correspond to areas with the lines. I'll try playing with that later... -
Thanks for all the good information!!
What I was getting at with regard to selecting "images" is that the lines go away after running SeparateFields() twice. At this separation level, the 1st and 4th "images" are darker than the 2nd and 3rd, and these darker "images" seem to be creating the lines. I was wondering if, say, every 4th "image" could be selected, its luma increased, and then Weave() run twice to bring it back?
-
I believe this is what I was thinking except the selection is not odd/even (if I'm understanding the code).
I think I can create a boolean algorithm for selecting if frame count is tracked numerically by Avisynth.
UPDATE: I read the code wrong. You are selecting lines - I was thinking "images."
-
-
The result of the first SeparateFields() is two fields. The result of the second SeparateFields() is not two fields, but rather two images, each with every other scanline of a field. That's the point. The word "field" has a specific meaning. If you cut your car in half you don't have two cars.
Here's the result of the third script in post #45 (plus QTGMC and stacking the original on the left, the filtered version on the right). You can see that darkening the bright lines fixes much of the image but creates lines in other parts. -
-
...sorry it took so long for me to figure this out...
I have learned:
"Fields" are made up of alternating lines in "Frames"
"Images" are made up of alternating lines in "Fields"
I was hung up on selecting and correcting "Images." However, I realize now the better solution is selecting and correcting the appropriate lines in the "Fields." I believe that is the approach you are taking in your code. -
I don't think there's a formal name for the result of the second SeparateFields(). But they're definitely not fields. I just called them images.
-
Thank you for the code! I have run your two options, and I see the new lines created by the 3rd block of code. However, this quoted block of code does a surprisingly good job. I do see some new lines created in a few areas, but the overall result is a HUGE improvement. I'm now working on understanding the ColorYUV parameters for masking - a major effort in itself.
Can you please explain how your mask "ColorYUV(off_y=-120).ColorYUV(gain_y=512)" accomplishes its purpose? -
ColorYUV(off_y=-120) subtracts 120 from each Y value: Y' = Y - 120. The remaining Y values now range from 0 to 135. ColorYUV(gain_y=512) multiplies the remaining values by 3: Y' = Y * (gain_y + 256) / 256.
So basically, the first ColorYUV is used as a threshold. Pixels below 120 will not be changed. Pixels from 120 to 205 are changed proportionately depending on their brightness. Pixels above 205 will be fully changed.
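To double-check that arithmetic, here is a small Python stand-in for the 8-bit luma math (integer math assumed; AviSynth's exact rounding may differ slightly), along with the equivalent Levels(120, 1, 205, 0, 255) ramp:

```python
def clip(v):
    """Clamp to the 8-bit range."""
    return max(0, min(255, v))

def coloryuv_mask(y):
    y = clip(y - 120)                    # ColorYUV(off_y=-120)
    return clip(y * (512 + 256) // 256)  # ColorYUV(gain_y=512): multiply by 3

def levels_mask(y):
    # Levels(120, 1, 205, 0, 255): linear ramp, 120 -> 0 up to 205 -> 255
    return clip((y - 120) * 255 // (205 - 120))

print(coloryuv_mask(119))  # 0   -- below the threshold, pixel not changed
print(coloryuv_mask(160))  # 120 -- partially changed
print(coloryuv_mask(205))  # 255 -- fully changed
print(all(coloryuv_mask(y) == levels_mask(y) for y in range(256)))  # True
```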
You can see the mask by adding return(bmask) any time after it's generated.
Another way to build the same mask is with Levels(120, 1, 205, 0, 255). -
Is it possible to point-sample in areas where the lines originally occurred... maybe to compare YUV values to see if there is a correlation?
-
You can use VirtualDub2 to read RGB and YUV values of individual pixels:
[Attachment 53079 - Click to enlarge]
Though it's a little convoluted. If you haven't used VirtualDub2 before: after opening a video file or AVS script, select Video -> Filters..., press the Add... button, and in the left pane double-click the Crop filter (some other filters work too). Then hold down a Shift key while moving the mouse cursor over the preview image.
But it's not entirely accurate. It converts the incoming YUV to RGB with a rec.601 matrix, then converts that RGB back to YUV for the display. The round trip loses some accuracy. -
Great! That's much easier than I was expecting (script - Frame#, x-y coordinates).
For this purpose the absolute value doesn't matter, it's the relative values that might make a difference.
I wonder if the camera processes RGB from its sensor, or YUV? RGB would actually be more intuitive if color is a factor. -
Cameras use RGB sensors. The output is usually converted immediately to YUV to be transmitted or recorded. Some modern cameras allow the raw RGB to be recorded or transmitted.
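For reference, a minimal sketch of that rec.601 YUV <-> RGB round trip (full-range 8-bit for simplicity; real capture pipelines use studio range, so exact values will differ):

```python
# rec.601 conversions, full-range 8-bit, chroma centered on 128.
def yuv_to_rgb(y, u, v):
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    return [round(max(0, min(255, c))) for c in (r, g, b)]

def rgb_to_yuv(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    v = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return [round(c) for c in (y, u, v)]

# Round-trip a sample: this one survives exactly, but because each step
# rounds to integers, other values can come back shifted by +/-1.
y, u, v = 83, 100, 150
print(yuv_to_rgb(y, u, v))               # [114, 77, 33]
print(rgb_to_yuv(*yuv_to_rgb(y, u, v)))  # [83, 100, 150]
```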
-
jagabo, your code is based on lighter lines needing to be darkened. I was wondering if you took that path intuitively... I had always assumed these were dark lines needing to be lightened.
Your approach does differ from a theory I have that this is some type of clipping of the brights during processing of the signal from the camera sensor before laying down on tape.
added:
I actually tried reversing your code, but the light lines overcorrecting in the dark areas was a problem.
-
My way of visualizing Y Offset = -120 is that all luma values are shifted down, putting the darker regions out of range (below zero) and the brightest pixels at 115 (235 - 120) or below. Is this valid?
After some study, I believe I understand the Y Gain 3x multiplier, but I can't come up with the 205 threshold.
Update: Figured it out - 255/3 + 120
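Those two numbers check out - a trivial verification, assuming 8-bit luma with a 16-235 legal range:

```python
off = 120
print(235 - off)        # 115: the brightest legal pixel after the -120 offset
print(255 // 3 + off)   # 205: the luma at which the 3x gain saturates the mask
```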