So... I'm off studying Avisynth methods to correct this issue... I have been searching for ways to correct alternate fields, but haven't found anything.
By the way, if you want a bob deinterlacer that replaces the missing field with black you can use an AviSynth script like this:
AviSource("1st-vdub-svideo-Huffyuv-sample2.avi")
AssumeTFF()
SeparateFields() # since the video is top-field-first the even fields are top fields, the odd fields are bottom fields
top = SelectEven() # only the top fields
bot = SelectOdd() # only the bottom fields
tblack = top.ColorYUV(gain_y=-256, off_y=16, cont_u=-256, cont_v=-256) # a copy of the top field made black
bblack = bot.ColorYUV(gain_y=-256, off_y=16, cont_u=-256, cont_v=-256) # a copy of the bottom field made black
feven = Interleave(top, bblack).Weave() # combine the top field and the black bottom field to make a frame
fodd = Interleave(tblack, bot).Weave() # combine the black top field and the bottom field to make a frame
Interleave(feven, fodd) # interleave the even and odd fields (now frames) back into a single video
But that second SeparateFields() isn't really separating fields, it's just separating the now progressive images into two separate images. In a real interlaced frame the second field is displayed 1/60 of a second later. But in this case that second "field" is displayed at the same time as the first (not literally at the same time; each scan line is displayed 1/15734th second later than the previous). So there is a 120 Hz flicker if you view the result of the double SeparateFields() stepping through frame by frame. (You don't see this in realtime playback of the script because your monitor isn't displaying at 120 Hz; you're only seeing every other frame at 60 Hz. Even if you are running a 120 Hz display you wouldn't see the flicker because your eyes can't see 120 Hz flicker.) But this does not show up as a 120 Hz flicker when you watch the video on TV. It's 15734 Hz "flicker" -- which you don't see as flicker but rather as alternating light and dark horizontal lines.
So all the double SeparateFields() script shows us is that consecutive scanlines of each field alternate in brightness. But we already knew that from looking at the result of a single SeparateFields().
Now, that does hint at something. I just don't know what.
First SeparateFields() on 1 Frame @ 1/30 sec = 2 Fields @ 1/60 sec
At this point, the difference between fields could be a difference between information from the two tape heads; however, the Weird Lines still exist - we have to go deeper.
Running SeparateFields() again on the 1/60 file:
AviSource("1986_1011 Pool Sample3 60fps.avi")
AssumeTFF().AssumeFrameBased().SeparateFields()
Weird Lines are GONE!!!
Timing is improper between the pairs... In reality, the first pair of "images" are taken from the same instant in time, and the second pair of "images" are from 1/60 sec later. However, these are displayed as evenly spaced in time - 1/120 sec apart.
The display sequence (stepping through) is (assuming Field1 line 0 start and TFF):
Image1A (even lines) - Image1B (even lines) - Image2A (odd lines) - Image2B (odd lines)***
Running this either TFF or BFF: In BOTH CASES, Image1A and Image2B ("Images" 1 & 4) are darker.
*** Field1 and Field2 after the first SeparateFields() don't start at the same line - if Field1 top line is even, Field2 top line is odd.
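To double-check which parity each "image" carries while stepping through, AviSynth's built-in Info() overlay can label every frame - a quick sketch on the same 60fps sample:

```
AviSource("1986_1011 Pool Sample3 60fps.avi")
AssumeTFF().AssumeFrameBased().SeparateFields()
Info()  # overlays frame number, parity, and clip properties on each frame
```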
This doesn't make any sense... switching from TFF to BFF should change result (I think).
In any event, I don't believe this can be attributed to tape heads. dellsam34 was RIGHT!
I know that PAL video alternates the phase of the chroma carrier over successive scan lines (of a field). If this was PAL video I'd assume it had to do with that. But it's NTSC video. Maybe there's something about the NTSC chroma carrier that alternates between fields too. I don't know enough about it to say.
I decided to work this out graphically. Based on observations of "images" 1 & 4 being darker after running SeparateFields() twice, the predicted Weird Lines should be pairs of dark scan lines separated by pairs of normal scan lines, and based on inspection of blown-up images this is accurate.
UPDATED: TFF vs BFF produces the same pattern, but different pairs of scan lines.
EDIT: After correction, TFF and BFF produce the same sets of dark lines.
[Attachment 53168]
The problem is defined!
The solution is unknown...
AssumeBFF()
SeparateFields()
vinverse2(amnt=4, uv=2)
Weave()
[Attachment 53032]
[Attachment 53033]
Is there a way in Avisynth to select specific "images" and apply corrections only to those "images"? I searched, but don't know if SelectEvery() would do this.
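For what it's worth, SelectEvery() can do this. After the second SeparateFields() there are four "images" per original frame, so a sketch like this (the ColorYUV offset is just a placeholder correction, not a recommendation) would pull out only the dark "images" (1 and 4, i.e. offsets 0 and 3), filter them, and restore the original order:

```
AviSource("1986_1011 Pool Sample3 60fps.avi")
AssumeTFF().AssumeFrameBased().SeparateFields()  # second SeparateFields(); 4 "images" per original frame
dark   = SelectEvery(4, 0, 3)  # "images" 1 and 4 (the darker ones)
normal = SelectEvery(4, 1, 2)  # "images" 2 and 3
darkfixed = dark.ColorYUV(off_y=6)  # placeholder: brighten the dark "images" slightly
# re-interleave back into the original 1,2,3,4 order
Interleave(darkfixed.SelectEven(), normal.SelectEven(), normal.SelectOdd(), darkfixed.SelectOdd())
```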
Also, I'm assuming this is a luma issue in the darker "images" (I believe your "uv=2" indicates this?). Is there a way to measure total, integrated luma in an "image?" I just got AlignExplode v1.2 from Brad working, and I'm working on learning how to read the histogram. However, I don't see any luma numbers.
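On the measurement question: I'm not aware of a single "total luma" readout, but AviSynth's runtime function AverageLuma() can report each field's mean Y when called inside ScriptClip() - a sketch, reusing the already field-separated sample file:

```
AviSource("1986_1011 Pool Sample3 YUY2 60fps.avi")  # already one SeparateFields() deep
ScriptClip("Subtitle(String(AverageLuma()))")  # overlay the mean luma of each field
```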
This actually seems to be working after running SeparateFields() once.
AviSource("1986_1011 Pool Sample3 YUY2 60fps.avi")
LoadPlugin("C:\Program Files (x86)\AviSynth+\plugins64+\EquLines.dll")
EquLines(deltamax=10)
e.g. You can apply a different filter, or different settings, selectively to even or odd fields. In this example, FilterA is applied to even fields and FilterB to odd fields:
orig = AviSource("1986_1011 Pool Sample3 YUY2 60fps.avi")
even = orig.AssumeBFF().SeparateFields().SelectEven().FilterA()
odd  = orig.AssumeBFF().SeparateFields().SelectOdd().FilterB()
Interleave(even, odd)
Weave()
But in your case, a single field has both darker/brighter lines, hence the vertical blur approach
Another approach would be to SeparateFields() again and attempt to do something, but it probably won't work very well. If you attempt to brighten dark lines (or darken bright lines), you will introduce new areas of bright/dark lines, because the original defect is non-uniform if you look at the histogram. i.e. Certain areas are affected more than others. If you look at the pool shot, in the bottom half only the brighter pixels have that wavy sawtooth pattern in the histogram. Darker pixels in the same area do not, or are not affected as severely. Those darker areas correspond to shadow areas, such as the shadows in front of the kids.
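That non-uniformity can be seen per field with AviSynth's built-in Histogram() - a sketch, assuming one of the sample files from this thread:

```
AviSource("1986_1011 Pool Sample.avi")
SeparateFields()
Histogram()           # classic mode: per-scanline luma waveform panel on the right
# Histogram("levels")  # alternative: Y/U/V level distributions
```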
Also, I'm assuming this is a luma issue in the darker fields (I believe your "uv=2" indicates this?).
Is there a way to measure total, integrated luma in a field?
Even if you measure a single pixel line, 480 x 1 values, then average them (or use some other measure like median or mode), it's not going to be useful.
The distribution is more important in your example. e.g. let's say a dark line was -5 relative to a bright line above or below it. Adding +5 to Y won't fix it because of the distribution - it's non-uniform across a row.
If you look at that screenshot above, that "sawtooth" pattern does not affect the darker pixels. If you applied a filter to a line, you would cause a sawtooth pattern in the darker pixels. You would introduce a defect.
Once again, the problem isn't lighter and darker fields. Only lighter and darker scan lines in the fields. So measuring the average luma of a field won't help. Even if the problem was lighter and darker fields the difference is so small that it would likely be dwarfed by normal brightness changes from field to field.
You can visualize the chroma channels with UtoY() or VtoY().
AviSource("filename.avi")
SeparateFields()
StackHorizontal(UtoY(), VtoY())
You can visualize the luma alone with GreyScale() or ColorYUV(cont_u=-256, cont_v=-256).
AviSource("filename.avi")
SeparateFields()
GreyScale()
First just darkening one field to match the other.
AviSource("1986_1011 Pool Sample.avi")
SeparateFields()
AssumeFrameBased()
SeparateFields()
evenlines = SelectEven()
oddlines = SelectOdd().ColorYUV(gain_y=-6) # darken the bright lines of the field
Interleave(evenlines, oddlines)
Weave()
AssumeFieldBased()
Weave()
So I tried using a mask based on brightness to only apply it to brighter parts of the picture:
AviSource("1986_1011 Pool Sample.avi")
SeparateFields()
AssumeFrameBased()
SeparateFields()
evenlines = SelectEven()
oddlines = SelectOdd()
bmask = oddlines.ColorYUV(off_y=-120).ColorYUV(gain_y=512)
oddlines = Overlay(oddlines, oddlines.ColorYUV(gain_y=-6), mask=bmask)
Interleave(evenlines, oddlines)
Weave()
AssumeFieldBased()
Weave()
Looking at the chroma channels, it might be that areas where U is brighter corresponds to areas with the lines. I'll try playing with that later...
Thanks for all the good information!!
I think I can create a Boolean algorithm for selecting them, if the frame count is tracked numerically by AviSynth.
UPDATE: I read the code wrong. You are selecting lines - I was thinking "images."
Here's the result of the third script in post #45 (plus QTGMC and stacking the original on the left, the filtered version on the right). You can see that darkening the bright lines fixes much of the image but creates lines in other parts.
I have learned:
"Fields" are made up of alternating lines in "Frames"
"Images" are made up of alternating lines in "Fields"
I was hung up on selecting and correcting "Images." However, I realize now the better solution is selecting and correcting the appropriate lines in the "Fields." I believe that is the approach you are taking in your code.
I don't think there's a formal name for the result of the second SeparateFields(). But they're definitely not fields. I just called them images.
Can you please explain how your mask "ColorYUV(off_y=-120).ColorYUV(gain_y=512)" accomplishes its purpose?
ColorYUV(off_y=-120) subtracts 120 from each Y value, Y' = Y - 120. Remaining Y values now range from 0 to 135. ColorYUV(gain_y=512) multiplies the remaining values by 3, Y' = Y * (gain_y + 256) / 256.
So basically, the first ColorYUV is used as a threshold. Pixels below 120 will not be changed. Pixels from 120 to 205 are changed proportionally depending on their brightness (for example, Y = 160 becomes 40 after the offset, then 120 after the gain - about half strength in the mask). Pixels above 205 will be fully changed.
You can see the mask by adding return(bmask) any time after it's generated.
Another way to build the same mask is with Levels(120, 1, 205, 0, 255).
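A quick way to sanity-check that the two constructions match is to view them side by side - a sketch (coring=false is my assumption here, to match the straight offset/gain arithmetic; the default coring behavior rescales TV-range input first):

```
AviSource("1986_1011 Pool Sample.avi")
SeparateFields().AssumeFrameBased().SeparateFields()
oddlines = SelectOdd()
maskA = oddlines.ColorYUV(off_y=-120).ColorYUV(gain_y=512)
maskB = oddlines.Levels(120, 1.0, 205, 0, 255, coring=false)
StackHorizontal(maskA, maskB)  # the two masks should look identical
```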
Is it possible to point sample in areas where lines originally occurred... to maybe compare YUV values to see if there is a correlation?
You can use VirtualDub2 to read RGB and YUV values of individual pixels:
[Attachment 53079]
Though it's a little convoluted. If you haven't used VirtualDub2 before: After opening a video file or AVS script select Video -> Filters... Press the Add... button. In the left pane double click on the Crop filter (some other filters work too). Hold down a shift key while moving the mouse cursor over the preview image.
But it's not entirely accurate. It converts incoming YUV to RGB with a rec.601 matrix. Then converts that RGB back to YUV for that display. The round trip loses some accuracy.
For this purpose the absolute value doesn't matter, it's the relative values that might make a difference.
I wonder if the camera processes RGB from its sensor or YUV? RGB would actually be more intuitive if color is a factor.
Cameras use RGB sensors. The sensor output is usually converted immediately to YUV to be transmitted or recorded. Some modern cameras allow the raw RGB to be recorded or transmitted.
jagabo, your code is based on lighter lines needing to be darkened. I was wondering if you took that path intuitively... I had always assumed these were dark lines needing to be lightened.
Your approach does differ from a theory I have that this is some type of clipping of the brights during processing of the signal from the camera sensor before laying down on tape.
I actually tried reversing your code, but the light lines overcorrecting in the dark areas was a problem.
After some study, I believe I understand the Y Gain 3x multiplier, but I can't come up with the 205 threshold.
Update: Figured it out - 255/3 + 120