VideoHelp Forum
  1. Originally Posted by jagabo View Post
    So the problem is either the player or it's on the tape (the original camcorder).
    Yes, but I played the tape directly to a TV from the original camcorder, and the banding showed up. I think that eliminates the player (newer camcorder) and seems to confirm that the tape itself is the source. I will try to get a tape from a friend to verify the workflow, but I feel pretty sure it's my old tape at this point.

    So... I'm off studying Avisynth methods to correct this issue... I have been searching for ways to correct alternate fields, but haven't found anything.
  2. By the way, if you want a bob deinterlacer that replaces the missing field with black you can use an AviSynth script like this:

    Code:
    AviSource("1st-vdub-svideo-Huffyuv-sample2.avi") 
    AssumeTFF()
    
    SeparateFields()
    # since the video is top-field-first the even fields are top fields, the odd fields are bottom fields
    top = SelectEven() # only the top fields
    bot = SelectOdd() # only the bottom fields
    
    tblack = top.ColorYUV(gain_y=-256, off_y=16, cont_u=-256, cont_v=-256) # a copy of the top field made black
    bblack = bot.ColorYUV(gain_y=-256, off_y=16, cont_u=-256, cont_v=-256) # a copy of the bottom field made black
    
    feven = Interleave(top, bblack).Weave() # combine the top field and the black bottom field to make a frame
    fodd = Interleave(tblack, bot).Weave() # combine the black top field and the bottom field to make a frame
    
    Interleave(feven, fodd) # interleave the even and odd fields (now frames) back into a single video
    It's not really useful for much of anything.
  3. Originally Posted by jagabo View Post
    By the way, if you want a bob deinterlacer that replaces the missing field with black you can use an AviSynth script like this:
    Thanks!! At this point all the working code is helpful. It lets me see the syntax, etc., and gives me cut-and-paste sources. Speaking of AviSynth, I was trying to understand Brad's post to see if I could reproduce what he saw:
    Originally Posted by Brad View Post
    For some reason, the luma has 120Hz flicker. This Avisynth script probably won't mean anything to you, but for other readers:
    Code:
    SeparateFields()
    AssumeFrameBased().SeparateFields()
    If we look at the fields we see varying brightness from scanline to scanline. If we separate the 240@60 again down to 120@120, the lines in the image disappear and we see the varying brightness as flicker.
    Running SeparateFields twice kind of blows my mind... there are no fields to separate in the second pass. The output is weird... "images" separated by semi-distorted stutter "images." I agree with his first conclusion - "varying brightness from scanline-to-scanline" after running once. However, I don't see his "flicker" conclusion after running twice.
  4. Originally Posted by GrouseHiker View Post
    Originally Posted by jagabo View Post
    By the way, if you want a bob deinterlacer that replaces the missing field with black you can use an AviSynth script like this:
    Thanks!! At this point all the working code is helpful. It lets me see the syntax, etc., and gives me cut-and-paste sources. Speaking of AviSynth, I was trying to understand Brad's post to see if I could reproduce what he saw:
    Originally Posted by Brad View Post
    For some reason, the luma has 120Hz flicker. This Avisynth script probably won't mean anything to you, but for other readers:
    Code:
    SeparateFields()
    AssumeFrameBased().SeparateFields()
    If we look at the fields we see varying brightness from scanline to scanline. If we separate the 240@60 again down to 120@120, the lines in the image disappear and we see the varying brightness as flicker.
    Running SeparateFields twice kind of blows my mind... there are no fields to separate in the second pass.
    All SeparateFields() does is split the frame into two half-height frames (one after the other), putting all the even-numbered scanlines into the first and all the odd-numbered scanlines into the second (or vice versa, depending on field order). AviSynth keeps track of whether the video stream is a sequence of frames or fields. After the first SeparateFields() it knows the stream is now fields and it will refuse to SeparateFields() again. To do so you have to tell it to assume those fields are really frames, hence the AssumeFrameBased() between the two SeparateFields().
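    To make that concrete, a minimal sketch (the filename is hypothetical):
    Code:
    AviSource("sample.avi")  # hypothetical source clip
    SeparateFields()         # the stream is now marked field-based
    # SeparateFields()       # AviSynth would refuse here: the clip is already fields
    AssumeFrameBased()       # tell AviSynth to treat each field as a frame
    SeparateFields()         # splits each field into two half-height "images"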

    But that second SeparateFields() isn't really separating fields, it's just separating the now progressive images into two separate images. In a real interlaced frame the second field is displayed 1/60 of a second later. But in this case that second "field" is displayed at essentially the same time as the first (not literally at the same time; each scanline is displayed 1/15734th of a second after the previous one, since NTSC draws 525 lines x ~29.97 frames/sec ≈ 15734 lines/sec). So there is a 120 Hz flicker if you view the result of the double SeparateFields() stepping through frame by frame. (You don't see this in realtime playback of the script because your monitor isn't displaying at 120 Hz; you're only seeing every other frame at 60 Hz. Even if you are running a 120 Hz display you wouldn't see the flicker, because your eyes can't perceive 120 Hz flicker.) But this does not show up as a 120 Hz flicker when you watch the video on TV. It's a 15734 Hz "flicker" -- which you don't see as flicker but rather as alternating light and dark horizontal lines.

    So all the double SeparateFields() script shows us is that consecutive scanlines of each field alternate in brightness. But we already knew that from looking at the result of a single SeparateFields().

    Now, that does hint at something. I just don't know what.
  5. Originally Posted by jagabo View Post
    Now, that does hint at something. I just don't know what.
    Yeah, I've been racking my brain over that question... I just don't know enough about how the cameras work, how images are captured, how data is laid on tape, etc. I was just thinking about the fact that the banding is not totally eliminated by the first SeparateFields() - not what I was expecting.
  6. Originally Posted by jagabo View Post
    So all the double SeparateFields() script shows us is that consecutive scanlines of each field alternate in brightness. But we already knew that from looking at the result of a single SeparateFields().
    My brain won't let go of this:

    First SeparateFields() on 1 Frame @ 1/30 sec = 2 Fields @ 1/60 sec

    At this point, differences between the Fields could be differences between the information from the two tape heads; however, the Weird Lines still exist - we have to go deeper.

    Running SeparateFields() again on the 1/60 file:
    Code:
    avisource("1986_1011 Pool Sample3 60fps.avi")
    AssumeTFF().AssumeFrameBased().SeparateFields()
    Second SeparateFields() on those 2 Fields (forced Frames) @ 1/60 sec = 4 "Images" @ 1/120 sec...
    Weird Lines are GONE!!!
    Timing is improper between the pairs... In reality, the first pair of "images" are taken from the same instant in time, and the second pair of "images" are from 1/60 sec later. However, these are displayed as evenly spaced in time - 1/120 sec apart.

    The display sequence (stepping through) is (assuming Field1 line 0 start and TFF):

    Image1A (even lines) - Image1B (even lines) - Image2A (odd lines) - Image2B (odd lines)***

    Running this either TFF or BFF: In BOTH CASES, Image1A and Image2B ("Images" 1 & 4) are darker.

    *** Field1 and Field2 after the first SeparateFields() don't start at the same line - if Field1 top line is even, Field2 top line is odd.

    This doesn't make any sense... switching from TFF to BFF should change the result (I think).

    In any event, I don't believe this can be attributed to tape heads. dellsam34 was RIGHT!
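    One way to double-check which "images" are darker is to stamp each one with its position in the cycle of 4 (a sketch, using AviSynth's runtime variable current_frame):
    Code:
    avisource("1986_1011 Pool Sample3 60fps.avi")
    AssumeTFF().AssumeFrameBased().SeparateFields()
    # label 0 = Image1A, 1 = Image1B, 2 = Image2A, 3 = Image2B
    ScriptClip("Subtitle(String(current_frame % 4))")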
  7. I know that PAL video alternates the phase of the chroma carrier over successive scan lines (of a field). If this was PAL video I'd assume it had to do with that. But it's NTSC video. Maybe there's something about the NTSC chroma carrier that alternates between fields too. I don't know enough about it to say.
    I decided to work this out graphically. Based on the observation that "images" 1 & 4 are darker after running SeparateFields() twice, the predicted Weird Lines should be pairs of dark scanlines separated by pairs of normal scanlines, and inspection of blown-up images confirms this.

    UPDATED: TFF vs BFF produces the same pattern, but different pairs of scan lines.

    EDIT: After correction, TFF and BFF produce the same sets of dark lines.

    [Attachment 53168: matrix diagram]
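    In text form, the predicted pattern works out like this (my reconstruction from the "Images" 1 & 4 observation, assuming TFF - not the attachment itself):
    Code:
    frame line 0 : Field1 line 0 -> Image1A : dark
    frame line 1 : Field2 line 0 -> Image2A : normal
    frame line 2 : Field1 line 1 -> Image1B : normal
    frame line 3 : Field2 line 1 -> Image2B : dark
    frame line 4 : Field1 line 2 -> Image1A : dark
    frame line 5 : Field2 line 2 -> Image2A : normal
    frame line 6 : Field1 line 3 -> Image1B : normal
    frame line 7 : Field2 line 3 -> Image2B : dark
    So the dark lines land in pairs - (3,4), (7,8), and so on - separated by pairs of normal lines.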
  9. Originally Posted by GrouseHiker View Post
    pairs of dark scan lines separated by pairs of normal scan lines
    Which is exactly what's in the original interlaced video.
  10. The problem is defined!

    The solution is unknown...
  11. Originally Posted by GrouseHiker View Post

    So... I'm off studying Avisynth methods to correct this issue... I have been searching for ways to correct alternate fields, but haven't found anything.
    A general approach for this would be to apply a filter to the separated fields, then weave - something like a vertical blur, lowpass, or convolution. The side effect is a reduction in detail.

    Code:
    # load your clip first (e.g. with AviSource); vinverse2() is an external plugin/script, not built in
    AssumeBFF()
    SeparateFields()
    vinverse2(amnt=4, uv=2)
    Weave()
    These settings attempt to favor preserving detail over blurring to reduce the artifacts; you can use stronger settings, or other stronger filters, if there are other sections that have worse dark/light lines.

    orig: [Attachment 53032]


    filtered: [Attachment 53033]
  12. Originally Posted by poisondeathray View Post
    A general approach for this would be apply a filter to separated fields, then weave.
    Thanks, that looks good!

    Is there a way in Avisynth to select specific "images" and apply corrections only to those "images"? I searched, but don't know if SelectEvery() would do this.

    Also, I'm assuming this is a luma issue in the darker "images" (I believe your "uv=2" indicates this?). Is there a way to measure the total, integrated luma in an "image"? I just got AlignExplode v1.2 from Brad working, and I'm learning how to read the histogram. However, I don't see any luma numbers.
  13. This actually seems to be working after running SeparateFields() once.

    Code:
    Avisource("1986_1011 Pool Sample3 YUY2 60fps.avi")
    Loadplugin("C:\Program Files (x86)\AviSynth+\plugins64+\EquLines.dll")
    EquLines(deltamax=10)
    Looking harder - probably not.
  14. Originally Posted by GrouseHiker View Post


    Is there a way in Avisynth to select specific fields and apply corrections only to those fields? I searched, but don't know if SelectEvery() would do this.
    SelectEven() and SelectOdd() applied after SeparateFields().

    e.g. you can apply a different filter, or different settings, selectively to even or odd fields. In this example, FilterA is applied to even fields and FilterB to odd fields:

    Code:
    orig=Avisource("1986_1011 Pool Sample3 YUY2 60fps.avi")
    # FilterA() / FilterB() are placeholders for whatever filters you choose
    even=orig.AssumeBFF().SeparateFields().SelectEven().FilterA()
    odd=orig.AssumeBFF().SeparateFields().SelectOdd().FilterB()
    Interleave(even,odd)
    Weave()

    But in your case a single field has both darker and brighter lines, hence the vertical blur approach.

    Another approach would be to SeparateFields() again and attempt to do something, but it probably won't work very well. If you attempt to brighten the dark lines (or darken the bright lines), you will introduce new areas of bright/dark lines, because the original defect is non-uniform if you look at the histogram, i.e. certain areas are affected more than others. If you look at the bottom half of the pool shot, only the brighter pixels have that wavy sawtooth pattern in the histogram. Darker pixels in the same area do not, or are not affected as severely. Those darker areas correspond to shadow areas, such as the shadows in front of the kids.

    Also, I'm assuming this is a luma issue in the darker fields (I believe your "uv=2" indicates this?).
    For vinverse2, uv=2 means don't touch the chroma planes, just copy them. So the filter only affects the Y plane.

    Is there a way to measure total, integrated luma in a field?
    In what way? A single number for the total average? It's not going to be useful, since both bright and dark lines affect a single field.

    Even if you measure a single pixel line, 480 x 1 values, then average them (or use some other measure like the median or mode), it's not going to be useful.
    The distribution is more important in your example. E.g. let's say a dark line was -5 relative to the bright line above or below it. Adding +5 to Y won't fix it, because the distribution is non-uniform across a row.
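    If you still want a number anyway, the runtime function AverageLuma() can stamp each field's average onto the clip (a sketch; as said, a single average won't reveal the line pattern):
    Code:
    Avisource("1986_1011 Pool Sample3 YUY2 60fps.avi")
    ConvertToYV16()  # AverageLuma() wants a planar format
    AssumeBFF().SeparateFields()
    # ScriptClip evaluates its string once per frame (here, once per field)
    ScriptClip("""Subtitle("avg Y = " + String(AverageLuma()))""")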

    If you look at that screenshot above, the "sawtooth" pattern does not affect the darker pixels. If you applied a flat correction to a whole line, you would cause a sawtooth pattern in the darker pixels. You would introduce a defect.
    Once again, the problem isn't lighter and darker fields, only lighter and darker scanlines within the fields. So measuring the average luma of a field won't help. Even if the problem were lighter and darker fields, the difference is so small that it would likely be dwarfed by normal brightness changes from field to field.

    You can visualize the chroma channels with UtoY() or VtoY().

    Code:
    AviSource("filename.avi")
    SeparateFields()
    StackHorizontal(UtoY(),VtoY())
    But in this video the alternating light/dark lines don't appear in the chroma channels, only the luma.

    You can visualize the luma alone with GreyScale() or ColorYUV(cont_u=-256, cont_v=-256).

    Code:
    AviSource("filename.avi")
    SeparateFields()
    GreyScale()
    VInverse does blur the picture noticeably if you zoom into still shots. I tried a few alternatives:

    First, just darkening the brighter lines to match the others:

    Code:
    AviSource("1986_1011 Pool Sample.avi") 
    SeparateFields()
    AssumeFrameBased()
    SeparateFields()
    evenlines = SelectEven()
    oddlines = SelectOdd().ColorYUV(gain_y=-6) # darken the bright lines of the field
    Interleave(evenlines, oddlines)
    Weave()
    AssumeFieldBased()
    Weave()
    Unfortunately, that eliminated the bands in parts of the picture but created bands in areas where there were none before.

    So I tried using a mask based on brightness to only apply it to brighter parts of the picture:

    Code:
    AviSource("1986_1011 Pool Sample.avi") 
    SeparateFields()
    AssumeFrameBased()
    SeparateFields()
    evenlines = SelectEven()
    oddlines = SelectOdd()
    
    bmask = oddlines.ColorYUV(off_y=-120).ColorYUV(gain_y=512) # brightness mask: zero below Y=120, full above ~205
    oddlines = Overlay(oddlines, oddlines.ColorYUV(gain_y=-6),mask=bmask)
    
    Interleave(evenlines, oddlines)
    Weave()
    AssumeFieldBased()
    Weave()
    That protected dark parts of the picture, but not all dark areas are free of the lines. You can lower the threshold (the 120 in off_y=-120) to apply the filter to darker areas that do have the lines, but then it starts creating lines where there were none before.

    Looking at the chroma channels, it might be that areas where U is brighter correspond to areas with the lines. I'll try playing with that later...
  16. Thanks for all the good information!!

    Originally Posted by poisondeathray View Post
    Another approach would be to SeparateFields() again and attempt to do something, but it probably won't work very well. If you attempt to brighten the dark lines (or darken the bright lines), you will introduce new areas of bright/dark lines, because the original defect is non-uniform if you look at the histogram, i.e. certain areas are affected more than others. If you look at the bottom half of the pool shot, only the brighter pixels have that wavy sawtooth pattern in the histogram. Darker pixels in the same area do not, or are not affected as severely. Those darker areas correspond to shadow areas, such as the shadows in front of the kids.
    Originally Posted by jagabo View Post
    Once again, the problem isn't lighter and darker fields, only lighter and darker scanlines within the fields. So measuring the average luma of a field won't help. Even if the problem were lighter and darker fields, the difference is so small that it would likely be dwarfed by normal brightness changes from field to field.
    What I was getting at with regard to selecting "images" is that the lines go away after running SeparateFields() twice. At this separation level, the 1st and 4th "images" are darker than the 2nd and 3rd. These darker "images" seem to be creating the lines. I was wondering if, say, every 4th "image" could be selected, its luma increased, and then Weave run twice to bring it back?
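    For concreteness, this is roughly what I have in mind (a sketch - the gain_y=6 amount is a pure guess, and poisondeathray's warning above about flat corrections creating new lines still applies):
    Code:
    AviSource("1986_1011 Pool Sample.avi")
    AssumeTFF()
    SeparateFields()
    AssumeFrameBased()
    SeparateFields()
    # "images" cycle in groups of 4; the 1st and 4th of each group are the dark ones
    i0 = SelectEvery(4, 0).ColorYUV(gain_y=6) # brighten the 1st "image" (guessed amount)
    i1 = SelectEvery(4, 1)
    i2 = SelectEvery(4, 2)
    i3 = SelectEvery(4, 3).ColorYUV(gain_y=6) # brighten the 4th "image"
    Interleave(i0, i1, i2, i3)
    AssumeFieldBased()
    Weave()           # weave "images" back into fields
    AssumeFieldBased()
    Weave()           # weave fields back into frames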
  17. Originally Posted by jagabo View Post

    Code:
    AviSource("1986_1011 Pool Sample.avi") 
    SeparateFields()
    AssumeFrameBased()
    SeparateFields()
    evenlines = SelectEven()
    oddlines = SelectOdd().ColorYUV(gain_y=-6) # darken the bright lines of the field
    Interleave(evenlines, oddlines)
    Weave()
    AssumeFieldBased()
    Weave()
    I believe this is what I was thinking except the selection is not odd/even (if I'm understanding the code).

    I think I can create a boolean algorithm for the selection, if the frame count is tracked numerically by AviSynth.

    UPDATE: I read the code wrong. You are selecting lines - I was thinking "images."
  18. Originally Posted by jagabo View Post
    Once again, the problem isn't lighter and darker fields, only lighter and darker scanlines within the fields.
    I don't have the trained eyes y'all have, but I believe I was seeing (after SeparateFields twice) the lines are gone and the difference is lighter and darker "images." The matrix demonstrates this.
  19. Originally Posted by GrouseHiker View Post
    Originally Posted by jagabo View Post
    Once again, the problem isn't lighter and darker fields, only lighter and darker scanlines within the fields.
    I don't have the trained eyes y'all have, but I believe I was seeing (after SeparateFields twice) the lines are gone and the difference is lighter and darker fields. The matrix demonstrates this.
    The result of the first SeparateFields() is two fields. The result of the second SeparateFields() is not two fields, but rather two images, each with every other scanline of a field. That's the point. The word "field" has a specific meaning. If you cut your car in half you don't have two cars.

    Here's the result of the third script in post #45 (plus QTGMC and stacking the original on the left, the filtered version on the right). You can see that darkening the bright lines fixes much of the image but creates lines in other parts.
  20. Originally Posted by jagabo View Post
    The result of the first SeparateFields() is two fields. The result of the second SeparateFields() is not two fields, but rather two images, each with every other scanline of a field. That's the point. The word "field" has a specific meaning. If you cut your car in half you don't have two cars.
    Thanks. I was struggling with that terminology.

    Y'all have provided a LOT of good stuff! It's going to take me a while to catch up.
  21. Originally Posted by GrouseHiker View Post
    Thanks for all the good information!!

    Originally Posted by jagabo View Post
    Once again, the problem isn't lighter and darker fields, only lighter and darker scanlines within the fields.
    What I was getting at with regard to selecting "images" is that the lines go away after running SeparateFields() twice. At this separation level, the 1st and 4th "images" are darker than the 2nd and 3rd. These darker "images" seem to be creating the lines. I was wondering if, say, every 4th "image" could be selected, its luma increased, and then Weave run twice to bring it back?
    ...sorry it took so long for me to figure this out...
    I have learned:

    "Fields" are made up of alternating lines in "Frames"
    "Images" are made up of alternating lines in "Fields"

    I was hung up on selecting and correcting "Images." However, I realize now the better solution is selecting and correcting the appropriate lines in the "Fields." I believe that is the approach you are taking in your code.
  22. I don't think there's a formal name for the result of the second SeparateFields(). But they're definitely not fields. I just called them images.
  23. Originally Posted by jagabo View Post
    ...Unfortunately, that eliminated the bands in parts of the picture but created bands in areas where there were none before.

    So I tried using a mask based on brightness to only apply it to brighter parts of the picture:

    Code:
    AviSource("1986_1011 Pool Sample.avi") 
    SeparateFields()
    AssumeFrameBased()
    SeparateFields()
    evenlines = SelectEven()
    oddlines = SelectOdd()
    
    bmask = oddlines.ColorYUV(off_y=-120).ColorYUV(gain_y=512)
    oddlines = Overlay(oddlines, oddlines.ColorYUV(gain_y=-6),mask=bmask)
    
    Interleave(evenlines, oddlines)
    Weave()
    AssumeFieldBased()
    Weave()
    That protected dark parts of the picture, but not all dark areas are free of the lines. You can lower the threshold (the 120 in off_y=-120) to apply the filter to darker areas that do have the lines, but then it starts creating lines where there were none before.

    Looking at the chroma channels, it might be that areas where U is brighter correspond to areas with the lines. I'll try playing with that later...
    Thank you for the code! I have run both of your options, and I see the new lines created by the 3rd block of code. However, this quoted block of code does a surprisingly good job. I do see some new lines created in a few areas, but the overall result is a HUGE improvement. I'm now working on trying to understand the ColorYUV parameters for masking - a major effort in itself.

    Can you please explain how your mask "ColorYUV(off_y=-120).ColorYUV(gain_y=512)" accomplishes its purpose?
    ColorYUV(off_y=-120) subtracts 120 from each Y value, Y' = Y - 120 (values clamp at 0). The remaining Y values now range from 0 to 135. ColorYUV(gain_y=512) multiplies the remaining values by 3, Y' = Y * (gain_y + 256) / 256.

    So basically, the first ColorYUV is used as a threshold. Pixels below 120 will not be changed. Pixels from 120 to 205 are changed proportionately, depending on their brightness. Pixels above 205 will be fully changed.

    You can see the mask by adding return(bmask) any time after it's generated.

    Another way to build the same masks is with Levels(120,1,205,0,255).
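    To see what the threshold actually selects, you can render the mask itself (a sketch using the same clip and the return(bmask) trick):
    Code:
    AviSource("1986_1011 Pool Sample.avi")
    SeparateFields()
    AssumeFrameBased()
    SeparateFields()
    oddlines = SelectOdd()
    # below Y=120 -> 0 (untouched); e.g. Y=160 -> (160-120)*3 = 120; above ~205 -> full
    bmask = oddlines.ColorYUV(off_y=-120).ColorYUV(gain_y=512)
    # the same ramp via Levels, per the note above:
    # bmask = oddlines.Levels(120, 1, 205, 0, 255)
    return(bmask)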
  25. Is it possible to point sample in areas where lines originally occurred... to maybe compare YUV values to see if there is a correlation?
  26. You can use VirtualDub2 to read RGB and YUV values of individual pixels:

    [Attachment 53079: VirtualDub2 pixel readout]


    Though it's a little convoluted. If you haven't used VirtualDub2 before: After opening a video file or AVS script select Video -> Filters... Press the Add... button. In the left pane double click on the Crop filter (some other filters work too). Hold down a shift key while moving the mouse cursor over the preview image.

    But it's not entirely accurate. It converts the incoming YUV to RGB with a Rec.601 matrix, then converts that RGB back to YUV for the readout. The round trip loses some accuracy.
  27. Originally Posted by jagabo View Post
    But it's not entirely accurate. It converts the incoming YUV to RGB with a Rec.601 matrix, then converts that RGB back to YUV for the readout. The round trip loses some accuracy.
    Great! That's much easier than I was expecting (script - Frame#, x-y coordinates).

    For this purpose the absolute value doesn't matter, it's the relative values that might make a difference.

    I wonder if the camera processes RGB from its sensor, or YUV? RGB would actually be more intuitive if color is a factor.
    Cameras use RGB sensors. The output is usually converted immediately to YUV to be transmitted or recorded. Some modern cameras allow the raw RGB to be recorded or transmitted.
  29. jagabo, your code is based on lighter lines needing to be darkened. I was wondering if you took that path intuitively... I had always assumed these were dark lines needing to be lightened.

    Your approach does differ from a theory I have that this is some type of clipping of the brights during processing of the signal from the camera sensor, before it is laid down on tape.

    added:
    I actually tried reversing your code, but the lightened lines overcorrected in the dark areas, which was a problem.
  30. Originally Posted by jagabo View Post
    Pixels from 120 to 205 are changed proportionately, depending on their brightness. Pixels above 205 will be fully changed.
    My way of visualizing off_y = -120 is that all luma values are shifted down, putting the darker regions out of range (below zero) and the brightest pixels at 115 (235 - 120) or below. Is this valid?

    After some study, I believe I understand the Y Gain 3x multiplier, but I can't come up with the 205 threshold.

    Update: Figured it out - 255/3 + 120 = 85 + 120 = 205.


