VideoHelp Forum




  1. Originally Posted by GrouseHiker View Post
    This loads the file and runs, but the double SeparateFields problem remains:
    Code:
    LWlibavVideoSource("1986_1011 Pool Sample DV.avi")
    SeparateFields()
    AssumeFrameBased().SeparateFields()
    I will try on a different computer in a few hours
    Which problem? The error message about no video stream?

    Post the error message verbatim

    That script above works for me, no error message

    Which AviSynth version?
  2. Everything runs ok on this different machine (T3600):

    [OS/Hardware info]
    Operating system: Windows 10 (x64) (Build 18363)

    CPU: Intel(R) Xeon(R) CPU E5-1650 0 @ 3.20GHz / Sandy Bridge-E (Xeon)
    MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, AVX
    6 physical cores / 12 logical cores


    [Avisynth info]
    VersionString: AviSynth+ 3.5 (r3106, 3.5, x86_64)
    VersionNumber: 2.60
    File / Product version: 3.5.0.0 / 3.5.0.0
    Interface Version: 7
    Multi-threading support: Yes
    Avisynth.dll location: C:\Windows\SYSTEM32\avisynth.dll
    Avisynth.dll time stamp: 2020-04-02, 22:27:26 (UTC)
    PluginDir2_5 (HKLM, x64): C:\Program Files (x86)\AviSynth+\plugins64
    PluginDir+ (HKLM, x64): C:\Program Files (x86)\AviSynth+\plugins64+


    [C++ 2.6 Plugins (64 Bit)]
    C:\Program Files (x86)\AviSynth+\plugins64+\ConvertStacked.dll [2020-04-02]
    C:\Program Files (x86)\AviSynth+\plugins64+\DirectShowSource.dll [2020-04-02]
    C:\Program Files (x86)\AviSynth+\plugins64+\ImageSeq.dll [2020-04-02]
    C:\Program Files (x86)\AviSynth+\plugins64+\LSMASHSource.dll [2020-03-22]
    C:\Program Files (x86)\AviSynth+\plugins64+\Shibatch.dll [2020-04-02]
    C:\Program Files (x86)\AviSynth+\plugins64+\TimeStretch.dll [2020-04-02]
    C:\Program Files (x86)\AviSynth+\plugins64+\VDubFilter.dll [2020-04-02]

    [Scripts (AVSI)]
    C:\Program Files (x86)\AviSynth+\plugins64+\colors_rgb.avsi [2020-03-12]

    [Uncategorized files]
    C:\Program Files (x86)\AviSynth+\plugins64+\colors_rgb.txt [2020-03-12]

    For some reason, I had to add format="YUY2" on this machine:

    Code:
    LWlibavVideoSource("1986_1011 Pool Sample DV.avi", format="YUY2")
    Last edited by GrouseHiker; 6th May 2020 at 15:38.
  3. Originally Posted by poisondeathray View Post
    Originally Posted by GrouseHiker View Post
    This loads the file and runs, but the double SeparateFields problem remains:
    Code:
    LWlibavVideoSource("1986_1011 Pool Sample DV.avi")
    SeparateFields()
    AssumeFrameBased().SeparateFields()
    I will try on a different computer in a few hours
    Which problem? The error message about no video stream?

    Post the error message verbatim

    That script above works for me, no error message

    Which AviSynth version?
    No, the original error trying to run SeparateFields twice - described starting in post #77

    Everything on the problem computer (Opti790) is current and was reinstalled just in case. Scripts were running fine on the Opti790 until yesterday. Just tested on the computer I'm on now (T3600) and the script works fine.

    The Opti790 issue is not specific to DV encoding. Same problem with YUY2.

    By the way, everything I know about video I have learned within the last month...
  4. Originally Posted by GrouseHiker View Post

    For some reason, I had to add format="YUY2" on this machine:

    Code:
    LWlibavVideoSource("1986_1011 Pool Sample DV.avi", format="YUY2")


    If one of your filters is not compatible with YV411, I would use ConvertToYUY2(interlaced=true) instead. Forcing the lsmash decoder to output YUY2 can give you undesirable results.

    LWlibavVideoSource("1986_1011 Pool Sample DV.avi")
    ConvertToYUY2(interlaced=true)

    If you compare, you will see chroma shifting, bleeding, and misalignment if you force YUY2 from the decoder, instead of doing a proper YV411 => YUY2 interlaced conversion

    [Attachment 53151: animated gif - click to enlarge]
  5. Originally Posted by poisondeathray View Post

    If one of your filters is not compatible with YV411, I would use ConvertToYUY2(interlaced=true) instead. Forcing the lsmash decoder to output YUY2 can give you undesirable results.

    LWlibavVideoSource("1986_1011 Pool Sample DV.avi")
    ConvertToYUY2(interlaced=true)

    If you compare, you will see chroma shifting, bleeding, and misalignment if you force YUY2 from the decoder, instead of doing a proper YV411 => YUY2 interlaced conversion
    Good to know... Thanks!
  6. Putting the computer problem aside for a moment (maybe it should be moved to a new thread anyway), I'm trying to develop a deeper understanding of the scripting techniques jagabo has developed for the Weird Lines.

    When this code is run on a DV file, I assume Avisynth knows to use BFF on the first SeparateFields().

    Code:
    SeparateFields()
     AssumeFrameBased().SeparateFields()
    Does the same line pattern (BFF in this case) get used on the second SeparateFields()?

    ADDED:
    I believe I just answered my question - it does NOT. When I add the second TFF in jagabo's code, the script doesn't work right.

    Code:
    AssumeTFF()
    SeparateFields()
    AssumeFrameBased()
    AssumeTFF()
    SeparateFields()
    Last edited by GrouseHiker; 6th May 2020 at 17:08. Reason: More testing
  7. Originally Posted by GrouseHiker View Post
    Putting the computer problem aside for a moment (maybe it should be moved to a new thread anyway), I'm trying to develop a deeper understanding of the scripting techniques jagabo has developed for the Weird Lines.

    When this code is run on a DV file, I assume Avisynth knows to use BFF on the first SeparateFields().

    Code:
    SeparateFields()
    AssumeFrameBased().SeparateFields()
    Does the same line pattern (BFF in this case) get used on the second SeparateFields()?
    Internally, Avisynth assumes BFF by default. (Some exceptions: some source filters can override that, such as MPEG2Source, DGSource.)

    You can override explicitly by using AssumeBFF() or AssumeTFF()

    Yes, the internal field order is carried through, until otherwise specified

    You can check what avisynth "thinks" the current field order is at any point in the script by using info()
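    For example, a minimal sketch (reusing the sample filename from this thread; the comments describe what Info() typically shows, assuming the BFF default) that drops Info() in to watch the parity at each stage:

    ```
    LWlibavVideoSource("1986_1011 Pool Sample DV.avi")
    Info()             # overlays clip properties: frame based, parity per the source filter
    SeparateFields()
    Info()             # now field based; parity carried over from the step above
    ```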
  8. I was editing my last post while you were posting.

    Originally Posted by poisondeathray View Post

    Internally, Avisynth assumes BFF by default. (Some exceptions: some source filters can override that, such as MPEG2Source, DGSource.)

    You can override explicitly by using AssumeBFF() or AssumeTFF()

    Yes, the internal field order is carried through, until otherwise specified
    Based on my testing just now, Avisynth reverts to the default BFF for the second SeparateFields().

    I'll try verifying using info().

    ADDED:
    After testing using info(), the parity is Top field before the last SeparateFields() and Bottom after the last SeparateFields()
    Last edited by GrouseHiker; 6th May 2020 at 17:19. Reason: Testing using info()
  9. Originally Posted by GrouseHiker View Post
    After testing using info(), the parity is Top field before the last SeparateFields() and Bottom after the last SeparateFields()
    Are you still using a DV AVI and AviSource()? Right after opening the video it should be frame based and BFF. After the first SeparateFields() it should be field based and BFF. After AssumeFrameBased() it should be frame based and BFF. After the second SeparateFields() it should be field based and BFF.

    If you are starting with a TFF video and AviSynth knows it's TFF: Right after opening the video it should be frame based and TFF. After the first SeparateFields() it should be field based and TFF. After AssumeFrameBased() it should be frame based and BFF. After the second SeparateFields() it should be field based and BFF.
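    The two walkthroughs above can be written out as one-line comments (filename reused from this thread; the TFF case differs only before AssumeFrameBased()):

    ```
    AviSource("1986_1011 Pool Sample DV.avi")  # frame based, BFF
    SeparateFields()                           # field based, BFF
    AssumeFrameBased()                         # frame based, BFF (a TFF clip is reset to BFF here)
    SeparateFields()                           # field based, BFF
    ```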
  10. Originally Posted by jagabo View Post
    Originally Posted by GrouseHiker View Post
    After testing using info(), the parity is Top field before the last SeparateFields() and Bottom after the last SeparateFields()
    Are you still using a DV AVI and AviSource()? Right after opening the video it should be frame based and BFF. After the first SeparateFields() it should be field based and BFF. After AssumeFrameBased() it should be frame based and BFF. After the second SeparateFields() it should be field based and BFF.

    If you are starting with a TFF video and AviSynth knows it's TFF: Right after opening the video it should be frame based and TFF. After the first SeparateFields() it should be field based and TFF. After AssumeFrameBased() it should be frame based and BFF. After the second SeparateFields() it should be field based and BFF.
    I'm back on the Opti790 that's not running the double SeparateFields(), so I can't test anything.

    I was testing on the T3600 with both DV (LWlibavVideoSource()) and YUY2 (AviSource()). I've found what appear to be disturbing differences, but I'll save that until I can reconfirm my testing.

    However, it appears you are indicating AssumeFrameBased() always reverts to BFF, whether or not the file is DV or YUY2. Is that correct?

    By the way, I believe your code works on the DV file if either BFF is forced for both SeparateFields() or TFF is forced for both SeparateFields(). Since this is just programming/housekeeping and not viewing, it makes sense.

    Also, I'm going to work on finding anything I can do to get this Opti790 running the double SeparateFields() again. If I can't figure it out, should I start a new thread to avoid contaminating this one more?
  11. Originally Posted by GrouseHiker View Post
    However, it appears you are indicating AssumeFrameBased() always reverts to BFF, whether or not the file is DV or YUY2. Is that correct?
    In the two cases I gave, yes. I never checked what happened when you applied AssumeFrameBased() with a TFF video that was already frame based. As it turns out, AviSynth "forgets" it was TFF and assumes BFF. So yes, any time you AssumeFrameBased() the video is flagged BFF.

    <edit>
    Yes, confirmed:
    AssumeFrameBased throws away the existing information and assumes that the clip is frame-based, with the bottom (even) field dominant in each frame.
    http://avisynth.nl/index.php/AssumeFrameBased
    </edit>

    Originally Posted by GrouseHiker View Post
    By the way, I believe your code works on the DV file if either BFF is forced for both SeparateFields() OR TFF is forced for both SeparateFields(). Since this just programming/housekeeping and not viewing, it makes sense.
    It works, but the field order after the first SeparateFields() is wrong. If you step through the video at that point you'll find jerky motion. The second SeparateFields() doesn't really matter because the two resulting... let's call them semi-fields?... don't really have a temporal order. They both represent the same point in time. You just have to keep track when you weave back together (some filters may change their status from fields to frames), so you have to AssumeFieldBased() and AssumeT/BFF() before weaving them back together.
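    A sketch of that bookkeeping, modeled on the reassembly used elsewhere in this thread (the AssumeFieldBased()/AssumeBFF() calls restore flags in case a filter cleared them; swap AssumeBFF() for AssumeTFF() to match the source's real order):

    ```
    SeparateFields()
    AssumeFrameBased()
    SeparateFields()
    # ... per-semi-field corrections on last go here ...
    Weave()              # semi-fields back into fields
    AssumeFieldBased()   # restore the field flag before the final weave
    AssumeBFF()          # restore the original field order
    Weave()              # fields back into full frames
    ```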
    Last edited by jagabo; 6th May 2020 at 21:30.
  12. Yeah!... semi-fields. That's probably less ambiguous than "images." Somebody mark up the video dictionary!

    Originally Posted by jagabo View Post
    It works, but the field order after the first SeparateFields() is wrong. If you step through the video at that point you'll find jerky motion.
    I'm thinking for the purposes of correcting semi-fields (scan lines) at some interval and grouping, it probably doesn't matter how they're taken apart as long as it achieves the end result (correction) and they're put back together correctly... Valid?

    By the way, is there any degradation of the video stream by doing SeparateFields() and weaving back? As I play with this, there are significant quality differences from one semi-field to another... Maybe tailor other corrections (e.g., lizard skin, worms, chain link...) for groups of semi-fields that exhibit similar defects and leave the good ones alone?
    Last edited by GrouseHiker; 7th May 2020 at 00:08.
  13. Originally Posted by GrouseHiker View Post
    Yeah!... semi-fields. That's probably less ambiguous than "images." Somebody mark up the video dictionary!

    Originally Posted by jagabo View Post
    It works, but the field order after the first SeparateFields() is wrong. If you step through the video at that point you'll find jerky motion.
    I'm thinking for the purposes of correcting semi-fields (scan lines) at some interval and grouping, it probably doesn't matter how they're taken apart as long as it achieves the end result (correction) and they're put back together correctly... Valid?
    That's correct.

    Originally Posted by GrouseHiker View Post
    By the way, is there any degradation of the video stream by doing SeparateFields() and weaving back?
    No.

    Originally Posted by GrouseHiker View Post
    As I play with this, there are significant quality differences from one semi-field to another... Maybe tailor other corrections (e.g., lizard skin, worms, chain link...) for groups of semi-fields that exhibit similar defects and leave the good ones alone?
    That's a possibility.
  14. Regarding the Opti790 problem with running SeparateFields() twice... The script runs ok in AvsPmod and MPC-HC, so I'm more convinced it is a VirtualDub issue. Should I start a new thread on this issue?
  15. Empty VirtualDub's plugins folder and see if the problem persists.
  16. Running VirtualDub32 - emptied the plugins64 folder - problem persists.

    Running VirtualDub32 - emptied the plugins32 folder - problem FIXED!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

    I'll try adding back plugins one by one...

    CORRECTION:

    I had changed the VirtualDub shortcut during testing and never changed it back - thought I was running 64-bit but was 32-bit... I revised the text above.

    Fixed the shortcut and:

    Running VirtualDub64 - emptied the plugins64 folder - problem FIXED.

    I'M BACK!

    UPDATE:
    Problem was FFInputDriver.vdplugin. I left it out.
    Last edited by GrouseHiker; 7th May 2020 at 21:12. Reason: Problem ID
  17. jagabo, your code

    Originally Posted by jagabo View Post
    Code:
    SeparateFields()
    AssumeFrameBased()
    SeparateFields()
    evenlines = SelectEven()
    oddlines = SelectOdd()
    
    bmask = oddlines.ColorYUV(off_y=-120).ColorYUV(gain_y=512)
    oddlines = Overlay(oddlines, oddlines.ColorYUV(gain_y=-6),mask=bmask)
    
    Interleave(evenlines, oddlines)
    Weave()
    AssumeFieldBased()
    Weave()
    Seems like it should be exactly the same as:

    Code:
    SeparateFields()
    AssumeFrameBased()
    SeparateFields()
    
    modlines = SelectEvery(2, 1)
    
    bmask = modlines.ColorYUV(off_y=-120).ColorYUV(gain_y=512)
    modlines = Overlay(modlines, modlines.ColorYUV(gain_y=-6),mask=bmask)
    
    evenlines = SelectEven()
    oddlines = SelectOdd()
    
    Interleave(evenlines, oddlines)
    Weave()
    AssumeFieldBased()
    Weave()
    It's not, for some reason.

    I'm trying to figure out SelectEvery(), since it seems to have more flexibility than SelectOdd() and SelectEven().
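    For reference, SelectEvery(step, offset1 [, offset2, ...]) keeps the frames numbered step*n + offset, so the two-argument forms reduce to the even/odd selectors:

    ```
    SelectEvery(2, 0)     # frames 0, 2, 4, ...  = SelectEven()
    SelectEvery(2, 1)     # frames 1, 3, 5, ...  = SelectOdd()
    SelectEvery(4, 0, 3)  # frames 0, 3, 4, 7, 8, 11, ...
    ```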
    Last edited by GrouseHiker; 7th May 2020 at 16:00.
    I think this may be pretty close to correcting just as well as the code jagabo posted back in #62:
    Code:
    LWlibavVideoSource("1986_1011 Pool Sample DV.avi") # use separate function if stream error shows up
    ConvertToYUY2(interlaced=true)
    
    AssumeBFF() # DV
    SeparateFields()
    AssumeFrameBased()
    AssumeBFF()
    SeparateFields()
    
    modlines = SelectEvery(2, 1) #Trying this out instead of SelectOdd()
    
    bmask = modlines.ColorYUV(off_y=-120).ColorYUV(gain_y=512)
    modlines = Overlay(modlines, modlines.ColorYUV(gain_y=-6),mask=bmask)
    
    evenlines = SelectEven()
    oddlines = SelectOdd()
    
    Interleave(evenlines, oddlines)
    Weave()
    AssumeFieldBased()
    Weave()
  19. Originally Posted by GrouseHiker View Post
    I think this may be pretty close to correcting just as well as the code jagabo posted back in #62:
    Code:
    LWlibavVideoSource("1986_1011 Pool Sample DV.avi") # use separate function if stream error shows up
    ConvertToYUY2(interlaced=true)
    
    AssumeBFF() # DV
    SeparateFields()
    AssumeFrameBased()
    AssumeBFF()
    SeparateFields()
    
    modlines = SelectEvery(2, 1) #Trying this out instead of SelectOdd()
    
    bmask = modlines.ColorYUV(off_y=-120).ColorYUV(gain_y=512)
    modlines = Overlay(modlines, modlines.ColorYUV(gain_y=-6),mask=bmask)
    
    evenlines = SelectEven()
    oddlines = SelectOdd()
    
    Interleave(evenlines, oddlines)
    Weave()
    AssumeFieldBased()
    Weave()
    That code doesn't do anything: you create a stream called modlines, manipulate it, then ignore it.

    Remember: when you don't specify a stream by name, the special clip last is assumed. So this:

    Code:
    SeparateFields()
    
    modlines = SelectEvery(2, 1) #Trying this out instead of SelectOdd()
    
    bmask = modlines.ColorYUV(off_y=-120).ColorYUV(gain_y=512)
    modlines = Overlay(modlines, modlines.ColorYUV(gain_y=-6),mask=bmask)
    
    evenlines = SelectEven()
    oddlines = SelectOdd()
    is the same as:

    Code:
    last = SeparateFields(last)
    
    modlines = SelectEvery(last, 2, 1) #Trying this out instead of SelectOdd()
    
    bmask = modlines.ColorYUV(off_y=-120).ColorYUV(gain_y=512)
    modlines = Overlay(modlines, modlines.ColorYUV(gain_y=-6),mask=bmask)
    
    evenlines = SelectEven(last)
    oddlines = SelectOdd(last)
    and the same as:

    Code:
    last = SeparateFields(last)
    
    evenlines = SelectEven(last)
    oddlines = SelectOdd(last)
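    In other words, modlines has to be fed back into the output to have any effect. One possible fix, sketched along the lines of the earlier working script in this thread:

    ```
    SeparateFields()
    AssumeFrameBased()
    SeparateFields()

    oklines  = SelectEvery(2, 0)
    modlines = SelectEvery(2, 1)

    bmask = modlines.ColorYUV(off_y=-120).ColorYUV(gain_y=512)
    modlines = Overlay(modlines, modlines.ColorYUV(gain_y=-6), mask=bmask)

    Interleave(oklines, modlines)  # use the processed lines instead of discarding them
    Weave()
    AssumeFieldBased()
    Weave()
    ```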
    Last edited by jagabo; 7th May 2020 at 23:13.
  20. Originally Posted by jagabo View Post
    That code doesn't do anything: you create a stream called modlines, manipulate it, then ignore it.

    Remember: when you don't specify a stream by name, the special clip last is assumed.
    Thanks for the analysis. Yes... I was just coming to that conclusion in another piece of test code. Darkening the lines way down, and nothing happened. When I did that in your code, they darkened.

    I was researching whether or not this could be put back together instead of using SelectOdd and SelectEven - didn't find anything yet. The YUY2 sample seems to have a different pattern. Can this be reassembled in any way after correcting modlines?

    Code:
    modlines = SelectEvery(4, 0, 3)
    oklines = SelectEvery(4, 1, 2)
  21. Figured it out... You're a patient teacher jagabo.

    Code:
    modEven = SelectEvery(4, 0)
    modOdd = SelectEvery(4, 3)
    okOdd = SelectEvery(4, 1)
    okEven = SelectEvery(4, 2)
    
    Amask = modEven.ColorYUV(off_y=-120).ColorYUV(gain_y=512)
    modEven = Overlay(modEven, modEven.ColorYUV(gain_y=-4),mask=Amask)
    
    Bmask = modOdd.ColorYUV(off_y=-120).ColorYUV(gain_y=512)
    modOdd = Overlay(modOdd, modOdd.ColorYUV(gain_y=-6),mask=Bmask)
    
    evenlines = Interleave(modEven, okEven)
    oddlines = Interleave(okOdd, modOdd)
    
    Interleave(evenlines, oddlines)
    Weave()
    AssumeFieldBased()
    Weave()
    Before and After YUY2 attached.

    jagabo, your last code is better, but I picked this earlier one for testing simplicity. This technique should apply for hopefully all the YUY2 files. Your code works for the DV files.
    Last edited by GrouseHiker; 8th May 2020 at 00:57. Reason: Added Files
  22. Yes, that is working correctly.

    By the way, the YUY2 video is TFF. It doesn't matter for the sequence of filters you are using. But if you were doing any kind of temporal filtering that caused information from adjacent fields to mix (like temporal noise reduction) there might be problems.
  23. Originally Posted by jagabo View Post
    Yes, that is working correctly.

    By the way, the YUY2 video is TFF. It doesn't matter for the sequence of filters you are using. But if you were doing any kind of temporal filtering that caused information from adjacent fields to mix (like temporal noise reduction) there might be problems.
    Not copied in the above code is this part:
    Code:
    AssumeTFF() # YUY2
    #AssumeBFF() # DV
    SeparateFields()
    AssumeFrameBased()
    AssumeTFF()
    SeparateFields()
    I have decided to state TFF/BFF from now on to avoid surprises, but I will be wary of potential "temporal filtering" issues.

    Thank you jagabo for graciously sharing your expertise. I have come a long way with your help. Thanks also to poisondeathray and delsam34 (it's NOT the tape heads).

    Although there is always room for further experimentation, this Weird Lines issue is RESOLVED!!

    Now I will move toward all the other pieces of my workflow along with learning other aspects of restoration.
  24. Originally Posted by GrouseHiker View Post
    Now I will move toward all the other pieces of my workflow along with learning other aspects of restoration.
    Does your VCR have a sharpness control? If so, try turning it down. There are oversharpening halos at sharp vertical edges. Sharpening filters also increase noise. Some of the herringbone noise may be caused or accentuated by a sharpening filter. Also check your capture device's proc amp and capture software. Sharpening filters there will increase noise too. The halos are especially hard to remove so it's best not to generate them in the first place.
  25. Originally Posted by jagabo View Post
    Originally Posted by GrouseHiker View Post
    Now I will move toward all the other pieces of my workflow along with learning other aspects of restoration.
    Does your VCR have a sharpness control? If so, try turning it down. There are oversharpening halos at sharp vertical edges. Sharpening filters also increase noise. Some of the herringbone noise may be caused or accentuated by a sharpening filter. Also check your capture device's proc amp and capture software. Sharpening filters there will increase noise too. The halos are especially hard to remove so it's best not to generate them in the first place.
    These are all 8mm Video8 played on a Sony DCR-TRV3509 via s-video (Magewell Pro Capture HDMI) or firewire direct. It looks like my only play options are DNR and TBC. I will experiment with both of these to evaluate capture quality. I had read about proc amp issues, and have always left it at default.

    I'm also thinking of evaluating capturing higher quality 4:4:4 10-bit (V410 option on card) and immediately converting (ffmpeg?) to 4:2:2 10-bit. I had grabbed MagicYUV for this since I didn't find other options.
    4:4:4 and 10 bit aren't going to make any difference. There's less than 4:2:2 on the tape and only about 5 bits of signal. All you're going to get with 4:4:4 10 bit caps is bigger files with more precise noise.
  27. Originally Posted by jagabo View Post
    4:4:4 and 10 bit aren't going to make any difference. There's less than 4:2:2 on the tape and only about 5 bits of signal. All you're going to get with 4:4:4 10 bit caps is bigger files with more precise noise.
    This is not documented in the card spec, but it looks like I can capture 4:2:2 10 bit (attached). What do you think?

    Edit: The more I research this, the more unsure I am about this capture format. The 4:2:2 10-bit output may go through a conversion - the VirtualDub settings are not extremely clear.
    Last edited by GrouseHiker; 8th May 2020 at 18:21.
  28. Originally Posted by jagabo View Post
    4:4:4 and 10 bit aren't going to make any difference. There's less than 4:2:2 on the tape and only about 5 bits of signal. All you're going to get with 4:4:4 10 bit caps is bigger files with more precise noise.
    Can precise noise be corrected more efficiently without unwanted impacts... then convert to 4:2:2 8 bit?
  29. Originally Posted by jagabo View Post
    You can use VirtualDub2 to read RGB and YUV values of individual pixels:

    [Attachment 53079 - Click to enlarge]


    ... After opening a video file or AVS script select Video -> Filters... Press the Add... button. In the left pane double click on the Crop filter (some other filters work too). Hold down a shift key while moving the mouse cursor over the preview image.
    Using this to look at the pool area with the Weird Lines, I notice most of the pixels have RGB "B" value > 235 with many peaked at 255 (graph below). Is that normal?

    ADDED:
    Converting to YCbCr, the trend is not as intuitive.
    If this color graphic is accurate, reducing U should have its major impact on blue... which is good in this case.

    [Attachment 53204 - Click to enlarge]
    [Attachment 53203: Weird Lines Pixels.png - click to enlarge]

    Last edited by GrouseHiker; 8th May 2020 at 21:56. Reason: Added Chart
  30. Originally Posted by GrouseHiker View Post
    Using this to look at the pool area with the Weird Lines, I notice most of the pixels have RGB "B" value > 235 with many peaked at 255 (graph below). Is that normal?
    RGB values on the screen should be full range, 0 to 255. The 16-235 limit is for Y, 16-240 is for U and V. So RGB values over 235 (and below 16) are normal and expected. For example, pure black, Y=16, U=V=128, should be R=G=B=0. And pure white, Y=235, U=V=128, should be R=G=B=255.

    But not all combinations of YUV between those stated limits result in valid RGB colors. In fact, only about 25 percent are legal. If you look at the RGB cube inside the YUV cube you can see:

    [Attachment 53291: the RGB cube inside the YUV cube - click to enlarge]
    https://software.intel.com/en-us/node/503873

    The limits only correspond to the 8 corners of the YUV cube.

    That said, your captures do have YUV values that fall outside that RGB cube, which is why you are getting B pegged at 255. Here's a frame with the out-of-gamut YUV values colored in red:

    [Attachment 53292 - Click to enlarge]


    Issues like this are quite common with consumer video tape caps. You generally don't worry about it while capturing, as long as YUV aren't crushed at the 0 and 255 extremes, because you can fix it later.
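    One quick way to see where a capture runs out of range in AviSynth is the built-in Limiter filter (a sketch; note Limiter flags values outside the nominal 16-235 / 16-240 ranges, which is a looser test than the full RGB-gamut check described above):

    ```
    LWlibavVideoSource("1986_1011 Pool Sample DV.avi")
    ConvertToYUY2(interlaced=true)
    Limiter(show="luma")   # paint out-of-range luma; use show="chroma" for U/V
    ```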


