VideoHelp Forum
  1. In 396936-Improper-Display-Aspect-Ratio, jagabo helped me verify whether or not my file (captured from a Sony DCR-TRV350 via a PCIe card - Magewell Capture Pro HDMI) was interlaced. My goal is to capture interlaced video to preserve the original as well as possible.

    Originally Posted by GrouseHiker View Post
    Originally Posted by jagabo View Post
    That's linux syntax. For windows you can use:

    Code:
    ffmpeg -i filename.avi -filter:v idet -frames:v 100 -an -f null -
    Or open your video in VirtualDub2 and look at it in the input pane. You should see comb artifacts if it's interlaced.

    Thanks!! As I suspected (based on viewing), the s-video captured file is not interlaced:
    [Parsed_idet_0 @ 0000023ad9a7cf00] Single frame detection: TFF: 0 BFF: 0 Progressive: 100 Undetermined: 1
    [Parsed_idet_0 @ 0000023ad9a7cf00] Multi frame detection: TFF: 0 BFF: 0 Progressive: 100 Undetermined: 1

    Now, I've just got to figure out why...
    Video > Capture Filters > Video in VirtualDub shows a "Deinterlace" option with no checkbox and no option for leaving the video interlaced:
    [Attachment 52943]


    Am I looking at the wrong menu option?
  2. Weave is the option you want.
  3. Originally Posted by jagabo View Post
    Weave is the option you want.
    That was the secret... "weave" means "interlaced." I changed the command to look at 500 frames:

    [Parsed_idet_0 @ 0000018f5670abc0] Single frame detection: TFF: 180 BFF: 0 Progressive: 0 Undetermined: 321
    [Parsed_idet_0 @ 0000018f5670abc0] Multi frame detection: TFF: 485 BFF: 0 Progressive: 0 Undetermined: 16

    Thanks for getting me past that hurdle!
  4. I never really liked calling weave a type of deinterlacing. But if you consider that analog video is transmitted as a series of fields, then weaving pairs of fields together into frames can be considered deinterlacing. That is, the act of turning a sequence of fields into a sequence of frames is deinterlacing.
  5. From https://www.intel.com/content/dam/www/programmable/us/en/pdfs/literature/wp/wp-01117-h...nterlacing.pdf

    "Both bob and weave deinterlacing can affect the image quality, especially when there is motion."

    Is the Magewell capture card impacting the quality of the archived video by not giving a "no deinterlacing" option and doing weave "deinterlacing"? This would be very disappointing...

    By the way, VirtualDub seems to recognize the file as interlaced, since it seems to properly deinterlace in "Preview" using Bob.
  6. On second thought... maybe not "properly." This is a "deinterlaced" screen capture from the VirtualDub2 "Output" display. I know the lighting is terrible, but that applies to a lot of the material I have to work with. Maybe the sawtooth edges are inevitable?
    [Attachment 52956]
  7. Sawtooth edges eliminated using ELA deinterlace:
    [Attachment 52958]
  8. Originally Posted by GrouseHiker View Post
    Is the Magewell capture card impacting the quality of the archived video by not giving a "no deinterlacing" option and doing weave "deinterlacing"? This would be very disappointing...
    I don't see a "no deinterlacing" option in the pulldown you pictured, only Weave, Blend, Top field only, Bottom field only. Of those only Weave is lossless -- it contains all the information that was captured.

    Let's ignore the fact that analog video engineers consider weaving a type of deinterlacing. People dealing with digital video don't start with fields, they start with frames. To them a frame that shows comb artifacts when there is motion is "interlaced". A frame that shows no artifacts even though there is motion is progressive.

    Originally Posted by GrouseHiker View Post
    By the way, VirtualDub seems to recognize the file as interlaced, since it seems to properly deinterlace in "Preview" using Bob.
    VirtualDub's deinterlacing filters will work on any video you apply them to. That doesn't mean it recognizes which videos are interlaced and which are progressive. Applying a deinterlacing filter to progressive video only damages it. To VirtualDub a frame of video is just a frame of video. It's up to you to tell it what to do with it.
  9. Originally Posted by jagabo View Post
    ... Of those only Weave is lossless -- it contains all the information that was captured.
    Thanks... That's good news! I will keep the Magewell card.
  10. Originally Posted by jagabo View Post
    the act of turning a sequence of fields into a sequence of frames is deinterlacing
    What would be your definition of "interlacing"?
  11. One thing that is troubling me about Weave deinterlacing - Shouldn't the frame rate double if the Fields are being turned into Frames?

    Is Weave deinterlacing the purest form that analog, interlaced video can be digitally represented? If 2 Fields are combined into 1 Frame, it seems we've done damage to the original quality.
  12. Originally Posted by GrouseHiker View Post
    One thing that is troubling me about Weave deinterlacing - Shouldn't the frame rate double if the Fields are being turned into Frames?
    No. Pairs of fields are turned into frames by weaving them together. In the original interlaced signal a field is only every other scanline of the picture. One field is all the even scan lines, the other all the odd scan lines. In color NTSC video there are 59.94 fields per second, alternating between even and odd fields. In the part of the frame that's usually captured there are 240 lines per field. Weaving them together gives you 480 scan lines at 29.97 frames per second. When they are played back on an interlaced device each frame is split back into two fields that are sent out at 59.94 fields per second, the same as the original signal.
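
    As a minimal AviSynth sketch of that relationship (hypothetical filename, and assuming a top-field-first NTSC capture, as idet reported earlier in the thread), separating and re-weaving the fields is a lossless round trip:

    Code:
    AviSource("capture.avi")   # hypothetical interlaced NTSC capture, 29.97 fps
    AssumeTFF()                # assumed top field first (idet reported TFF)
    SeparateFields()           # 240-line fields at 59.94 fields per second
    Weave()                    # pairs of fields rewoven: 480 lines at 29.97 fps again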

    Originally Posted by GrouseHiker View Post
    Is Weave deinterlacing the purest form that analog, interlaced video can be digitally represented? If 2 Fields are combined into 1 Frame, it seems we've done damage to the original quality.
    I've never seen a capture device do this but it would be possible for each field to be saved separately. So the video could be saved as 59.94 240-line fields per second. VirtualDub's deinterlace filter has a mode that separates the two fields and stacks them horizontally rather than weaving them. It's also possible to stack the two fields vertically rather than horizontally. But such filters are for specialized purposes.
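
    A rough AviSynth sketch of that kind of field-stacked view (hypothetical filename; assumes top field first) might look like:

    Code:
    AviSource("capture.avi")        # hypothetical interlaced capture
    AssumeTFF()
    SeparateFields()                # 59.94 fields per second, 240 lines each
    top    = SelectEven()           # the top (even-line) fields
    bottom = SelectOdd()            # the bottom (odd-line) fields
    StackHorizontal(top, bottom)    # show both field streams side by side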

    The advantage of weaving is that you have the full 480 line resolution. Still parts of the picture look fine. The disadvantage is that there are comb artifacts wherever there is motion.
  13. Originally Posted by jagabo View Post
    Weaving them together gives you 480 scan lines at 29.97 frames per second. When they are played back on an interlaced device each frame is split back into two fields that are sent out at 59.94 fields per second, the same as the original signal.
    Thanks for clarifying... that's the key point for me... that the original fields are there if needed on an interlaced device. I guess it's a mystery as to how the Magewell card actually codes the information... doesn't matter as long as the original fields can be broken out.

    I found this while researching the subject, but I don't know if the Magewell card conforms:

    https://www.sciencedirect.com/topics/engineering/interlaced-video
    In addition to regular frame coding in which lines from both fields are included in each macroblock, H.264/AVC provides two options for special handling of interlaced video: field coding and macroblock-adaptive field/frame coding (MB-AFF).
  14. Originally Posted by GrouseHiker View Post
    I found this while researching the subject, but I don't know if the Magewell card conforms:

    https://www.sciencedirect.com/topics/engineering/interlaced-video
    In addition to regular frame coding in which lines from both fields are included in each macroblock, H.264/AVC provides two options for special handling of interlaced video: field coding and macroblock-adaptive field/frame coding (MB-AFF).
    That has to do with how an encoder handles interlaced frames internally. For viewing purposes, an interlaced frame went into the encoder and an interlaced frame comes out of the decoder -- it doesn't matter what happened in between.
  15. Originally Posted by jagabo View Post
    That has to do with how an encoder handles interlaced frames internally. For viewing purposes, an interlaced frame went into the encoder and an interlaced frame comes out of the decoder -- it doesn't matter what happened in between.
    I'm in way over my head, but in researching another issue with flicker, I figured out how to run my first Avisynth script:
    Code:
    AviSource("test1.avi")   # load the captured clip
    SeparateFields()         # split each frame into its two fields (height halved, frame rate doubled)
    My understanding is this script splits out the Fields and doubles the frame rate. This probably needs some tweaking, since the Fields don't line up vertically; however, I noticed (while stepping through) that the Fields seem to represent a continuous flow of motion (hands in the beginning). This leads me to believe the Magewell implementation is accurate and individual fields are preserved. However, I don't understand why the Field "frames" appear complete.
    [Attached file]
  16. SeparateFields() is ok for testing but not good for viewing video. The two fields will never align vertically -- the lines of one field are in between the lines of the other field in the original video. And it will leave you with a lot of aliasing artifacts.

    I haven't seen an actual capture clip but from your description there's no doubt the Magewell card is capturing properly. And it's a well respected card. It wouldn't be if it couldn't capture interlaced video properly.

    If you want to test for interlacing use a test clip with a lot more motion. Especially horizontal motion. Medium speed panning shots are good for this.
  17. Have you tried QTGMC for de-interlacing? It yields better results than Vdub or any other GUI software; you should give it a try.
  18. Originally Posted by dellsam34 View Post
    Have you tried QTGMC for de-interlacing? It yields better results than Vdub or any other GUI software; you should give it a try.
    I'm still in my learning and equipment/software verification stage. Based on what I have learned from this forum and other study, I am planning to use QTGMC for deinterlacing. Yesterday I found this very informative how-to on the subject: Deinterlacing SD video with AVISynth+, QTGMC, and FFMPEG Tutorial. However, I'm hoping the 64-bit versions are working now.

    The deinterlacing I have been doing in VDub so far is only for testing, and I have noticed the interlace combing has not been 100% eliminated.

    I really appreciate the generous help you, jagabo, and others have offered, and I'm now confident in the Magewell capture card.
  19. Yes, QTGMC() gives the best visual quality for most material. It's hard to get set up because it has so many dependencies you have to find, download, and install. It's worth the effort if you have lots of video you want to deinterlace. But it's not the best choice for analysis as it combines elements from both fields and performs a lot of cleanup of buzzing/aliased edges, etc. SeparateFields() or Bob(0.0, 1.0) are better for analysis as they cleanly separate the two fields. But the aliasing they leave behind is not great for viewing. Bob is closest to what a CRT TV does.
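
    A minimal analysis sketch along those lines, using the test filename from the earlier script and assuming top field first:

    Code:
    AviSource("test1.avi")   # the earlier test capture
    AssumeTFF()
    Bob(0.0, 1.0)            # 59.94 fps; each field is resized to full height on its own, never blended with the other field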

    I took your SeparateFields() clip, wove the fields back together, did some fast/crude white balance and levels adjustments, and applied QTGMC()...

    Code:
    AviSource("Test 1 pass 60 fps.avi") 
    RGBAdjust(b=167.0/237.0) # crude white balance
    AssumeFieldBased()
    AssumeTFF()
    Weave() # weave the two fields back together.
    ConvertToYV12(interlaced=true)
    ColorYUV(gain_y=60, off_y=-20, cont_u=200, cont_v=200) # levels and saturation
    QTGMC()
    [Attached file]
  20. Originally Posted by jagabo View Post
    I took your SeparateFields() clip, wove the fields back together, did some fast/crude white balance and levels adjustments, and applied QTGMC()...
    NICE!!! Thank you! Now I have a target standard... and some code...


