VideoHelp Forum
  1. Originally Posted by Okiba View Post
    So if I am back to the original question, I can determine the type of film by using Yadif(1), find panning shot/section with a lot of movement:

    - Movement per frame means I should de-interlace it.
    - Movement per two frames means it's progressive - and it doesn't require anything else.
    Basically yes, or use separatefields().
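    A minimal sketch of that check (the source filter, filename and field order are placeholders; use whatever matches your capture):

    Code:
    AviSource("capture.avi") # hypothetical capture to be examined
    AssumeTFF()              # or AssumeBFF(), per your source
    Yadif(mode=1)            # bob to double rate (SeparateFields() also works)
    # step through a high-motion section frame by frame:
    # - new motion on every frame -> truly interlaced, deinterlace it
    # - motion only on every 2nd frame (pairs of duplicates) -> progressive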

    Now the question is how the captured out-of-phase scenario would look (to use TFM on). You mentioned it would look interlaced when in motion. So after Yadif(1), I should still be getting one movement per two frames, but those frames will look combed when there's motion? Maybe you have an example for me to check?
    Example for phase-shifted:
    PhaseShifted.mp4
  2. Originally Posted by Okiba View Post
    Now the question is how the captured out-of-phase scenario would look (to use TFM on). You mentioned it would look interlaced when in motion. So after Yadif(1), I should still be getting one movement per two frames
    Yes.
  3. Thanks for sharing the video. I can see Yoda always looks interlaced during normal frame-by-frame viewing, so I assume that's a sign. When I bob with Yadif(1), I don't see anything interlaced or combed, but instead sort of "unfocused" halos.

    Attaching another example. Leaving other problems aside (noise, chroma shift, dot crawl etc.), is that phase shifted or progressive? It looks like it has one movement per frame (so interlaced) with frame blending (so QTGMC + SRestore)?
  4. Originally Posted by Okiba View Post
    I can see Yoda always looks interlaced during normal frame-by-frame viewing, so I assume that's a sign. When I bob with Yadif(1), I don't see anything interlaced or combed, but instead sort of "unfocused" halos.
    I thought this was all covered in my first reply:
    Originally Posted by manono View Post
    If every frame is interlaced, then the chances are good it's phase-shifted and TFM alone should work. The way you decide if that's true or not is to either separate the fields or bob the video and check if every field has a duplicate.
    Yet you didn't notice every field of the Yoda video Sharc took the time to prepare and upload had a duplicate?
    ...is that Phase Shifted or Progressive?
    Neither. Phase shifted is progressive. It just needs TFM to restore its progressive nature. Your Karate Kid.avi is field blended.
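    For reference, a minimal sketch of that TFM fix (the filename and field order are assumptions; set them to match your capture):

    Code:
    AviSource("capture.avi") # hypothetical phase-shifted PAL capture
    AssumeBFF()              # use the field order your capture really has
    TFM()                    # re-pairs fields into the original progressive frames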
  5. Yet you didn't notice every field of the Yoda video Sharc took the time to prepare and upload had a duplicate?
    I did. The reason I asked was because the "example.avi" I shared previously also had a duplicate every second frame, but there wasn't any need to TFM it. The Yoda video also has duplicate frames, but is phase shifted and needs to be TFMed. What I gathered from the examples and what we talked about here is that with phase shifting, the video will look interlaced during normal viewing (i.e. not bobbed).

    This cleared up some basic misunderstandings I had. Thanks everyone!
  6. Beware that Sharc's out-of-phase sample is flagged with the wrong field order. It contains bff frames but it's encoded tff.

    Code:
    Video
    ID                                       : 1
    Format                                   : AVC
    Format/Info                              : Advanced Video Codec
    Format profile                           : High@L4
    Format settings                          : CABAC / 4 Ref Frames
    Format settings, CABAC                   : Yes
    Format settings, Reference frames        : 4 frames
    Format settings, GOP                     : M=4, N=50
    Codec ID                                 : avc1
    Codec ID/Info                            : Advanced Video Coding
    Duration                                 : 1 min 1 s
    Bit rate mode                            : Variable
    Bit rate                                 : 1 302 kb/s
    Maximum bit rate                         : 25.0 Mb/s
    Width                                    : 720 pixels
    Height                                   : 576 pixels
    Display aspect ratio                     : 1.85:1
    Original display aspect ratio            : 16:9
    Frame rate mode                          : Constant
    Frame rate                               : 25.000 FPS
    Standard                                 : PAL
    Color space                              : YUV
    Chroma subsampling                       : 4:2:0
    Bit depth                                : 8 bits
    Scan type                                : MBAFF
    Scan type, store method                  : Interleaved fields
    Scan order                               : Top Field First
    Bits/(Pixel*Frame)                       : 0.126
    Stream size                              : 9.48 MiB (100%)
    Writing library                          : x264 core 163 r3059 b684ebe
    Encoding settings                        : cabac=1 / ref=5 / deblock=1:0:0 / analyse=0x3:0x113 / me=hex / subme=7 / psy=1 / psy_rd=1.00:0.00 / mixed_ref=1 / me_range=16 / chroma_me=1 / trellis=1 / 8x8dct=1 / cqm=0 / deadzone=21,11 / fast_pskip=1 / chroma_qp_offset=-2 / threads=9 / lookahead_threads=1 / sliced_threads=0 / nr=0 / decimate=1 / interlaced=tff / bluray_compat=0 / constrained_intra=0 / bframes=3 / b_pyramid=1 / b_adapt=1 / b_bias=0 / direct=3 / weightb=1 / open_gop=0 / weightp=0 / keyint=50 / keyint_min=1 / scenecut=40 / intra_refresh=0 / rc_lookahead=50 / rc=crf / mbtree=1 / crf=20.0 / qcomp=0.60 / qpmin=0 / qpmax=69 / qpstep=4 / vbv_maxrate=25000 / vbv_bufsize=31250 / crf_max=0.0 / nal_hrd=vbr / filler=0 / ip_ratio=1.40 / aq=1:1.00
    Encoded date                             : UTC 2021-07-14 18:21:32
    Tagged date                              : UTC 2021-07-14 18:21:32
    Codec configuration box                  : avcC
    Some source filters will see the tff flag and play the fields in the wrong order:

    Code:
    LWlibavVideoSource("PhaseShifted.mp4", prefer_hw=2) # returns tff encoding setting
    Yadif(mode=1)
    # back and forth motion every field
    Code:
    LSMASHVideoSource("PhaseShifted.mp4", prefer_hw=2) # ignores encoding setting, assumes bff
    Yadif(mode=1)
    # motion every other field
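    Alternatively, you can keep the first source filter and simply override the bad flag yourself; a sketch:

    Code:
    LWlibavVideoSource("PhaseShifted.mp4", prefer_hw=2)
    AssumeBFF()   # override the incorrect tff flag from the container
    Yadif(mode=1)
    # motion every other field, as expected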
  7. Originally Posted by Okiba View Post
    Attaching another example. Leaving other problems aside (noise, chroma shift, dot crawl etc.), is that phase shifted or progressive? It looks like it has one movement per frame (so interlaced) with frame blending (so QTGMC + SRestore)?
    That is a field blended NTSC to PAL conversion. It's encoded bff. Use SRestore to remove blending and restore the video to 23.976 fps:

    Code:
    LWlibavVideoSource("Karate Kid.avi", format="yuy2") # I don't have a VFW huffyuv decoder installed
    AssumeBFF() # the clip is encoded bff
    Yadif(mode=1) # or QTGMC
    SRestore(frate=23.976)
  8. Yep, I remember BFF/TFF from the camcorder footage.
    I updated my "starting point" script for the other cartoons:

    Code:
    Crop(8, 0, -8, 0)
    
    # Black Level/Color Adjustments first, to reduce posterization problems. 
    MergeChroma(Levels(20, 1.0, 210, 16, 235, coring=false), last)
    
    # Those are pretty much video specific. Most of the cartoons suffer from noise, so SMDegrain() is probably almost always good. QTGMC(InputType=2) helps to counter time-based
    # combing issues which will probably be present in all videos, as they are captured from the same setup.
    SMDegrain()
    QTGMC(InputType=2)
    
    # Pick your weapons based on the type of the video:
    # Phase Shifting: TFM(clip2=QTGMC(FPSDivisor=2))
    # Blended fields: SRestore(frate=?)/Tdecimate()
    
    # Return the original size
    nnedi3_rpow2(2, cshift="Spline36Resize", fwidth=704, fheight=576)
    
    ChromaShiftSP(x=-1.5, y=2)
    
    Prefetch(3)
    Is the order of things efficient/correct?
    This can pretty much be used on the camcorder footage too, except it will always be deinterlacing with QTGMC normally - and there is probably no need for SMDegrain. Will check it out later.

    Thanks everyone!
  9. Originally Posted by jagabo View Post
    Beware that Sharc's out-of-phase sample is flagged with the wrong field order. It contains bff frames but it's encoded tff.
    Yes, my bad. Here's the correctly flagged version:
  10. QTGMC(InputType=2) is not deinterlacing -- it's cleaning up small residual combing. It will badly mess up interlaced frames. You should not use it on interlaced video, NTSC telecined film, or out-of-phase PAL -- unless you've first made it progressive.

    This particular clip is progressive but SMDegrain() should include the "interlaced=true" argument when used on interlaced frames.

    Why are you using nnedi3_rpow2() to "return the original size" when it's already the original size?
  11. Yes, my bad. Here's the correctly flagged version:
    That's pretty interesting how you phase-shifted Yoda. Cool example, thanks!

    This particular clip is progressive but SMDegrain() should include the "interlaced=true" argument when used on interlaced frames.
    That's good to know, thank you.

    Why are you using nnedi3_rpow2() to "return the original size" when it's already the original size?
    Oh, wait - I was sure I needed to restore the size after cropping the black bars. But now I see that after 8/0/-8/0 the size is actually 704x576, which is the PAL VHS size. So indeed no need to resize. What happens if I crop more? The example.avi I shared, for example, needs more cropping to remove the black bars. Should I then use nnedi3 to return it to 704x576?
  12. Originally Posted by jagabo View Post
    This particular clip is progressive but SMDegrain() should include the "interlaced=true" argument when used on interlaced frames.
    Yes, SMDegrain is one of the few spatial-temporal denoising filters with interlaced support, and I have been using it accordingly in the past. Now I was confused reading
    http://avisynth.nl/index.php/SMDegrain
    It has internal switches for interlaced or YUY2 content (but you should not use it in avs 2.6 and avs+ and use YV16 instead),...

    So does this mean that one should now either convert to YV16, or use it on interlaced YUY2 and YV12 like this:
    Code:
    even = SeparateFields().SelectEven().SMDegrain(interlaced=false)
    odd  = SeparateFields().SelectOdd().SMDegrain(interlaced=false)
    Interleave(even, odd).AssumeFieldBased().Weave()
    Confused ......
  13. Skiller:
    Originally Posted by Sharc View Post
    So does this mean that either one should now convert to YV16
    Yes, it means you should convert YUY2 to YV16. All technical details aside, the two are the same to the end user. I wouldn't even call it a "conversion", more like "rearrangement".

    In other words, going between YUY2 and YV16 is lossless.
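    If you want to convince yourself, here's a quick round-trip check (assuming a YUY2 clip is loaded as last):

    Code:
    before = last                           # YUY2 source
    after = ConvertToYV16().ConvertToYUY2() # round trip through YV16
    Subtract(before, after)                 # uniform gray frame = no difference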
  14. Originally Posted by Skiller View Post
    Originally Posted by Sharc View Post
    So does this mean that either one should now convert to YV16
    Yes, it means you should convert YUY2 to YV16. All technical details aside, the two are the same to the end user. I wouldn't even call it a "conversion", more like "rearrangement".

    In other words, going between YUY2 and YV16 is lossless.
    So it means that the interlaced=true parameter in SMDegrain can be used on YUY2 interlaced video like this:
    Code:
    converttoYV16(interlaced=true)
    SMDegrain(interlaced=true)
  15. lollo:
    If you look inside the SMDegrain code you will find that when you specify interlaced=true it does inputY.AssumeTFF().SeparateFields(), the same as what you wrote a few posts above.
  16. Originally Posted by lollo View Post
    If you look inside the SMDegrain code you will find that when you specify interlaced=true it does inputY.AssumeTFF().SeparateFields(), the same as what you wrote a few posts above.
    Yes, I noticed this, and some time ago I did some tests comparing the two methods, i.e. 'interlaced=true' vs. 'fields grouped' filtering. To my surprise the results were not identical: there were subtle differences towards the bottom of the frames. That was before the recommendation for remapping YUY2 -> YV16 was published, and as far as I remember I did the test on interlaced YUY2->YV12 sources only. Maybe something has changed in the meantime, and I will redo the tests with current versions.

    Edit:
    I retested with my current AviSynth+ 32-bit 3.7.0 r3382 and SMDegrain 3.1.2.104.
    No issue anymore; the former difference has disappeared. I tested YUY2->YV12, YUY2->YV16 and YUY2->YV24.
  17. lollo:
    On the other hand, the separateFields().selectEven() / separateFields().selectOdd() approach for a temporal or a motion compensated filter may not be the best.

    You could try an approach like this and see if the filtering gives higher quality output:

    Code:
    <Bobbing deinterlacer>
    <filtering>
    SeparateFields()
    SelectEvery(4,0,3)
    Weave()
    where you deinterlace for filtering and then re-interlace afterwards. I use nnedi3(field=-2) as the bobbing deinterlacer. The filter is then used in its progressive mode. A concrete instance is sketched below.
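    A concrete instance of that pattern, with SMDegrain standing in as the example filter:

    Code:
    AssumeTFF()          # set your source's real field order first
    nnedi3(field=-2)     # bobbing deinterlacer: double rate, both fields
    SMDegrain()          # the filter runs in progressive mode here
    SeparateFields()
    SelectEvery(4, 0, 3) # keep only the fields that existed in the source
    Weave()              # re-interlace back to the original field positions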
  18. Originally Posted by lollo View Post
    .... You could try an approach like this and see if the filtering gives higher quality output ...
    Let's leave it to the OP to find out what is best for him.
  19. This is already getting too complex for me, so I guess the basics will work for me.
    What I did find strange when playing with it is that the original script used ConvertToYV12() for SMDegrain(). I changed it to ConvertToYV16(), and I noticed the frame changed. Shouldn't YUV settings only affect color and luminance? Why did the frame change?

    Another thing I'm still trying to figure out is the difference between upscaling the video back to 704x576 after a crop that went below that (let's say the crop brought it to 672x560), vs. leaving it at 672x560. The player (let's say it's a TV) is going to upscale it to the monitor resolution. So the difference is whether the TV will upscale from 672x560 to 1080p, or from 704x576 to 1080p. Does it make any big difference? Because upscaling in AviSynth to 704x576 with nnedi3_rpow2 is quite expensive time-wise.

    Also, in both cases, I still want to set the SAR flag in FFmpeg to 12/11 for the correct ratio, no matter if it's 672x560 or 704x576?

    And lastly, why 704x576 instead of the original full-frame resolution of 720x576?

    Thanks!
  20. Originally Posted by Okiba View Post
    What I did find strange when playing with it is that the original script used ConvertToYV12() for SMDegrain(). I changed it to ConvertToYV16(), and I noticed the frame changed. Shouldn't YUV settings only affect color and luminance? Why did the frame change?
    YV12 is 4:2:0 chroma subsampling. YV16 is 4:2:2 chroma subsampling. So conversion from YV12 to YV16 may not be lossless. And further, algorithms for converting 4:2:0 and 4:2:2 to RGB for display may not yield exactly the same result. And you must use interlaced=true in your YV12 to YV16 conversion. That said, when done correctly the differences will be very small in most cases. Try this:

    Code:
    # assuming interlaced YV12 input
    Interleave(last.ConvertToRGB(interlaced=true), last.ConvertToYV16(interlaced=true).ConvertToRGB(interlaced=true))
    Subtract(SelectEven(), SelectOdd()).Levels(120,1,136,0,255)
    Compare to:
    Code:
    # assuming interlaced YV12 input
    Interleave(last.ConvertToRGB(interlaced=true), last.ConvertToYV16().ConvertToRGB(interlaced=true)) # incorrect YV16 conversion
    Subtract(SelectEven(), SelectOdd()).Levels(120,1,136,0,255)
    Originally Posted by Okiba View Post
    Another thing I'm still trying to figure out is the difference between upscaling the video back to 704x576 after a crop that went below that (let's say the crop brought it to 672x560), vs. leaving it at 672x560. The player (let's say it's a TV) is going to upscale it to the monitor resolution. So the difference is whether the TV will upscale from 672x560 to 1080p, or from 704x576 to 1080p. Does it make any big difference? Because upscaling in AviSynth to 704x576 with nnedi3_rpow2 is quite expensive time-wise.
    If you cropped from 720x576 to 672x560 then upscaling to 704x576 would give you the wrong aspect ratio unless you also change the SAR. If your TV plays the 672x560 frame with 12:11 SAR correctly there's no need to upscale. If you were going to DVD you could not use a 672x560 frame size.

    Originally Posted by Okiba View Post
    And lastly, why 704x576 instead of the original full-frame resolution of 720x576?
    I did that in my script because I cropped to 704x576 (to eliminate the ITU padding) before downscaling to 400x576. I also upscaled back to 704x576 so I could interleave or stack the original video with the processed video for easy comparison.
  21. YV12 is 4:2:0 chroma subsampling. YV16 is 4:2:2 chroma subsampling.
    So all my videos are captured at 4:2:2. They get converted to 4:2:0 by FFmpeg when encoded. The only reason I'm changing the YUV format in AviSynth is if a specific method requires it. So I assume I should always convert to YV16 (and set interlaced=true for interlaced content)? Or is it actually better for AviSynth to do the conversion to 4:2:0, instead of FFmpeg?

    If you were going to DVD you could not use a 672x560 frame size.
    That was what I was missing. I'm not going for DVD. My players are VLC and Kodi - and both support the SAR flag. In that case, I will just crop to 672x560 and set the SAR to 12/11, without upscaling with nnedi3_rpow2. A sketch of that plan follows.
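    In AviSynth terms (the crop values are the hypothetical ones from above; adjust to your actual borders):

    Code:
    Crop(24, 8, -24, -8) # hypothetical: 720x576 -> 672x560
    # no nnedi3_rpow2 upscale; flag SAR 12:11 at encode time instead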

    Thanks!

    EDIT: Oh, and one more thing. If I have a phase-shifted video that needs TFM, what's the proper way to apply the QTGMC cleaning on top of it?

    Code:
    TFM(clip2=QTGMC(FPSDivisor=2))
    QTGMC(InputType=2)
    Or is there no need, because if TFM uses QTGMC as its deinterlacer, the cleanup is already happening during that phase?
  22. Originally Posted by Okiba View Post
    YV12 is 4:2:0 chroma subsampling. YV16 is 4:2:2 chroma subsampling.
    So all my videos are captured at 4:2:2. They get converted to 4:2:0 by FFmpeg when encoded. The only reason I'm changing the YUV format in AviSynth is if a specific method requires it. So I assume I should always convert to YV16 (and set interlaced=true for interlaced content)? Or is it actually better for AviSynth to do the conversion to 4:2:0, instead of FFmpeg?
    PAL is an inherently 4:2:0 system. So converting your 4:2:2 caps to YV12 won't really hurt them. And most players can't handle 4:2:2 h.264 or h.265 encoding so you'll probably have to convert to YV12 eventually anyway. Use whatever works.

    Originally Posted by Okiba View Post
    EDIT: Oh, and one more thing. If I have a phase-shifted video that needs TFM, what's the proper way to apply the QTGMC cleaning on top of it?

    Code:
    TFM(clip2=QTGMC(FPSDivisor=2))
    QTGMC(InputType=2)
    Or is there no need, because if TFM uses QTGMC as its deinterlacer, the cleanup is already happening during that phase?
    clip2=QTGMC isn't really needed with TFM unless you have lots of orphaned fields or other problems that cause TFM's deinterlacer to kick in. And again, QTGMC(InputType=2) isn't deinterlacing. It's cleaning other junk in the video.
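    So for a phase-shifted clip the whole chain can be as simple as this sketch (field order per your source):

    Code:
    AssumeBFF()        # whatever your capture really is
    TFM()              # plain field matching, no clip2 needed
    QTGMC(InputType=2) # optional: clean residual combing afterwards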
  23. Oh, OK. All my videos are PAL, so I'll just use YV12 I guess. If AviSynth is doing that, I'll try removing -pix_fmt yuv420p (and adding back -vf setsar=12/11) and see if the output is still 4:2:0.

    clip2=QTGMC isn't really needed with TFM unless you have lots of orphaned fields or other problems that cause TFM's deinterlacer to kick in.
    Cool, then I'll stop using it and just call TFM() with no clip2.

    I converted a couple of videos today. It's going well. I think these are the last questions, as I'm almost done.

    - All the blended fields I saw were in interlaced content. Is it possible to have frame blending in progressive?
    - While I assume most of the cartoons are 23.976, I had a couple of videos where SRestore(25) worked better (found a panning shot, and there was less frame skipping with 25). That's the way to figure it out, right?
    - I tried playing around with QTGMC presets specifically for the cartoons. "Slower" seems to clean to the point of slight blurring. Is there any preset specifically good for cartoons, or does it really depend on the video and I should try multiple?

    Thanks jagabo!
  24. Originally Posted by Okiba View Post
    Is it possible to have frame blending in progressive?
    Yes.

    Originally Posted by Okiba View Post
    While I assume most of the cartoons are 23.976, I had a couple of videos where SRestore(25) worked better (found a panning shot, and there was less frame skipping with 25). That's the way to figure it out, right?
    Yes.

    Originally Posted by Okiba View Post
    I tried playing around with QTGMC presets specifically for the cartoons. "Slower" seems to clean to the point of slight blurring. Is there any preset specifically good for cartoons, or does it really depend on the video and I should try multiple?
    I don't really know about that. Just try it and find out.
  25. Originally Posted by jagabo View Post
    PAL is an inherently 4:2:0 system.
    Is that true, or are you referring to DV PAL?

    PAL is an analog system, so I don't think it has a color space until digitized.

    So, couldn't it be digitized, using something other than the DV codec, and thereby have a more robust color space?
  26. As I understand it analog PAL video uses YUV encoding with the phase of the chroma carrier alternating with each line of the field (hence the name Phase Alternating Line). The result is the chroma has half the resolution vertically:

    Originally Posted by wikipedia
    The name "Phase Alternating Line" describes the way that the phase of part of the colour information on the video signal is reversed with each line, which automatically corrects phase errors in the transmission of the signal by cancelling them out, at the expense of vertical frame colour resolution.
    https://en.wikipedia.org/wiki/PAL#Colour_encoding

    With a YUY2 cap, if you (or the capture device/driver) don't average the chroma of the two consecutive scan lines (of the field), you get Hanover bars. Some capture devices don't do that, and you are stuck with Hanover bars (both lines have the wrong color) unless you fix it yourself. A rough sketch of such a fix follows.
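    A rough sketch of such a fix, averaging the chroma of adjacent lines within each field (an illustration, not a canonical filter; assumes a YV16 clip and the right field order):

    Code:
    AssumeTFF()      # per your source
    SeparateFields()
    # vertical 2:1 chroma blur; luma is kept from the original
    chromaAvg = BilinearResize(Width(), Height()/2).BilinearResize(Width(), Height())
    MergeChroma(last, chromaAvg)
    Weave()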
  27. Originally Posted by jagabo View Post
    As I understand it analog PAL video uses YUV encoding with the phase of the chroma carrier alternating with each line of the field (hence the name Phase Alternating Line). The result is the chroma has half the resolution vertically:

    Originally Posted by wikipedia
    The name "Phase Alternating Line" describes the way that the phase of part of the colour information on the video signal is reversed with each line, which automatically corrects phase errors in the transmission of the signal by cancelling them out, at the expense of vertical frame colour resolution.
    https://en.wikipedia.org/wiki/PAL#Colour_encoding
    By "part of the color information" they mean the phase of the V component as I understand it, the U is not altered. The descriptions are not very clear to me though.
  28. Skiller:
    jagabo is spot on about vertical chroma line averaging in PAL.
    The digital sampling format that analog PAL is most comparable to is indeed 4:2:0.


    So, ironically, the phase alternation that made PAL superior to NTSC in analog TV transmissions for decades can actually be a drawback when we just want to capture from a local video signal source (although it is no drawback for anything that was standard PAL before, such as video tapes when they were recorded). Another example is when we watch DVD or digital TV via composite or S-Video. The chroma line averaging isn't doing anything useful in such a scenario at all; it's just cutting the vertical chroma resolution in half. Edit: Of course those are 4:2:0 sources to begin with, but that doesn't exactly make it any better.

    If you, for example, capture from a video game console that outputs NTSC, you get better color resolution than if the console were to output PAL (all other things ignored). The difference struck me some years ago via S-Video on a Sony BVM professional CRT monitor. NTSC over S-Video looks surprisingly darn good, color-wise.

    The digital sampling format that analog NTSC is most comparable to is 4:2:2.
  29. Chroma line averaging in analog interlaced PAL is between adjacent scanlines of a field (spatial), not between scanlines of the top and subsequent bottom field (temporal), right?
  30. Originally Posted by Sharc View Post
    Chroma line averaging in analog interlaced PAL is between adjacent scanlines of a field (spatial), not between scanlines of the top and subsequent bottom field (temporal), right?
    Yes.