VideoHelp Forum

1. I took out -r and -s because they are redundant
    Suppose you want to convert 720p to 1080i? Yes, I know it will have artifacts. You will need the -s.

what was the reason for scale=out_color_matrix=bt709:out_range=limited? I'm asking because that might be affected depending on where you place it in the filter chain. A "normal" YUV video should not require that.
    You definitely need the range=limited to keep the colors true. I found that out the hard way.

    Is the chroma messed up with this iteration? Hint: my original source is 4:2:0, so I'm upconverting. Maybe try it without converting to 4:2:2?

    Code:
    ffmpeg -y  -i "C0008.MP4"  -s 1920x1080 -vcodec mpeg2video  -vf  format=yuv422p,scale=out_color_matrix=bt709:out_range=limited,interlace  -acodec mp2  -f mpegts output.ts
    I took out one of your two redundant -vf statements.
    Last edited by chris319; 27th Jun 2019 at 23:54.
  2. Originally Posted by chris319 View Post
I took out -r and -s because they are redundant
    Suppose you want to convert 720p to 1080i? Yes, I know it will have artifacts. You will need the -s.
Then you should move it back into the linear filter chain. You already have scale specified there; just add the width=1920:height=1080 arguments. Then you can control how you do it.

It was the same idea with the -pix_fmt issue and the interlaced chroma artifacts. Moving it into the filter chain as "format" enables you to specify how it's done, and in what order. That 4:2:0 to 4:2:2 conversion is scaling the chroma channels, so you can get the same sorts of problems here.
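A minimal sketch of that idea, using the same input file name from this thread (encoder settings omitted for brevity): scale's width/height options replace -s, and the format filter replaces -pix_fmt, so both happen at a known point in the chain.

Code:
ffmpeg -y -i "C0008.MP4" -vcodec mpeg2video -vf format=yuv422p,scale=width=1920:height=1080 -acodec mp2 -f mpegts output.ts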

what was the reason for scale=out_color_matrix=bt709:out_range=limited? I'm asking because that might be affected depending on where you place it in the filter chain. A "normal" YUV video should not require that.
    You definitely need the range=limited to keep the colors true. I found that out the hard way.
Some metadata in the file? Is it full range 709?

    Is the chroma messed up with this iteration? Hint: my original source is 4:2:0, so I'm upconverting. Maybe try it without converting to 4:2:2?

    Code:
    ffmpeg -y  -i "C0008.MP4"  -s 1920x1080 -vcodec mpeg2video  -vf  format=yuv422p,interlace,scale=out_color_matrix=bt709:out_range=limited  -flags +ilme+ildct  -r 29.97  -color_primaries bt709  -color_trc bt709  -colorspace bt709  -acodec mp2  -f mpegts output.ts

That works... for my source. I don't know about yours. But I would put interlace at the end, so all operations are progressive until the end.

format=yuv422p is converting to 4:2:2 progressively, because it occurs before the interlace filter.

If you don't need 4:2:2, leave it out.
  3. How about this?

    Code:
    ffmpeg -y  -i "C0008.MP4"  -vcodec mpeg2video  -vf  format=yuv422p,scale=out_color_matrix=bt709:width=1920:height=1080:out_range=limited,interlace  -acodec mp2  -f mpegts output.ts
  4. Filter chain looks ok; don't forget to add back the flags and encoding settings
  5. Originally Posted by poisondeathray View Post
    Filter chain looks ok; don't forget to add back the flags and encoding settings
    They're there, you just have to scroll. Do you see them?

    What do you use to determine if the chroma samples are messed up?
  6. Originally Posted by chris319 View Post
    Originally Posted by poisondeathray View Post
    Filter chain looks ok; don't forget to add back the flags and encoding settings
    They're there, you just have to scroll. Do you see them?
    I don't

You should specify things like bitrate, GOP size, buffer size, maxrate, etc., and also the 709 metadata flags, etc.


    What do you use to determine if the chroma samples are messed up?
Your eyes. Either separate the fields, or double-rate deinterlace (when someone watches it on a TV, that's what they are going to see anyway; the TV will deinterlace).

It looks like either colored horizontal lines on colored object edges, or sometimes "ghosting" like an echo image. Most visible with colors like red.

It occurs when you resize the chroma channels incorrectly (resizing in an interlaced manner while progressive, or vice versa). When you convert 4:2:0 to 4:2:2 you're resizing the chroma channels.
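If it helps, here's a hedged sketch of both inspection methods with ffplay (separatefields and yadif are standard ffmpeg filters; the file name is your encode from above):

Code:
ffplay -i output.ts -vf separatefields
ffplay -i output.ts -vf yadif=mode=1
The first shows each half-height field as its own frame; the second is a double-rate deinterlace, which is roughly what the TV will show.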
  7. Let's see about this. The flags for color and audio are there.

    ffmpeg -y -i "C0008.MP4" -vcodec mpeg2video -vf format=yuv422p,scale=out_color_matrix=bt709:width= 1920:height=1080ut_range=limited,interlace -acodec mp2 -f mpegts output.ts
  8. Originally Posted by chris319 View Post
    Let's see about this. The flags for color and audio are there.

    ffmpeg -y -i "C0008.MP4" -vcodec mpeg2video -vf format=yuv422p,scale=out_color_matrix=bt709:width= 1920:height=1080ut_range=limited,interlace -acodec mp2 -f mpegts output.ts
    Yes, that's what I see

But those aren't colorimetry flags; those are scale settings. Remember, earlier you had -color_primaries bt709 -color_trc bt709 -colorspace bt709? Those are metadata that will show up in things like MediaInfo; some media players and other programs might read them.

This is interlaced content with progressive encoding. Remember, earlier you had -flags +ilme+ildct to encode interlaced?

The audio and video codecs are specified, but no encoding settings are specified. When you do it for real, you'd probably want to include those.
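As a sketch of how you might verify that the metadata and interlaced flagging actually landed in the output (ffprobe's stream entries include the colorimetry tags and the field order):

Code:
ffprobe -v error -select_streams v:0 -show_entries stream=color_space,color_transfer,color_primaries,field_order -of default output.ts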
  9. Shoot, those flags got lost when I was trying to incorporate your little code bits instead of the full script.

    Take a look at this version, but I don't want to have another debate about fixed vs. variable bit rate.

    Code:
    ffmpeg -y  -i "C0008.MP4"  -vcodec mpeg2video  -vb 5.5M  -minrate 5.5M -maxrate 5.5M -bufsize 5.5M  -muxrate 6.0M  -vf format=yuv422p,scale=out_color_matrix=bt709:width=1920:height=1080:out_range=limited,tinterlace=4:vlpf  -flags +ilme+ildct  -r 29.97  -color_primaries bt709  -color_trc bt709  -colorspace bt709  -acodec mp2  -ab 192k  -f mpegts output.ts
    I am using tinterlace because the docs are less vague (to me) about the exact status of the lowpass filter.
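If I'm reading the tinterlace docs right, mode 4 is interleave_top and vlpf enables the vertical low-pass filter, so the numeric form can also be spelled with named options (verify against your ffmpeg version's docs):

Code:
tinterlace=mode=interleave_top:flags=vlpf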

    Some of those color flags give me this in MediaInfo:

    Code:
    -color_primaries bt709  -color_trc bt709  -colorspace bt709
    Color primaries : BT.709
    Transfer characteristics : BT.709
    Matrix coefficients : BT.709
    Last edited by chris319; 28th Jun 2019 at 08:22.
  10. Originally Posted by _Al_ View Post
    Avisynth:
    Code:
    ffvideosource("C0008.MP4")
    AssumeTFF()
    Separatefields()
    Vapoursynth:
    Code:
    import vapoursynth as vs
    clip = vs.core.ffms2.Source("C0008.MP4")
    clip = vs.core.std.SeparateFields(clip, tff = True)
    clip.set_output()
separatefields does what it says; it is like bob, except it does not upscale to full height, it just uses each field as a frame, so the height is halved. So you just manually step during movement and you see exactly what the pattern is.
    1. The avisynth documentation I've found neglects to mention that an avisynth script takes the file extension ".avs".

    2. VirtualDub cannot open mp4 files; it is instead necessary to use VirtualDub2.

    3. Documentation also neglects to mention that in VirtualDub2 you need to open a video file (with ".avs" extension) rather than "run script".

    4. Armed with the above knowledge I attempted to run my .avs script containing the above avisynth code and it did not make it past the first line before quitting with an error message about not finding "ffvideosource".

    WTF? Four strikes and we're starting off with a bang.
Looks ok, but 5.5 Mb/s CBR MPEG-2 4:2:2 isn't going to look very good. Typically 50 Mb/s is used.

For generic MPEG-2 streams, typically -g 15 for the NTSC GOP size and -bf 2 for 2 B-frames.
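A hedged sketch of what a submission-grade variant of the earlier command might look like with those settings; the rates here are illustrative, not a real XDCAM preset:

Code:
ffmpeg -y -i "C0008.MP4" -vcodec mpeg2video -b:v 50M -maxrate 50M -bufsize 17M -g 15 -bf 2 -flags +ilme+ildct -vf format=yuv422p,scale=out_color_matrix=bt709:width=1920:height=1080:out_range=limited,interlace -acodec mp2 -ab 192k -f mpegts output.ts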


    http://avisynth.nl/index.php/Getting_started
    For testing create a file called test.avs and put the following single line of text in it:

    Version()
For FFVideoSource, you need ffms2.dll. If you place it into the plugins directory, it will autoload.
http://avisynth.nl/index.php/FFmpegSource
Otherwise you have to use LoadPlugin("PATH\ffms2.dll") in the script. Use the x86 version for x86 avisynth and the x64 version for x64 avisynth.

You're probably going to have many "attempts" before getting avisynth working. Lots of little quirks or specific ways of doing things. All programs have them. FFmpeg has lots too - look how many "strikes" it took to make a proper interlaced stream. None of those problems are documented either; you only figure them out from testing, looking at the output, retesting, rinse, repeat.
  12. Originally Posted by chris319 View Post
    1. The avisynth documentation I've found neglects to mention that an avisynth script takes the file extension ".avs".
    The first three lines from the very first introductory link at the AviSynth site:

    After downloading and installing AviSynth, let's run the most basic of AviSynth scripts:

    Version()

    Type the above in your text editor and save it as "version.avs".

    http://avisynth.nl/index.php/First_script
    2. VirtualDub cannot open mp4 files; it is instead necessary to use VirtualDub2.
You don't really want to open the video directly in anything. Ordinarily you'd use the AviSynth frameserving capabilities by using a source filter. Unfortunately, there aren't all that many source filters built into AviSynth. pdr explained how to get and use FFVideoSource.
    3. Documentation also neglects to mention that in VirtualDub2 you need to open a video file (with ".avs" extension) rather than "run script".
    In the fourth line from the link above:
    You now have a script that can be opened by most AVI players in your machine: Windows Media Player 6.4 (or higher) will play it; so will Media Player Classic, VirtualDub, VirtualDubMod and many others. This is done through the magic of frameserving.
    It's difficult to just jump in and expect to do more than the basics. It takes time, and lots of reading and experimentation. And maybe questions here at videohelp.com when you get stuck. Geez, you didn't just jump into ffmpeg and know everything the first day, did you?
  13. Here is the documentation I found:

    http://avisynth.nl/index.php/Getting_started

    Basically, AviSynth works like this:

    First, you create a simple text document with special commands, called a script. These commands make references to one or more videos and the filters you wish to run on them. Then, you run a video application, such as VirtualDub, and open the script file. This is when AviSynth takes action. It opens the videos you referenced in the script, runs the specified filters, and feeds the output to video application. The application, however, is not aware that AviSynth is working in the background. Instead, the application thinks that it is directly opening a filtered AVI file that resides on your hard drive.
No mention of the file extension to use for the script. With ffmpeg it's easy to figure out that you use a simple .bat file in Windows, and easy to write a script with -h.

You're probably going to have many "attempts" before getting avisynth working. Lots of little quirks or specific ways of doing things. All programs have them. FFmpeg has lots too - look how many "strikes" it took to make a proper interlaced stream. None of those problems are documented either; you only figure them out from testing, looking at the output, retesting, rinse, repeat.
    Thanks for the warning. I was going to use Al's script to accomplish this and be done with it rather than iteratively optimize it as we have been doing with the interlace script.

    I still don't see how this is a preferable solution to VLC which has been dependable in my experience. At my work, a broadcast TV station, we use VLC to privately stream our programming with no issues.

    Once I get these fields split I'm not sure what I'm looking for and if it's a foolproof solution, IOW, can any monkey use it to evaluate interlace without applying subjective judgement? With VLC interlacing "jumps out" at you and you can't miss it, either by turning the deinterlacer off or using the "Phosphor" deinterlacer.
Avisynth is a frame server:
it serves uncompressed frames. It has to be loaded somewhere; it cannot just run itself. You load it into something. A Run button (F5) in a console would not work. The only console that Avisynth can be run in is AvsPmod (I do not know of any other); that is a dedicated Avisynth console/creator/player, but just to get the video on screen. If you need to encode an avs script, you need to load it somewhere: VirtualDub2, avfs.exe (which creates a virtual AVI), or other encoding software that accepts avs as input; for example, the x264 encoder can also load an avs script.
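As a concrete example: if (and only if) your ffmpeg build was compiled with AviSynth support, ffmpeg itself can take the script as input; the script name here is hypothetical.

Code:
ffmpeg -i script.avs -vcodec mpeg2video -acodec mp2 -f mpegts output.ts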

Vapoursynth scripts have the extension *.vpy or just *.py (like a general Python script). If loaded into VirtualDub2, the script needs the *.vpy extension, and Python and Vapoursynth need to be installed. If you work with portable versions of Vapoursynth and Python, you cannot load *.vpy into VirtualDub2; you'd need to fix the registry (I think ChaosKing posted something on doom9, not sure where it is).

Now there is a major difference from Avisynth: because it is a Python script, it can be run in any Python console. Of course, you'd need to implement a couple of lines of code at the end that bring a clip on screen using other Python modules like OpenCV, Qt, or PIL; I have no problem using OpenCV or Qt. Or you can just request frames in a loop for troubleshooting without an actual preview. But of course, the code for that visual part can get technical, so there is something similar to Avisynth's AvsPmod: Vapoursynth also has a dedicated console/designer/editor called Vapoursynth Editor (VSEditor, VSEdit), made by Mystery Keeper. It could be called something different, since it just has a generic name.

VirtualDub could never load mp4 files. There is a guy with the nick shekh who adds new functions to that beautiful VirtualDub legacy, naming it VirtualDub2. But VirtualDub2 is Windows-only, so if using Vapoursynth, I'd focus on using things that are also cross-platform. Vapoursynth is cross-platform.

Videos can be loaded into Avisynth using different plugins. AviSource is included; for other formats you can use ffmpegsource, lsmash source, mpeg2source (to load indexed MPEG-2 files that you make with DGIndex), or directshow source.
    AviSource("avi_video", some args)
    audio = FFAudioSource("video", some args)
    video = FFVideoSource("video", some args)
    LSMASHAudioSource("video", some args)
    LSMASHVideoSource("video", some args)
    LWLibavAudioSource("video", some args)
    LWLibavVideoSource("video", some args)
    MPEG2Source("indexed_d2v_file", some args)
    DirectShowSource("video", some args)

Vapoursynth is similar, but remember: no audio support. It has avisource (included; loads avi or avs), ffms2.Source, LibavSMASH, and d2v.Source (to load indexed d2v files that you make with d2vwitch or DGIndex). d2vwitch is cross-platform as well, so it can be used instead of DGIndex.
    import vapoursynth as vs
    clip = vs.core.avisource.AVISource('avi_video', some args)
clip = vs.core.lsmas.LibavSMASHSource('video', some args) #general ISO files: mp4, mov
clip = vs.core.lsmas.LWLibavSource('video') #transport streams, ts etc.
clip = vs.core.ffms2.Source('video', some args)
clip = vs.core.d2v.Source('indexed_d2v_file', some args)
clip.set_output()
#output must be specified in Python to select the output clip; in Avisynth it is the last clip if not specified,
#because Avisynth does video only, so it comes naturally, but in Vapoursynth you can output more items, like:
# clip1.set_output(0), clip2.set_output(1), and then request them later in any part of the script or another script:
# clip = vs.get_output(index=0), clip = vs.get_output(index=1), etc.
# so you might not know the clip's name from an importing script, but you can still get its output


Only avisource is included; the other source plugins need to be downloaded. For Windows you need the DLL, placed into the Avisynth or Vapoursynth plugin directory. Always make sure you have the 64-bit version if Vapoursynth or Avisynth is 64-bit, and the same for 32-bit.

For Linux (no Avisynth, just Vapoursynth), you might download the whole package from a repository; djcj has a Vapoursynth plugins package that installs all source plugins (except avisource, but ffms2 can load avi as well).

Audio is supported in only a limited way in Vapoursynth so far: if you edit video, cut it, you have to provide a wav file and use the damb plugin, which generates a new, edited wav. Then you have to deal with that audio separately, encoding it and muxing it into the video.
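A sketch of that split workflow with hypothetical file names, assuming vspipe (Vapoursynth's pipe tool) and ffmpeg: encode the video from the script, encode the edited wav separately, then mux.

Code:
vspipe --y4m script.vpy - | ffmpeg -i - -vcodec mpeg2video video.m2v
ffmpeg -i edited.wav -acodec mp2 audio.mp2
ffmpeg -i video.m2v -i audio.mp2 -c copy -f mpegts output.ts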
    Last edited by _Al_; 28th Jun 2019 at 16:08.
  15. Typically 50Mb/s is used
    You can't do that in the U.S. and I believe Canada is the same. You have 6 MHz of OTA RF bandwidth which MUST NOT BE EXCEEDED. This works out to about 19.39 Mbps for everything. Nowadays it is not atypical for a station to squeeze its main programming down to 5.5 - 6 Mbps for HD services. The rest of the channel is used for digital subchannels at 2.5 Mbps for SD services, and this is using hoary old MPEG-2.

    https://en.wikipedia.org/wiki/KPBS_(TV)#Digital_channels

    Under the ATSC spec we could be broadcasting H.264 but broadcasters are unwilling to do that for fear there will be TV receivers "out there" that can't handle the new signal and won't be able to receive our station. This leads to reception complaints from viewers and the station would likely revert to MPEG-2. It's not like the www where Chrome or Firefox can update itself behind the user's back.
  16. Thanks for the advice, but this avisynth/virtualdub solution seems like a lot of bother for something I can accomplish by running VLC, despite the unfounded claims that it's balky or unreliable.

    I'm not going near Linux for this so why bring it up?
  17. Originally Posted by chris319 View Post
    Thanks for the advice, but this avisynth/virtualdub solution seems like a lot of bother for something I can accomplish by running VLC, despite the unfounded claims that it's balky or unreliable.
If you think the knowledgeable pdr's experience counts as unfounded (because it's just one person saying it?), know that I don't even have VLC installed on my computer, as I find it useless in my work. I much prefer to use MPC-HC to check my video work or to play videos and AviSynth scripts. Sure, plenty of people love and use VLC player. But I'm not one of them.
  18. If you think the knowledgeable pdr's experience counts as being unfounded (because it's just one person saying it?)
    No, because I have not had a speck of trouble with it, which contradicts the claim that it's unreliable. Maybe it has improved with time?
  19. Originally Posted by chris319 View Post
    I still don't see how this is a preferable solution to VLC which has been dependable in my experience. At my work, a broadcast TV station, we use VLC to privately stream our programming with no issues.
Sure, you can use it for streaming and for quick-and-dirty simple viewing - but if you cannot examine fields (or double-rate deinterlace) reliably, you will misdiagnose some streams.


    Once I get these fields split I'm not sure what I'm looking for and if it's a foolproof solution, IOW, can any monkey use it to evaluate interlace without applying subjective judgement? With VLC interlacing "jumps out" at you and you can't miss it, either by turning the deinterlacer off or using the "Phosphor" deinterlacer.
Yes, even monkeys can be trained. It's not that difficult. The instructions are posted above for the 3 common content-type patterns/scenarios you will see in broadcast. This is part of what the QC'er does with your submission.

If every field is different during movement (accounting for the slight field offset, if not using a smart bobber), it's true 59.94 fields/s content. Duplicates = 29.97p. A 3:2 pattern = 23.976p. There are other patterns, but those are the basic and commonly used ones.
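If you want a second opinion beyond your eyes, ffmpeg's idet filter counts frames that look interlaced versus progressive; the summary it prints at the end of the run is a hint, not proof (the file name is a placeholder):

Code:
ffmpeg -i input.mp4 -vf idet -an -f null -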

I'll upload an example later where it "looks" interlaced according to your proposed method, but is actually progressive. It's not common, but it happens in real life. You don't want to be "that guy" who misses something this simple.

    Originally Posted by chris319 View Post
    Typically 50Mb/s is used
    You can't do that in the U.S. and I believe Canada is the same.
I'm talking about the submission format. It will get encoded properly for the distribution streams later. You should be submitting something resembling XDCAM HD422. This is the universal currency in terrestrial and satellite broadcast, even in Europe.
  20. Try this

Is it interlaced or progressive content?
Attached Files
  21. Does MPC-HC have a way of disabling the deinterlacer? I'll try it if it's so much better than VLC.
yes, avisynth or vapoursynth; it just separates fields and nothing else,
nothing going on behind the scenes, no upscaling like bob or whatever.
Anyway, in MPC-HC I guess it also depends on what renderer you use and whether it's possible there; with ffdshow, if you do not check deinterlace, it will not deinterlace.
    Last edited by _Al_; 28th Jun 2019 at 17:55.
  23. Originally Posted by chris319 View Post
    Does MPC-HC have a way of disabling the deinterlacer? I'll try it if it's so much better than VLC.


That's not the right question.

You should be asking if you can ENable a double-rate deinterlacer and have it work reliably/correctly. Either that, or the ability to view individual, separate fields.

And the ability to navigate/step through fields (frames) accurately. That's a big "ask" for some types of streams, especially ones with very long GOPs, open GOPs, or many B-frames. Accurate seeking can be an issue for FFmpeg-based libraries (this means almost all common media players, and most certainly free media players) unless an indexed method is used (none of the "players" do; it's too slow).

Media players are not optimized for this; their goal is not accurate analysis. Their main goal is playback smoothness with a nice UI.
  24. it "looks" interlaced according to your proposed method , but is actually progressive
    Sorry, but according to my method it's progressive. It looks progressive with the VLC deinterlacer turned off and it looks progressive with the VLC bob, Yadif 2x and Phosphor deinterlacers, so I would call it progressive.

    Is there anything besides ffms2.dll that I need to get this avisynth/virtualdub setup running?
  25. yes avisynth or vapoursynth
    I was asking about MPC-HC, not avisynth or vapoursynth

    What exactly am I looking for?

    Given moving video, in progressive scan there's going to be a slight change in the image every 1/59.94 second.

    In interlaced video there is likewise going to be a slight change in the image every 1/59.94 second, only you're going to see odd or even scan lines depending on which field you're looking at.

    So what am I looking for? Am I looking at the spacing between lines in the individual fields?
  26. Originally Posted by chris319 View Post
    it "looks" interlaced according to your proposed method , but is actually progressive
    Sorry, but according to my method it's progressive. It looks progressive with the VLC deinterlacer turned off and it looks progressive with the VLC bob, Yadif 2x and Phosphor deinterlacers, so I would call it progressive.
That is the correct answer in this case, but only because VLC was buggy for you.


If VLC has the deinterlacer turned off, and you're calling it progressive (you don't see what you're calling the "line pairing"), then that suggests that the VLC version or setup on your computer has either a decoding bug or some other bug (deinterlacing somewhere in the chain when it shouldn't be).

If you decode it correctly, and deinterlacing turned off actually works, you should see the "line pairing" or the horizontal lines.

If you decode it correctly, and deinterlacing turned on actually works, you should not see those.

The version I'm using works differently: I see the lines for on/off/auto or any choice (deinterlacing doesn't work correctly).

Either way, VLC is buggy for both of us.


    Is there anything besides ffms2.dll that I need to get this avisynth/virtualdub setup running?
Not for separating fields. Yadif.dll is a separate dll if you want to use yadif instead of bob in avisynth. VDub alone (without avisynth) has them as filters too (the steps were outlined above).
  27. Originally Posted by chris319 View Post
    What exactly am I looking for?

    Given moving video, in progressive scan there's going to be a slight change in the image every 1/59.94 second.

    In interlaced video there is likewise going to be a slight change in the image every 1/59.94 second, only you're going to see odd or even scan lines depending on which field you're looking at.

    So what am I looking for? Am I looking at the spacing between lines in the individual fields?

Exactly. A 59.94p source converted to 59.94 fields/s (or 29.97i) will have 59.94 moments in time represented.

A "smart" bobber like yadif compensates for that up/down scan-line even/odd offset; bob does not. Ignoring that offset, you're looking at the actual motion of objects.

You look at an object, e.g. the train you mentioned in your prior example. 29.97i content will have 59.94 different pictures/s.

29.97p content will have duplicates, so only 29.97 different pictures/s. The temporal resolution is cut in half.

23.976p will have triplicate, duplicate - a 3:2:3:2 pattern (sketched below).
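To make that 3:2 cadence concrete: the usual ffmpeg way to reverse it back to 23.976p is the fieldmatch/decimate pair (a sketch following the pattern in the fieldmatch docs; file names are placeholders):

Code:
ffmpeg -i telecined.ts -vf fieldmatch,yadif=deint=interlaced,decimate out.mkv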
  28. The version I'm using works differently
    You can't blame everything on VLC.

Update to the latest version, which I'm using. "Line pairing" was a bad choice of words; call it "scalloping" if you will. If you're doing it as I described on properly interlaced video, you can't miss it. If you can't reproduce my results then you're not doing the same experiment and have no basis to bash my method. Yes, I saw the line pairing in the belle-nuit video. It was not the same scalloping I see.

You look at an object, e.g. the train you mentioned in your prior example. 29.97i content will have 59.94 different pictures/s.
    We know this.

    Again, I'm looking for spacing between scan lines in individual fields, correct?

    I'll try to get avisynth/virtualdub2 working and see how well it reveals interlacing.
  29. Originally Posted by chris319 View Post
    The version I'm using works differently
    You can't blame everything on VLC.
Not everything, but this one can be described as buggy for this, according to what I see and what you describe.

And since what we each see is different, it's not only buggy but also inconsistent.

Maybe there's some other difference (e.g. GPU driver; I tested on Win 8), but VLC is known to be buggy for many things.




Update to the latest version, which I'm using. "Line pairing" was a bad choice of words; call it "scalloping" if you will. If you're doing it as I described on properly interlaced video, you can't miss it. If you can't reproduce my results then you're not doing the same experiment and have no basis to bash my method. Yes, I saw the line pairing in the belle-nuit video. It was not the same scalloping I see.
I think I know what you're describing. And you should see that on this video too, if VLC were working correctly and deinterlace was off. You can't miss it. The characteristics look pretty much the same as in the properly interlaced video.

Can you describe it better or take a screenshot? I have no idea what "scalloping" is.

I'm just reporting what I see here. You reported what you saw. And they are different. Maybe a configuration difference, maybe a GPU driver setting. Either way, both of our VLCs are not working correctly.

I'm not "bashing" your method. And it's not "your" method; it was described many years ago. And this method of looking at weaved fields as a single frame will misdiagnose some streams, like this one and others, even when VLC (or any player or program) is working correctly. If you do not believe this, you clearly do not understand what interlace really is... Again, it's not common to have these sorts of streams, but it does happen in real life. And I'd rather be 100% certain than 99.9%.



You look at an object, e.g. the train you mentioned in your prior example. 29.97i content will have 59.94 different pictures/s.
We know this.
If you "know" this, and understand what it means, then you should realize why turning deinterlacing "off" to examine will misdiagnose some streams.



    Again, I'm looking for spacing between scan lines in individual fields, correct?
No, you're looking for object motion. Does the train or object move between each (double-rate deinterlaced) frame? Are the pictures the same or different?

If you're looking at separated individual fields instead, they are now organized as half-height frames. There are no scan lines when it's arranged this way. You're looking at alternating even/odd fields when you frame-advance. And you're looking for the same thing: object motion.




Try ffplay:

Code:
ffplay -i "what_am_i.mp4"
That's what it should look like with deinterlacing turned off. Hit the spacebar to pause; hit the "s" key to frame-advance.

Code:
ffplay -i "what_am_i.mp4" -vf yadif=mode=1
That's what it should look like with yadif 2x deinterlacing turned on. Same keys: spacebar to pause, "s" to frame-advance.
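And to look at the raw fields themselves rather than a deinterlaced picture, a hedged variant using the separatefields filter (each half-height field becomes its own frame; same pause/step keys):

Code:
ffplay -i "what_am_i.mp4" -vf separatefields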
  30. This will give you an idea of what scalloping looks like. Note the wavy pattern. Picture scalloping was a common malady in quad videotape machines when the guide height was set incorrectly.

    https://upload.wikimedia.org/wikipedia/commons/0/09/Argopecten_irradians.jpg