VideoHelp Forum

  1. Originally Posted by celsoac View Post
    I understand, but I still think that in that sense optical flow is not very different from QTGMC, except that QTGMC computes in-between fields, which are half-height frames, not only in reference to that field (top or bottom) itself, but also in reference to other real fields (half-height frames) displaced one pixel up (or down). So there is less guessing than in optical flow, which is full of motion artifacts.

    I suppose...in that limited sense that there is some temporal interpolation, some use of motion vectors.

    But they are very different because optical flow generates new time sampling points. That's the whole point of optical flow. The big difference is that QTGMC (or any deinterlacer) outputs the same time samples that were already there. It does not "guess" an intermediate time sampling point based on motion vectors.

    QTGMC does not compute "in-between fields" in the same sense as optical flow's "in-between frames". In QTGMC, the fields are already present in the interlaced video (sure, it uses the already-present fields to help refine the current frame, but it does not synthesize new in-between ones). But if you output another framerate such as 100 or 120 fps, then that implies "in-between fields" in the same sense as optical flow, where new timepoints are created as "in-between frames". QTGMC does not do that.



    What you say about the sense of "reality" or "fakenews" in video may be true, that is, that actually the 60fps "smooth" effect is already present in 30i video due to how human vision works (each field persists in the retina until the next field is displayed, so the effect is "progressive"). But for some reason synthesizing those intermediate fields levels out some blur and jerkiness that may be necessary for the perception that what is being watched is artificial media, not "real" vision.

    Not what I'm saying at all.

    I said nothing about perception or vision in this thread.

    The BIG distinction is 50 or 60 moments in time per second are already physically captured and represented in interlaced video. It's there already. You're not synthesizing any new timepoints. What I'm saying has nothing necessarily to do with how human vision works, nothing necessarily to do with interlaced CRT display vs. flat panel. All I'm saying is the recording has that many moments in time recorded. You can physically count them, and see them. It's only stored in a weaved format, where 2 fields are combined into 1 frame. If you separate the fields, you will see all the moments in time are there. So when you double-rate deinterlace (with any deinterlacer), you aren't generating any new timepoints. They were there already to begin with.

    Conversely, optical flow synthesized frames are not there to begin with. They are new in-between frames. That's why there are often edge morphing artifacts on many types of content.
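    The weave-and-separate point can be shown in a few lines: a stored interlaced frame is just two half-height fields from two different moments interleaved by scan line, and separating them recovers both moments exactly, with nothing synthesized. A minimal numpy sketch (toy values standing in for real field data):

```python
import numpy as np

h, w = 4, 6  # tiny frame for illustration

# Two distinct moments in time, captured as half-height fields.
field_t0 = np.full((h // 2, w), 0)   # top field, moment t
field_t1 = np.full((h // 2, w), 1)   # bottom field, one field-duration later

# Weave: interleave the two fields into one stored frame.
frame = np.empty((h, w), dtype=int)
frame[0::2] = field_t0   # even scan lines
frame[1::2] = field_t1   # odd scan lines

# Separate: both original moments are recovered exactly -- no new
# timepoints are created, they were in the file all along.
assert np.array_equal(frame[0::2], field_t0)
assert np.array_equal(frame[1::2], field_t1)
print("both timepoints recovered from the weaved frame")
```

    This is exactly what a double-rate deinterlacer starts from: it only has to fill in the missing scan lines of each field, not invent new moments in time.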



    This perception of "unnaturalness" may be totally subjective.
    Yes

    But also -- and this may be overextending the argument -- real movement may not be continuous, but variably accelerated (for example, in facial microgestures),
    Yes, but higher actual sampling represents reality more accurately. That's real data; i.e. you can interpolate more accurately from more sampling points than from fewer. E.g. if I look out the window once per minute to count cars, I might miss some that pass by. If I look out 10 times per minute, I will have a higher degree of accuracy.

    Similarly, if you recorded 1000 fps high speed, you would catch all that acceleration and deceleration of things like human movements more accurately than if you recorded 30fps. Real recorded data is better than interpolated data. At some point there is a number that is enough to satisfy human perception. There is a lot of discussion and research on what framerate is required, but nobody would say 30fps or 60fps. It's a higher number for sure.
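    The sampling argument can be illustrated numerically: reconstruct a rapidly varying motion by linear interpolation from samples, and the worst-case error shrinks as the sampling rate rises. A minimal sketch, assuming an 8 Hz sine wave as a stand-in for the "true" motion:

```python
import numpy as np

# A motion with variable acceleration (a stand-in for fast real motion):
# position(t) = sin(2*pi*8*t), an 8 Hz oscillation.
def position(t):
    return np.sin(2 * np.pi * 8 * t)

# Dense "ground truth" timeline, one second long.
t_true = np.linspace(0, 1, 100_000)
x_true = position(t_true)

def max_interp_error(rate_hz):
    """Sample at rate_hz, linearly interpolate back onto the dense
    timeline, and report the worst-case error vs. ground truth."""
    t_s = np.linspace(0, 1, rate_hz + 1)          # sample instants
    x_rec = np.interp(t_true, t_s, position(t_s))  # interpolated motion
    return float(np.max(np.abs(x_rec - x_true)))

for rate in (30, 60, 1000):
    print(rate, "Hz ->", round(max_interp_error(rate), 4))
```

    More real sample points always beat interpolating from fewer, which is the point being made about 1000 fps vs. 30 fps capture.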

    whereas both optical flow or field synthesis in QTGMC calculate (I suppose) exact mid-positions between a given pixel/element in field/frame 1 and 2 .
    Interpolating scan lines for a field to generate a frame is exactly mid-position (ignoring things like timebase errors, compression artifacts). Whereas temporal interpolation might not be, as you pointed out, because of variable acceleration/deceleration. And non-integer-multiple optical flow retiming resamples the current frames as well (you lose some actual frames).

    But there is no new time sampling with QTGMC or any deinterlacer. It's using the time samples already there

    Whereas optical flow IS retiming, and you might have variable acceleration/decel errors.
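    The non-integer retiming point can be checked with a little arithmetic: when retiming 25 fps to 30 fps, only the output timestamps that are multiples of both 1/25 s and 1/30 s land on a real source frame; every other output frame has to be synthesized. A toy sketch:

```python
from fractions import Fraction

src_fps, dst_fps = 25, 30
n_out = 31  # one second of output timestamps, inclusive of t = 1 s

kept = []
for i in range(n_out):
    t = Fraction(i, dst_fps)            # i-th output timestamp
    # a source frame exists exactly at t only if t is a multiple of 1/25 s
    if (t * src_fps).denominator == 1:
        kept.append(i)

print(kept)  # output frames that coincide with a real source frame
```

    Only 6 of the 31 output timestamps coincide with real source frames here; the other 25 are interpolated, and 19 of the 25 real source frames never appear at an output timestamp at all.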
  2. Video Restorer lordsmurf (Join Date: Jun 2003, Location: dFAQ.us/lordsmurf)
    Originally Posted by celsoac View Post
    What throws me off about doubling framerate for digital displays is that a PAL interlaced recording, for example, only has 288 horizontal pixels per field! That's very little, like old MPEG1 had (which was 352x288, 25p or 30p, period). In analogue TV, showing the interpolated horizontal lines successively (even, odd, even, odd) gives the eye the impression of 576p. For some reason (may be having grown with analogue TV), that gives me the feeling that it's less "cheating" than showing duplicated pixels twice as fast.
    That's really not accurate, especially the last sentence.

    When the VHS input is faulty, wiggly, then turning each field into a full frame makes the wiggling very noticeable if you go frame by frame. What I found different and really good about QTGMC is that (I think) it generates the missing lines not by duplicating them but by (very good) interpolation, isn't it? I don't know if other deinterlacing methods do the same (smoothing edges, etc.), but I was never satisfied.
    Deinterlacing has nothing to do with this. Temporal NR does it. And it's limited at best, a poor method to correct timing errors/wiggles.
    Want my help? Ask here! (not via PM!)
    FAQs: Best Blank Discs, Best TBCs, Best VCRs for capture, Restore VHS
  3. Member (Join Date: Mar 2019, Location: NY, US)
    Hey everyone. Just came to report back on something that was SO STUPID on my part, I can't believe I only just realized this now.

    So I'm happy to report, my TV is actually showing interlaced footage just fine... which makes a lot of sense, and is a relief. All this time I thought it was not, and here is one more reason to add to the list of why Handbrake stinks:

    Prior to now, I have been using Handbrake to convert to mp4. It defaults to basic decomb settings on the filter tab, which I experimented with, along with yadif, and bob, and so on.

    I ASSUMED that when you turn off all these filters, and especially set deinterlace=OFF (see screenshot), that it would in fact save the video as an interlaced file (same as source), and so I've been saving/naming these test files as "xyz interlaced.mp4".

    Well guess what - I just looked at the details of one of those files (with deinterlace off) with MediaInfo, and even with all those settings off, it saves it as a progressive format. So basically I was burning the interlacing into a progressive file, which is why my TV was showing the interlacing lines.

    In fact, going back into Handbrake knowing this now, I can't even find a way to tell it to leave the file interlaced. Maybe this is something with my version, or the Mac version, I have no idea... but it's stupid, and I'm mad that all this time I've been thinking my TV can't even properly deinterlace a video.

    I only thought to check because I came across a file I actually DID leave interlaced using another encoder while testing some files on my TV, and I thought, "Hey, this looks pretty good! Which one is this?" Only to realize it was a REAL interlaced video, with no evidence of wiggling or jaggies, and that sent me back to the computer to check the files that I THOUGHT were interlaced as well, which in fact were very much not.

    Excuse me while I go bang my head against the wall!
    [Attached image: handbrake.jpg]
  4. Originally Posted by Christina View Post


    In fact, going back into Handbrake knowing this now, I can't even find a way to tell it to leave the file interlaced.


    You can encode MBAFF with handbrake (essentially a special type of "interlace" for AVC)

    add

    Code:
    bff
    to the extra options in the video tab, for bottom field first (DV will be bff instead of tff by convention)


    But do you still have the AR issue with that TV ?
  5. Member (Join Date: Mar 2019, Location: NY, US)
    OK, good to know. I'll try that, thank you. Why would such an important setting be buried in the advanced extra options field...

    Yes, my TV still ignores aspect ratio flags and displays everything as though it was 1:1. That part I haven't figured out, other than re-encoding at 640x480 1:1.
  6.
    Originally Posted by celsoac View Post
    What you say about the sense of "reality" or "fakenews" in video may be true, that is, that actually the 60fps "smooth" effect is already present in 30i video due to how human vision works (each field persists in the retina until the next field is displayed, so the effect is "progressive"). But for some reason synthesizing those intermediate fields levels out some blur and jerkiness that may be necessary for the perception that what is being watched is artificial media, not "real" vision.
    Originally Posted by poisondeathray View Post
    Not what I'm saying at all.

    I said nothing about perception or vision in this thread.

    The BIG distinction is 50 or 60 moments in time per second are already physically captured and represented in interlaced video. It's there already. You're not synthesizing any new timepoints.
    Yes, I know you didn't say that, I expressed it poorly. What I meant to say is that, while you have those 50 or 60 moments per second (sampling), each sample contains only half of the visual information. Somehow, in analogue CRT TV, where odd and even lines are displayed consecutively (correct me if I'm wrong), the lighting of a line leaves an impression on the screen and/or on the retina until the next line is displayed, and that "tricks" the eye into perceiving each one of the 50 or 60 moments as a "full" frame, when it was only half. In digital displays the effect is different: the entire frame is lit simultaneously with one of the fields, with duplicated pixels; but often (at least in some of the video editing I've done, for example with FCP, where you play around with field dominance), when converting an interlaced input into progressive (1 field => 1 frame), one of the fields is sharper than the other, and when viewed at 50fps progressive, the sharper and blurrier frames again "superpose" on each other in the eye in a compound effect.

    So, if some of this makes sense (in interlaced video, the eye reconstructs non-present information), it logically follows that if you synthesize that missing information by software (or get close to it), the perceptual effect may be different than if all of the information were there. The smoothness of 30i video (that is, 60 half frames) may be perceptually different from that of 60p.

    We can compare it to sound. You have two instruments, a piano and a clarinet, both playing simultaneously the same up- and down-pitch scales at 60 very brief staccato notes per second (note length being comparable to shutter speed in video). You sample only the odd notes of the piano, at 30 times per second, and only the even notes of the clarinet, also at 30 times. You may have two microphones or speakers that turn on and off at those rates. You interpolate both samplings. The sound "flows", and at that sampling rate no doubt you will get the scales and melodies, but probably your ear is reconstructing half of the information. Now, suppose you synthesize the (predictable) missing notes in each instrument and actually play them ("deinterlacing"). Your perception of that new sound will probably be very different.

    Motion flow in moving images compares to the predictability of a continuous musical scale; jerkiness compares to undersampling that scale; blurriness, to note length, extending over the following note. It is my impression that both some "jerkiness" and blurriness are built into how we perceive motion in interlaced video (not to speak of 24fps film). If you get rid of both of them, the effect may be unrealistic.
  7. Originally Posted by Christina View Post
    Why would such a major important setting be buried in the advanced extra options field...
    Not that important to the typical Handbrake user, and becoming less and less important every day.

    Interlace is not even in the h265/HEVC standard
  8. Member (Join Date: Mar 2019, Location: NY, US)
    Confirmed, adding bff to the extra options field leaves it interlaced. So simple when you actually know how to use it. Cannot believe I overlooked this.
  9. Member (Join Date: Mar 2019, Location: NY, US)
    Is there some tricky way in Handbrake to add padding instead of cropping? Or cropping AND padding I guess I should say.. or some way to cover up overscan lines with black?
  10. Originally Posted by lordsmurf View Post
    Originally Posted by celsoac View Post
    What throws me off about doubling framerate for digital displays is that a PAL interlaced recording, for example, only has 288 horizontal pixels per field! That's very little, like old MPEG1 had (which was 352x288, 25p or 30p, period). In analogue TV, showing the interpolated horizontal lines successively (even, odd, even, odd) gives the eye the impression of 576p. For some reason (may be having grown with analogue TV), that gives me the feeling that it's less "cheating" than showing duplicated pixels twice as fast.
    That's really not accurate, especially the last sentence..
    You sound like a bad teacher: "That's wrong. Next!". What exactly is not accurate? Don't bother to explain. Just comment RIGHT or WRONG. You may also ignore this message.

    1) Often deinterlacing by doubling framerate consists of just duplicating pixels.
    2) Each field in a PAL recording has 288 horizontal lines.
    3) MPEG1 (I meant old VCD in MPEG1, at least) was 352x288 progressive.
    4) A CRT shows first one of the fields, made up of horizontal lines drawn from top to bottom; the electron gun goes back to the top and shows the other field, this double sequence up to 25 times (PAL) per second.
    5) 288 lines + 288 lines sequentially but very quickly gives the impression of a full frame made up of 576 simultaneous lines.
    6) This phenomenon gives me a subjective feeling of whatever-I-want-to-call-it, as compared to deinterlaced video with duplicated FRAMErate on a digital display.
  11. Originally Posted by Christina View Post
    Is there some tricky way in Handbrake to add padding instead of cropping? Or cropping AND padding I guess I should say.. or some way to cover up overscan lines with black?
    In MPEGStreamclip, black padding is done with negative cropping, if I'm not mistaken. For example, -10 adds a 10-pixel black border. I don't think you can select the padding color.
  12. Originally Posted by celsoac View Post
    it logically follows that if you syntesize that missing information by software (or the closer to it), the perceptual effect may be different than if all of the information is there. The smoothness of 30i video (that is, 60 half frames) may be perceptually different from that of 60p.
    Of course this is true. That's why interlace is a compromise, and was used back then in the first place.

    You have similar issues with artifacts and flicker regardless of CRT or flat panel. The fundamental problem is interlace. That's why you want 60p; and QTGMC does the "best" overall job of emulating that.

    But your comments imply that 30p from that same interlaced source is preferred... that 30p is more "real"... No way.

    60p from interlace, using any deinterlacer except a blend, looks more "real", as in real life, than 30p, because more real samples are used. Just like 30p looks more real, as in more representative of real life, than, say, a 1 fps video. 30p from that same interlaced source throws away another half of the information (temporal this time, instead of spatial), so you're left with 1/4 of the information of a full progressive 60p original.

    It is my impression that both some "jerkiness" and blurriness are built into how we perceive motion in interlaced video (not to speak of 24fps film). If you get rid of both of them, the effect may be unrealistic.
    I guess your definition of "real" might be different. Most people use "real" in the context of what you see with your own eyes, outside the window, etc., i.e. real life. 60p is too jerky to be "real" too. A higher framerate is required. Your eyes have a much higher effective sampling rate than 60.

    And not necessarily jerkiness. Maybe you expect home video to be jerky because of bad camera work. But film a wall from a tripod, do a controlled professional pan - do you think that would be "jerky"? Or maybe you're used to bad deinterlacing, so you "expect" things to look bad.

    Blurriness might be because you're used to blend deinterlacing. And you lose some resolution from interlace, because of fields, in the first place.

    But the actual mechanism and amount of natural shutter blur is exactly the same as 60p. Both are shot at 1/120 by default. So when you single-rate deinterlace, it actually has less motion blur than you would expect from video shot natively at 30p (which would have had a 1/60 shutter). QTGMC has a shutter blur option to blur it more when you output single rate, to emulate a native 30p camera.

    The goal for interlaced 29.97 would be to have "real" 59.94p shot with a real 59.94p camera - if you had a time machine and could capture it with a modern camera - and QTGMC does the best in general to achieve that and fill in the missing scan lines. Just look at real 60p video from a modern camera and compare. There is no "extra" jerkiness, but it is more blurry (partly because interlace has 1/2 the spatial resolution, partly because UHD/HD camera sensors are many times better in terms of clarity).

    Humans are used to, conditioned to, watching cinema: that natural shutter motion blur, low framerates, choppy motion. But that does not happen to the same extent with real eyes. Look out the window at cars passing by: there are no blurry trails, there is no choppy motion (unless your neighbor's kid is learning to drive standard). It's similar to HFR; I don't know what the magic number is, but it's higher than 120fps. The lower the framerate, the more blur you need to add to prevent "strobing".
  13. Originally Posted by poisondeathray View Post
    Originally Posted by celsoac View Post
    it logically follows that if you syntesize that missing information by software (or the closer to it), the perceptual effect may be different than if all of the information is there. The smoothness of 30i video (that is, 60 half frames) may be perceptually different from that of 60p.
    Of course this is true. That's why interlace is a compromise, and was used back then in the first place.

    You have similar issues with artifacts and flicker regardless of CRT or flat panel. The fundamental problem is interlace. That's why you want 60p; and QTGMC does the "best" overall job of emulating that.

    But your comments imply that 30p from that same interlaced source is preferred... that 30p is more "real"... No way.

    60p from interlace, using any deinterlacer except a blend, looks more "real", as in real life, than 30p, because more real samples are used. Just like 30p looks more real, as in more representative of real life, than, say, a 1 fps video. 30p from that same interlaced source throws away another half of the information (temporal this time, instead of spatial), so you're left with 1/4 of the information of a full progressive 60p original.

    It is my impression that both some "jerkiness" and blurriness are built into how we perceive motion in interlaced video (not to speak of 24fps film). If you get rid of both of them, the effect may be unrealistic.
    I guess your definition of "real" might be different. Most people use "real" in the context of what you see with your own eyes, outside the window, etc., i.e. real life. 60p is too jerky to be "real" too. A higher framerate is required. Your eyes have a much higher effective sampling rate than 60.

    And not necessarily jerkiness. Maybe you expect home video to be jerky because of bad camera work. But film a wall from a tripod, do a controlled professional pan - do you think that would be "jerky"? Or maybe you're used to bad deinterlacing, so you "expect" things to look bad.

    Blurriness might be because you're used to blend deinterlacing. And you lose some resolution from interlace, because of fields, in the first place.

    But the actual mechanism and amount of natural shutter blur is exactly the same as 60p. Both are shot at 1/120 by default. So when you single-rate deinterlace, it actually has less motion blur than you would expect from video shot natively at 30p (which would have had a 1/60 shutter). QTGMC has a shutter blur option to blur it more when you output single rate, to emulate a native 30p camera.

    The goal for interlaced 29.97 would be to have "real" 59.94p shot with a real 59.94p camera - if you had a time machine and could capture it with a modern camera - and QTGMC does the best in general to achieve that and fill in the missing scan lines. Just look at real 60p video from a modern camera and compare. There is no "extra" jerkiness, but it is more blurry (partly because interlace has 1/2 the spatial resolution, partly because UHD/HD camera sensors are many times better in terms of clarity).

    Humans are used to, conditioned to, watching cinema: that natural shutter motion blur, low framerates, choppy motion. But that does not happen to the same extent with real eyes. Look out the window at cars passing by: there are no blurry trails, there is no choppy motion (unless your neighbor's kid is learning to drive standard). It's similar to HFR; I don't know what the magic number is, but it's higher than 120fps. The lower the framerate, the more blur you need to add to prevent "strobing".
    I don't deny that higher framerates and full frames get closer to the way the human eye processes image and motion, at a practically continuous "sampling" of stimuli. Neurons fire at extremely high rates. What I'm trying to highlight is that those impressions of "real", "good quality" etc. are also cultural constructions (perceptions), not just physiological processes. For sports, to me (to me) high framerate and HD help perceive it as "better" and more "real", as if I were at the tennis court. But "what you see with your eyes" when filming a home video, which always has some (some) artistic vocation too, is different from the result. If home video were hyperrealistic it would probably be "perfect" but boring, like some technically perfect renditions of classical music by highly trained but soulless soloists.

    All this, again, is subjective (my motion-image-viewing culture comes from watching 24fps cinema and doing some home SD video; I never had high-definition cameras), and I see no point in trying to deny that a person (me) may perceive smooth, perfectly deinterlaced video as "unreal" compared to its interlaced source. We may compare this, too, to the beginnings of CDs versus vinyl. Many people claimed that vinyl sounds "warmer" because that was their cultural (not only physiological) experience. That's my point.

    I could also tell you things about my viewing experience and you wouldn't see the same thing. When I move a finger rapidly from side to side in front of my eyes I don't see a continuous movement like in high-framerate animation, but a combination of strobing and blurring; don't you? I notice that my eyes focus selectively at different positions of the moving finger, so that sometimes it's almost stopped. So perception of an almost continuous input and its almost continuous neurological processing may not necessarily be isomorphically continuous. Etc.

    But the actual mechanism and amount for natural shutter blur is exactly the same as 60p . Both are shot at 1/120 by default . So, when you single rate deinterlace, it's actually has less motion blur than you would expect at a normal video shot natively at 30p (which would have had 1/60 shutter) . QTGMC has a shutter blur option to blur it more when you output single rate to emulate a native 30p camera.
    On another note, this I didn't know (shutter speed in interlaced vs progressive video), and I appreciate it. And it's good to know that QTGMC can emulate that "bluriness" that some deviant amateurish video editors like. Thank you.

    PS: I didn't mean "jerkiness" when referring to continuous motion, but choppiness or strobe effect -- my English.
    Last edited by celsoac; 27th Apr 2019 at 06:16.
  14. Member (Join Date: Mar 2019, Location: NY, US)
    Instead of "real" or "fake", you can probably describe what you're trying to say as "authentic to its time period". The same way some might prefer to see a really old film reel at the frame rate it would have been played at on a projector, vs trying to increase the frame rate and make it appear it was shot with a modern camera (which would certainly look more "real" and closer to what the human eye sees). When I mentioned faster frame rates looking unnatural, I think what I was referring to was that soap opera effect, where motion looks smoother and probably closer to "reality" in terms of frames and refresh rate, but to me is visually disturbing!
  15. Originally Posted by celsoac View Post
    And it's good to know that QTGMC can emulate that "bluriness" that some deviant amateurish video editors like
    I posted an example earlier in this thread: https://forum.videohelp.com/threads/392737-Capturing-Tapes-to-ProRes-vs-DV/page8#post2549094
  16. Originally Posted by jagabo View Post
    Originally Posted by celsoac View Post
    And it's good to know that QTGMC can emulate that "bluriness" that some deviant amateurish video editors like
    I posted an example earlier in this thread: https://forum.videohelp.com/threads/392737-Capturing-Tapes-to-ProRes-vs-DV/page8#post2549094
    Yes, I know, thank you.
  17. Originally Posted by Christina View Post
    Confirmed, adding bff to the extra options field leaves it interlaced. So simple when you actually know how to use it. Cannot believe I overlooked this.
    But you still get chroma ghosting issues, since Handbrake converts interlaced 4:1:1 to interlaced 4:2:0 in a progressive manner.


    Originally Posted by Christina View Post
    When I mentioned faster frame rates looking unnatural, I think what I was referring to was that soap opera effect, where motion looks smoother and probably closer to “reality” in terms of frames and refresh rate, but to me is visually disturbing!
    When you plug a DV camera directly into the TV, or watch VHS directly, or play back your interlaced file directly, or watch an interlaced DVD, you should be getting the same thing.
  18. Member (Join Date: Mar 2019, Location: NY, US)
    Originally Posted by poisondeathray View Post
    Originally Posted by Christina View Post
    Confirmed, adding bff to the extra options field leaves it interlaced. So simple when you actually know how to use it. Cannot believe I overlooked this.
    But you still get chroma ghosting issues, since Handbrake converts interlaced 4:1:1 to interlaced 4:2:0 in a progressive manner.
    Is this the case with ffmpeg and any GUI based on it? Or is it specific to something Handbrake is doing?

    Are there other programs that do not convert the chroma in a progressive manner? If so, how do I know which do or don't?

    If I am not de-interlacing for the time being, that should (hopefully) open up some other options to just convert DV to mp4/h264 that don't have this issue...?

    What about converting right in an NLE like Final Cut Pro? Would that have the same issue?

    Thank you.
    Last edited by Christina; 27th Apr 2019 at 17:30.
  19. Video Restorer lordsmurf (Join Date: Jun 2003, Location: dFAQ.us/lordsmurf)
    Originally Posted by celsoac View Post
    You sound like a bad teacher: "That's wrong. Next!". What exactly is not accurate? Don't bother to explain. Just comment RIGHT or WRONG. You may also ignore this message.
    No, I'm exactly like a college instructor here:

    - Student: "In 1492, Columbus discovered America."
    - Teacher: "That's not accurate."

    You gave such an overly simple synopsis that it would take an inordinate amount of time to correct. It shows that you're either not reading, or skimming, rushing, and/or rashly making conclusions not supported by the facts. So my answer here is the same as the teacher's: re-read the book/documentation (with the understanding that you're currently wrong). On the 2nd reading, hopefully you'll see all the errors.

    For instance, no duplication is occurring. I have no idea why you think that. Interlaced is 50/60 discrete moments in time*. And you can't weasel out of this by saying "I think" or "I feel", because it's just wrong. Nobody ever said anything was doubled, yet somehow you've inferred it. In fact, poison has already disputed this earlier, yet you're still missing it.

    This conversation could get more advanced, but I'm stopping myself, because you're not yet comprehending the basics.

    * Note that some "moments" are identical unless high motion. So in that understanding, yes, it could be called duplication. This is an example of the "more advanced" I refer to.
    Last edited by lordsmurf; 27th Apr 2019 at 18:23.
  20. Originally Posted by Christina View Post

    Is this the case with ffmpeg and any GUI based on it? Or is it specific to something Handbrake is doing?
    I mentioned earlier that it's possible to pass the interlaced flag with ffmpeg swscale or zscale so it will scale the chroma in an interlaced fashion instead of progressive, and "fix" the issue.

    You can do it with ffmpeg directly, so any GUI based on ffmpeg should be able to do it in theory; you just have to pass the proper flags.

    I think handbrake extra options only deal with the encoder options, not how it handles the colorspace and scaling conversions prior to the encoder


    Are there other programs that do not convert the chroma in a progressive manner? If so, how do I know which do or don't?
    You want the ability to do it properly under all circumstances . So if it's interlaced, and you're encoding interlaced, then treat it as interlaced . Same with progressive

    VapourSynth and AviSynth give you that level of control, but they are not really "GUIs". FFmpeg can do it too.


    What about converting right in an NLE like Final Cut Pro? Would that have the same issue?
    I don't have a Mac anymore, but most NLEs should handle it properly, I would think. Just do a quick test to verify.
  21. If you want to use ffmpeg, it would look something like this:


    Code:
    ffmpeg -i "input.avi" -vf scale=w=-1:h=-1:interl=1,setsar=sar=10/11,format=yuv420p -c:v libx264 -crf 18 -flags +ildct+ilme -x264opts bff=1:force-cfr:colorprim=smpte170m:transfer=smpte170m:colormatrix=smpte170m -c:a aac -b:a 160k -ar 48000 -movflags faststart "output.mp4"
    The important flag is vf scale's interl=1, which tells ffmpeg to do swscale operations in an interlaced fashion. That includes the 'format' filter, which uses swscale (yuv420p is 4:2:0).

    If you want it "fixed" for some software, a bug report would need to be filed, with an example illustrating the issue. But I doubt anyone will fix anything, or it will be assigned a low priority, because interlaced content is not that common among general users of software like Handbrake.

    It also opens up a can of worms and potential problems: if you expose those flags, it will clutter a GUI, and users might make mistakes. If you include logic to do it automatically based on metadata or labels, that is also potentially problematic, because some files are flagged incorrectly. For example, there are progressive DV variants that are encoded interlaced.
  22. Thank you!!!
  23. Originally Posted by Christina View Post
    Originally Posted by poisondeathray View Post
    But you still get chroma ghosting issues, since Handbrake is converting interlaced 4:1:1 to interlaced 4:2:0 in a progressive manner
    Is this the case with ffmpeg and any GUI based on it? Or is it specific to something Handbrake is doing?
    It's a problem with Handbrake. ffmpeg is capable of doing the conversion correctly if told to do so. Handbrake isn't telling the ffmpeg libraries to do an interlaced 4:1:1 (or 4:2:2) to interlaced 4:2:0 conversion, but rather progressive to progressive.
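    To see why a progressive-style chroma conversion ghosts interlaced material, here is a toy Python sketch (one chroma value per line, numbers made up; not what any real scaler does internally, just the averaging pattern). Vertical subsampling averages adjacent lines; in an interlaced frame, adjacent lines come from two fields captured 1/50 s apart, so a progressive average blends two moments in time:

    ```python
    # Toy chroma plane, one value per line. Even lines = top field (time t),
    # odd lines = bottom field (time t + 1/50). An object moved between fields.
    top_field_value, bottom_field_value = 100, 200
    chroma_lines = [top_field_value if i % 2 == 0 else bottom_field_value
                    for i in range(8)]

    def subsample_progressive(lines):
        """Average adjacent line pairs -- correct only for progressive content."""
        return [(lines[i] + lines[i + 1]) / 2 for i in range(0, len(lines), 2)]

    def subsample_interlaced(lines):
        """Average within each field separately, then re-interleave."""
        top = lines[0::2]
        bot = lines[1::2]
        half_top = [(top[i] + top[i + 1]) / 2 for i in range(0, len(top), 2)]
        half_bot = [(bot[i] + bot[i + 1]) / 2 for i in range(0, len(bot), 2)]
        out = []
        for t, b in zip(half_top, half_bot):
            out.extend([t, b])
        return out

    print(subsample_progressive(chroma_lines))  # all 150.0: the two fields bleed together (ghosting)
    print(subsample_interlaced(chroma_lines))   # 100/200 alternating: each field's chroma stays intact
    ```

    The interlaced-aware path is what ffmpeg's interl=1 requests; the progressive path is the Handbrake behaviour described above.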
  24. Originally Posted by celsoac View Post
    4) CRT shows first one of the fields made up of horizontal lines from top to bottom, the ray canon goes back to the top and shows the other line/field, this double sequence up to 25 times (PAL) per second.
    Yes, 25 times per second for 2 fields, so 50 times per second. QTGMC adds no new temporal information beyond what 25i already carries; it just gives better resolution per moment.

    QTGMC is the same; there is no "better" fluency. If it appears that way, it only means you never saw the original as fluently as it was recorded, because the player cannot compute and present it on screen in time. PC monitors and LCDs are different beasts; they need to show the image as a whole, at full resolution, for some time.

    You'd see perfect fluency watching 25i on a CRT, because the CRT was a device dedicated to exactly that. Video from my NTSC DV camcorder, a Sony VX2000, looked "unreal" on a big CRT: too fluent, too perfect. That is why I am replying; I was just remembering that. At the time it seemed almost "unreal".
  25. When talking about deinterlacing done by digital displays:

    Originally Posted by poisondeathray View Post
    And most displays only do something similar to a bob . If you pause the picture you can see it's just a resized field, hence the jaggy line buzzing artifacts. On higher end displays, you have additional processing that fixes some artifacts , similar to QTGMC .
    Isn't that "resized field" in those digital displays just made by repeating lines from each half-height field? That is, by filling the empty lines with an identical copy of the contiguous line? Or perhaps by redefining the PAR so that each pixel is twice as tall? If so, and if no "additional processing" is done, wouldn't this be equivalent to "duplicating pixels" (each pixel in a given line repeated immediately below, or above)?

    Another, related question: what is the best way to produce progressive 24p output from a 25i PAL TV recording of film (a documentary) shot at 24 fps? The method used for TV was to speed up the frame rate (horrible), but there is no frame superposition or ghosting. With the software I use, if I pick the menu option "PAL => film" (inverse telecine), which I don't, there is field blending, since 50 fields => 24 frames. So, how do I best recombine those fields for the highest quality? In this case, wouldn't simple field blending work, since all the information for each frame is there? Or deinterlacing with some plugin, using both fields, without doubling the frame rate? I suppose this last option is strange, as (yes, I understand) deinterlacing deals with 50 real moments per second, not 25. And no, I don't want to double the frame rate; I'm excluding that option. I want 24 fps.

    After that, I can change the frame rate somehow; that's another issue. I usually use an app that changes the frame rate without re-encoding. This is not recognized by all players, so I open the result in MPEGStreamclip and Save As..., also without re-encoding. Sometimes I need Subler too.

    Thanks.
  26. Originally Posted by celsoac View Post
    Isn't that "resized field" in those digital displays just made by repeating lines from each half-height field? That is, by filling the empty lines with an identical copy of the contiguous line?
    Almost never.

    Originally Posted by celsoac View Post
    Or perhaps by redefining the PAR so that each pixel is twice as tall?
    At the very least, the missing field is filled with interpolated lines. And most TVs are smart enough to interpolate only the parts of the frame that are moving.
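    The difference between the two fill strategies is easy to show with a toy Python sketch (one brightness value per line, numbers invented for illustration). Line doubling repeats each field line; interpolation averages the field lines above and below, which is smoother on gradients and diagonals:

    ```python
    field = [10, 20, 40, 80]   # one field: every other line of the full frame

    def bob_line_double(lines):
        """Fill the missing lines by repeating each field line (cheap bob)."""
        out = []
        for v in lines:
            out.extend([v, v])
        return out

    def bob_interpolate(lines):
        """Fill the missing lines with the average of the lines above/below."""
        out = []
        for i, v in enumerate(lines):
            out.append(v)
            nxt = lines[i + 1] if i + 1 < len(lines) else v  # edge: repeat last
            out.append((v + nxt) / 2)
        return out

    print(bob_line_double(field))   # [10, 10, 20, 20, 40, 40, 80, 80]
    print(bob_interpolate(field))   # [10, 15.0, 20, 30.0, 40, 60.0, 80, 80.0]
    ```

    A motion-adaptive display does the above only in moving regions and weaves the two real fields together in static ones.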
  27. Originally Posted by celsoac View Post

    Another, related question: what is the best way to produce progressive 24p output from a 25i PAL TV recording of film (a documentary) shot at 24 fps? The method used for TV was to speed up the frame rate (horrible), but there is no frame superposition or ghosting. With the software I use, if I pick the menu option "PAL => film" (inverse telecine), which I don't, there is field blending, since 50 fields => 24 frames. So, how do I best recombine those fields for the highest quality? In this case, wouldn't simple field blending work, since all the information for each frame is there? Or deinterlacing with some plugin, using both fields, without doubling the frame rate? I suppose this last option is strange, as (yes, I understand) deinterlacing deals with 50 real moments per second, not 25. And no, I don't want to double the frame rate; I'm excluding that option. I want 24 fps.

    After that, I can change the frame rate somehow; that's another issue. I usually use an app that changes the frame rate without re-encoding. This is not recognized by all players, so I open the result in MPEGStreamclip and Save As..., also without re-encoding. Sometimes I need Subler too.
    You should probably discuss that in another thread

    If it's a simple speed up (no blends, no duplicates, no inserts), you just do the reverse: slow it down. In most NLEs it's called "interpreting" the frame rate. That means the exact same frames are kept (nothing added or dropped); just the rate is increased or decreased. However, most NLEs will re-encode, so it's usually not a preferred option, unless you were doing other manipulations that required re-encoding anyway.
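    The arithmetic behind "interpreting" the rate back can be sketched in a few lines of Python (assuming the common case of a ~23.976 fps film source sped up to 25 fps for PAL; check your actual source before relying on these numbers):

    ```python
    # Sketch: reversing a PAL speedup. Every frame is kept; only the
    # playback rate changes, so audio must be slowed by the same factor.
    from fractions import Fraction

    film_rate = Fraction(24000, 1001)    # 23.976... fps (NTSC-film rate)
    pal_rate = Fraction(25, 1)

    speedup = pal_rate / film_rate       # factor applied for PAL broadcast
    slowdown = film_rate / pal_rate      # factor to restore original speed

    print(float(speedup))    # ~1.0427: PAL runs about 4.3% fast
    print(float(slowdown))   # ~0.9590: slow playback to ~95.9% to undo it

    # A 60-minute PAL broadcast stretches back out when slowed down:
    minutes = 60 / float(slowdown)
    print(round(minutes, 2))             # ~62.56 minutes at film speed
    ```

    This is also why the audio needs resampling (or pitch correction) by the same ratio, ideally in a dedicated audio tool as noted above.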

    Whether or not you can do it without re-encoding depends on the codec and container; you'd need to provide more info on that

    Audio should almost always be resampled in a proper audio program, because it will be higher quality
  28. Originally Posted by poisondeathray View Post
    Originally Posted by celsoac View Post

    Another, related question: what is the best way to produce progressive 24p output from a 25i PAL TV recording of film (a documentary) shot at 24 fps? The method used for TV was to speed up the frame rate (horrible), but there is no frame superposition or ghosting. With the software I use, if I pick the menu option "PAL => film" (inverse telecine), which I don't, there is field blending, since 50 fields => 24 frames. So, how do I best recombine those fields for the highest quality? In this case, wouldn't simple field blending work, since all the information for each frame is there? Or deinterlacing with some plugin, using both fields, without doubling the frame rate? I suppose this last option is strange, as (yes, I understand) deinterlacing deals with 50 real moments per second, not 25. And no, I don't want to double the frame rate; I'm excluding that option. I want 24 fps.

    After that, I can change the frame rate somehow; that's another issue. I usually use an app that changes the frame rate without re-encoding. This is not recognized by all players, so I open the result in MPEGStreamclip and Save As..., also without re-encoding. Sometimes I need Subler too.
    You should probably discuss that in another thread

    If it's a simple speed up (no blends, no duplicates, no inserts), you just do the reverse: slow it down. In most NLEs it's called "interpreting" the frame rate. That means the exact same frames are kept (nothing added or dropped); just the rate is increased or decreased. However, most NLEs will re-encode, so it's usually not a preferred option, unless you were doing other manipulations that required re-encoding anyway.

    Whether or not you can do it without re-encoding depends on the codec and container; you'd need to provide more info on that

    Audio should almost always be resampled in a proper audio program, because it will be higher quality
    Thank you. I want to make it progressive first because I want to upload it to YouTube (in fact, a version is already uploaded). I don't want YouTube to convert it to progressive for me.

    The original codec is MPEG2.
  29. Originally Posted by celsoac View Post
    I want to turn it progressive first because I want to upload it to YouTube (in fact, a version is already uploaded). I don't want YouTube to convert it to progressive for me.

    The original codec is MPEG2.


    From your description, the actual content is probably not "25i". It's probably a "25p" speedup from a 23.976p source, just encoded as interlaced to make it compatible with 50Hz broadcast systems. This is essentially 2:2 pulldown. I.e., it's probably already progressive content, and you do not want to deinterlace progressive content, or you will degrade it.
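    A toy Python sketch of what 2:2 pulldown means (invented 4-line "frame"; real encoders of course work on full pictures): a progressive frame is split into two fields that share the same moment in time, and weaving them back is lossless, which is why deinterlacing such content can only hurt:

    ```python
    frame = [[1, 1], [2, 2], [3, 3], [4, 4]]   # toy 4-line progressive frame

    def split_fields(f):
        """Top field = even lines, bottom field = odd lines (same timestamp)."""
        return f[0::2], f[1::2]

    def weave(top, bottom):
        """Re-interleave two fields into one progressive frame."""
        out = []
        for t, b in zip(top, bottom):
            out.extend([t, b])
        return out

    top, bottom = split_fields(frame)
    assert weave(top, bottom) == frame   # lossless: nothing to deinterlace
    ```

    With true interlaced content the two fields would carry different timestamps, and weaving alone would show combing on motion; here it reconstructs the original exactly.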

    You probably have to re-encode it anyway for YouTube, because of 1) AR issues, and 2) you don't want to upload progressive content flagged as interlaced to YT, because it will apply a deinterlace once it sees the flag.

    If using an NLE, you'd have to remember to interpret it as progressive, using progressive timeline settings and export settings; otherwise the NLE will degrade the footage too.

    You'd have to upload an actual video sample to verify; someone will take a look at it.

    (And this is way off topic for this thread...)
  30. If it's progressive frames captured out of phase (so they show combing), a simple TFM() will match the fields and make all the frames progressive with no comb artifacts.
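    In spirit, field matching works like this toy Python sketch (a drastic simplification of what AviSynth's TFM actually does; the comb metric and data are invented for illustration): for each frame, pair the top field with whichever candidate bottom field, current or previous, combs the least:

    ```python
    def comb_metric(top, bottom):
        """Crude combing score: total difference between interleaved lines."""
        return sum(abs(t - b) for t, b in zip(top, bottom))

    def match_fields(tops, bottoms):
        """For each frame, weave the bottom field (same or previous index)
        that combs least against the top field."""
        out = []
        for i, top in enumerate(tops):
            candidates = [bottoms[i]]
            if i > 0:
                candidates.append(bottoms[i - 1])
            best = min(candidates, key=lambda b: comb_metric(top, b))
            out.append((top, best))
        return out

    # Out-of-phase capture: frame N's top field belongs with frame N-1's
    # bottom field (one brightness value per line, toy data).
    tops    = [[10, 10], [20, 20], [30, 30]]
    bottoms = [[20, 20], [30, 30], [40, 40]]
    matched = match_fields(tops, bottoms)
    print(matched[1])   # top [20, 20] paired with the previous bottom [20, 20]
    ```

    No interpolation is involved: the matcher only re-pairs fields that already exist, which is why it restores the progressive frames exactly.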


