I suppose...in that limited sense that there is some temporal interpolation, some use of motion vectors.
But they are very different because optical flow generates new time sampling points. That's the whole point of optical flow. The big difference is QTGMC (or any deinterlacer) outputs the same time sampling that was already there. It does not "guess" some intermediate time sampling point based on motion vectors.
QTGMC does not compute "inbetween fields" in the same sense as optical flow's "inbetween frames". In QTGMC, the fields are already present in the interlaced video (sure, it uses the already-present fields to help refine the current frame, but it does not synthesize new inbetween ones). If, however, you output another framerate such as 100 or 120 fps, that would imply "inbetween fields" in the optical-flow sense, where new timepoints are created as "inbetween frames". QTGMC does not do that.
What you say about the sense of "reality" or "fakenews" in video may be true, that is, that the 60fps "smooth" effect is actually already present in 30i video due to how human vision works (each field persists in the retina until the next field is displayed, so the effect is "progressive"). But for some reason synthesizing those intermediate fields levels out some of the blur and jerkiness that may be necessary for the perception that what is being watched is artificial media, not "real" vision.
Not what I'm saying at all.
I said nothing about perception or vision in this thread.
The BIG distinction is that 50 or 60 moments in time per second are already physically captured and represented in interlaced video. It's there already. You're not synthesizing any new timepoints. What I'm saying has nothing necessarily to do with how human vision works, nothing necessarily to do with interlaced CRT display vs. flat panel. All I'm saying is the recording has that many moments in time recorded. You can physically count them, and see them. It's only stored in a weaved format, where 2 fields are combined into 1 frame. If you separate the fields, you will see all the moments in time are there. So when you double rate deinterlace (with any deinterlacer), you aren't generating any new timepoints. They were there to begin with.
Conversely, optical flow synthesized frames are not there to begin with. They are new inbetween frames. That's why there are often edge morphing artifacts on many types of content.
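A toy sketch of the distinction in Python (this is not QTGMC's actual algorithm, just timestamps treated as data to show where new timepoints do and don't appear):

```python
# Toy model: each captured field is just a timestamp. Interlaced storage
# weaves two consecutive fields into one frame; double-rate deinterlacing
# separates them again, so no new timestamps appear. Optical-flow retiming,
# by contrast, invents timestamps that were never captured.

field_times = [0, 1, 2, 3]                      # 4 captured moments in time
frames = [(field_times[i], field_times[i + 1])  # weave: 2 fields -> 1 frame
          for i in range(0, len(field_times), 2)]

# Double-rate deinterlace ("bob"): one output frame per stored field
bobbed_times = [t for frame in frames for t in frame]
assert bobbed_times == [0, 1, 2, 3]             # same timepoints, none synthesized

# Optical flow 2x on a progressive clip: new midpoint timestamps appear
prog_times = [0.0, 1.0, 2.0]
flow_times = sorted(prog_times +
                    [(a + b) / 2 for a, b in zip(prog_times, prog_times[1:])])
assert flow_times == [0.0, 0.5, 1.0, 1.5, 2.0]  # 0.5 and 1.5 never existed
```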
Yes. This perception of "unnaturalness" may be totally subjective.
Yes, but higher actual sampling represents reality more accurately. That's real data. i.e. you can interpolate more accurately from more sampling points than from fewer sampling points. e.g. if I look out the window once per minute to count cars, I might miss some that pass by. If I look out 10 times per minute, I will have a higher degree of accuracy.
But also -- and this may be overextending the argument -- real movement may not be continuous, but variably accelerated (for example, in facial microgestures).
Similarly, if you recorded 1000 fps high speed, you will catch all that acceleration and deceleration of things like human movements more accurately than if you recorded 30fps. Real recorded data is better than interpolated data. At some point there is a number that is enough to satisfy human perception. There is a lot of discussion and research on what framerate is required, but nobody would say 30fps or 60fps. It's a higher number for sure.
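To put rough numbers on the counting-cars argument, here's a toy Python sketch (the car timings and the 5-second visibility window are made-up values for illustration):

```python
# Events that are only visible for a short window are caught more reliably
# by a higher sampling rate -- the "counting cars out the window" argument.
def count_seen(event_times, sample_period, visible_for=5.0, duration=60.0):
    # Observer looks out once every sample_period seconds
    samples = [i * sample_period for i in range(int(duration / sample_period) + 1)]
    # A car is counted if any look falls inside its visibility window
    return sum(any(t <= s <= t + visible_for for s in samples) for t in event_times)

cars = [3.0, 17.5, 29.0, 41.2, 55.8]       # moments cars pass the window
assert count_seen(cars, 60.0) < len(cars)   # look once a minute: miss some
assert count_seen(cars, 6.0) == len(cars)   # look 10x a minute: catch all
```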
Interpolating scan lines for a field to generate a frame lands exactly at the mid position (ignoring things like timebase errors and compression artifacts). Whereas temporal interpolation might not -- as you pointed out, variable acceleration/deceleration. And non-integer-multiple optical flow retiming resamples current frames as well (you lose some actual frames), whereas both optical flow and field synthesis in QTGMC calculate (I suppose) exact mid-positions between a given pixel/element in field/frame 1 and 2.
But there is no new time sampling with QTGMC or any deinterlacer. It's using the time samples already there
Whereas optical flow IS retiming, and you might have variable acceleration/deceleration errors.
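The "exact mid position" point about spatial interpolation can be sketched like this (toy Python; a plain two-line average stands in for the smarter kernels real deinterlacers use, which is an assumption here for illustration):

```python
# Spatially interpolating the missing scan lines of a field: for content that
# varies linearly, averaging the line above and the line below reconstructs
# the discarded line exactly at the mid position.
def interpolate_missing_lines(field_lines):
    # field_lines holds every other scan line (one field); fill the gaps
    out = []
    for a, b in zip(field_lines, field_lines[1:]):
        out.append(a)
        out.append((a + b) / 2)   # exact spatial midpoint
    out.append(field_lines[-1])
    return out

# A vertical linear gradient: lines 0,2,4,6 kept, lines 1,3,5 discarded
full = [0, 10, 20, 30, 40, 50, 60]
field = full[::2]                 # [0, 20, 40, 60]
assert interpolate_missing_lines(field) == full   # reconstruction is exact
```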
When the VHS input is faulty and wiggly, turning each field into a full frame makes the wiggling very noticeable if you go frame by frame. What I found different and really good about QTGMC is that (I think) it generates the missing lines not by duplicating them but by (very good) interpolation, isn't it? I don't know if other deinterlacing methods do the same (smoothing edges, etc.), but I was never satisfied.
Hey everyone. Just came to report back on something that was SO STUPID on my part, I can't believe I only just realized this now.
So I'm happy to report, my TV is actually showing interlaced footage just fine... which makes a lot of sense, and is a relief. All this time I thought it was not, and here is one more reason to add to the list of why Handbrake stinks:
Prior to now, I have been using Handbrake to convert to mp4. It defaults to basic decomb settings on the filter tab, which I experimented with, along with yadif, and bob, and so on.
I ASSUMED that when you turn off all these filters, and especially set deinterlace=OFF (see screenshot), that it would in fact save the video as an interlaced file (same as source), and so I've been saving/naming these test files as "xyz interlaced.mp4".
Well guess what - I just looked at the details of one of those files (with deinterlace off) with MediaInfo and even with all those settings off, it saves it as a progressive format. So basically I was burning the interlacing into a progressive file, which is why my tv was showing the interlacing lines.
In fact, going back into Handbrake knowing this now, I can't even find a way to tell it to leave the file interlaced. Maybe this is something with my version, or the Mac version, I have no idea, but it's stupid, and I'm mad that all this time I've been thinking my TV can't even properly deinterlace a video.
I only thought to check because I came across a file I actually DID leave interlaced using another encoder while testing some files on my tv, and I thought, "Hey, this looks pretty good! Which one is this?" Only to realize it was a REAL interlaced video, and there was no evidence of wiggling or jaggies, and that sent me back to the computer to check the files out that I THOUGHT were interlaced as well, which actually were very much indeed, not.
Excuse me while I go bang my head against the wall!
You can encode MBAFF with handbrake (essentially a special type of "interlace" for AVC)
But do you still have the AR issue with that TV ?
OK, good to know. I'll try that, thank you. Why would such a major important setting be buried in the advanced extra options field...
Yes, my TV still ignores aspect ratio flags and displays everything as though it was 1:1. That part I haven't figured out, other than re-encoding at 640x480 1:1.
In FCP (where you play around with field dominance), when converting an interlaced input into progressive (1 field => 1 frame), one of the fields is sharper than the other, but when viewed at 50fps progressive, the sharper and blurrier frames again "superpose" each other in the eye in a compound effect.
So, if some of this makes sense (in interlaced video, the eye reconstructs non-present information), it logically follows that if you synthesize that missing information by software (or get as close to it as possible), the perceptual effect may be different than if all of the information is there. The smoothness of 30i video (that is, 60 half frames) may be perceptually different from that of 60p.
We can compare it to sound. You have two instruments, a piano and a clarinet, both simultaneously playing the same up- and down-pitch scales at 60 very brief staccato notes per second (note length being comparable to shutter speed in video). You sample only the odd notes of the piano, at 30 times per second, and only the even notes of the clarinet, also at 30 times. You may have two microphones or speakers that turn on and off at those rates. You interpolate both samplings. The sound "flows", and at that sampling rate no doubt you will get the scales and melodies, but probably your ear is reconstructing half of the information. Now, suppose you synthesize the (predictable) missing notes in each instrument and actually play them ("deinterlacing"). Your perception of that new sound will probably be very different.
Motion flow in moving images compares to the predictability of a continuous musical scale; jerkiness compares to undersampling that scale; blurriness, to note length, extending over the following note. It is my impression that both some "jerkiness" and blurriness are built into how we perceive motion in interlaced video (not to speak of 24fps film). If you get rid of both of them, the effect may be unrealistic.
Confirmed, adding bff to the extra options field leaves it interlaced. So simple when you actually know how to use it. Cannot believe I overlooked this.
1) Often deinterlacing by doubling framerate consists of just duplicating pixels.
2) Each field in a PAL recording has 288 horizontal lines.
3) MPEG1 (I meant old VCD in MPEG1, at least) was 352x288 progressive.
4) A CRT shows first one of the fields, made up of horizontal lines drawn from top to bottom; the electron gun goes back to the top and shows the other field, and this double sequence repeats up to 25 times (PAL) per second.
5) 288 lines + 288 lines, shown sequentially but very quickly, give the impression of a full frame made up of 576 simultaneous lines.
6) This phenomenon gives me a subjective feeling of whatever-I-want-to-call-it, as compared to deinterlaced video with doubled framerate on a digital display.
You have similar issues with flicker and other artifacts regardless of CRT or flat panel. The fundamental problem is interlace. That's why you want 60p; and QTGMC does the "best" overall to emulate that.
But your comments imply that 30p from that same interlace source is preferred... that 30p is more "real"... No way.
60p from interlace, using any deinterlacer except a blend, looks more "real", as in real life, than 30p, because there are more real samples used. Just like 30p looks more real, as in more representative of real life, than, say, a 1 fps video. 30p from that same interlace source throws away another half of the information (temporal this time, instead of spatial), so you're left with 1/4 of the information of a full progressive 60p original.
It is my impression that both some "jerkiness" and blurriness are built into how we perceive motion in interlaced video (not to speak of 24fps film). If you get rid of both of them, the effect may be unrealistic.
And not necessarily jerkiness. Maybe you expect home video to be jerky because of bad camera work. But film a wall from a tripod, do a controlled professional pan -- do you think that would be "jerky"? Or maybe you're used to bad deinterlacing, so you "expect" things to look bad.
Blurriness, because you might be used to blend deinterlacing. And you lose some resolution from interlace in the first place, because of fields.
But the actual mechanism and amount of natural shutter blur is exactly the same as 60p. Both are shot at 1/120 by default. So when you single rate deinterlace, it actually has less motion blur than you would expect from normal video shot natively at 30p (which would have had a 1/60 shutter). QTGMC has a shutter blur option to blur it more when you output single rate, to emulate a native 30p camera.
The goal for interlaced 29.97 would be to have "real" 59.94p shot with a real 59.94p camera - if you had a time machine and could capture it with a modern camera - and QTGMC does the best in general to achieve that and fill in the missing scan lines. Just look at real 60p video from a modern camera and compare. There is no "extra" jerkiness, but it is more blurry (partly because interlace has 1/2 the spatial resolution, partly because UHD/HD camera sensors are many times better in terms of clarity).
Humans are used to, conditioned to, watching cinema: that natural shutter motion blur, low framerates, choppy motion. But that does not happen to the same extent with real eyes. Look out the window at cars passing by: there are no blurry trails, there is no choppy motion (unless your neighbor's kid is learning to drive standard). It's similar to HFR; I don't know what the magic number is, but it's higher than 120fps. The lower the framerate, the more blur you need to add to prevent "strobing".
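As a quick sanity check on the shutter arithmetic (the 1/120 and 1/60 shutters are the defaults stated above; the 600 px/s object speed is an arbitrary assumed number for illustration):

```python
# Streak length of a moving object during one exposure: blur = speed * time.
def blur_px(speed_px_per_s, exposure_s):
    return speed_px_per_s * exposure_s

speed = 600.0                               # assumed: object moving 600 px/s
interlaced_field = blur_px(speed, 1 / 120)  # one field, ~1/120s shutter
native_30p       = blur_px(speed, 1 / 60)   # native 30p, ~1/60s shutter

assert interlaced_field == 5.0
assert native_30p == 10.0   # twice the blur: why a single-rate deinterlace
                            # looks "sharper" than a native 30p camera would
```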
But the actual mechanism and amount of natural shutter blur is exactly the same as 60p. Both are shot at 1/120 by default. So when you single rate deinterlace, it actually has less motion blur than you would expect from normal video shot natively at 30p (which would have had a 1/60 shutter). QTGMC has a shutter blur option to blur it more when you output single rate, to emulate a native 30p camera.
PS: I didn't mean "jerkiness" when referring to continuous motion, but choppiness or strobe effect -- my English.
Last edited by celsoac; 27th Apr 2019 at 06:16.
Instead of “real” or “fake” you can probably describe what you’re trying to say by “authentic to its time period”. The same way some might prefer to see a really old film reel at the frame rate it would have been played on a projector, vs trying to increase the frame rate and make it appear it was shot with a modern camera (which would certainly look more “real” and closer to what the human eye sees). When I mentioned faster frame rates looking unnatural, I think what I was referring to was that soap opera effect, where motion looks smoother and probably closer to “reality” in terms of frames and refresh rate, but to me is visually disturbing!
handbrake is converting interlaced 4:1:1 to interlaced 4:2:0 in a progressive manner
ffmpeg and any GUI based on it? Or is it specific to something Handbrake is doing?
Are there other programs that do not convert the chroma in a progressive manner? If so, how do I know which do or don't?
If I am not de-interlacing for the time being, that should (hopefully) open up some other options to just convert DV to mp4/h264 that don't have this issue...?
What about converting right in an NLE like Final Cut Pro? Would that have the same issue?
Last edited by Christina; 27th Apr 2019 at 17:30.
- Student: "In 1492, Columbus discovered America."
- Teacher: "That's not accurate."
You gave such an overly simple synopsis that it would take an inordinate amount of time to correct. It shows that you're either not reading, or you're skimming, rushing, and/or rashly drawing conclusions not supported by the facts. So my answer here is the same as the teacher's: re-read the book/documentation (with the understanding that you're currently wrong). On the 2nd reading, hopefully you'll see all the errors.
For instance, no duplication is occurring. I have no idea why you think that. Interlaced is 50/60 discrete moments in time*. And you can't weasel out of this by saying "I think" or "I feel" because it's just wrong. Nobody ever said anything was doubled, yet somehow you've inferred it. In fact, poison has already disputed this earlier, yet you're still missing it.
This conversation could get more advanced, but I'm stopping myself, because you're not yet comprehending the basics.
* Note that some "moments" are identical unless high motion. So in that understanding, yes, it could be called duplication. This is an example of the "more advanced" I refer to.
You can do it with ffmpeg directly, so any GUI based on ffmpeg should be able to do it in theory; you just have to pass the proper flags.
I think handbrake extra options only deal with the encoder options, not how it handles the colorspace and scaling conversions prior to the encoder
Are there other programs that do not convert the chroma in a progressive manner? If so, how do I know which do or don't?
vapoursynth, avisynth give you that level of control, but they are not really "GUIs" . FFmpeg can do it too .
If you want to use ffmpeg it would look something like this
ffmpeg -i "input.avi" -vf scale=w=-1:h=-1:interl=1,setsar=sar=10/11,format=yuv420p -c:v libx264 -crf 18 -flags +ildct+ilme -x264opts bff=1:force-cfr:colorprim=smpte170m:transfer=smpte170m:colormatrix=smpte170m -c:a aac -b:a 160k -ar 48000 -movflags faststart "output.mp4"
If you want it "fixed" for some software, a bug report would need to be filed, with an example illustrating the issue. But I doubt anyone will fix anything, or it will be assigned a low priority, because interlace is not that common among general users of software like handbrake.
It also opens up a can of worms and potential problems - if you expose those flags, it will clutter the GUI, and users might make mistakes. If you include some logic to do it automatically based on metadata or labels, that is also potentially problematic, because some files are flagged incorrectly. For example, there are progressive DV variants that are encoded interlaced.
QTGMC is the same; there is no "better" fluency. If it appears that way, it only means that you never see it as fluent as recorded, because the player does not manage to compute it on screen in time, and because PC monitors and LCDs are different beasts: they need to show the image as a whole, for some time, at the whole resolution.
You'd see perfect fluency watching 25i on a CRT, because the CRT was a dedicated device for that. Video from my NTSC DV camcorder, a Sony VX2000, was "unreal" watching on a big CRT: too fluent, too perfect. That is why I am replying, just remembering that. At the time it was almost "unreal".
When talking about deinterlacing done by digital displays:
Another, related question: what's the best way to produce a progressive 24p output from a 25i PAL TV recording of film (a documentary) shot at 24fps? The method used in TV was to accelerate the frame rate (horrible), but there's no frame superposition or ghosting. With the software I use, if I do (which I don't) the menu option "PAL => film" (inverse telecine), there is field blending, since 50 fields => 24 frames. So, how to better recombine those fields, in terms of getting the best quality? In this case, wouldn't simple field blending work, since all the information for each frame is there? Or deinterlacing with some plugin, using both fields without doubling the framerate? I suppose this last option is strange, as (yes, I understand) deinterlacing deals with 50 real moments per second, not 25. And no, I don't want to double the framerate, I'm excluding that option; I want to get 24fps.
After that, I can change framerate somehow, that's another issue. I usually use an app that changes framerate without reencoding. This is not recognized by all players, so I input that in MPEGStreamclip and Save as.... also without reencoding. Some times I need Subler too.
If it's a simple speed up (no blends, no duplicates, no inserts), you just do the reverse: slow it down. In most NLEs it's called "interpreting" the frame rate. That means the exact same frames are kept (nothing added or dropped); just the rate is increased or decreased. However, most NLEs will re-encode, so it's usually not a preferred option, unless you were doing other manipulations that required re-encoding anyways.
Whether or not you can do it without re-encoding depends on the codec and container; you'd need to provide more info on that
Audio should almost always be resampled in a proper audio program, because it will be higher quality
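If you want to check the numbers behind the speedup, here's a quick sketch (assuming the usual 23.976 -> 25 PAL speedup; the 1-hour duration is just an example):

```python
# The arithmetic behind undoing a PAL 2:2 speedup: film shot at 24000/1001 fps
# is played at 25 fps for 50Hz broadcast. Slowing it back down is a pure rate
# change -- the exact same frames, reinterpreted at the original rate.
from fractions import Fraction

film_rate = Fraction(24000, 1001)   # 23.976... fps
pal_rate = Fraction(25, 1)

speedup = pal_rate / film_rate      # how much faster PAL playback runs
assert abs(float(speedup) - 1.0427) < 0.0001   # ~4.3% faster

# Audio must be slowed by the same factor (hence the resample): a 1-hour
# PAL broadcast stretches back out when restored to film rate.
restored_len_s = 3600 * float(speedup)
assert round(restored_len_s) == 3754           # ~2.5 minutes longer
```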
From your description, the content probably is not actually "25i". It's probably a "25p" speedup from a 23.976p source, just encoded as interlaced to make it compatible with 50Hz broadcast systems. This is essentially 2:2 pulldown. i.e. it's probably already progressive content, and you do not want to deinterlace progressive content, or you will degrade it.
You probably have to re-encode it anyways for youtube because of 1) AR issues and 2) you don't want to upload progressive content flagged interlaced to YT, because it will apply deinterlace once it sees the flag
If using an NLE, you'd have to remember to interpret it as progressive, using progressive timeline settings and export settings - otherwise the NLE will degrade the footage too.
You'd have to upload an actual video sample to verify; someone will take a look at it.
(And this is way off topic for this thread...)
If it's progressive frames captured out of phase (so it shows combing), a simple TFM() will make all the frames progressive with no comb artifacts.
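A toy illustration of the field-matching idea behind TFM() (plain Python, with single numbers standing in for whole fields -- obviously not the real algorithm, just the matching logic):

```python
# Progressive frames captured out of phase: each stored frame weaves the top
# field of source frame i with the bottom field of source frame i-1. Field
# matching pairs each top field with whichever adjacent bottom field "combs"
# least, recovering clean progressive frames.

src = [10, 20, 30, 40]   # progressive source "frames" (one number per field pair)
# Out-of-phase capture: stored frame i = (top of src[i], bottom of src[i-1])
stored = [(src[i], src[i - 1] if i else None) for i in range(len(src))]

def comb(top, bottom):   # toy combing metric: mismatch between the two fields
    return abs(top - bottom) if bottom is not None else float("inf")

matched = []
for i, (top, bottom) in enumerate(stored):
    next_bottom = stored[i + 1][1] if i + 1 < len(stored) else None
    # keep the current bottom, or borrow the next frame's bottom, whichever
    # combs less against this top field
    best = bottom if comb(top, bottom) <= comb(top, next_bottom) else next_bottom
    matched.append((top, best))

# Every matchable frame is comb-free: top and bottom from the same moment
assert all(t == b for t, b in matched[:-1])
```

(The last frame has no "next" bottom field to borrow, which is why real field matchers also have fallback modes.)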