I'm trying to convert DVD footage from YV12 to RGB color in the "best" way possible -- meaning, most accurate/least information loss. I know this has been discussed a zillion times, but the more I read on the subject the less I understand. I'm hoping someone can provide a nice simple "type this, click that" dummy-level answer for my specific situation.
My source is a DVD from 2002 or so. It's a TV show, standard definition. It's soft-telecined, encoded as frames with pulldown flags. (I've used DGIndex with "Ignore Pulldown Flags" selected to bypass deinterlacing and get straight to the progressive footage. This may not be relevant to my question.)
What's the current best way to convert this footage to RGB? Should I use an AviSynth script with the command "ConvertToRGB"? Should I use a different command? Should I use different software?
(I assume I should use the Rec.601 conversion matrix, and that "ConvertToRGB" uses that by default. Is that right?)
Right now this is my script:
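A minimal version goes something like this (sketch only; "episode.d2v" is a placeholder for the DGIndex project file, and ConvertToRGB24 with matrix="Rec601" reflects my understanding of the AviSynth defaults for SD material):

```
Mpeg2Source("episode.d2v")        # load the DGIndex project
ConvertToRGB24(matrix="Rec601")   # YV12 -> RGB, Rec.601 matrix
```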
Is it as simple as that -- or is there a better method I should be using?
Thanks in advance!
Convert to RGB for what purpose?
I'm not sure yet. There are a lot of things I want to try to do with the footage. Does that affect the answer?
It does. Unless your process explicitly requires the RGB color model, it's best to leave the video in YUV to avoid quality loss during the conversion, and to avoid the recompression-versus-storage issues that follow.
Many processes can operate directly in YUV space.
Good rule of thumb: leave things alone as long as possible, then change as little as necessary to achieve the desired outcome, in order to do as little further damage as possible.
That's a great rule of thumb! However, my process explicitly requires the RGB color model.
I always hate giving extra background because then the conversation expands out into critiques of my plan, etc., instead of just providing an answer to the specific question asked.
However, in this case: I am playing around with an upscaling program (Video Enhance AI). I want to feed it two clips from this DVD that are identical in every way, except that one was left in the native YUV color space and one was converted to RGB at the start. I want to compare the differences.
In order to do this...I need to convert the footage to RGB.
Could someone please recommend the current best way to do so?
The answers are the same as your other thread.
For DVD YV12 to RGB in AviSynth you should read up on chroma placement and chroma resampling.
The defaults are probably fine for what you're doing, and probably pretty similar to what your upscaling program does with a YV12 source. If your chroma is already very sharp you may want to use bilinear resampling rather than the default bicubic. If your chroma is fairly blurry you might use a sharper resampler like Spline36Resize.
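In script form, the matrix and the chroma resampler can be set explicitly via ConvertToRGB's named parameters (as I understand the AviSynth docs; "episode.d2v" is a placeholder):

```
Mpeg2Source("episode.d2v")
# Default behaviour is roughly: ConvertToRGB24(matrix="Rec601", chromaresample="bicubic")
ConvertToRGB24(matrix="Rec601", chromaresample="spline36")   # sharper chroma upsampling
# ...or chromaresample="bilinear" if the chroma is already very sharp
```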
Thanks. I don't recall receiving any answers in that thread on how, specifically, to switch from YUV to RGB. Just "it has to happen at some point because that's how your screen displays it" and "Video Enhance AI can do it while upscaling." Maybe I missed something.
The defaults are probably fine for what you're doing.
If your chroma is already very sharp.... If your chroma is fairly blurry...
Thanks for the reply!
You can view the U channel as greyscale with UtoY(), the V channel with VtoY().
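For example, to eyeball how sharp or blurry the chroma is before deciding on a resampler ("episode.d2v" is a placeholder):

```
Mpeg2Source("episode.d2v")
UToY()   # view the U plane as greyscale; swap in VToY() for the V plane
```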
The DVD has a fair bit of "noise" in the background already and I had planned to smooth it a bit before upscaling. It sounds like I shouldn't be sharpening the chroma unless it's very blurry...
Out of curiosity, how would it affect the conversion if I specified "interlaced=true"? DGIndex says the material is 91.15% film, but there are orphaned fields at the ends of some shots. I was debating bobbing them into full frames, and am curious if converting the footage as "interlaced" would help or hurt things.
Sharpening also enhances noise. So you want to avoid sharpening. Or at least reduce the noise first.
If your source is only 91 percent film, you should use Honor Pulldown Flags mode in DGIndex, then use TFM().TDecimate() in AviSynth.
Mpeg2Source("filename.d2v")
TFM(d2v="filename.d2v")
TDecimate()
Does that method work on material with broken cadences? This is a TV show from 2000; if I enable Honor Pulldown Flags I'll get different cadences in every single shot.
And what does TDecimate do with orphaned fields?
Yes, it does; it's adaptive.
Be clear: is it just broken cadences (such as edits made while the material was still interlaced, before broadcast), or is there variable frame rate material, such as 23.976, 29.97, and 59.94 sequences? A mix of film and video?
TFM applies post processing by default. It will deinterlace based on combing thresholds; it only deinterlaces when it detects combing. You can adjust the detection thresholds, or disable post processing completely with pp=0. Many people choose to replace the default TFM deinterlacer (it's similar to a bob; you get jaggies) with a higher-quality deinterlacer like QTGMC, via the clip2 parameter.
You really want to reduce anything that sharpens noise or artifacts; that's detrimental for almost all types of "AI" scaling. So I would reduce the default QTGMC sharpness right off the bat.
If it's an orphaned field, and you have TFM post processing enabled, it will now be a deinterlaced frame. If it belongs to part of the film cadence, TDecimate will try to keep it. If it's extraneous, it will drop it.
Try it out and preview the results, then tweak the script. But 91% film is too low for Force Film.
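Putting those pieces together, a sketch (not a tested script) might look like the following. "episode.d2v" is a placeholder; FPSDivisor=2 keeps QTGMC at single rate so its frame count matches what clip2 expects, and Sharpness is lowered per the advice above:

```
Mpeg2Source("episode.d2v")                        # load the DGIndex project
deint = QTGMC(Preset="Medium", FPSDivisor=2, Sharpness=0.2)  # single-rate, soft
TFM(d2v="episode.d2v", clip2=deint)               # field match; use deint where combed
TDecimate()                                       # drop duplicates -> 23.976 fps
```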
Hi again poisondeathray! As far as I know, the only material natively at 29.97 is the end credits, which I don't care about right now. (It's possible that VFX were generated at 29.97 too, but I haven't looked into it yet.)
For my current purposes I'm just grabbing 5-10 second clips from the episodes, and can skip problematic sections -- so I figured the "Force Film" or "Ignore Pulldown Flags" settings would be a quick way for me to bypass having to learn to deinterlace. Eventually I plan to work with full episodes, but clearly I'm still learning the very basics here. Deinterlacing seems like a bigger topic, for a later day, with a stiffer drink...
Thanks for the background on how TFM works. Hey, here's an odd question: is there a way to take a 29.97i video like this, and change it into 59.94p? Basically making each field into a full frame?
I could SeparateFields, then Bob each one -- but the software would just guess at each field's missing lines and overall image quality would drop. Is there a bobber that looks at the previous and next field, sees if either matches the field it's bobbing, and if so it copies that info over -- resulting in no quality loss? (Essentially, deinterlacing but without deleting the duplicate frames?) And only if there are no matching adjacent fields, would it interpolate the missing lines.
This is a separate thought from any upscaling project. Just curious if such a thing exists!
QTGMC already mentioned by pdr.
Just curious if such a thing exists!
A distant second as a double-rate deinterlacer is Yadif(mode=1). Its one benefit over QTGMC is speed (QTGMC is very slow). I often use it as a placeholder while working on scripts, then switch to QTGMC when it's time for the final render.
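For the 29.97i to 59.94p question, a double-rate deinterlace is essentially one line either way (sketch; "episode.d2v" is a placeholder, and both filters double the frame rate in these modes):

```
Mpeg2Source("episode.d2v")
# QTGMC(Preset="Slower")   # high quality, motion-compensated, but slow
Yadif(mode=1)              # fast placeholder while developing the script
```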
Huh, cool! Thanks! I'll file QTGMC away for when I get to that stage in my projects.
(Boy, that is one LONG wiki page to chew through...is there a "QTGMC for Dummies" page out there?)
The defaults are pretty good for most video.
I haven't tried it beyond installation and looking at the GUI, but Lordsmurf says that you can avoid the headaches of installing Avisynth & QTGMC yourself if you use Hybrid by Selur. It also offers Vapoursynth.
This guy has video tutorials on setting up QTGMC if you want to do it the hard way: http://macilatthefront.blogspot.com/2018/12/using-vapoursynth-for-qtgmc-round-one.html?m=1