VideoHelp Forum

Thread
  1. I'm trying to convert DVD footage from YV12 to RGB color in the "best" way possible -- meaning, most accurate/least information loss. I know this has been discussed a zillion times, but the more I read on the subject the less I understand. I'm hoping someone can provide a nice simple "type this, click that" dummy-level answer for my specific situation.

    My source is a DVD from 2002 or so. It's a TV show, standard definition. It's soft-telecined, encoded as frames with pulldown flags. (I've used DGIndex with "Ignore Pulldown Flags" selected to bypass deinterlacing and get straight to the progressive footage. This may not be relevant to my question.)

    What's the current best way to convert this footage to RGB? Should I use an AviSynth script with the command "ConvertToRGB"? Should I use a different command? Should I use different software?

    (I assume I should use the Rec.601 conversion matrix, and that "ConvertToRGB" uses that by default. Is that right?)

    Right now this is my script:

    MPEG2Source("C:\index.d2v")
    ConvertToRGB()

    Is it as simple as that -- or is there a better method I should be using?

    Thanks in advance!
  2. Convert to RGB for what purpose?
  3. I'm not sure yet. There are a lot of things I want to try to do with the footage. Does that affect the answer?
  4. Member Cornucopia's Avatar
    Join Date
    Oct 2001
    Location
    Deep in the Heart of Texas
    Search PM
    It does, because unless your process explicitly requires the RGB color model, it is best to leave the video in YUV to avoid quality loss during the conversion, and to avoid the recompression-versus-storage quality trade-off that follows.

    Many processes can operate directly in YUV space.
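
    For instance, a minimal sketch (FFT3DFilter is a third-party denoiser, named here only as an example of a filter that accepts YV12 directly; the d2v path is the placeholder from the first post):

    Code:
    MPEG2Source("C:\index.d2v")
    # denoise and resize directly in YV12 -- no round-trip through RGB needed
    FFT3DFilter(sigma=1.5)
    Spline36Resize(640, 480)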

    Good rule of thumb: leave things alone as long as possible, then make as little change as necessary to achieve the desired outcome, in order to do as little further damage as possible.

    Scott
  5. That's a great rule of thumb! However, my process explicitly requires the RGB color model.

    I always hate giving extra background because then the conversation expands out into critiques of my plan, etc., instead of just providing an answer to the specific question asked.

    However, in this case: I am playing around with an upscaling program (Video Enhance AI). I want to feed it two clips from this DVD that are identical in every way, except that one was left in the native YUV color space and one was converted to RGB at the start. I want to compare the differences.

    In order to do this...I need to convert the footage to RGB.

    Could someone please recommend the current best way to do so?
  6. Then you should follow the input rules for that program. Does it allow floating-point RGB? 8-bit? 16-bit? Then ask whether AviSynth can produce that, or whether your particular intermediate can be encoded that way if you're not using AviSynth as the input.

    Check what is used as input bitdepth for that AI.
  7. The answers are the same as your other thread.

    https://forum.videohelp.com/threads/396408-Losslessly-exporting-frames-from-a-VOB-am-I...doing-it-right

    For DVD YV12 to RGB in AviSynth you should read up on chroma placement and chroma resampling.

    http://avisynth.nl/index.php/ConvertToRGB#Chroma_placement

    The defaults are probably fine for what you're doing, and probably pretty similar to what your upscaling program does with a YV12 source. If your chroma is already very sharp you may want to use bilinear resampling rather than the default bicubic. If your chroma is fairly blurry you might use a sharper resampler like Spline36Resize.
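
    For example, a sketch using the parameters from the wiki page above (the d2v path is the placeholder from the first post):

    Code:
    MPEG2Source("C:\index.d2v")
    # sharper chroma upsampling for blurry chroma; use "bilinear" if the chroma is already sharp
    ConvertToRGB(matrix="Rec601", chromaresample="spline36")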
  8. Thanks. I don't recall receiving any answers in that thread on how, specifically, to switch from YUV to RGB. Just "it has to happen at some point because that's how your screen displays it" and "Video Enhance AI can do it while upscaling." Maybe I missed something.

    The defaults are probably fine for what you're doing.
    This is the kind of basic answer that a dummy like me is looking for.

    Am I right in assuming that, if I just type "ConvertToRGB" and nothing else, all these defaults will kick in? (Matrix=Rec601, interlaced=false, ChromaInPlacement=MPEG2, chromaresample=bicubic ?) Or do I need to manually type in the properties of each of them for the script to do anything?

    If your chroma is already very sharp.... If your chroma is fairly blurry...
    How would I look at the chroma to determine this?

    Thanks for the reply!
  9. Originally Posted by bentley View Post
    Am I right in assuming that, if I just type "ConvertToRGB" and nothing else, all these defaults will kick in? (Matrix=Rec601, interlaced=false, ChromaInPlacement=MPEG2, chromaresample=bicubic ?)
    Yes, those are all the defaults.
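
    That is, a bare ConvertToRGB() should be equivalent to spelling every default out (a sketch, per the defaults on the AviSynth wiki):

    Code:
    MPEG2Source("C:\index.d2v")
    # identical to plain ConvertToRGB()
    ConvertToRGB(matrix="Rec601", interlaced=false, ChromaInPlacement="MPEG2", chromaresample="bicubic")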

    Originally Posted by bentley View Post
    If your chroma is already very sharp.... If your chroma is fairly blurry...
    How would I look at the chroma to determine this?
    You can view the U channel as greyscale with UtoY(), the V channel with VtoY(). I like to use:
    Code:
    StackHorizontal(UtoY(), VtoY())
    Keep in mind that bilinear blurs a bit, bicubic sharpens a bit, Spline16/36/64 sharpen more (in that order), and Lanczos sharpens even more. All the sharpening resizers will create oversharpening halos if the source is already sharp.
  10. You can view the U channel as greyscale with UtoY(), the V channel with VtoY().
    Thanks, I will try that!

    The DVD has a fair bit of "noise" in the background already and I had planned to smooth it a bit before upscaling. It sounds like I shouldn't be sharpening the chroma unless it's very blurry...

    Out of curiosity, how would it affect the conversion if I specified "interlaced=true"? DGIndex says the material is 91.15% film, but there are orphaned fields at the ends of some shots. I was debating bobbing them into full frames, and am curious if converting the footage as "interlaced" would help or hurt things.
  11. Sharpening also enhances noise. So you want to avoid sharpening. Or at least reduce the noise first.

    If your source is only 91 percent film, you should use Honor Pulldown Flags mode in DGIndex, then use TFM().TDecimate() in AviSynth.

    Code:
    Mpeg2Source("filename.d2v")
    TFM(d2v="filename.d2v")
    TDecimate()
  12. Does that method work on material with broken cadences? This is a TV show from 2000; if I enable Honor Pulldown Flags I'll get different cadences in every single shot.

    And what does TDecimate do with orphaned fields?
  13. Originally Posted by bentley View Post
    Does that method work on material with broken cadences? This is a TV show from 2000; if I enable Honor Pulldown Flags I'll get different cadences in every single shot.

    And what does TDecimate do with orphaned fields?

    Yes, it does; it's adaptive.

    Be clear: is it just broken cadences (such as edits made while still interlaced before broadcast), or is there variable frame rate material, such as 23.976, 29.97, and 59.94 sequences? A mix of film and video?

    TFM applies post-processing by default: it deinterlaces based on combing thresholds, and only when it detects combing. You can adjust the detection thresholds, or disable post-processing completely with pp=0. Many people choose to replace the default TFM deinterlacer (it's similar to a bob; you get jaggies) with a higher quality deinterlacer like QTGMC via the clip2 parameter.

    Code:
    TFM(clip2=QTGMC(sharpness=0.5).SelectEven())

    You really want to avoid anything that sharpens noise or artifacts. That's detrimental to almost all types of "AI" scaling. So I would reduce the default QTGMC sharpness right off the bat.

    If it's an orphaned field and you have TFM post-processing enabled, it will now be a deinterlaced frame. If it belongs to the film cadence, TDecimate will try to keep it; if it's extraneous, it will drop it.

    Try it out and preview the results, then tweak the script. But 91% film is too low.
  14. Hi again poisondeathray! As far as I know, the only material natively at 29.97 is the end credits, which I don't care about right now. (It's possible that VFX were generated at 29.97 too, but I haven't looked into it yet.)

    For my current purposes I'm just grabbing 5-10 second clips from the episodes, and can skip problematic sections -- so I figured the "Force Film" or "Ignore Pulldown Flags" settings would be a quick way for me to bypass having to learn to deinterlace. Eventually I plan to work with full episodes, but clearly I'm still learning the very basics here. Deinterlacing seems like a bigger topic, for a later day, with a stiffer drink...

    Thanks for the background on how TFM works. Hey, here's an odd question: is there a way to take a 29.97i video like this, and change it into 59.94p? Basically making each field into a full frame?

    I could SeparateFields, then Bob each one -- but the software would just guess at each field's missing lines and overall image quality would drop. Is there a bobber that looks at the previous and next field, sees if either matches the field it's bobbing, and if so it copies that info over -- resulting in no quality loss? (Essentially, deinterlacing but without deleting the duplicate frames?) And only if there are no matching adjacent fields, would it interpolate the missing lines.

    This is a separate thought from any upscaling project. Just curious if such a thing exists!
    Last edited by bentley; 30th Mar 2020 at 01:19.
  15. Originally Posted by bentley View Post
    is there a way to take a 29.97i video like this, and change it into 59.94p? Basically making each field into a full frame?
    That's called a double-rate deinterlacer or, more commonly, a bobber. Perhaps the best bobber is the QTGMC already mentioned by pdr.

    Just curious if such a thing exists!
    Yes. Some create their frames solely from a field. Some create their frames from the field and its partner field (if the frame is mainly static). Some also use nearby fields to generate a new frame. But you can forget all that as QTGMC is generally considered to be the single best bobber in the AviSynth world.
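
    A minimal sketch (assuming a top-field-first source; check the field order of your DVD first):

    Code:
    MPEG2Source("C:\index.d2v")
    AssumeTFF()
    # QTGMC outputs double rate by default: 29.97i in, 59.94p out
    QTGMC(Preset="Slower")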
    Last edited by manono; 30th Mar 2020 at 13:45.
  16. A distant second double rate deinterlacer is Yadif(mode=1). Its one benefit over QTGMC is that it's fast (QTGMC is very slow). I often use it as a placeholder while working on scripts, then change to QTGMC when it's time for the final render.
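
    For example, a sketch of that workflow:

    Code:
    MPEG2Source("C:\index.d2v")
    AssumeTFF()
    Yadif(mode=1)  # fast double-rate placeholder: 29.97i -> 59.94p
    # for the final render, replace the line above with:
    # QTGMC()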
  17. Huh, cool! Thanks! I'll file QTGMC away for when I get to that stage in my projects.

    (Boy, that is one LONG wiki page to chew through...is there a "QTGMC for Dummies" page out there?)
  18. The defaults are pretty good for most video.

    Code:
    WhateverSource()
    QTGMC()
    The biggest problem is getting it set up. It relies on several other AviSynth filters that you need to download and install too.
  19. Formerly 'vaporeon800' Brad's Avatar
    Join Date
    Apr 2001
    Location
    Vancouver, Canada
    Search PM
    I haven't tried it beyond installation and looking at the GUI, but Lordsmurf says that you can avoid the headaches of installing Avisynth & QTGMC yourself if you use Hybrid by Selur. It also offers Vapoursynth.

    This guy has video tutorials on setting up QTGMC if you want to do it the hard way: http://macilatthefront.blogspot.com/2018/12/using-vapoursynth-for-qtgmc-round-one.html?m=1