VideoHelp Forum
  1. Originally Posted by lordsmurf View Post
    Originally Posted by celsoac View Post
    My humble opinion: JES Deinterlacer is better than yadif
    There's no way that blended deinterlace is better than Yadif. Also realize that Yadif has many options, and has modded variants as well. It can be almost as good as QTGMC, or almost as bad as drop-field, depending on what was done.
    Have you had your eyes checked lately? Did you look at those samples that were uploaded? That's what I've compared. I have the bad habit of basing my opinions on empirical evidence, not dogma.

    Now, if someone uploads another, better sample with yadif to prove that it's good, I may change my mind, too, as I have done in this thread.
  2. I attempted to install Hybrid but I believe Vapoursynth requires one OS version newer than I currently have installed on my Mac so for now I’m going to stick with yadif, and keep source files in case I ever want to redo.

    I had been using handbrake to do my final conversion to mp4 using h.264 but I just realized I can use Avidemux provided my DV files are .mov container and not .dv, which some were depending on the software I used to capture.

    It seems Avidemux has a few more options than Handbrake, like adding a black border to cover overscan noise at the bottom (vs. actually cropping), as well as other filters that may be helpful, like sharpen or denoise - without experimenting yet, I'm not sure.

    Before I go down the path of learning Avidemux under the hood, is there any reason I wouldn't want to use Avidemux for my final output to mp4 compared with Handbrake or some other program? I read something about Handbrake output files being more compatible, but if both are using h264 I don't understand how that's possible.

    Thanks!
  3. Originally Posted by celsoac View Post
    Originally Posted by lordsmurf View Post
    Originally Posted by celsoac View Post
    My humble opinion: JES Deinterlacer is better than yadif
    There's no way that blended deinterlace is better than Yadif. Also realize that Yadif has many options, and has modded variants as well. It can be almost as good as QTGMC, or almost as bad as drop-field, depending on what was done.
    Have you had your eyes checked lately? Did you look at those samples that were uploaded? That's what I've compared. I have the bad habit of basing my opinions on empirical evidence, not dogma.

    Now, if someone uploads another, better sample with yadif to prove that it's good, I may change my mind, too, as I have done in this thread.
    Once again, that video sample consists of only small motions where blending fields looks a lot like motion blur. Blending fares far worse when motions are large.
  4. Originally Posted by Christina View Post
    I had been using handbrake to do my final conversion to mp4 using h.264
    There's another problem with Handbrake and NTSC DV sources: it doesn't handle the interlaced YUV 4:2:2 to YUV 4:2:0 conversion correctly and it screws up the chroma (colors). It blends the two fields' colors together. This can be seen as color ghosting when colored objects make large movements.
  5. I thought DV was 4:1:1?

    Nonetheless, Avidemux doesn’t have the same problem?

    I also have Final Cut Pro 7 on this computer and that’s where I’m doing any editing, so I also have Apple Compressor as an output option, although the settings seem to be quite limited. I had been saving my edits and exporting another DV file and using a 3rd party transcoder for final output. Avidemux looks more like a mini NLE than a transcoder like Handbrake and for some reason I can’t find too much info on this topic, so I was wondering if generally Avidemux is a tool most used in other scenarios but not as a workhorse for transcoding.
  6. Originally Posted by Christina View Post
    I thought DV was 4:1:1?
    Internally, yes. But most decoders output 4:2:2. In any case, Handbrake definitely creates chroma blending artifacts from NTSC DV.

    Originally Posted by Christina View Post
    Nonetheless, Avidemux doesn’t have the same problem?
    I don't know. I don't really use the program because it has so many crash-and-die bugs.
  7. Originally Posted by Christina View Post
    I thought DV was 4:1:1?

    Nonetheless, Avidemux doesn’t have the same problem?

    I also have Final Cut Pro 7 on this computer and that’s where I’m doing any editing, so I also have Apple Compressor as an output option, although the settings seem to be quite limited. I had been saving my edits and exporting another DV file and using a 3rd party transcoder for final output. Avidemux looks more like a mini NLE than a transcoder like Handbrake and for some reason I can’t find too much info on this topic, so I was wondering if generally Avidemux is a tool most used in other scenarios but not as a workhorse for transcoding.
    May I give my opinion?

    - If the output is MP4 and I'm not going to edit the DV source, just simple cutting and trimming (to the frame, yes, not to milliseconds), and keeping the file interlaced, in my experience MPEGStreamclip is more than enough. It is very flexible for choosing bitrates, etc.
    - If editing and credits are needed, I use FCP, but I output the editing in ProRes (save as Master) and, again, convert it to MP4 with MPEGStreamclip. In my experience, the MP4 generated by FCP is not better, and as far as I know you cannot select bitrate.
    - If only credits are needed (no color editing, etc.), I often use FCP just to generate those credits. I output a ProRes file which then I convert to MP4 with MPEGStreamclip with exactly the same settings as the main file. Then -- in my experience -- cutting and pasting the credits and the main recording with MPEGStreamclip is safe and fast -- no reencoding, ever.
    - If I were to go for deinterlacing, from what I've learned here I would try to do QTGMC, though it is slow. I would like for someone to show that yadif can be equally good, because it's much faster. But with Hybrid I have not been able to produce a good deinterlaced sample with yadif. (I personally don't like Handbrake).
  8. Originally Posted by jagabo View Post
    In any case, Handbrake definitely creates chroma blending artifacts from NTSC DV.
    confirmed

    Originally Posted by Christina View Post
    Nonetheless, Avidemux doesn’t have the same problem?
    yes, confirmed


    Originally Posted by celsoac View Post
    - If I were to go for deinterlacing, from what I've learned here I would try to do QTGMC, though it is slow.
    It is slow, but you can use faster settings and still get about 98% of the quality. The default preset is "slower"; "faster" is typically more than 2x as fast on an average quad core, and "very fast" about 4x as fast. QTGMC has dozens of settings you can tweak, but the presets simplify things, or at least give a good starting point. You can use vsedit (the VapourSynth editor), which has a benchmarking utility, to optimize and test your script: see how fast it runs and preview the quality of its output.

    But do not use the fastest preset, "ultra fast": because yadif is used for parts of its subroutines, you get back the aliasing and jaggies. Personally, I use "faster" for general use.

    VapourSynth threads very efficiently, more so than AviSynth (at least on the PC). If you are in a situation where you combine other filters, it actually becomes significantly faster than the AviSynth version.

    There are also settings you probably should be adjusting. For example, typical DV sources often have oversharpening halos; you might want to adjust the contra-sharpening settings or the halos will be enhanced.

    Look at the AviSynth filter package for the complete QTGMC documentation and settings.


    I would like for someone to show that yadif can be equally good, because it's much faster.
    Quality wise, not possible
  9. Originally Posted by poisondeathray View Post
    Originally Posted by Christina View Post
    Nonetheless, Avidemux doesn’t have the same problem?
    yes, confirmed
    Thanks for the reply, poisondeathray. Can you clarify what you mean exactly? Yes Avidemux DOES have the same problem, or no Avidemux DOES NOT have the same problem?

    The way I worded it, I'm not sure what your "yes" means!
    Sorry, to clarify: it DOES have the same problem. It should be reported to the developers.

    Because many deinterlacers and filters do not work in 4:1:1, there is a conversion to 4:2:0 prior to the filter. That conversion is done in a progressive fashion (the U and V planes are scaled progressively instead of in an interlaced fashion), causing the chroma ghosting artifacts.

    I think Handbrake and Avidemux base most of their code on the libav/ffmpeg libraries. There is a way to scale interlaced material correctly using swscale or zscale, so a fix should be possible.
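    The progressive-vs-interlaced scaling difference described here can be sketched in a few lines of plain Python. This is a toy model of the vertical 2:1 chroma downscale only, not the actual swscale code; the numbers stand in for one column of chroma samples. A progressive average mixes lines from the two fields, while field-aware scaling averages only within each field:

```python
# Toy model of 4:2:2 -> 4:2:0 vertical chroma downscaling (illustrative
# only -- not the real libav/swscale implementation).

def progressive_downscale(chroma_lines):
    """Average each pair of adjacent lines -- this mixes the two fields."""
    return [(chroma_lines[i] + chroma_lines[i + 1]) / 2
            for i in range(0, len(chroma_lines), 2)]

def interlaced_downscale(chroma_lines):
    """Average within each field separately, then re-interleave."""
    top = chroma_lines[0::2]   # top field    (lines 0, 2, 4, ...)
    bot = chroma_lines[1::2]   # bottom field (lines 1, 3, 5, ...)
    top2 = [(top[i] + top[i + 1]) / 2 for i in range(0, len(top), 2)]
    bot2 = [(bot[i] + bot[i + 1]) / 2 for i in range(0, len(bot), 2)]
    out = []
    for t, b in zip(top2, bot2):
        out.extend([t, b])
    return out

# The top field sees a colored object (chroma 200); the bottom field,
# captured a field-period later, sees grey background (chroma 0)
# because the object has moved.
lines = [200, 0, 200, 0, 200, 0, 200, 0]

print(progressive_downscale(lines))  # [100.0, 100.0, 100.0, 100.0] - fields blended
print(interlaced_downscale(lines))   # [200.0, 0.0, 200.0, 0.0]    - fields intact
```

    With motion, the two fields disagree, and the progressive path lands halfway between them -- the washed-out, ghosted chroma being discussed in this thread.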
  11. Here's an example of the bad chroma handling of DV in Handbrake.

    Two frames generated from the two fields of one DV frame (yadif+bob):

    [Attachment 48861]
    [Attachment 48862]

    The same two frames generated by QTGMC in AviSynth:

    [Attachment 48863]
    [Attachment 48864]


    You can see that Handbrake blended the colors from the two fields together. That has caused ghosting of the colors on the grey background, and loss of saturation on the moving ball (because the colors of the ball were blended with the grey background).

    DV source attached. And AviSynth script:

    Code:
    AviSource("dv2.avi") 
    ConvertToYV12(interlaced=true)
    QTGMC()
  12. Originally Posted by celsoac View Post
    May I give my opinion?
    Of course, at your own risk!

    - If the output is MP4 and I'm not going to edit the DV source, just simple cutting and trimming (to the frame, yes, not to milliseconds), and keeping the file interlaced, in my experience MPEGStreamclip is more than enough. It is very flexible for choosing bitrates, etc.
    I would PREFER to keep the file interlaced since my final destination is likely a TV in most cases, but for some reason, the way I'm playing the mp4s off a hard drive, the TV isn't deinterlacing. If I were making DVDs, that would probably be another story. I don't understand enough about modern TVs, progressive displays, interlacing, and so on to know what is going on or what to expect from TVs. Plus, the files may be played on computers too, so I figured deinterlacing was a safer bet and more universal.

    - If editing and credits are needed, I use FCP, but I output the editing in ProRes (save as Master) and, again, convert it to MP4 with MPEGStreamclip. In my experience, the MP4 generated by FCP is not better, and as far as I know you cannot select bitrate.
    If my source is DV in FCP, and I do some edits (for example, I had one file I wanted to tweak audio levels, brighten, add Text to the very beginning to state what the video was, like so-and-so's 50th Birthday 1992, and a fade out at the end), I just use Export QuickTime Movie (NOT Export using QuickTime Conversion) and it just saves the DV stream as DV without re-encoding the whole thing, keeping it in my original format. I don't see any reason why I would convert to ProRes at this stage (unless your source was ProRes, you didn't say).

    - If only credits are needed (no color editing, etc.), I often use FCP just to generate those credits. I output a ProRes file which then I convert to MP4 with MPEGStreamclip with exactly the same settings as the main file. Then -- in my experience -- cutting and pasting the credits and the main recording with MPEGStreamclip is safe and fast -- no reencoding, ever.
    I see where you're going with this - if no editing is needed, I can always generate my title and join the 2 mp4 files together rather than doing it on a timeline. This gets a little more complicated though if I'm inserting several titles throughout the duration of the video, say, if it's comprised of several different events in an hour long video.

    Most of my experimenting so far has just been straight capture and convert with no editing, so I've only just started thinking about titles/credits and so on.

    On the other hand, I like the idea of outputting a DV file with my edits and titles so that later, if conversion techniques improve, it's as simple as converting the whole source file again, rather than fiddling around with splicing and remuxing files.

    - If I were to go for deinterlacing, from what I've learned here I would try to do QTGMC, though it is slow. I would like for someone to show that yadif can be equally good, because it's much faster. But with Hybrid I have not been able to produce a good deinterlaced sample with yadif. (I personally don't like Handbrake).
    I would be totally open to QTGMC at whatever speed, but I already hit one roadblock trying to get it working on my Mac, and I'm not quite sure I'm willing to do what it would take (at this time) to get that working. Hybrid installed just fine, but QTGMC was not in the dropdown list of filters. From what I understand about Hybrid, and that isn't much, it needs a whole slew of dependencies to do what it's capable of doing. So when I went to download Vapoursynth and did some researching on how to do this for the Mac, it led me to the Doom9 Forum (https://forum.doom9.org/showthread.php?t=173453) which says that you should only use this with MacOS 10.11 and newer. I have 10.10 at the moment on the main computer I've been using, and for other reasons, I don't want to upgrade right now.

    From what I understand, Yadif is basically runner up to QTGMC, and I'm ok with using it for my needs right now. I cannot argue with the results shown - QTGMC definitely looks better/clearer, but I'm sure I can tweak Yadif enough to suit my individual needs.

    Regarding Handbrake, it was all I heard about when I began this project, and it's clear it may not be the best tool to use, be it for lack of settings or for messing up chroma. I get that - and am looking for an alternative, which is why I asked about Avidemux, which previously I thought I couldn't use because I was trying to load a raw DV file (.dv) and it said it couldn't find a demuxer. However, I realized my DV files in a .mov container loaded just fine, so now I'm wondering if it's a viable alternate solution. So far no one has really answered that - other than saying it's buggy (I haven't experimented enough to see if it crashes for me). So I'll give it a try, unless someone says "NO DON'T, BECAUSE XYZ!" and tells me something compelling about why I would never want to use Avidemux for what I am doing.

    This post got very long. Thanks to anyone who is still reading, and thanks to everyone trying to help me. I've learned so much here already and I appreciate the support!
  13. Originally Posted by poisondeathray View Post
    Sorry, to clarify: it DOES have the same problem. It should be reported to the developers.

    Because many deinterlacers and filters do not work in 4:1:1, there is a conversion to 4:2:0 prior to the filter. That conversion is done in a progressive fashion (the U and V planes are scaled progressively instead of in an interlaced fashion), causing the chroma ghosting artifacts.

    I think Handbrake and Avidemux base most of their code on the libav/ffmpeg libraries. There is a way to scale interlaced material correctly using swscale or zscale, so a fix should be possible.
    Do you have any other suggestions of programs that don't use this type of conversion? I'm on a Mac
  14. Originally Posted by jagabo View Post
    Here's an example of the bad chroma handling of DV in Handbrake.
    Well damn, that is a great example and makes me want to go throw Handbrake in the trash right now!
  15. Originally Posted by jagabo View Post
    Here's an example of the bad chroma handling of DV in Handbrake.

    Two frames generated from the two fields of one DV frame (yadif+bob):

    [Attachment 48861]
    [Attachment 48862]

    The same two frames generated by QTGMC in AviSynth:

    [Attachment 48863]
    [Attachment 48864]


    You can see that Handbrake blended the colors from the two fields together. That has caused ghosting of the colors on the grey background, and loss of saturation on the moving ball (because the colors of the ball were blended with the grey background).

    DV source attached. And AviSynth script:

    Code:
    AviSource("dv2.avi") 
    ConvertToYV12(interlaced=true)
    QTGMC()
    That's really so interesting, thanks for the samples. Just for comparison, this is the same frame from deinterlacing with JES deinterlacer plus "Remove jaggies" and "Reduce noise". It is jagged, but no color bleeding. Definitely, QTGMC is better.

    [Attachment 48866]
  16. Originally Posted by celsoac View Post
    Just for comparison, this is the same frame from deinterlacing with JES deinterlacer plus "Remove jaggies" and "Reduce noise". It is jagged, but no color bleeding. Definitely, QTGMC is better.
    Why doesn't this have the same chroma problem as Handbrake?

    This is why I don't like freeze-framing a very specific scene - it's great for illustrating a point or showing the strengths/weaknesses of a particular approach, but never gives the full picture.

    I'm 100% sure my home video footage does not have a very clear and super colorful ball bouncing quickly across the screen. Most of it is people sitting around in the backyard or opening Christmas presents. However, in this very specific example, JES Deinterlacer looks better than yadif/bob/handbrake/ffmpeg - but yet I know how everyone [else] feels about JES Deinterlacer!
  17. Originally Posted by Christina View Post

    If my source is DV in FCP, and I do some edits (for example, I had one file I wanted to tweak audio levels, brighten, add Text to the very beginning to state what the video was, like so-and-so's 50th Birthday 1992, and a fade out at the end), I just use Export QuickTime Movie (NOT Export using QuickTime Conversion) and it just saves the DV stream as DV without re-encoding the whole thing, keeping it in my original format. I don't see any reason why I would convert to ProRes at this stage (unless your source was ProRes, you didn't say).
    My version of FCP Pro 10.4.6 does not do that: apparently it works with ProRes as an intermediate codec. If I input a DV file and I output it as "same source" in Export, look how the "same source" is ProRes:

    [Attachment 48867]


    I don't mean to be difficult, but are you sure the codec of that output is DV? Are you sure FCP can edit directly in DV format? That is, if editing is done, in my mind whatever software one uses needs to re-encode, at least the part that was changed. It may be that, if you use DV as output, what FCP is doing is re-encoding it to DV from an intermediate format.

    I would be totally open to QTGMC at whatever speed, but I already hit one roadblock trying to get it working on my Mac, and I'm not quite sure I'm willing to do what it would take (at this time) to get that working. Hybrid installed just fine, but QTGMC was not in the dropdown list of filters. From what I understand about Hybrid, and that isn't much, it needs a whole slew of dependencies to do what it's capable of doing. So when I went to download Vapoursynth and did some researching on how to do this for the Mac, it led me to the Doom9 Forum (https://forum.doom9.org/showthread.php?t=173453) which says that you should only use this with MacOS 10.11 and newer. I have 10.10 at the moment on the main computer I've been using, and for other reasons, I don't want to upgrade right now.
    Again, I don't mean to insist, but 10.10 and 10.11 are practically the same thing. I went from Snow Leopard (which I loved) to Yosemite (which I left immediately) to El Capitan (which I like a lot, and is installed on one Mac). In my experience, El Capitan (10.11) is more solid and faster than 10.10. In fact, I would have stayed on El Capitan -- but I got a new iMac with Mojave preinstalled, which so far is actually a good OS.

    You may also consider installing Linux in a virtual machine (it would run fine in a 10.10 Mac OS) and running a recent version of Hybrid for Linux. Or, for specific purposes, keep your current copy of Yosemite inside an El Capitan virtual machine.

    This post got very long. Thanks to anyone who is still reading, and thanks to everyone trying to help me. I've learned so much here already and I appreciate the support!
    Thank you.
    Last edited by celsoac; 26th Apr 2019 at 12:29.
  18. Originally Posted by Christina View Post
    Originally Posted by celsoac View Post
    Just for comparison, this is the same frame from deinterlacing with JES deinterlacer plus "Remove jaggies" and "Reduce noise". It is jagged, but no color bleeding. Definitely, QTGMC is better.
    Why doesn't this have the same chroma problem as Handbrake?

    This is why I don't like freeze-framing a very specific scene - it's great for illustrating a point or showing the strengths/weaknesses of a particular approach, but never gives the full picture.

    I'm 100% sure my home video footage does not have a very clear and super colorful ball bouncing quickly across the screen. Most of it is people sitting around in the backyard or opening Christmas presents. However, in this very specific example, JES Deinterlacer looks better than yadif/bob/handbrake/ffmpeg - but yet I know how everyone [else] feels about JES Deinterlacer!
    I have no idea why JES Deinterlacer does not bleed colors; I don't know what method it uses. It's a very simple, versatile app - try it. In my (quite long) amateur experience editing videos on a Mac -- which is limited in some ways if you don't have a Mac Pro and lots of hardware, etc. -- I've used many different apps and utilities for specific things. JES Deinterlacer, which I found recently, does some things very well. These are some apps I use: iMovie, FCP Pro, MPEGStreamclip (bad in Mojave!), JES Deinterlacer, Subler, Videospec (for displaying file specifications), Atom Inspector, Avidemux (very little), Lossless Frame Rate Converter, Handbrake (which I don't like, for some reason), and now Hybrid.
  19. Originally Posted by celsoac View Post
    My version of FCP Pro 10.4.6 does not do that: apparently it works with ProRes as an intermediate codec. If I input a DV file and I output it as "same source" in Export, look how the "same source" is ProRes:
    I know there are some pretty big differences between FCP 7 and the current version. In fact it's basically not even the same program anymore. So maybe your version just handles it differently and doesn't have that option anymore.

    I don't mean to be difficult, but are you sure the codec of that output is DV? Are you sure FCP can edit directly in DV format? That is, if editing is done, in my mind whatever software one uses needs to re-encode, at least the part that was changed. It may be that, if you use DV as output, what FCP is doing is re-encoding it to DV from an intermediate format.
    Yes, I am pretty sure! I believe it is called lossless editing and if I'm wrong, someone please correct me. From what I remember, an hour+ video took about 6-7 minutes to output using this method, so I am fairly certain it's not reencoding using an intermediate codec.
  20. Originally Posted by Christina View Post
    Why doesn't this have the same chroma problem as Handbrake?
    Because it's not Handbrake? Maybe JES is deinterlacing while the video is still YUY2 and converting to YV12 after, whereas Handbrake converts to YV12 first (incorrectly) and deinterlaces after.

    Originally Posted by Christina View Post
    This is why I don't like freeze-framing a very specific scene - it's great for illustrating a point or showing the strengths/weaknesses of a particular approach, but never gives the full picture.
    Still frames are good for showing some issues. Full motion video is good for other issues.

    Originally Posted by Christina View Post
    I'm 100% sure my home video footage does not have a very clear and super colorful ball bouncing quickly across the screen. Most of it is people sitting around in the backyard or opening Christmas presents.
    Shaky camera work? Fast panning? People in colorful shirts running around the yard? Christmas presents with colorful wrapping being passed around? You'll have the same problems with all those. Even if it's not especially visible at normal playback speed (note: it will be more visible at 30p than at 60p because each frame is visible for twice as long) why have it in your videos if you can avoid it?

    Originally Posted by Christina View Post
    However, in this very specific example, JES Deinterlacer looks better than yadif/bob/handbrake/ffmpeg - but yet I know how everyone [else] feels about JES Deinterlacer!
    People were talking about JES's blend deinterlacing. What's done in post #255 isn't simply blending. A pure blend deinterlace would look like this:

    [Attachment 48868]
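    A pure blend deinterlace is easy to model in a few lines of plain Python (a toy sketch operating on rows of brightness values, not real video): each output line averages a line with its neighbour from the opposite field, so anything that moved between the two fields becomes a 50/50 ghost.

```python
# Toy model of a pure blend deinterlace. Any motion between the two
# fields produces half-intensity ghosting in the blended output.

def blend_deinterlace(frame):
    """frame: list of rows (lists of pixel values), alternating fields."""
    out = []
    for i in range(len(frame)):
        j = min(i + 1, len(frame) - 1)  # neighbouring line from the other field
        out.append([(a + b) / 2 for a, b in zip(frame[i], frame[j])])
    return out

# A bright object in the top field (rows 0, 2) has moved away by the
# time the bottom field (rows 1, 3) is captured:
frame = [[255, 255], [0, 0], [255, 255], [0, 0]]
print(blend_deinterlace(frame)[0])  # [127.5, 127.5] -- neither field survives intact
```

    Compare with a bob, which keeps each field's values intact at the cost of per-frame vertical resolution.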
  21. Originally Posted by jagabo View Post
    Originally Posted by Christina View Post
    I'm 100% sure my home video footage does not have a very clear and super colorful ball bouncing quickly across the screen. Most of it is people sitting around in the backyard or opening Christmas presents.
    Shaky camera work? Fast panning? People in colorful shirts running around the yard? Christmas presents with colorful wrapping being passed around? You'll have the same problems with all those. Even if it's not especially visible at normal playback speed (note: it will be more visible at 30p than at 60p because each frame is visible for twice as long) why have it in your videos if you can avoid it?
    100%. Not saying you are wrong - just trying to say that clip is an extreme example. The ball makes 4 rotations in less than a second and is perfect for illustrating the chroma issue, but like you said, the issue may not be very apparent during normal playback, especially in more common scenarios. I actually appreciate the extreme examples to illustrate something because it's so much easier to understand when you actually see it vs read about it.
  22. Originally Posted by jagabo View Post
    Originally Posted by Christina View Post
    However, in this very specific example, JES Deinterlacer looks better than yadif/bob/handbrake/ffmpeg - but yet I know how everyone [else] feels about JES Deinterlacer!
    People were talking about JES's blend deinterlacing. What's done in post #255 isn't simply blending. A pure blend deinterlace would look like this:

    [Attachment 48868]
    Yes, absolutely. When I (hundreds of posts ago) said that JES Deinterlacer's blending looked better than other options, it was because (a) in scenes with very little movement it is quite good, and (b) when there is fast movement, interlaced output looks horrible. This is a sample from the same recording from TV:
    [Attachment 48869]


    So, what to do? In my previous experience, deinterlacing by dropping one field per frame (whether keeping only one field or doubling the framerate) diminishes quality. At least blending seemed better in no-motion scenes, and it looks similar to motion blur in others. Now that I've learned here, I see that blending is generally wrong for interlaced material too (there is nothing worse than poorly done telecine, with lots of frame blending in 24fps => NTSC conversions, etc.). But in general I don't think that doubling framerate is a good option, either. Now, QTGMC is quite amazing compared to other methods, because it seems to treat each field as a separate full frame calculated from the contiguous ones, almost as if the original source were 59.94p or 50p instead of interlaced.

    In other words: no, blending in general is not a good idea, you are right. I've been convinced of that.

    Here are some samples (some superfluous), for the sake of it.
  23. Originally Posted by Christina View Post
    Originally Posted by celsoac View Post
    My version of FCP Pro 10.4.6 does not do that: apparently it works with ProRes as an intermediate codec. If I input a DV file and I output it as "same source" in Export, look how the "same source" is ProRes:
    I know there are some pretty big differences between FCP 7 and the current version. In fact it's basically not even the same program anymore. So maybe your version just handles it differently and doesn't have that option anymore.

    ...
    Yes, I am pretty sure! I believe it is called lossless editing and if I'm wrong, someone please correct me. From what I remember, an hour+ video took about 6-7 minutes to output using this method, so I am fairly certain it's not reencoding using an intermediate codec.
    Christina, thank you for the information, you're right. I didn't know that. I've been looking around, and, yes, this web page says:

    https://larryjordan.com/articles/picking-the-right-version-of-prores/

    "My recommendation is that if you are shooting HDV, XDCAM HD, XDCAM EX, or DVCPRO HD, transcode into ProRes 422. If you are shooting R3D, HDCAM, HDCAM SR, or 2k formats, transcode into ProRes 422 HQ. While ProRes can also be used for SD projects, my suggestion is to work with the native codec, such as DV, rather than transcode into ProRes."

    So, FCP 7 transcoded other codecs, but not DV. I believe that FCP X transcodes everything into ProRes 422.
    Unfortunately, I never had FCP 7. When I bought it, it was already FCP X. And, of course, Apple no longer provides it.
  24. Originally Posted by celsoac View Post
    But in general I don't think that doubling framerate is a good option, either.
    Why?

    If you're in the context of deinterlacing, it's the only choice that makes sense.

    Perhaps not if you were restricted in some way (e.g. YouTube circa 2005), or had bandwidth limitations, or dimension restrictions (e.g. 1080p59.94 or 1080p50 isn't supported by some older decoder hardware chipsets because it's Level 4.2, but lower resolutions like SD, 720p50, or 720p59.94 are supported).


    Now, QTGMC is quite amazing compared to other methods, because it seems to treat each field as a separate full frame calculated from the contiguous ones, almost as if the original source were 59.94p or 50p instead of interlaced.
    If you're deinterlacing, double rate is the whole point of interlace's existence in the first place. That's why you're recording as fields. If bandwidth, compression weren't issues back in the stone age , when engineers were developing interlace, everything would be 50p in 50Hz regions, or 59.94p in 59.94 Hz regions . There would be no reason for interlace . ie. A camera recording a 50p signal is only "demoted" to interlaced because it can't record 50p due to bandwidth constraints . It throws away half the spatial resolution (each field is essentially 1/2 a frame) as a trade off for temporal resolution (50 or 59.94 samples / second). 50 full frames per second will take more bandwidth to store properly than 50 half frames (ie. fields). If you didn't need that temporal resolution, you'd be recording 29.97p or 25p in the first place . Full progressive spatial frame quality instead of "half" during motion

    If you hook up a DV camera directly to a TV and play it back - you get 50p or 59.94p - ie. that's the way interlace is supposed to be viewed.

    QTGMC does a better job at emulating that original 50p or 59.94p signal than anything else, even very expensive displays with hardware adaptive deinterlacing. The main difference is the flickering lines, known as "bob flicker". Those residual jaggies and dotted lines in every still image posted (except QTGMC's) will flicker when you view them in motion. QTGMC's precursor, TGMC, existed in the first place to calm bob flicker.

    The negatives are the slow processing time, and it tends to denoise too much in some scenarios. It's more noticeable on higher-quality HD interlaced source material than the ones posted here. Fine texture details are lost. It has a "lossless mode", but then you tend to lose the "calmness" and you tend to get back some artifacts. But on lower-quality material, that denoising can actually be beneficial.
    Quote Quote  
  25. Originally Posted by celsoac View Post
    So, what to do?
    Doubling the frame rate is the only correct thing to do. This is what your TV is doing anyway. Interlaced footage is basically exactly that, except that the field frames have half the vertical resolution.

    Take a half-frame field (the even lines) and make one full frame out of it; take the odd lines (which are 1/59.94 second behind the even lines for BFF) and make another full frame out of them as well - thus bob deinterlace = doubling the frame rate, but temporal resolution is the same as the original. And doing it in a sophisticated way: slightly denoising, and producing smooth motion vectors and smooth edges. This means the encoded file is smaller than attempting to encode the original footage as interlaced.

    You have to bob deinterlace 30000/1001i to 60000/1001p, there is no other way about it. Keep the temporal resolution while making it progressive, handling noise and motion vectors with temporal processing (considering pixels before and after).
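    The bob described above can be sketched in a few lines of toy Python (hypothetical frames as lists of rows; a real deinterlacer like QTGMC interpolates and motion-compensates instead of just line-doubling):

```python
# Minimal bob deinterlace sketch on toy frames (lists of rows).
# Each interlaced frame holds two fields sampled a field-period apart;
# bob turns each field into a full frame by duplicating its lines,
# which doubles the frame rate while keeping the temporal resolution.

def bob(frames, tff=True):
    out = []
    first, second = (0, 1) if tff else (1, 0)
    for frame in frames:
        for parity in (first, second):
            field = frame[parity::2]                       # half-height field
            full = [row for row in field for _ in (0, 1)]  # line-double it
            out.append(full)
    return out

frame = [[10], [20], [30], [40]]   # 4 toy rows; top field = rows 0 and 2
result = bob([frame])
print(len(result))   # 2 output frames from 1 input frame -> double rate
print(result[0])     # [[10], [10], [30], [30]]
```

    One interlaced frame in, two progressive frames out: exactly the 30000/1001i to 60000/1001p relationship, just with line duplication standing in for real interpolation.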

    This is what QTGMC does, that is why it is so slow.

    Or just leave it interlaced.
    Quote Quote  
    Thanks to poisondeathray and _AI_ for the explanations about interlacing, doubling framerate, etc., which I basically knew and understood. What I did not know is that what a digital TV does is double the framerate anyway. I thought that it would keep the fps but create the missing lines by interpolation, not just duplicate lines. What throws me off about doubling framerate for digital displays is that a PAL interlaced recording, for example, only has 288 lines per field! That's very little, like old MPEG1 had (which was 352x288, 25p or 30p, period). In analogue TV, showing the interlaced lines successively (even, odd, even, odd) gives the eye the impression of 576p. For some reason (maybe having grown up with analogue TV), that gives me the feeling that it's less "cheating" than showing duplicated pixels twice as fast.

    When the VHS input is faulty, wiggly, then turning each field into a full frame makes the wiggling very noticeable if you go frame by frame. What I found different and really good about QTGMC is that (I think) it generates the missing lines not by duplicating them but by (very good) interpolation, isn't it? I don't know if other deinterlacing methods do the same (smoothing edges, etc.), but I was never satisfied.

    Another thing to take into account is the type of source material. Duplicating frames beyond, say, 30p (actually 24p, with repeated frames) for film is a no-no in my humble opinion. "Smart" TVs that double refresh rates up to 120Hz or more probably do something similar to QTGMC, right? Except that they do it with full frames (in FCP, this is called "optical flow"). Watching a movie at 50fps or 60fps looks horribly like home video. I know we're talking interlacing, and this is a different issue, but not so much. Christina said somewhere that sometimes double-framerate video with movement, though very smooth, looks somewhat unreal.
    Quote Quote  
  27. Originally Posted by celsoac View Post
    What I did not know is that what a digital TV does is double the framerate anyway.
    For pure interlaced content, yes. 25 fields per second displays as 50p (or 29.97 fields/s interlaced displays as 59.94p). It's deinterlaced by your display

    And most displays only do something similar to a bob. If you pause the picture you can see it's just a resized field, hence the jaggy buzzing line artifacts. On higher-end displays, you have additional processing that fixes some artifacts, similar to QTGMC.


    When the VHS input is faulty, wiggly, then turning each field into a full frame makes the wiggling very noticeably if you go frame by frame. What I found different and really good about QTGMC is that (I think) it generates the missing lines not by duplicating them but by (very good) interpolation, isn't it? I don't know if other deinterlacing methods do the same (smoothing edges, etc.), but I was never satisfied.
    Yes, and that was mentioned earlier: your blend actually helps reduce some of those issues. But sometimes the pattern of the wiggle isn't perfectly distributed at an even/odd field frequency like your example. Sometimes the "wiggle" moves in the same vector direction over a few fields, and dropping half the frames would make it even worse, as a larger jump. You need some timebase correction; not much can be done successfully in software for that specific issue

    There are other deinterlacers that can get rid of most of the jaggies better than JES or yadif, at least in single-frame comparisons. eg. TDeint, or YadifMod + nnedi3 or eedi3 used for edeint. But in motion there is still flicker, none of the "calmness" that QTGMC exhibits; mostly because they work intra-field, on individual fields, whereas QTGMC also looks at adjacent fields (the T in QTGMC is for Temporal) and smooths everything over (including smoothing over some detail and noise)
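    The difference between plain line-doubling and synthesizing the missing lines can be illustrated with a crude toy sketch. This is just a naive average of the lines above and below; nnedi3/eedi3 are edge-directed and far smarter, but it shows why interpolation beats duplication on diagonals:

```python
# Crude intra-field interpolation sketch: instead of duplicating each
# field line (plain bob), synthesize each missing line as the average of
# the field lines above and below it. Toy data, not a real deinterlacer.

def interpolate_field(field):
    out = []
    for i, row in enumerate(field):
        out.append(row)
        # next field line, or repeat the last one at the bottom edge
        nxt = field[i + 1] if i + 1 < len(field) else row
        out.append([(a + b) // 2 for a, b in zip(row, nxt)])
    return out

field = [[0, 100], [100, 0]]       # a tiny diagonal edge, 2 field lines
print(interpolate_field(field))
# [[0, 100], [50, 50], [100, 0], [100, 0]] -> the synthesized middle line
# ramps across the diagonal instead of producing a hard jaggy step
```

    Line duplication would have given [[0, 100], [0, 100], [100, 0], [100, 0]]: a staircase. That staircase alternating between fields is exactly the "bob flicker" discussed above.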

    And another QTGMC negative is motion blending artifacts in some very rare situations. So it's not perfect by any means (interlace isn't "perfect" in the first place), but it's still the "best" overall for general use situations



    Another thing to take into account is the type of source material. Duplicating frames beyond, say, 30p (actually 24p, with repeated frames) for film is a no-no in my humble opinion.
    Yes, but that's not "deinterlacing" anymore. That's pulldown removal, or field matching and decimation

    "Smart" TVs that double refresh rates up to 120Hz or more probably do something similar to QTGMC, right?, except that they do it with full frames (in FCP, this is called "optical flow").
    Not necessarily. Some have pure frame duplication modes and the ability to disable the "soap opera" effect

    The ones that interpolate "fake" frames use optical flow methods . They synthesize inbetween frames. The only thing in common would be using motion vectors

    Not quite the same as QTGMC, which is a deinterlacer. For interlaced material, each field actually represents a real moment in time. Nothing fake. If you were to separate the fields and view them individually you would see this. Each field is taken from a unique moment in time and is authentic. It literally is 50 or 59.94 half frames per second



    Watching a movie at 50fps or 60fps horribly looks like home video. I know we're talking interlacing, and this is a different issue, but not so much. Christina said somewhere that sometimes double-framerate video with movement, though very smooth, looks somewhat unreal.
    Yes, but a movie isn't interlaced content - completely different issue. You don't deinterlace progressive content. You field match and decimate to recover the original progressive frames. It's just "stored" with repeat fields

    For deinterlacing to double rate - I guess it depends on your perspective, but hook up the DV camera directly and you see the same smoothness. Watch VHS directly, or an interlaced DVD directly. Same smoothness. If that's "unreal" then unreal is "normal"
    Quote Quote  
  28. Telecined film is best inverse telecined back to 24p. Better TVs can do that. If they're displaying at 120 Hz, they can display every 24p film frame 5 times. That's still jerky like watching movies in a theater but it doesn't have the judder you get by alternating between 2 times and 3 times to get 60 Hz on a 60p display. Optical flow is a further option on some TVs.
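    The cadence arithmetic above can be checked with a toy calculation. Note this only counts how many repeats each film frame gets; a real 60 Hz display orders them as 3,2,3,2,...:

```python
# Why 120 Hz avoids 3:2 judder: 24 divides 120 evenly (every film frame
# shown exactly 5 times), while 60 Hz forces an uneven mix of 3s and 2s.
# Toy arithmetic only; real displays interleave the cadence as 3,2,3,2,...

def repeats(display_hz, film_fps=24):
    base, extra = divmod(display_hz, film_fps)
    # 'extra' frames need one additional repeat each -> uneven if extra > 0
    return [base + (1 if i < extra else 0) for i in range(film_fps)]

print(set(repeats(120)))   # {5}    -> perfectly even, no judder
print(set(repeats(60)))    # {2, 3} -> mixed 3:2 cadence, visible judder
```

    Same reason 25fps PAL film transfers don't judder: 25 divides 50 and 100 evenly too.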
    Quote Quote  
  29. Originally Posted by poisondeathray View Post
    And another QTGMC negative is motion blending artifacts in some very rare situations. So it's not perfect by any means (interlace isn't "perfect" in the first place), but it's still the "best" overall for general use situations

    "Smart" TVs that double refresh rates up to 120Hz or more probably do something similar to QTGMC, right?, except that they do it with full frames (in FCP, this is called "optical flow").
    Not necessarily. Some have pure frame duplication modes and the ability to disable the "soap opera" effect

    The ones that interpolate "fake" frames use optical flow methods . They synthesize inbetween frames. The only thing in common would be using motion vectors
    I understand, but I still think that in that sense optical flow is not very different from QTGMC, except that QTGMC computes in-between fields, which are half-height frames, not only in reference to the field (top or bottom) itself, but also in reference to other real fields (half-height frames) displaced one pixel up (or down). So there is less guessing than in optical flow, which is full of motion artifacts.

    What you say about the sense of "reality" or "fakeness" in video may be true, that is, that the 60fps "smooth" effect is already present in 30i video due to how human vision works (each field persists on the retina until the next field is displayed, so the effect is "progressive"). But for some reason synthesizing those intermediate fields levels out some blur and jerkiness that may be necessary for the perception that what is being watched is artificial media, not "real" vision. This perception of "unnaturalness" may be totally subjective. But also -- and this may be overextending the argument -- real movement may not be continuous, but variably accelerated (for example, in facial microgestures), whereas both optical flow and field synthesis in QTGMC calculate (I suppose) exact mid-positions between a given pixel/element in fields/frames 1 and 2. I have used optical flow with quite a few pieces of old film footage from the 1920s and 1930s, to reconstruct "real" movement and get a sense of closeness (we're talking about film shot at 16fps, or 20fps at most), and, aside from obvious motion artifacts, body movements and facial expressions are both more immediate and real (as if shot with a video camera) and more unnatural -- although it's true that filming speed was not constant with hand-cranking of the camera. I only apply a 1/2-speed optical flow correction (reconstructing 1 frame in between each 2). Assuming the original footage was 16fps, that gives 32fps, which if left at 30fps is close enough. Some results are fascinating. Here is an example:

    https://twitter.com/CelsoACaccamo/status/1078726589913927684
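    For what it's worth, the half-speed interpolation described can be sketched with a naive blend. Real optical flow tracks motion vectors rather than cross-fading pixels; this toy version only shows the timing math:

```python
# Naive stand-in for half-speed frame interpolation: insert one
# synthesized frame between each pair by averaging pixel values.
# Real optical flow warps along motion vectors instead of blending,
# but the rate arithmetic (16 fps -> 32 fps) is identical.

def double_rate(frames):
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        out.append([(x + y) / 2 for x, y in zip(a, b)])  # midpoint frame
    out.append(frames[-1])
    return out

clip = [[0.0], [1.0], [0.0]]   # 3 toy single-pixel frames
print(double_rate(clip))       # [[0.0], [0.5], [1.0], [0.5], [0.0]]
print(16 * 2)                  # 32 fps, close enough to leave at 30
```

    The blend version is what produces ghosting on motion; flow-based interpolators avoid the ghosting but introduce the warping artifacts mentioned above.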
    Quote Quote  
  30. I have a couple of short questions about settings in a Canopus ADVC-110 for digitizing, if I may. Input is analogue PAL VHS.

    - Switch 1 is "Digital-in reference sync". OFF is Stream Sync. ON is Fixed. Which one?
    - Audio mode may be 48kHz 16bit or 32kHz 12bit (switch 3). Switch 4 is Locked or Unlocked audio mode. I suppose that Locked means that the device resamples the signal as chosen in switch 3, right? What is better?
    - Switch 5 applies only to NTSC input: what do 0 IRE and 7.5 IRE mean? Which one to choose for an NTSC tape?
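On the IRE question: NTSC in the US puts black at 7.5 IRE (the "setup" pedestal), while NTSC-J (Japan) puts it at 0 IRE; the switch tells the converter where to expect black. A rough sketch of what a mismatched setting does to 8-bit levels, assuming a simple linear mapping to studio range (an illustration, not the ADVC's actual transfer curve):

```python
# Map an analogue IRE level to an 8-bit studio-range code (black=16,
# white=235), given where the capture device expects black ("setup").
# Linear-scaling assumption for illustration only.

def ire_to_8bit(ire, setup):
    return round(16 + 219 * (ire - setup) / (100 - setup))

print(ire_to_8bit(7.5, setup=7.5))   # 16  -> black lands at black
print(ire_to_8bit(7.5, setup=0.0))   # 32  -> washed-out, greyish blacks
print(ire_to_8bit(100, setup=7.5))   # 235 -> white is unaffected
```

So for a US NTSC tape the 7.5 IRE position is normally the right one; the wrong setting mostly shows up as raised or crushed blacks, not a broken picture.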

    And a very important question about digitizing NTSC tapes. I am in Europe (PAL). My VCR is not true tristandard, but it can play NTSC tapes. TVs can display that with no problem, adjusting for resolution and refresh rate. But I haven't managed to make the Canopus output a signal that a computer or my (great) Panasonic HD-DVD recorder could record without it losing either color or frame sync or both.

    However, a very basic Roxio USB capturer which I have that digitizes in MPEG2 (lowish bitrate, some pixelation) did capture this NTSC without those color/framerate problems.

    Any hint at how I could make the Canopus output a good DV signal from NTSC, or at software that could take that DV stream and fix it? Thank you.
    Quote Quote  


