  1. Originally Posted by pandy View Post
    Originally Posted by poisondeathray View Post
I only took a quick look, but it seems that way. There might be some hidden switches or a "god mode" button I couldn't find. I tested with a native camcorder file with proper PAFF.
Blender does support interlaced rendering (I found info that the 'Fields' option needs to be activated (F10 key)); however, this is the render part. Not sure about the NLE part, but I would assume that if it can generate interlaced output, it must also be able to perform NLE on such content.
Yes, I mentioned above that you can encode fields. But if you have interlaced upsampling errors in RGB before you even encode, it's a problem.


    Originally Posted by poisondeathray View Post
Yes, I'm just saying the UT Video codec doesn't convey this information - or at least other programs do not pick it up.
At this point I'm seriously confused, as the utvideo developer claims that interlaced content has been supported since rev 6, and it now seems to be at rev 19.1...
You can encode interlaced with UT. It works. Fields are preserved. It's 100% lossless if decoded correctly in the same colorspace. BUT the receiving application has to implement proper support. None of them do it automatically. You have to manually interpret or override the files. It's not "automatic" like interlaced PAFF AVC, or DVD, or standard formats like DV.
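For example (a rough sketch of the kind of manual override this takes - file names are placeholders): since the decoded frames carry no interlace flag, you can tag them yourself in ffmpeg before any interlace-aware filter runs:
Code:
REM tag frames as top-field-first; with deint=interlaced, yadif would otherwise skip them
ffmpeg -i camcorder_ut.avi -vf setfield=tff,yadif=deint=interlaced -c:v utvideo deint.avi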
  2. Originally Posted by poisondeathray View Post
Yes, I mentioned above that you can encode fields. But if you have interlaced upsampling errors in RGB before you even encode, it's a problem.
There are no upsampling errors in the RGB domain by definition (4:4:4) - upsampling errors are an artifact of the 4:2:0 sampling scheme (and that scheme is never used in the RGB domain). This also explains why in broadcast production the worst acceptable sampling scheme (quality-wise) is usually 4:2:2 (also not possible for RGB), and why, given the popularity of green (blue) screen, 4:4:4 is nowadays widely used.
Interlacing has nothing in common with the chroma upsampling error (interlaced 4:2:2 is not affected by it).

    Originally Posted by poisondeathray View Post
Yes, I'm just saying the UT Video codec doesn't convey this information - or at least other programs do not pick it up.
You can encode interlaced with UT. It works. Fields are preserved. It's 100% lossless if decoded correctly in the same colorspace. BUT the receiving application has to implement proper support. None of them do it automatically. You have to manually interpret or override the files. It's not "automatic" like interlaced PAFF AVC, or DVD, or standard formats like DV.
IMHO any lossless codec is transparent to interlaced and progressive source coding, as any interlaced source can be coded as progressive (depending on codec design this may or may not cost some coding gain). From my perspective, if a codec specifies "interlaced source support", it means it can properly carry the Progressive/Interlaced and TFF/BFF flags (i.e. deliver the time-domain information). Of course, a developer may signal interlace support meaning "I do an internal conversion before coding, so I won't ruin your sampling scheme, but I don't care about time-domain information" - this is not clear for UT Video. I've never used UT Video before, so I'm not particularly interested in solving this puzzle, but someone who uses UT Video regularly may ask the developer, Takeshi Umezawa, about this - the guy is willingly responsive on his blog and open to questions.

---
Asked, so we may have information from the most trustworthy source (the UT Video developer).
    Last edited by pandy; 24th Mar 2018 at 05:25.
  3. Originally Posted by pandy View Post
    Originally Posted by poisondeathray View Post
Yes, I mentioned above that you can encode fields. But if you have interlaced upsampling errors in RGB before you even encode, it's a problem.
There are no upsampling errors in the RGB domain by definition (4:4:4) - upsampling errors are an artifact of the 4:2:0 sampling scheme (and that scheme is never used in the RGB domain). This also explains why in broadcast production the worst acceptable sampling scheme (quality-wise) is usually 4:2:2 (also not possible for RGB), and why, given the popularity of green (blue) screen, 4:4:4 is nowadays widely used.
Interlacing has nothing in common with the chroma upsampling error (interlaced 4:2:2 is not affected by it).
That's nice. But you probably should re-read the thread.

He's starting with 4:2:0 interlaced and wants to keep it 4:2:0 interlaced, same as the original. Blender can't handle it. It works in RGB internally, but it upsamples the interlaced 4:2:0 as progressive. Thus you get chroma errors - you will see chroma ghosting artifacts.

    Originally Posted by poisondeathray View Post
Yes, I'm just saying the UT Video codec doesn't convey this information - or at least other programs do not pick it up.
You can encode interlaced with UT. It works. Fields are preserved. It's 100% lossless if decoded correctly in the same colorspace. BUT the receiving application has to implement proper support. None of them do it automatically. You have to manually interpret or override the files. It's not "automatic" like interlaced PAFF AVC, or DVD, or standard formats like DV.
IMHO any lossless codec is transparent to interlaced and progressive source coding, as any interlaced source can be coded as progressive (depending on codec design this may or may not cost some coding gain). From my perspective, if a codec specifies "interlaced source support", it means it can properly carry the Progressive/Interlaced and TFF/BFF flags (i.e. deliver the time-domain information). Of course, a developer may signal interlace support meaning "I do an internal conversion before coding, so I won't ruin your sampling scheme, but I don't care about time-domain information" - this is not clear for UT Video. I've never used UT Video before, so I'm not particularly interested in solving this puzzle, but someone who uses UT Video regularly may ask the developer, Takeshi Umezawa, about this - the guy is willingly responsive on his blog and open to questions.

---
Asked, so we may have information from the most trustworthy source (the UT Video developer).

That would be nice if it worked "automatically" and correctly like DV-AVI. But other codecs have the issue too. And that's only half the problem. It's also a host application issue; they have to implement proper support.

But as long as the receiving application has the ability to interpret the file, and to distinguish between interlaced vs. progressive when converting to RGB on the timeline, then it's at least a usable solution. Blender does not, as far as I can see.
  4. Originally Posted by poisondeathray View Post
That's nice. But you probably should re-read the thread.

He's starting with 4:2:0 interlaced and wants to keep it 4:2:0 interlaced, same as the original. Blender can't handle it. It works in RGB internally, but it upsamples the interlaced 4:2:0 as progressive. Thus you get chroma errors - you will see chroma ghosting artifacts.
I've read the thread, and my comments are purely related to the video codec (UT Video Suite) and ffmpeg - the NLE within Blender is a completely different story.

    Originally Posted by poisondeathray View Post
That would be nice if it worked "automatically" and correctly like DV-AVI. But other codecs have the issue too. And that's only half the problem. It's also a host application issue; they have to implement proper support.

But as long as the receiving application has the ability to interpret the file, and to distinguish between interlaced vs. progressive when converting to RGB on the timeline, then it's at least a usable solution. Blender does not, as far as I can see.

If the NLE app has an issue, then either the application needs to be changed or workarounds need to be applied - if Blender has this particular limitation with interlaced 4:2:0 sources, then perhaps the issue should first be reported to the Blender bug tracker, and second, perhaps 4:2:2 should be used (even as a forced conversion with the help of a third-party tool like ffmpeg or avisynth).

***
Based on information provided by the developer:

ffmpeg uses its own code, and this code is unrelated to the UT Video code.

    The original UtVideo uses “interlace flag” for following two purposes:
    – Do slightly different intra-frame prediction in “predict gradient” and “predict median” in order to achieve better compression for interlace video.
    – Do correct chroma sampling for interlace video while converting between internal YUV420 format and external RGB or YUV422 formats (in ULY0/ULH0)

    The “interlace flag” of UtVideo only indicates that the video is *compressed suitable for* interlaced. It does not indicate that the video is *actually* interlaced. UtVideo is not interested in whether the video is actually interlaced or not, as well as uncompressed video data itself do not contain interlace info.
I think this concludes the discussion about UT Video interlace support - if someone needs to keep the time-domain information, then it seems an alternative broadcast-type codec (i.e. H.264, H.265, H.262) needs to be used, perhaps in quasi-lossless mode (sufficiently low QP), with 4:2:2 as the minimum chroma sampling scheme.
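As a rough sketch of that idea with ffmpeg/x264 (file names are placeholders, and the CRF value is only a guess at "sufficiently low" - adjust to taste):
Code:
REM interlace-aware near-lossless encode: TFF field coding, 4:2:2 chroma upsampled per-field
ffmpeg -i source.mts -vf scale=iw:ih:interl=1 -pix_fmt yuv422p -c:v libx264 -crf 8 -flags +ildct+ilme -x264opts tff=1 -c:a copy out.mkv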
    Last edited by pandy; 25th Mar 2018 at 05:51. Reason: Information from UT Video developer provided
The way I see it, if you want to use blender with proxies, you need 2 extra versions: a high quality or lossless YUV 4:2:2 interlaced version for swap-in, and a lower quality / low-res version for proxy swap-out.

You cannot use the originals with blender, or any 4:2:0 interlaced intermediate, because of the progressive chroma upsampling issue.

Other issues are "normal" RGB issues, e.g. if you have superbrights/darks, they are going to be clipped. You have to correct them in YUV or use full range RGB first.
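One hedged way to do the "full range RGB first" part in ffmpeg (a sketch only - file names are placeholders, and it assumes your build's utvideo encoder accepts planar RGB): telling swscale the YUV is full-range maps Y 0-255 into RGB without clipping the superbrights, at the cost of levels you correct later:
Code:
REM interlace-aware YUV -> full-range planar RGB, so out-of-range values survive
ffmpeg -i source.mts -vf scale=iw:ih:interl=1:in_range=full,format=gbrp -c:v utvideo rgb_full.avi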
  6. Originally Posted by oduodui View Post
OK, I was able to upload the file.

Here is the link:

https://files.videohelp.com/u/233287/ffmpeg-20180323-135624.log
No errors - the conversion was OK. The developer said that the UT Video codec distinguishes progressive/interlaced at the chroma subsampling level. This means that it is purely up to you to tell your application that your source is interlaced.

IMHO this means that you need to use a broadcast codec to preserve all the information, and by using a sufficiently modern broadcast codec you may get a quasi-lossless or truly lossless conversion. As a bonus, you may wish to use HW-accelerated compression, as broadcast codecs are usually HW accelerated - intra-only coding is usually among the commonly supported configurations.
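For instance, a hypothetical sketch assuming an ffmpeg build with NVENC (QSV/VAAPI builds have analogous encoders; file names and the QP are placeholders):
Code:
REM HW-accelerated intra-only H.264 at constant QP; interlace handling still applies as discussed above
ffmpeg -i source.mts -c:v h264_nvenc -g 1 -rc constqp -qp 12 -c:a copy intra_hw.mkv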
  7. Could you possibly suggest a few broadcast codecs?

And many thanks to all.
  8. Originally Posted by oduodui View Post
    Could you possibly suggest a few broadcast codecs?

And many thanks to all.
H.264, H.265, H.262 (H.264 seems to be the best choice - it also supports a lossless mode).
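For the record, x264's lossless mode is just QP 0 - a minimal sketch (file names are placeholders; the output is valid H.264 but won't play everywhere):
Code:
REM truly lossless H.264; decodable by ffmpeg-based players
ffmpeg -i source.mts -c:v libx264 -qp 0 -c:a copy lossless.mkv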
  9. Is there an intraframe version for libx264 on ffmpeg?
  10. Originally Posted by oduodui View Post
    Is there an intraframe version for libx264 on ffmpeg?
Yes.

In ffmpeg, -g 1 means a GOP size of 1, i.e. intra-only.
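A minimal sketch (file names and the CRF are placeholders):
Code:
REM intra-only x264: every frame is a keyframe, which makes timeline scrubbing cheaper
ffmpeg -i source.mts -c:v libx264 -g 1 -crf 12 -tune fastdecode -c:a copy intra.mkv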

But even with tune fastdecode for x264, it's going to be much slower to edit than, say, something like cineform, which decodes very fast. x264 intra / fastdecode is very sluggish on the timeline, performance-wise. It feels like molasses.

You would still need to upsample to 4:2:2 interlaced, or RGB interlaced, or bob deinterlace before using blender.

The interlaced export from blender is messed up too, unless you deinterlace. When you check the deinterlace box, you said performance is slower - that's because it's applying the deinterlace in realtime. But the deinterlace needs to be "baked" (you have to leave it on), otherwise the export is messed up. So actually 4:2:2 doesn't help here...

So lots of problems using interlace with blender... If it were me, and I had to use blender, I would probably just bob deinterlace with QTGMC, and adjust the YUV levels and convert to RGB beforehand, so blender doesn't mess it up. The quality of QTGMC is better than blender's deinterlace. But 50p is going to be more difficult to edit, performance-wise, when you have many layers and multicam. So I hope blender's multicam is up to snuff.

Decoding-wise, you usually don't care about proxy quality too much. But the "best" proxy for windows, performance-wise, is usually cineform, by a large margin. Some people even use full-resolution cineform and don't use proxies at all (unless it's UHD). But if you're picky about quality, it's technically not lossless even at "filmscan2". (Then again, if you're picky about quality, you wouldn't use blender for this workflow.)

    good luck
    Last edited by poisondeathray; 3rd Apr 2018 at 20:51.
Actually, after some more testing: it was a decoding issue with blender and cfhd. So 4:2:2 interlaced will work (I tested UT 4:2:2), but it will still get converted to RGB in blender. Field encoding for export is OK (fields are clean).

But if you choose to use 4:2:2 interlaced, make sure you upsample 4:2:0 => 4:2:2 in an interlaced fashion, otherwise you get chroma artifacts similar to those from progressive upsampling - see the sketch below.
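A hedged ffmpeg sketch of that interlace-aware upsample (the interl=1 flag is what keeps each field's chroma separate; file names are placeholders):
Code:
REM upsample chroma per-field so the two fields' colors don't bleed into each other
ffmpeg -i source.mts -vf scale=iw:ih:interl=1 -pix_fmt yuv422p -c:v utvideo -c:a copy ut422i.avi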
    Last edited by poisondeathray; 3rd Apr 2018 at 21:57.
  12. Sorry double post
    Last edited by oduodui; 5th Apr 2018 at 03:04.
OK, I used yadif=1 to deinterlace to huffyuv. It chose yuv422p by itself. Wow, after putting it into blender it is easy to use and doesn't require a proxy, but I did transcode out to a proxy and it's quick.

    By accident I stumbled upon this

    -filter:v yadif=1:0,mcdeint=2:1:10

    I don't know if mcdeint can be used by itself.

I've used it on a short clip and the quality is better, but the transcoding is horribly slow.

    Problem solved though.

    THANK YOU

Yadif works a charm. You do lose some vibrancy in the colors, but it is probably because half of the data is displayed per time unit. I don't mind that at this stage, because deinterlacing allows me to use the benefit of shooting interlaced in blender.


The main thing here is that I have to record interlaced, because the camera's refresh rate is very sluggish and slow at 50p. In 50i I get good quality when quick changes in motion occur. So now I am able to record at 50i and transcode to progressive 50 fps. Blender is lightning quick now. Transcoding to the output format is very quick and much lighter on the CPU. Proxies are vastly quicker to create, and finish once they reach 100 percent.

I still have to test this on a project in blender where all the MTS videos are transcoded into lossless intraframe huffyuv/utvideo, to see if it will work (proxies or not), but this makes me think it will - a project that runs to over an hour and is multicam.

I also have to test effect strips on the timeline, where the effect strip called Transform is applied with rotation and crop. Previously, when I used the utvideo clips in the timeline, applied the Transform effect strip to two consecutive clips and then transcoded out, it created blank frames when it went to the next utvideo clip with that particular effect strip. To solve this I used BMP raster images, which don't use any compression, for the parts that used the Transform effect strip. I then used that image sequence to stabilise, by importing it into the movie clip editor and creating markers for rotation and movement stabilisation, after keyframing the expected scale to prevent the edges of the video from showing when stabilisation was added in 2D stabilisation. I then transcoded to bitmap images again, so that I could add it later to the multicam timeline, which worked.

Perhaps using progressive utvideo might prevent these blank frames from happening again.

Does anyone know how I can ask ffmpeg on windows to show me only the options for a specific filter, where it shows all the parameters and options for that filter?

Something like: ffmpeg -h full yadif ?

    Can I output ffmpeg -h full to a text file?
At first glance it looks like https://ffmpeg.org/ffmpeg-filters.html#mcdeint requires yadif - not sure why. I would recommend https://ffmpeg.org/ffmpeg-filters.html#nnedi as an alternative to mcdeint, but you must judge for yourself.

To query a filter I use:
    Code:
    ffmpeg -h filter=filter_of_interest > filter_of_interest.txt
    Full help for ffmpeg:
    Code:
    @ffmpeg.exe -hide_banner -h full > ff_help_full.txt
  15. Originally Posted by oduodui View Post
OK, I used yadif=1 to deinterlace to huffyuv. It chose yuv422p by itself. Wow, after putting it into blender it is easy to use and doesn't require a proxy, but I did transcode out to a proxy and it's quick.
The original huffyuv does not support 4:2:0 (variants like ffvhuff do).

If you're OK with bob deinterlacing, then you should be able to use 4:2:0 (UT 4:2:0) - all the blender problems I mentioned earlier were with interlace handling. It will be faster, with smaller file sizes. You don't "gain" anything here by progressively upsampling to 4:2:2; blender is going to convert to RGB anyway.

    By accident I stumbled upon this

    -filter:v yadif=1:0,mcdeint=2:1:10

    I don't know if mcdeint can be used by itself.

I've used it on a short clip and the quality is better, but the transcoding is horribly slow.
    mcdeint needs bobbed input, so you need to apply a bobber beforehand

    Unfortunately this is not multithreaded in ffmpeg, so you should probably run several simultaneous instances

The quality is significantly lower than QTGMC (even using some of the faster settings); and if you run QTGMC with avs+ or vpy, it's about 10-20x faster per single instance, or about 50-100x faster than the nnedi/mcdeint combo. But QTGMC is more difficult to batch and requires avs or vpy usage. QTGMC is not perfect, but it's a lot better quality-wise than the ffmpeg alternatives. The latter suffer from blurring, lost details, motion artifacts, more aliasing. In motion, the QTGMC is calm and pleasing, but there is line twitter from the yadif/nnedi or mcdeint approach.

Yadif works a charm. You do lose some vibrancy in the colors, but it is probably because half of the data is displayed per time unit. I don't mind that at this stage, because deinterlacing allows me to use the benefit of shooting interlaced in blender.
Maybe there is a problem with translation, but "vibrancy" is probably not the correct term for this. There is reduced color resolution, but the cause is interlace in the first place, not yadif or deinterlacing.



    Originally Posted by pandy View Post
At first glance it looks like https://ffmpeg.org/ffmpeg-filters.html#mcdeint requires yadif - not sure why. I would recommend https://ffmpeg.org/ffmpeg-filters.html#nnedi as an alternative to mcdeint, but you must judge for yourself.
nnedi produces fewer aliasing artifacts than yadif in general, but it has its own problems - it's a single-field deinterlacer and does not compensate for the field offset. So you get this up/down/up/down result (it's known as a "dumb" bobber; it's only OK if you single-rate to 25p). You can use it as the interpolator with tdeint or yadifmod to compensate for that, but not within ffmpeg.
Wow, I didn't know there's software that much faster. Is it free software? Will it still work on windows 7 ten years from now? Is it maintained by one person or many? Why is it so much faster? Will it work on linux in WINE?
Yes - free, open source. They will be around forever, since they are open source. Surely you've heard of avisynth? If you've ever been around this forum, it gets used everywhere. Many programs rely on it on the backend, and many ffmpeg filters are actually ported from avisynth.

    vapoursynth can run natively in linux

avisynth can mostly run in wine too (there are a few compatibility issues with some filters). But native windows avisynth+mt is the fastest implementation of QTGMC. The vapoursynth version is slightly slower (maybe 90% of the speed, but that's still many times faster than the settings you are using, with better quality than any ffmpeg possibility). avs in wine is significantly slower for some things too.

But there is a bit of a learning curve to get started, and it's a hassle collecting .dll's and dependencies (that's a big reason why it won't get ported to ffmpeg - too many dependencies and subparts). But QTGMC is clearly the best deinterlacer for general use in most situations. yadif+mcdeint or nnedi+mcdeint produce worse results and are much slower.
That's amazing. I know about avisynth, but I have become allergic to windows because of all the virus issues and updates etc.

I want to be windows-free as much as possible. And whatever I learn I can take to ubuntu linux.

LOL. I don't hate windows, but I am tired of all the hassles, which usually boil down to spending cash, whereas ubuntu linux is free - well, at least if you don't use it commercially.

    Can vapoursynth do all the things avisynth can? On linux?

Does anyone have command line examples for nnedi in ffmpeg?
  19. Originally Posted by oduodui View Post
    Can vapoursynth do all the things avisynth can? On linux?
Most, not quite all. Each has pros/cons.

    A big "con", IMO, is vapoursynth doesn't support audio for example (there are ways around it, you can copy audio in ffmpeg using -map, but if your script is doing cuts, etc... that won't work) . Vapoursynth is better at higher bit depth formats and manipulations. Some filters/chains are faster in avisynth+mt but some are faster in vapoursynth. Vapoursynth is more cross platform compatible (win/mac/linux) . avisynth has direct ffmpeg support (you can feed avs directly to ffmpeg if it's compiled with avs support), but vapoursynth requires vspipe currently (there is a patch in the works for native vpy support, but not finished yet)

Does anyone have command line examples for nnedi in ffmpeg?
Did you mean nnedi alone, or with mcdeint (motion compensated)?

Either way, it's worse because of the field offset. Interlaced video has even/odd fields spatially offset; that's why you get the up/down/up/down effect when you perform a simple bob deinterlace to 50p - essentially it's resizing each field to a frame. nnedi goes a step further with interpolation and antialiasing: it basically tries to eliminate the stair-stepping artifacts from the field resize based on its neural network. "Smart" bobbers compensate for that offset; nnedi (alone) or "dumb" bobbers do not.

nnedi (there are nnedi2, nnedi3 variants in avisynth and vapoursynth) is an intra-only field deinterlacer. That means it is spatial-only, which is only suitable for single images or if you single-rate deinterlace to 25p (all odd, or all even fields). It doesn't look at other fields or frames to fill in the missing information, and there is no motion compensation or temporal smoothing.
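For illustration, a hedged sketch of that single-rate case in ffmpeg (keeps only the top fields, so 50i becomes 25p; file names and the weights path are placeholders):
Code:
REM single-rate: interpolate each top field to a full frame, discard the bottom fields
ffmpeg -i source.mts -vf nnedi=weights=nnedi3_weights.bin:field=t -c:v utvideo -c:a copy out25p.avi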
    Last edited by poisondeathray; 5th Apr 2018 at 16:15.
  20. Ok

I am happy with yadif, with mcdeint or without; the quality is acceptable to me at this time, because at least now I can get the work done.

    Thanks for all the info.
  21. Originally Posted by oduodui View Post
    Ok

I am happy with yadif, with mcdeint or without; the quality is acceptable to me at this time, because at least now I can get the work done.

    Thanks for all the info.

You don't know what you're missing. Just a while ago you were talking about "lossless" workflows... just saying... Well, the artifacts from "bad" deinterlacing are much worse than what you get from lossy compression like prores, which you were complaining about earlier.

In this zip file are single-frame comparisons of a 1080i25 camcorder source bob deinterlaced to 1080p50: QTGMC(preset="faster") vs. yadif+mcdeint vs. nnedi+mcdeint, all encoded by ffmpeg to huffyuv (422p, but I would use UT 4:2:0 in your case - it would be faster, with smaller filesizes). Again, if yadif+mcdeint was "1x speed" for reference, QTGMC was about 15-17x speed; nnedi+mcdeint was about 0.2x speed. Yadif alone would be much faster, if you're OK with the artifacts.

Pay attention to the missing details in the leaves and the fence (especially the bottom left corner - you cannot even make out the lines; on a test chart that would manifest objectively as resolution loss, because you can no longer clearly delineate those fine details or lines on the fence). Not shown here, but in motion the QTGMC is "smoother" because of fewer artifacts; in motion, yadif+mcdeint or nnedi+mcdeint will have a "twittering" effect because of the aliasing artifacts.

The blurring and resolution loss in the yadif+mcdeint result is partially from the AA; in nnedi+mcdeint it is also from the up/down offset. A more stable feed into mcdeint would likely give better results. (It's trying to motion compensate, but adjacent frames are moving up/down. Not ideal.)

There are many comparisons of deinterlacers in various threads. For general use, QTGMC is probably the "best" in the majority of cases. It has many options to tweak for quality/speed, and presets to simplify usage.
    Last edited by poisondeathray; 5th Apr 2018 at 16:38.
  22. Originally Posted by oduodui View Post
Does anyone have command line examples for nnedi in ffmpeg?
    Code:
    -vf nnedi=nnedi3_weights.bin:field=af:deint=interlaced
    OR

    https://forum.videohelp.com/threads/381126-ffmpeg-1080i50-to-720p#post2465082

You need to download the file 'nnedi3_weights.bin' and point the filter at it explicitly.

You can compensate for the mentioned shift (with subpixel accuracy) by using a convolution (kernel) filter.

And overall I agree, QTGMC is probably the best open source deinterlacer available today.
ffmpeg has had support for Avisynth scripts for a long time, so you can combine ffmpeg with Avisynth (ffmpeg filters frequently lack multi-threading, similarly to the original Avisynth).
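A hedged sketch of that combination (it assumes an ffmpeg build compiled with AviSynth support, plus an installed QTGMC with its dependencies; file names are placeholders):
Code:
REM qtgmc_bob.avs might contain:  FFVideoSource("source.mts")  followed by  QTGMC(Preset="Faster")
ffmpeg -i qtgmc_bob.avs -c:v utvideo -pix_fmt yuv420p bobbed50p.avi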
    Last edited by pandy; 6th Apr 2018 at 05:04.
Indeed, I was very adamant about having a lossless workflow. But I am so happy with the responsiveness of blender now, the non-jittery, non-combing video and the transcoding speed, that I am not that unhappy about losing some quality.

But now I see there is a tradeoff.

All of this is amazing - that free stuff exists that can do things just as well as paid-for software. I did install avisynth, because I tried to denoise video that was shot in low light. I couldn't make it work, but I think the video was just shot in too little light. Then I had to install some kind of .dll file into the windows 32-bit system folder, but there were repeated warnings that there were security issues with it, that it might make windows 7 crash or become unstable, or be a backdoor for hacking, or whatever. Linux is not better, but at least I can reinstall everything free of charge. I am just tired of all the shite that comes with windows that equals cash out of your pocket.

I am really tired of all the expenses windows incurs.

So yes, I am quite impressed with what avisynth can do. But even if it is free, windows is not.

So I am sticking with free software supported on linux/windows/mac that allows me to do 1080p work. But I can at least get the job done if need be.


