VideoHelp Forum
  1. Hello,

    If anyone can offer me some advice I'd be extremely grateful. I'm fairly new to the world of codecs so please be gentle!

    My starting point - a numbered sequence of TIFFs. The images are generated by Pure Data/GEM (i.e. OpenGL) and consist mainly of a black background with some fine white lines and subtly coloured circles.

    The movie file made from the TIFFs looks great. The problem is that I haven't been able to find a codec that can deliver a usable file size without introducing banding as the circles fade in and out to/from black. The only acceptable results have been with the Animation codec - which, as you might expect, hardly changes the file size at all.

    I've been experimenting with FFmpeg with ProRes and x264 but can't get rid of the banding - the only success I've had has been with the following command:
    ffmpeg -y -i input.mov -c:v libx264 -preset veryslow -tune animation -f mp4 output.mov
    which doesn't have banding but looks very blocky.

    Is it likely to be a colour space issue - the original files being RGBA?

    I can post examples if required.
  2. Use a lower (C)RF value. Post samples of your TIFF files. Or a losslessly encoded video sample.

    Very small colored objects (or thin colored lines) will lose saturation with x264 because of the YUV 4:2:0 chroma subsampling. And some posterization is to be expected when converting to YUV.
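    For example, something along these lines might be a starting point (the CRF value here is just an illustration, not a tested setting - lower values mean higher quality and bigger files):
    ffmpeg -y -i input.mov -c:v libx264 -preset veryslow -tune animation -crf 16 output.mov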
  3. Hi jagabo - thanks very much for getting back. I've popped two files up on Dropbox (I hope that's allowed?) - the first is from the TIFFs, the second using the following:

    ffmpeg -y -i input.mov -c:v libx264 -preset veryslow -tune animation -crf 0 output.mov

    https://www.dropbox.com/sh/vnlslnmk3wz2epi/_TMKTLgy9O


    If you look at the bottommost circle you should see what I mean - yes, maybe it is posterisation rather than banding. I have tried using the libx264rgb encoder to avoid the YUV conversion issues, but all I end up with is a green screen.

    I was also wondering whether it might be an alpha-related issue?
  4. Also wondering whether a ProRes approach might be better?
  5. The best way to approach this is to add a little grain to the video. I did this in AviSynth:

    Code:
    ffVideoSource("AFE_TIFFs.mov") 
    AddGrain(var=2.0) # add random noise
    ConvertToYV12()
    TTempSmooth() # a little temporal noise reduction
    Then encoded with the x264 CLI encoder at CRF=18, preset=slow.
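    (For reference, the x264 command line for that would be something along these lines - filenames are just placeholders, and it assumes an x264 build with AviSynth input support:)
    x264 --crf 18 --preset slow -o output.264 script.avs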

    Even 24 bit RGB is insufficient to eliminate banding with smooth gradients. Conversion to YUV makes it even worse because there are fewer colors available. Adding noise helps mask the banding.

    Oh, I just noticed I should have converted to YV12 with a rec.709 matrix.
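    That is, the conversion line should have been something like:
    Code:
    ConvertToYV12(matrix="Rec709") # rec.709 matrix instead of the default rec.601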
  6. "Even 24 bit RGB is insufficient to eliminate banding with smooth gradients."
    Probably, since most banding nowadays is caused by 8-bit precision rather than by a lack of available colors.
    As a side note: why not use gradfun2d or similar? (It doesn't require AviSynth, so it's better suited to Linux & Mac systems.)
  7. Thank you both - I was trying to find a way of adding noise with FFmpeg and came across gradfun - which is getting fantastic results. I am over the f**king moon! I've been struggling with this for months.
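    In case it's useful to anyone else, the command I'm using is roughly along these lines (the gradfun strength/radius values are just what I happened to try, so treat them as a starting point):
    ffmpeg -y -i input.mov -vf gradfun=3.5:8 -c:v libx264 -preset veryslow -tune animation -crf 18 output.mov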
    Thanks again.
    Is this effectively dithering?
  8. "Is this effectively dithering?"
    Yes.
  9. Yes, and here's a good rule to remember: ANY time you reduce your bitdepth, you should dither (that is, add a very small amount of controlled noise) to get a more natural result. This does have the consequence of raising the effective bitrate you need, however. I learned this early on with audio, but it applies just as well to photo and video.
    This occurs during:
    1. Capturing of natural scenes in the camera/microphone (where the scene itself has its own dither so you don't need to add any)
    2. Rendering synthetic scenes from a generating app (incl. 3D modelling). In this scenario, you do need to add dither (if possible).
    3. Processing in a temporarily larger bitdepth (where the math can make the numbers more variable: think high precision) before dropping back down to the starting bitdepth. Unfortunately, this must be done within the process, and not many processes make full use of this. It can be partially alleviated after-the-fact, but not fully.
    4. When creating specific lower-bitdepth copies for distribution. In this scenario, you can almost always add dither manually prior to the bitdepth lowering.
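    To make #4 concrete with an audio example (where the practice is most established) - just a sketch, assuming SoX is the tool in use - reducing a 24-bit file to 16 bits with dither applied at the conversion would look something like:
    sox input_24bit.wav -b 16 output_16bit.wav dither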

    Clearly, there are workflows that run through several of those 4 scenarios, and in those cases the question arises: do I need to keep adding dither every time? And if so, doesn't that keep increasing the noise in my project?
    The answer is YES and YES (but just a small amount).

    This is one area where I hope photo/video apps will catch up in the coming years (pro audio apps already feature across-the-board maintenance of dither and/or ultra-high-precision processing).

    Scott


