VideoHelp Forum




Page 2 of 2
Results 31 to 52 of 52
  1. Just a side note...
    At some point last year (around August, perhaps), FFmpeg gained the ability to encode animated WebP.
    I was not aware of that until a few days ago, so I have not changed the WebP presets in AviUtl's ffmpegout plugin.

    WordPress can be tweaked to use WebP, but Twitter still has no support for it (as far as I know). (Probably the same goes for all major social platforms.)
    Stopping development until someone saves me from poverty or gets me out of Hong Kong...
    Twitter @MaverickTse
  2. Originally Posted by pandy View Post
    From my perspective the new functions are definite progress, but still far from what can be squeezed out of GIF: Bayer dither is somehow incorrect with a low CLUT (even at maximum strength); I usually get better results with some default color palette than with a calculated one.
    Sometimes 8-bit RGB (3:3:2) can give better results, especially with a_dither.
    This is due to the algorithm...
    Similar to pngquant and AForge.net, the new palettegen option actually uses median cut as its color quantization (i.e. reduction) method. I believe the version in FFmpeg has been tweaked, or the color accuracy would have been questionable even at 64 colors.

    The best color accuracy would come from an octree (employed by ImageMagick and the Windows Imaging Component), but that is slow and therefore not well suited to video.
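    For reference, the median-cut idea is to repeatedly split the box of colors along its widest channel at the median, then average each box. A toy sketch in Python (my own illustration of the general technique, not FFmpeg's tweaked code):

```python
def median_cut(colors, n_colors):
    """Reduce a list of (r, g, b) tuples to n_colors representatives
    by recursively splitting boxes along their widest channel."""
    boxes = [list(colors)]
    while len(boxes) < n_colors:
        # Range of the widest channel in a box, used to pick what to split.
        def widest(box):
            return max(max(c[i] for c in box) - min(c[i] for c in box)
                       for i in range(3))
        box = max(boxes, key=widest)
        if len(box) < 2:
            break
        # Channel with the widest range inside the chosen box.
        ch = max(range(3),
                 key=lambda i: max(c[i] for c in box) - min(c[i] for c in box))
        box.sort(key=lambda c: c[ch])
        mid = len(box) // 2
        boxes.remove(box)
        boxes.extend([box[:mid], box[mid:]])
    # Average each box to get one palette entry per box.
    return [tuple(sum(c[i] for c in box) // len(box) for i in range(3))
            for box in boxes]
```

    e.g. median_cut(list_of_rgb_tuples, 16) returns 16 representative colors.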
  3. Originally Posted by MaverickTse View Post
    This is due to the algorithm...
    Similar to pngquant and AForge.net, the new palettegen option actually uses median cut as its color quantization (i.e. reduction) method. I believe the version in FFmpeg has been tweaked, or the color accuracy would have been questionable even at 64 colors.

    The best color accuracy would come from an octree (employed by ImageMagick and the Windows Imaging Component), but that is slow and therefore not well suited to video.
    True; however, I've tested various color quantizers, and from my subjective perspective scolorq http://www.cs.berkeley.edu/~dcoetzee/downloads/scolorq/ is the best. Second seems to be NeuQuant http://members.ozemail.com.au/~dekker/NEUQUANT.HTML, though it doesn't shine with small CLUTs (like 16 colors).
  4. Member
    Join Date: Nov 2002
    Location: United States
    Originally Posted by MaverickTse View Post
    @DarrellS
    What error did you get when using the VBScript version?
    (P.S. On the first day I published FFgif, both archives were batch files. I replaced the first one with a VBScript the next day.)
    or
    No error, but the GIF seems corrupted (the *_optimized.gif version)?
    I'm not sure, but I don't believe there was an error message; I believe it just didn't do anything. Since the text said to try the second download if there were problems with the first, that's what I did.

    I did find another download that was 64 bit. It only had the 64 bit versions of gifdiff.exe and gifsicle.exe and a documents folder with gifsicle.html and gifdiff.html. They all had the same creation date as the files in your FFgif folder.
  5. Originally Posted by DarrellS View Post
    Originally Posted by MaverickTse View Post
    @DarrellS
    What error did you get when using the VBScript version?
    (P.S. On the first day I published FFgif, both archives were batch files. I replaced the first one with a VBScript the next day.)
    or
    No error, but the GIF seems corrupted (the *_optimized.gif version)?
    I'm not sure, but I don't believe there was an error message; I believe it just didn't do anything. Since the text said to try the second download if there were problems with the first, that's what I did.

    I did find another download that was 64 bit. It only had the 64 bit versions of gifdiff.exe and gifsicle.exe and a documents folder with gifsicle.html and gifdiff.html. They all had the same creation date as the files in your FFgif folder.
    Is the failing script the VBScript or the batch file?
  6. Originally Posted by pandy View Post
    Originally Posted by MaverickTse View Post
    This is due to the algorithm...
    Similar to pngquant and AForge.net, the new palettegen option actually uses median cut as its color quantization (i.e. reduction) method. I believe the version in FFmpeg has been tweaked, or the color accuracy would have been questionable even at 64 colors.

    The best color accuracy would come from an octree (employed by ImageMagick and the Windows Imaging Component), but that is slow and therefore not well suited to video.
    True; however, I've tested various color quantizers, and from my subjective perspective scolorq http://www.cs.berkeley.edu/~dcoetzee/downloads/scolorq/ is the best. Second seems to be NeuQuant http://members.ozemail.com.au/~dekker/NEUQUANT.HTML, though it doesn't shine with small CLUTs (like 16 colors).
    Scolorq and NeuQuant are even slower than octree.
    Already slow on a single image, unusable for video.
  7. Originally Posted by MaverickTse View Post
    Scolorq and NeuQuant are even much more slower than Oct-Tree.
    Slow on single image already, unusable in video
    Yes, but a more clever approach can be used: create an intermediate bitmap with a histogram from all video frames, with significant reduction of the CLUT (12-15 bits max; with dithering this is below HVS perception), group similar colors together (blocks sized in proportion to their frequency of occurrence), perform further decimation/averaging inside each block, then use a slow algorithm to select the desired CLUT. Perhaps a naive approach, but I believe it should work. Anyway, I see that calculating the palette based on deltas (diffs) seems to produce less banding; error diffusion is quite OK, but there must definitely be a bug somewhere in the Bayer implementation: for example, green is used instead of blue where error diffusion doesn't use green. More work is probably required to improve error diffusion (temporally stable error diffusion, for example, can be very important).
  8. Originally Posted by pandy View Post
    Originally Posted by MaverickTse View Post
    Scolorq and NeuQuant are even slower than octree.
    Already slow on a single image, unusable for video.
    Yes, but a more clever approach can be used: create an intermediate bitmap with a histogram from all video frames, with significant reduction of the CLUT (12-15 bits max; with dithering this is below HVS perception), group similar colors together (blocks sized in proportion to their frequency of occurrence), perform further decimation/averaging inside each block, then use a slow algorithm to select the desired CLUT. Perhaps a naive approach, but I believe it should work. Anyway, I see that calculating the palette based on deltas (diffs) seems to produce less banding; error diffusion is quite OK, but there must definitely be a bug somewhere in the Bayer implementation: for example, green is used instead of blue where error diffusion doesn't use green. More work is probably required to improve error diffusion (temporally stable error diffusion, for example, can be very important).
    Nope, sorry, I can't agree with any of your points.
    Creating an RGB histogram (actually three histograms), though not as slow as color reduction, is NOT a speedy process.
    In addition, some color-reduction methods give weight to pixel location or the area of a color patch. Building a histogram and then quantizing will likely give a different result from using the image directly.
    (In ImageMagick, one may also use "UniqueColors" to get a representation of the unique colors only, but applying quantization functions to this palette gives a somewhat different result.)

    The Bayer dither in FFmpeg has no problem. I would even say it is one of the more "correct" implementations among free software.
    ImageMagick and AForge.net give tons of clipping (white pixels), and the overall image gets brightened.
    I spent quite a while researching this and implemented a "corrected" form of ordered dither in the AviUtl plugin DGE2.
    FFmpeg's contributor for this new update must also have been aware of this problem, and the overall color/brightness is correct.

    The perceived color mismatch is unavoidable by the nature of ordered dithering, because so-called "ordered dither" is almost identical to adding a noise pattern to the color components.
    Last edited by MaverickTse; 25th Feb 2015 at 06:10.
  9. Originally Posted by MaverickTse View Post
    Nope, sorry, I can't agree with any of your points.
    Creating an RGB histogram (actually three histograms), though not as slow as color reduction, is NOT a speedy process.
    In addition, some color-reduction methods give weight to pixel location or the area of a color patch. Building a histogram and then quantizing will likely give a different result from using the image directly.
    (In ImageMagick, one may also use "UniqueColors" to get a representation of the unique colors only, but applying quantization functions to this palette gives a somewhat different result.)

    The Bayer dither in FFmpeg has no problem. I would even say it is one of the more "correct" implementations among free software.
    ImageMagick and AForge.net give tons of clipping (white pixels), and the overall image gets brightened.
    I spent quite a while researching this and implemented a "corrected" form of ordered dither in the AviUtl plugin DGE2.
    FFmpeg's contributor for this new update must also have been aware of this problem, and the overall color/brightness is correct.

    The perceived color mismatch is unavoidable by the nature of ordered dithering, because so-called "ordered dither" is almost identical to adding a noise pattern to the color components.
    Somehow, even setting aside the mentioned Bayer problem (it is common in all implementations I've seen: video is brightened, some offset is introduced), I see a problem in the dithering: a wrong color is used, or a color is missing, even when the CLUT has a color that could be used. I'm not saying this is mathematically incorrect, but subjectively it gives a worse result.

    For my needs I've created different CLUTs (with scolorq); it seems that Bayer provides an insufficient dither level (perhaps some random dither should be added before Bayer?).

    CLUTs attached as pics:

    OCS_PAL_o.png - a default one used frequently (I'm not the author, but indeed, even with incorrect Bayer implementations, results are sufficiently good for 16 colors).
    OCS_PAL_0.png - this one was generated with a focus on human skin.
    OCS_PAL_2.png - this one is exceptionally good, especially for more saturated videos (it is my favorite "universal" CLUT).

    All of the above give bad results with Bayer in ffmpeg and good to very good results with sierra lite (apart from a sometimes insufficient dither level to remove banding).


    OCS_PAL_o.png [Attachment 30442]
    OCS_PAL_0.png [Attachment 30443]
    OCS_PAL_2.png [Attachment 30444]



    DOS batch:


    Code:
    set colors=16
    set pix=320
    
    @ffmpeg.exe -threads %NUMBER_OF_PROCESSORS%*1.5 -i %1 -vf "hqdn3d=8:8:8:8,format=pix_fmts=rgb24,scale=%pix%:-1:out_range=full:sws_flags=spline,tblend=all_mode=average,decimate=cycle=2,tblend=all_mode=average,decimate=cycle=2,xbr=2,scale=%pix%:-1:sws_flags=spline" -an -c:v ffv1 -level 1 -coder 0 -context 0 -g 1 -loop 0 -vsync 0 %1_temp.avi
    
    @ffmpeg.exe -threads %NUMBER_OF_PROCESSORS%*1.5 -i %1_temp.avi -i "OCS_PAL_o.png" -lavfi "paletteuse=dither=bayer:bayer_scale=0" -loop 0 -vsync 0 %1_OCS_o_b.gif
    @ffmpeg.exe -threads %NUMBER_OF_PROCESSORS%*1.5 -i %1_temp.avi -i "OCS_PAL_0.png" -lavfi "paletteuse=dither=bayer:bayer_scale=0" -loop 0 -vsync 0 %1_OCS_0_b.gif
    @ffmpeg.exe -threads %NUMBER_OF_PROCESSORS%*1.5 -i %1_temp.avi -i "OCS_PAL_2.png" -lavfi "paletteuse=dither=bayer:bayer_scale=0" -loop 0 -vsync 0 %1_OCS_2_b.gif
    
    @ffmpeg.exe -threads %NUMBER_OF_PROCESSORS%*1.5 -i %1_temp.avi -i "OCS_PAL_o.png" -lavfi "paletteuse=dither=sierra2_4a" -loop 0 -vsync 0 %1_OCS_o_s.gif
    @ffmpeg.exe -threads %NUMBER_OF_PROCESSORS%*1.5 -i %1_temp.avi -i "OCS_PAL_0.png" -lavfi "paletteuse=dither=sierra2_4a" -loop 0 -vsync 0 %1_OCS_0_s.gif
    @ffmpeg.exe -threads %NUMBER_OF_PROCESSORS%*1.5 -i %1_temp.avi -i "OCS_PAL_2.png" -lavfi "paletteuse=dither=sierra2_4a" -loop 0 -vsync 0 %1_OCS_2_s.gif
    What I'm missing is something like Yliluoma's temporally stable dithering.

    http://bisqwit.iki.fi/story/howto/dither/jy/
    Last edited by pandy; 25th Feb 2015 at 09:15.
  10. Do not waste time on Yliluoma's site. It doesn't work as expected. (I spent WEEKS on it, before you ask.)

    To get Bayer output with colors as close to the source image as possible, you have to use a SMALL dither matrix, like 4x4.
    FFmpeg's new Bayer implementation is a fixed 8x8 matrix.

    From FFmpeg doc http://ffmpeg.org/ffmpeg-filters.html#paletteuse
    ‘bayer’
    Ordered 8x8 bayer dithering (deterministic)
  11. Originally Posted by MaverickTse View Post
    Do not waste time on Yliluoma's site. It doesn't work as expected. (I spent WEEKS on it, before you ask.)

    To get Bayer output with colors as close to the source image as possible, you have to use a SMALL dither matrix, like 4x4.
    FFmpeg's new Bayer implementation is a fixed 8x8 matrix.

    From FFmpeg doc http://ffmpeg.org/ffmpeg-filters.html#paletteuse
    ‘bayer’
    Ordered 8x8 bayer dithering (deterministic)
    Well, it looks OK, but I never saw any practical implementation, so perhaps it doesn't work as you say.
    Then what: void-and-cluster? Non-regular matrices? I don't see why this problem was completely ignored. Using classical error diffusion doesn't work for GIF, as noise is not compressible; the dither needs to be temporally stable...
    Aside from that: isn't the rule for a Bayer matrix that for a low number of quantization levels (i.e. a large distance between levels) a big matrix is required, while for a small distance between levels (a large number of levels) it can be smaller (e.g. 256/16, i.e. 8 bit to 4 bit, matrix size 16)?

    By the way, I realize there is "elbg", which may help a bit with what I'm trying to achieve (but it is painfully slow).
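    On the matrix-size question: the standard recursive Bayer construction makes the size/levels relationship explicit, since an NxN matrix yields N*N distinct thresholds, i.e. N*N simulated intermediate levels between two quantization steps. A sketch in Python (my own illustration, not ffmpeg's code):

```python
def bayer(n):
    """Build a 2^n x 2^n Bayer threshold matrix recursively.
    Each doubling of the size quadruples the number of distinct
    thresholds, i.e. the number of simulated intermediate shades."""
    m = [[0]]
    for _ in range(n):
        size = len(m)
        new = [[0] * (2 * size) for _ in range(2 * size)]
        for y in range(size):
            for x in range(size):
                v = 4 * m[y][x]
                new[y][x] = v                    # top-left quadrant
                new[y][x + size] = v + 2         # top-right
                new[y + size][x] = v + 3         # bottom-left
                new[y + size][x + size] = v + 1  # bottom-right
        m = new
    return m
```

    bayer(1) gives the classic 2x2 matrix [[0, 2], [3, 1]]; bayer(3) gives the fixed 8x8 matrix with 64 thresholds that ffmpeg's paletteuse uses.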
    Last edited by pandy; 26th Feb 2015 at 02:26.
  12. It seems that you don't know what ordered dither is... but that's not surprising.

    So I need to explain the fundamentals a bit; then I think you will see why ordered dither, by its nature, deviates from the original color:

    First, the matrix.
    A 3x3 matrix is made up of the numbers 1-9:
    1 2 3
    4 5 6
    7 8 9
    A 4x4 matrix uses 1-16:
    01 02 03 04
    05 06 07 08
    09 10 11 12
    13 14 15 16
    Different algorithms arrange those numbers differently.

    Then, the SIMPLEST ordered-dither implementation:
    (assuming the image is 4x4 pixels, represented as pixel[row][col], indices counting from 1)

    In the case of a 3x3 matrix:

    Code:
    pixel[1][1].red += matrix[1][1]
    pixel[1][1].green += matrix[1][1]
    pixel[1][1].blue += matrix[1][1]
    
    Find_the_Closest_color_of_the_modified_pixel_available_in_palette(pixel[1][1], palette) -> return color_index
    Set new_pixel[1][1]= color_index
    
    pixel[1][2].red += matrix[1][2]
    pixel[1][2].green += matrix[1][2]
    pixel[1][2].blue += matrix[1][2]
    
    Find_the_Closest_color_of_the_modified_pixel_available_in_palette(pixel[1][2], palette) -> return color_index
    Set new_pixel[1][2]= color_index
    ...
    ...
    pixel[1][4].red += matrix[1][1]
    pixel[1][4].green += matrix[1][1]
    pixel[1][4].blue += matrix[1][1]
    
    Find_the_Closest_color_of_the_modified_pixel_available_in_palette(pixel[1][4], palette) -> return color_index
    Set new_pixel[1][4]= color_index
    ...
    ...
    pixel[4][1].red += matrix[1][1]
    pixel[4][1].green += matrix[1][1]
    pixel[4][1].blue += matrix[1][1]
    
    Find_the_Closest_color_of_the_modified_pixel_available_in_palette(pixel[4][1], palette) -> return color_index
    Set new_pixel[4][1]= color_index
    ...
    ...
    pixel[4][4].red += matrix[1][1]
    pixel[4][4].green += matrix[1][1]
    pixel[4][4].blue += matrix[1][1]
    
    Find_the_Closest_color_of_the_modified_pixel_available_in_palette(pixel[4][4], palette) -> return color_index
    Set new_pixel[4][4]= color_index
    So, as the matrix gets larger, the values added to each pixel also get larger. Finding the "closest color" in the palette from this brightened pixel is unlikely to return the palette color that matches the original one.
    With a 3x3 matrix, a pixel can deviate by up to +9; with an 8x8 matrix, by up to +64.
    This is also why many implementations of ordered dither have a serious "clipping" problem:
    with an 8x8 matrix, any color from RGB(191, 191, 191) to RGB(255, 255, 255) becomes WHITE.

    In better implementations, we first subtract from the pixel values according to the matrix size before adding the matrix values, thus getting a closer match to the original.
    Still, the fact that we are adding numbers has not changed, and it gets worse for larger matrices.
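    The scheme above, including the subtraction correction, can be sketched like this in Python (a toy single-channel version of the general technique, not FFmpeg's actual implementation):

```python
# Classic 4x4 Bayer threshold matrix (values 0..15).
BAYER4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def ordered_dither(gray, palette, matrix=BAYER4):
    """Dither a 2D list of 0-255 gray values to the nearest palette level.
    Centering the matrix around zero (subtracting half its maximum)
    is the correction that avoids the brightening/clipping described above."""
    n = len(matrix)
    half = (n * n - 1) / 2.0  # 7.5 for a 4x4 matrix
    out = []
    for y, row in enumerate(gray):
        out_row = []
        for x, v in enumerate(row):
            v2 = v + matrix[y % n][x % n] - half  # centered threshold offset
            # Find the closest palette value for the perturbed pixel.
            out_row.append(min(palette, key=lambda p: abs(p - v2)))
        out.append(out_row)
    return out
```

    With palette [0, 255], a flat mid-gray patch comes out as a half-black, half-white repeating pattern, which is exactly the deterministic "noise" that ordered dither trades color accuracy for.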
  13. As a result, the only way to avoid the "wrong color" problem is to SKIP pixels that already have a very close color in the palette.
    (How you determine "close" is a big topic...)
    That involves another round of "find closest color" (before the addition) and also breaks the ordered-dither pattern.
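    One way to make that skip test concrete (the tolerance and the max-channel-difference metric below are my own arbitrary choices; as noted, picking the metric is a big topic in itself):

```python
def close_enough(pixel, palette, tol=8):
    """Return the index of a palette entry within tol on every channel
    (max-channel-difference closeness test), or None if there is none.
    A pixel with a match would be mapped directly and skip dithering."""
    for i, p in enumerate(palette):
        if max(abs(a - b) for a, b in zip(pixel, p)) <= tol:
            return i
    return None
```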
  14. Originally Posted by MaverickTse View Post
    It seems that you don't know what ordered dither is... but that's not surprising.

    So I need to explain the fundamentals a bit; then I think you will see why ordered dither, by its nature, deviates from the original color:
    Perhaps I don't know how it works; thanks for the explanation.
    My naive view of ordered dither is that the matrix size corresponds to the number of shades (or simulated quantization levels) that can be reproduced and, after averaging (distance/human vision), perceived.
    As of today, the Bayer 8x8 implementation in ffmpeg is useful only when the number of quantization levels is sufficiently high (so for CLUTs of around 256 colors); for small CLUTs it seems only error diffusion provides good results, but with all the problems that error diffusion brings (not video-friendly, especially for compression).
  15. Originally Posted by pandy View Post
    Originally Posted by MaverickTse View Post
    It seems that you don't know what ordered dither is... but that's not surprising.

    So I need to explain the fundamentals a bit; then I think you will see why ordered dither, by its nature, deviates from the original color:
    Perhaps I don't know how it works; thanks for the explanation.
    My naive view of ordered dither is that the matrix size corresponds to the number of shades (or simulated quantization levels) that can be reproduced and, after averaging (distance/human vision), perceived.
    As of today, the Bayer 8x8 implementation in ffmpeg is useful only when the number of quantization levels is sufficiently high (so for CLUTs of around 256 colors); for small CLUTs it seems only error diffusion provides good results, but with all the problems that error diffusion brings (not video-friendly, especially for compression).
    Personally, I found 128 colors sufficient with at most a single major scene change. 64 colors is also acceptable if the video has no scene change.
    (A scene change here just means a drastic change of color composition.)
    Error dither can, of course, lower the color count with perceptively acceptable results.
    Error and ordered dithers exist as a trade-off.

    Many people who test GIF encoders, IMO, often neglect what GIF encoders are commonly used for.

    Code:
    Animated GIF encoders are NOT for:
    ×Lengthy video
    ×High resolution
    ×Complex composition and Active motion
    Code:
    What they are for:
    ○Relatively static image composition
    ○Short clip
    ○Low-Res video
    //
    So testing a GIF encoder on an HD, color-rich video is inappropriate in the sense that the usage is wrong.
    Do not expect good results from it.

    For such demanding usage, wait for WebP to be adopted by the mainstream.
    (The APNG author will be unhappy, but the reality is that no browser plans to support APNG by default, while WebP is already widely supported.)

    If you must use GIF for video, like uploading to Twitter, try:
    ○ keeping the resolution at or under 640x360 or 640x480
    ○ not including scene changes, or splitting the video on scene changes
    ○ keeping the framerate at/under 20fps (P.S. a GIF with less than 20fps will become "invisible" after uploading to Twitter)

    I stress again: DO NOT INCLUDE SCENE CHANGES. They are a killer for GIF encoders.
  16. Originally Posted by MaverickTse View Post
    Personally, I found 128 colors sufficient with at most a single major scene change. 64 colors is also acceptable if the video has no scene change.
    (A scene change here just means a drastic change of color composition.)
    Error dither can, of course, lower the color count with perceptively acceptable results.
    Error and ordered dithers exist as a trade-off.
    Well, the problem is that error diffusion is not suitable for any video, as demonstrated by Yliluoma in this nice example http://bisqwit.iki.fi/story/howto/dither/jy/#ErrorDiffusionDithers

    [Attachment: jittest_floyd.gif, 61.5 KB]

    A classic example: a single pixel change propagates across the whole screen, which is why error diffusion doesn't work efficiently for video. It is good for static pictures, but not for a sequence of pictures. Video needs a temporally stable dither that doesn't have high entropy (ideally both spatially and temporally).
    I've found that 16 colors can be exceptionally good (compared to other color-quantization methods) in many cases when spatial color quantization is used.

    As I pointed out earlier, for difficult cases an RGB332 fixed palette (and a stable dither like a_dither) can give good results. Another approach is to create a "universal" 256-color palette with, for example, NeuQuant: feed it the matrix pictures commonly used for color proofing (they cover various aspects of natural and synthetic images), or, even better, build a mosaic from each significant picture in the video file (assembled from smaller, averaged picture samples covering the whole clip). NeuQuant should then be able to select a predefined palette that covers the most important factors. This is a useful approach where, for example, the video doesn't use neutral lighting, so a "uniformly" distributed RGB matrix/CLUT may give suboptimal results.
    Last edited by pandy; 26th Feb 2015 at 06:58.
  17. Originally Posted by pandy View Post
    Originally Posted by MaverickTse View Post
    Personally, I found 128 colors sufficient with at most a single major scene change. 64 colors is also acceptable if the video has no scene change.
    (A scene change here just means a drastic change of color composition.)
    Error dither can, of course, lower the color count with perceptively acceptable results.
    Error and ordered dithers exist as a trade-off.
    Well, the problem is that error diffusion is not suitable for any video, as demonstrated by Yliluoma in this nice example http://bisqwit.iki.fi/story/howto/dither/jy/#ErrorDiffusionDithers

    [Attachment 30450]

    A classic example: a single pixel change propagates across the whole screen, which is why error diffusion doesn't work efficiently for video. It is good for static pictures, but not for a sequence of pictures. Video needs a temporally stable dither that doesn't have high entropy (ideally both spatially and temporally).
    I've found that 16 colors can be exceptionally good (compared to other color-quantization methods) in many cases when spatial color quantization is used.
    Again, I have warned you NOT to dig into Yliluoma's site.
    I will not honor any reference to it.
    (Sorry to be rude, but I spent weeks trying to re-implement his methods; the code is really slow and the results are far from satisfactory.)

    Conventional error dither did give a lot of flickering, but recent versions of pngquant, and also the sierra dither in this new FFmpeg addition, show only minor flickering. The main problem is the poor compression ratio.

    A general full-spectrum color palette is usually inferior to a customized palette, unless your video has a lot of scene changes and is really color-rich. Otherwise, people wouldn't bother researching color-reduction algorithms...
  18. Originally Posted by MaverickTse View Post
    Again, I have warned you NOT to dig into Yliluoma's site.
    I will not honor any reference to it.
    (Sorry to be rude, but I spent weeks trying to re-implement his methods; the code is really slow and the results are far from satisfactory.)
    It is an illustration of how a single pixel change propagates (like a storm) across the whole screen: touch one pixel and everything changes; try to compress such a thing efficiently. This is why error diffusion is not suitable for any fixed-palette video.

    Originally Posted by MaverickTse View Post
    Conventional error dither did give a lot of flickering, but recent versions of pngquant, and also the sierra dither in this new FFmpeg addition, show only minor flickering. The main problem is the poor compression ratio.
    Flicker may be reduced by a temporal/spatial smoother, but still, error diffusion doesn't work from a compression perspective. Only an ordered dither, stable spatially and temporally, can provide sufficiently low entropy, i.e. a smaller GIF animation.
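    For illustration, the Sierra Lite kernel (ffmpeg's sierra2_4a) can be sketched on a single channel in Python; the error terms show why a one-pixel change in the input can ripple through everything that follows (my own sketch, not ffmpeg's code):

```python
def sierra_lite(gray, palette):
    """Sierra Lite error diffusion on a 2D list of 0-255 gray values.
    Quantization error spreads 2/4 right, 1/4 down-left, 1/4 down,
    so one changed input pixel can perturb all later output pixels."""
    h, w = len(gray), len(gray[0])
    img = [[float(v) for v in row] for row in gray]  # working copy
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            q = min(palette, key=lambda p: abs(p - img[y][x]))
            err = img[y][x] - q
            out[y][x] = q
            if x + 1 < w:
                img[y][x + 1] += err * 2 / 4.0
            if y + 1 < h:
                if x - 1 >= 0:
                    img[y + 1][x - 1] += err / 4.0
                img[y + 1][x] += err / 4.0
    return out
```

    A flat mid-gray field dithered to {0, 255} keeps its average brightness, but the output pattern is noise-like rather than repeating, which is exactly what hurts GIF interframe compression.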

    Originally Posted by MaverickTse View Post
    A general full-spectrum color palette is usually inferior to a customized palette, unless your video has a lot of scene changes and is really color-rich. Otherwise, people wouldn't bother researching color-reduction algorithms...
    That's why a representative set of pictures can be generated as a mosaic and later analyzed with a slow but good color quantizer; the result is a single picture with, for example, 128 colors that can be built and used as the palette. (After the color search, I resize the picture by point resampling to 16x16 and manually place the colors in order; for 16 colors this is simple. I also need to tweak the color LUT values to 4 bits extended to 8 bits, both nibbles the same value.)

    For the CLUT-creation picture, something like this can be used:

    Code:
    @set pix=320
    @set sens=0.075
    @ffmpeg -threads %NUMBER_OF_PROCESSORS%*1.5 -i %1 -vf "select='gt(scene\,%sens%)',scale=%pix%:-1:sws_dither=a_dither:out_range=full:sws_flags=spline,tile=8x8" -vsync 0 %1_preview_%%03d.png
    @pause
    where sens is the scene-change sensitivity; for example, 0.075 is quite sensitive.

    Later, NeuQuant can be applied to create the CLUT.
  19. Member racer-x's Avatar
    Join Date: Mar 2003
    Location: 3rd Rock from the Sun
    Not as clean as I'd like, but good enough to post. 128 colors. Chroma keying was a pain because of the multicolored background.
    [Attachment 30500: spiderweb.gif, 1.63 MB]

    Got my retirement plans all set. Looks like I only have to work another 5 years after I die........
  20. Member racer-x's Avatar
    Join Date: Mar 2003
    Location: 3rd Rock from the Sun
    In case anyone is interested, there is now a new plugin for Paint.NET that allows exporting animated GIFs with independent frame durations and transparency, plus the ability to add a foreground or background to the animation.

    The author is also looking into developing animated WebP export.
    [Attachment 30737: PDN255.gif, 355.0 KB]
    Last edited by racer-x; 15th Mar 2015 at 04:24.
  21. Member
    Join Date: Jan 2016
    Location: Commonwealth
    Actually, recent versions of ffmpeg can produce high-quality GIFs now; here's an example:

    Code:
    #First, generate a representative palette of a given video
    ffmpeg -i clip.mp4 -ss 3:10 -to 3:15 -f yuv4mpegpipe - | ffmpeg -y -i - -lavfi palettegen pat.png
    
    #Second, use the generated png along with the video clip to produce the gif.
    ffmpeg -i clip.mp4 -i pat.png -ss 3:10 -to 3:15 -lavfi fps=15000/1001,paletteuse high_quality.gif
    You can even do it as a one-liner:
    Code:
    ffmpeg -loglevel error -i clip.mp4 -ss 1 -to 15 -an -f yuv4mpegpipe - | ffmpeg -loglevel error -i - -c:v png -lavfi palettegen -f image2pipe - | ffmpeg -hide_banner -i clip.mp4 -i - -ss 1 -to 15 -c:v gif -lavfi paletteuse -y output.gif
    A wrapper written in Python makes it easier to use from the command line.
    Code:
    #!/usr/bin/env python2.7
    
    import subprocess as subp
    import sys
    
    def ffgif(input, output, fps=None, start=None, end=None, ffmpeg_args=[]):
    
    	# -ss/-to are appended below only when actually given, so that
    	# None values never end up in the argument lists.
    	pipeyuvArgList = ['ffmpeg', '-loglevel', 'error', '-i', input]
    	palArgList = ['ffmpeg', '-loglevel', 'error', '-i', '-', '-c:v', 'png']
    	gifArgList = ['ffmpeg', '-hide_banner', '-i', input, '-i', '-', '-c:v', 'gif']
    
    	if start is not None:
    		pipeyuvArgList.extend(['-ss', start])
    		gifArgList.extend(['-ss', start])
    	if end is not None:
    		pipeyuvArgList.extend(['-to', end, '-copyts'])
    		gifArgList.extend(['-to', end, '-copyts'])
    	if fps is not None:
    		pipeyuvArgList.extend(['-vf', 'fps={0}'.format(fps)])
    		gifArgList.extend(['-lavfi', 'fps={0},paletteuse'.format(fps)])
    	else:
    		gifArgList.extend(['-lavfi', 'paletteuse'])
    	if len(ffmpeg_args):
    		gifArgList.extend(ffmpeg_args)
    
    	pipeyuvArgList.extend(['-f', 'yuv4mpegpipe', '-'])
    	palArgList.extend(['-vf', 'palettegen', '-f', 'image2pipe', '-'])
    	gifArgList.extend(['-f', 'gif', output])
    
    	proc1 = subp.Popen(pipeyuvArgList, stdout=subp.PIPE)
    	proc2 = subp.Popen(palArgList, stdin=proc1.stdout, stdout=subp.PIPE)
    	proc1.stdout.close()  # let proc1 receive SIGPIPE if proc2 exits early
    	proc3 = subp.Popen(gifArgList, stdin=proc2.stdout, stdout=sys.stdout)
    	proc2.stdout.close()
    	proc3.wait()
    
    if __name__ == '__main__':
    
    	import argparse
    	import re
    
    	def checkTimeFormat(i):
    		ptn = re.compile(r'^(((0?[0-9]|[1-9][0-9]):)?(0?[0-9]|[1-5][0-9]):)?(0?[0-9]|[1-5][0-9])(\.[0-9]{1,3})?$')
    		if ptn.match(i) is None:
    			parser.error('invalid time format: {}'.format(i))
    		return i
    
    	parser = argparse.ArgumentParser(description='Converting video clip to GIF image.')
    	parser.add_argument('-i', '--input', help='The input media file name', dest='input', metavar='FILE', required=True)
    	parser.add_argument('-o', '--output', help='The output gif file name', dest='output', metavar='FILE', required=True)
    	parser.add_argument('-s', '--start', help='Set the starting point: [[HH:]MM:]SS[.ms]', dest='start', metavar='HH:MM:SS.ms', type=checkTimeFormat)
    	parser.add_argument('-e', '--end', help='Set the ending point: [[HH:]MM:]SS[.ms]', dest='end', metavar='HH:MM:SS.ms', type=checkTimeFormat)
    	parser.add_argument('-f', '--fps', help='Set the FPS of GIF, using floating-point number or fraction', dest='fps', metavar='FLOAT|FRACTION')
    	parser.add_argument('ARGS', help='Specifying additional arguments passing to ffmpeg', nargs='*')
    
    	args = parser.parse_args()
    
    	ffgif(input=args.input, output=args.output, fps=args.fps,
    			start=args.start, end=args.end, ffmpeg_args=args.ARGS)
    The visual quality of the result is way better than ffmpeg's default GIF output. (Of course, the file size gets bigger too.)
    Last edited by meoow; 5th Jan 2016 at 00:58.
  22. Originally Posted by meoow View Post
    Actually, recent versions of ffmpeg can produce high quality GIF now,
    palettegen and paletteuse were already mentioned in post #20.