VideoHelp Forum

  1. I'm sharing some DirectX pixel shader (.hlsl) control code I've written recently:
    The shaders are tested in mpc-hc on an Intel GPU (integrated graphics) and should be easy to use and modify.

    In mpc-hc, you need to install them in the Shaders subfolder (you need write permission on that folder to be able to modify them).
    Changing parameter values requires editing the source file (recompilation is triggered automatically on save). For this, it's best to use a text editor with C syntax highlighting, like Notepad++. You can create multiple versions of the same shader for your preferred parameter values/use cases and switch between them as required.
    With mpc-hc, you can save Shader Profiles (a pre/post-resize shader chain), accessible directly via Right Click > Shaders. Once set, shaders stay active until you disable them, so creating an empty "Disable" Shader Profile is useful.

    So Far:
    - barMask (Custom Mask, Aspect Ratio Mask, + Shift) v1.2:
    no programming required: select mode and adjust parameters in #define

    - MaskCrop: applies a custom mask (Top, Bottom, Left, Right) to the sides of the screen and recenters the image; you could use this to mask a logo at the bottom of the screen, for instance (mpc-hc doesn't have a built-in crop feature).
    - MaskBox: a rectangular masking box.
    - RatioLetterbox: view video at a custom Aspect Ratio, ex: simulate a 21/9 display on a standard monitor.
    - OffsetPillarbox: view video at a vertical aspect ratio (ex: 9/16, 1/1, etc.) on a standard landscape monitor, with borders on the sides. You can offset the video to compensate for a non-centered source.
    - Image Shift.
    The borders are image zones, which means you can apply any effect to them (not just colored borders).

    - bStrobe (repeating Effect with configurable timings) v1.0
    Ready to use timed color frame Effect.
    Parameters: ScreenColor, Time of first run (Tfirst), Duration (Td), Repetition period (Tcycle), End (Number of runs, End time or No End).

    More shaders at
    - bSide (ZoneShader_mpc) v1.0
    Side by Side and Box display of Effects

    - bTimeEffect_DePixelate
    a Pixelation-DePixelation Effect/glitch.
    Last edited by butterw; 7th Jul 2020 at 04:01.
  2. HLSL is a specialized (simple/powerful but limited) programming language with vector operations and C syntax, which runs in realtime on the GPU.
    The input video is processed frame by frame. Each frame is processed pixel by pixel by the main function of the .hlsl file.
    The basic scalar datatype is float (float32); float2 and float4 are vector datatypes.
    A pixel has float2 (x, y) coordinates and a float4 (r, g, b, a) color (the alpha channel isn't used by mpc-hc).
    Input/output values are in the range [0, 1.]
    /* Code for the Invert.hlsl shader in dx9 */
    sampler s0; // the input frame

    float4 main(float2 tex: TEXCOORD0): COLOR { // Input: tex, coordinates (x, y) of the current pixel
        float4 c0 = tex2D(s0, tex); // get the rgba color of pixel tex
        return float4(1, 1, 1, 1) - c0; // Output: inverted color (r, g, b, a) of the current pixel
    }

    If required, you can also access (all floats):
    - the frame dimensions W, H, and the pixel dimensions px, py
    - the frame Counter or the seconds Clock, which start at 0 and are updated each frame.
    There is not much more to it, except that you can use #define expressions and pre-compiler instructions:
    #define Red float4(1., 0, 0, 0) // defines the constant color Red
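    As an illustration of the Clock variable and #define parameters, here is a hypothetical fade-from-black shader (Tfade is a made-up parameter; the p0 constant layout follows the usual mpc-hc dx9 shader convention):

```hlsl
// Sketch: fade in from black over the first Tfade seconds.
sampler s0;
float4 p0 : register(c0); // mpc-hc convention: p0 = (W, H, Counter, Clock)
#define clock (p0[3])
#define Tfade 2. // fade duration in seconds, edit in the source file

float4 main(float2 tex: TEXCOORD0): COLOR {
    float4 c0 = tex2D(s0, tex);
    return c0 * saturate(clock / Tfade); // ramps from 0 to 1, then stays at 1
}
```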

    - runs on uncompressed realtime video; the GPU needs to be able to handle the texture lookups and arithmetic operations used. Not a problem at 1080p, even on integrated graphics, unless you need to access many other pixels to calculate each output pixel.
    - no built-in random number generator (PRNG).

    In mpc-hc:
    - you can only access the current frame!
    - you cannot access any external file.
    - no external parameters: parameter values are defined in the source file.
    - no persistent variables.
    - in a post-resize shader, coordinate calculations are off when not in fullscreen mode. Also, you can't access the source resolution; you get the screen resolution.
    - the output frame has the same Aspect Ratio as the input frame.
    - a shader is a single source file with no imports.
    - single pass per file. You can however create a shader chain preset and even pass data between shaders using the alpha channel.
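    To illustrate the alpha-channel trick from the last point, a minimal two-shader sketch (the stored value, luma, is just an example; assumes the renderer preserves alpha between chained shaders):

```hlsl
// Pass1.hlsl (sketch): stash per-pixel luma in the otherwise unused alpha channel.
sampler s0;

float4 main(float2 tex: TEXCOORD0): COLOR {
    float4 c0 = tex2D(s0, tex);
    c0.a = dot(c0.rgb, float3(0.299, 0.587, 0.114)); // BT.601 luma
    return c0; // alpha isn't displayed by mpc-hc, so it can carry data
}

// Pass2.hlsl (sketch): the next shader in the chain reads the value back.
// float4 main(float2 tex: TEXCOORD0): COLOR {
//     float4 c0 = tex2D(s0, tex);
//     return float4(c0.aaa, 1); // visualize the luma computed by Pass1
// }
```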
    Last edited by butterw; 19th Jul 2020 at 13:48. Reason: updated Limitations
  3. Thanks for that.

    I know nothing about hlsl, but I did manage to fiddle with one of the existing pixel shaders. Feel free to add it to your collection.... or not.....

    BT.601 to BT.709 [SD][HD]
    Converts the colors regardless of resolution. The original pixel shader only worked for HD, as it was intended to correct a video card driver problem around 700 years ago, but it's handy for correcting SD video encoded from HD sources as rec.709, when the renderer is ignoring any colorimetry information, or when the video was encoded with Xvid so it doesn't have any.
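    This isn't the attached file, but a minimal sketch of the usual approach: re-encode RGB to YCbCr with the BT.601 matrix, then decode with the BT.709 matrix (applied directly to gamma-encoded RGB, as these simple shaders do):

```hlsl
// Sketch of a resolution-independent BT.601 -> BT.709 correction.
sampler s0;

float4 main(float2 tex: TEXCOORD0): COLOR {
    float3 c = tex2D(s0, tex).rgb;
    // RGB -> YCbCr, BT.601 coefficients
    float y  = dot(c, float3(0.299, 0.587, 0.114));
    float cb = (c.b - y) / 1.772;
    float cr = (c.r - y) / 1.402;
    // YCbCr -> RGB, BT.709 coefficients
    float3 o;
    o.r = y + 1.5748 * cr;
    o.g = y - 0.1873 * cb - 0.4681 * cr;
    o.b = y + 1.8556 * cb;
    return float4(o, 1);
}
```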
    Attached Files
  4. I'm guessing at least some avisynth methods could be ported to pixel shaders.

    I saw your FrostyBorders script in your sig. May I ask what method you use to fill these borders?

    A heavy 2D gaussian blur would be too gpu intensive for integrated graphics, I am looking for cheaper alternatives.
    Originally Posted by butterw:
    A heavy 2D gaussian blur would be too gpu intensive for integrated graphics, I am looking for cheaper alternatives.
    The final resizing and blurring steps for one edge are below (the script defaults).
    The main filtering prior to that is TemporalSoften(7, 255, 255, 40, 2).

    Frosty_Left = Blend_Left.GaussResize(BL, OutHeight)\
    .FastBlur(Blur=50, iterations=3, dither=true)\
    .AddGrain(var=5.0, constant=true)

    It's closed source though. I can't remember if there's a reason for that. You might have to contact wonkey_monkey to see if he'll share source code if you need it.

    The script originally blurred by running the borders through the QGaussBlur function a couple of times (buried here), but then I discovered FastBlur and the result was better.
    Last edited by hello_hello; 29th Jun 2020 at 12:35.
  6. The unfocused effect looks nice, but big-radius blurs don't come cheap (they require a huge kernel size, > 5*sigma).

    fastblur uses 3 passes of boxblur to approximate a gaussian blur.
    The implementation of boxblur can be heavily optimized (the optimizations are different on cpu and gpu).
    Also, for a gaussian blur you have to hardcode the filter kernel coefficients in the pixel shader, which is troublesome when you want to try different parameters.
    Game engines have heavily optimized multipass shader implementations of blur, but even then they rely on the power of a discrete gpu.
    I don't know if a perf-optimized implementation can run even at 720p30 on old integrated graphics.

    Naive implementation of a single-pass 2D boxblur (3x3 kernel) with a gpu pixel shader:
    - Mean function: the effect is barely visible with a single pass of such a small kernel.
    - Per pixel: 9 texture lookups, 15 arithmetic operations.
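    The naive version above can be sketched as follows (px, py are taken from the usual mpc-hc p1 constant, which holds the pixel dimensions 1/W, 1/H):

```hlsl
// Naive single-pass 3x3 box blur (mean filter): 9 texture lookups per pixel.
sampler s0;
float4 p1 : register(c1); // mpc-hc convention: p1 = (1/W, 1/H, ...)
#define px (p1[0])
#define py (p1[1])

float4 main(float2 tex: TEXCOORD0): COLOR {
    float4 acc = 0;
    for (int i = -1; i <= 1; i++)
        for (int j = -1; j <= 1; j++)
            acc += tex2D(s0, tex + float2(i * px, j * py));
    return acc / 9.; // mean of the 3x3 neighborhood
}
```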

    Better results with an optimized 2-pass gaussian blur (9-tap): bShaders\blurGauss.hlsl >> blurGauss_Y.hlsl
    - Per pixel: 2 passes * (5 texture, 8 arithmetic)
    To increase the blur, you can apply the shader multiple times (ex: 5 times would be equivalent to a 45-tap shader).
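    A sketch of one such pass, using the well-known linear-sampling trick to reduce a 9-tap kernel to 5 bilinear lookups (the weights/offsets below are the standard published values, not necessarily those of blurGauss.hlsl):

```hlsl
// Horizontal pass of a separable 9-tap gaussian blur, 5 texture lookups.
// A second pass with the offsets applied along y completes the blur.
sampler s0;
float4 p1 : register(c1); // mpc-hc convention: p1 = (1/W, 1/H, ...)
#define px (p1[0])

static const float offset[3] = {0.0, 1.3846153846, 3.2307692308};
static const float weight[3] = {0.2270270270, 0.3162162162, 0.0702702703};

float4 main(float2 tex: TEXCOORD0): COLOR {
    float4 acc = tex2D(s0, tex) * weight[0];
    for (int i = 1; i < 3; i++) { // each lookup between 2 texels averages them via bilinear filtering
        acc += tex2D(s0, tex + float2(offset[i] * px, 0)) * weight[i];
        acc += tex2D(s0, tex - float2(offset[i] * px, 0)) * weight[i];
    }
    return acc;
}
```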

    For heavier blurs with pixel shaders, the Kawase (or Dual Kawase) method seems to be the way to go:

    Optimized boxblur (3x) separable filter, radius parameter k, kernel size 4*k-1:
    (2*k texture, 17 arithmetic) * 6 iterations, with downscaling/upscaling.
    Last edited by butterw; 7th Jul 2020 at 17:45. Reason: added intel link, boxblur info
