VideoHelp Forum




  1. Hello friends,
    I'm facing a problem with artifacts when converting the frame rate to 59 fps using Twixtor Pro in Vegas. Unfortunately I don't have a video card at the moment, so I can't do the conversion with RIFE in Avisynth; for now this is my only option.
    Twixtor gives me artifacts at the top of the screen, visible at 22 seconds into the video and again from 27 to 29 seconds.
    Is there a configurable option in Twixtor to avoid this side effect?

    By the way, if anyone suggests any visual improvements for this video, I would appreciate it.
    Image Attached Files
  2. For twixtor - changing the warp mode to forward instead of inverse/smart blend should help .

    But you have other problems than those artifacts - duplicate frames stuttering every 5th/6th, artifact/blended scene changes, overdenoised, oversharpened - looks like "waterpainting" effect . Try to decimate the duplicates first, and don't denoise or sharpen so much
  3. Originally Posted by poisondeathray View Post
    For twixtor - changing the warp mode to forward instead of inverse/smart blend should help .

    But you have other problems than those artifacts - duplicate frames stuttering every 5th/6th, artifact/blended scene changes, overdenoised, oversharpened - looks like "waterpainting" effect . Try to decimate the duplicates first, and don't denoise or sharpen so much
    Ok, could you suggest a script on the 64-bit Avisynth platform so I can get rid of the duplicates?
  4. Originally Posted by Roberto.marinho View Post
    Ok, could you suggest a script on the 64-bit Avisynth platform so I can get rid of the duplicates?
    It depends on what you started with before processing. Ideally you would remove duplicates very early on in the workflow, because duplicates negatively affect other processes such as temporal filters and denoising

    If you perform decimation after the interpolation on the MKV, it would be TDecimate(Cycle=6), because you want 1 in 6 decimation
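    Conceptually, TDecimate(Cycle=6) examines each group of 6 frames and drops the one most similar to its predecessor. A rough Python sketch of the idea (a toy model only, not the actual TIVTC decision logic; the function and the difference metric are made up for illustration):

```python
def decimate_1_in_6(frames, diff):
    """Drop, from each group of 6 frames, the one with the smallest
    difference to its predecessor (the likeliest duplicate).
    Toy model of TDecimate(Cycle=6)."""
    kept = []
    for start in range(0, len(frames) - len(frames) % 6, 6):
        cycle = range(start, start + 6)
        dup = min(cycle, key=lambda i: diff[i])  # most-similar frame in this cycle
        kept.extend(frames[i] for i in cycle if i != dup)
    return kept

# 12 frames where every 6th frame is a duplicate (difference 0):
# one frame per cycle is dropped, so 10 of 12 survive.
frames = list(range(12))
diff = [9, 9, 9, 9, 9, 0, 9, 9, 9, 9, 9, 0]
print(decimate_1_in_6(frames, diff))  # [0, 1, 2, 3, 4, 6, 7, 8, 9, 10]
```

    The real filter computes the frame-difference metrics itself; this just shows why the cycle size has to match the duplicate pattern.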
  5. Originally Posted by poisondeathray View Post
    It depends on what you started with before processing. Ideally you would remove duplicates very early on in the workflow, because duplicates negatively affect other processes such as temporal filters and denoising

    If you perform decimation after the interpolation on the MKV, it would be TDecimate(Cycle=6), because you want 1 in 6 decimation
    It would be before - I'm redoing the whole process from the beginning. What script could I use for this purpose?
  6. Originally Posted by Roberto.marinho View Post
    It would be before - I'm redoing the whole process from the beginning. What script could I use for this purpose?
    It depends on what you start with. What is the pattern of duplicates? Or post a sample
  7. Poisondeathray is exactly right: you MUST deal with the frame rate first. With any project, the very first thing you must do is make sure you have exactly one frame of video each time you step forward one frame. If you find that you have duplicates, where nothing happens when you step forward, or blends, where you see images from two adjacent frames, you have to deal with those issues before you do anything else or you will end up with a mess, like what you posted.

    Also, I don't know RIFE, but it appears to generate the same artifacts as all the other motion estimation tools that I've used over the past twenty years. The classic is the garbage around the rifle when the video cuts to a closeup of the dancers.

    Post the original footage of just the closeup shot (which is much easier to analyze). Make sure to not re-encode it. Use a tool which simply cuts the video. Since this appears to be video and not film, I suspect this may be a PAL <--> NTSC frame rate issue.
  8. Originally Posted by johnmeyer View Post
    Poisondeathray is exactly right: you MUST deal with the frame rate first. [...] Post the original footage of just the closeup shot (which is much easier to analyze).
    You're absolutely right! I started the entire project from scratch, trying to restore the frames after deinterlacing with QTGMC, using "restorefps". In the end, I'll double the frame rate using Hybrid.
    But now comes the most complicated part of this restoration: changing the colors and gamma of the video, since this raw file barely has the original colors that the show broadcast. There is a color remaster that someone did years ago, but it was heavily compressed, making the resolution look like crap, so I first tried to emulate the colors of that remaster using this script:
    Code:
    ColorYUV(gamma_y=60, off_y=-18)
    ConvertToRGB(matrix="PC.601").RGBAdjust(r=190.0/167.0, g=163.0/126.0, b=140.0/87.0).ConvertToYV12(matrix="PC.601")
    Tweak(sat=1.4)
    followed by a color finalization in Vegas Pro.
    However, when increasing the levels and changing the colors, the video gives me several problems (banding, loss of detail...). Would it be possible to emulate the colors of the sample I will present, using techniques that would not make the video look like my first sample with the "water paint" effect?
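    For context, those RGBAdjust values are target/source ratios per channel (e.g. 190/167 for red). A quick Python sketch of what such a multiplier does to one 8-bit channel value - illustrative only, and the helper name is made up:

```python
def apply_gain(value, target, source):
    """Scale one 8-bit channel value by a target/source ratio,
    like the multiplier form of RGBAdjust (toy model), clipping to 0-255."""
    return min(255, max(0, round(value * target / source)))

# The red gain 190/167 lifts midtones but clips anything above ~224:
print(apply_gain(167, 190, 167))  # 190 - the source level lands on the target level
print(apply_gain(230, 190, 167))  # 255 - bright reds clip: one way detail gets lost
```

    That clipping at the top end is one of the reasons strong level pushes at 8 bits lose detail.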

    Attached are 2 videos, (The original interlaced source and the sample showing the colors I want in the video)
    Image Attached Files
  9. Deinterlacing is NOT what you need to do. You need to do inverse telecine. They are two different things.
  10. This was a blended convertfps style conversion from "NTSC" to "PAL", much like discussed in the other thread. Deinterlacing is the correct thing to do in this case, because you need access to all the fields in order for restorefps to work to "undo" the blends. The "ideal" restorefps value is different for that sample. It's closer to 0.5-0.6, different than the other thread. I would check the whole thing, or you might have to divide it up into sections for the clearest deblending
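    The idea behind restorefps-style deblending, roughly: a blended frame is a weighted mix of two originals, so given a neighbouring clean frame and the blend weight you can solve for the missing original. A simplified per-pixel Python sketch (the real filter has to estimate the weight per frame, which is why the "ideal" value varies between sections; names are made up):

```python
def unblend(blended, clean_prev, weight):
    """Solve blend = weight*A + (1-weight)*B for B, given the clean
    frame A. Frames are flat lists of 8-bit pixel values (toy model)."""
    return [min(255, max(0, round((b - weight * a) / (1 - weight))))
            for b, a in zip(blended, clean_prev)]

# Two original frames and a 50/50 blend of them:
frame_a = [100, 120, 140]
frame_b = [200, 60, 80]
blend = [round(0.5 * a + 0.5 * b) for a, b in zip(frame_a, frame_b)]
print(unblend(blend, frame_a, 0.5))  # recovers frame_b: [200, 60, 80]
```

    With a wrong weight the recovered frame keeps ghosting, which is why the value has to be tuned per section.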


    "3,Colors.VOB" as a "target" has the wrong levels for DVD or normal video - it uses full range instead of "normal" range where reference black is Y=16, white is Y=235 (ie. in vegas you go below 0 IRE, and above 100 IRE). On most displays you will clip shadow and highlight detail. I get it - there is a subset of people that like that oversaturated, high contrast look, but it's technically wrong, technically "illegal" and you lose details on most displays. Are you sure this is what you want? Or maybe you want to purposely do this to avoid seeing the details/noisy shadows instead of dealing with it ?
    Last edited by poisondeathray; 8th Mar 2025 at 12:23.
  11. Thanks. I stand corrected.
  12. Originally Posted by johnmeyer View Post
    Thanks. I stand corrected.
    Still wrong?
    I did deinterlacing with QTGMC + restorefps
    Image Attached Files
  13. Originally Posted by poisondeathray View Post
    This was a blended convertfps style conversion from "NTSC" to "PAL", much like discussed in the other thread. [...] Are you sure this is what you want? Or maybe you want to purposely do this to avoid seeing the details/noisy shadows instead of dealing with it?
    So, my friend,

    I like this color level because, for me, the raw file has very dull colors. However, I don't know how to change the levels that much without causing so many quality problems. I'm very new to Avisynth techniques, but from what I've noticed, if I'm not mistaken, converting the colors to RGB makes the image lose a lot of quality - or is it just me?

    Is there a better adjustment script to get to a level close to the "3,Colors.VOB" sample without causing so many side effects, or would it really be impossible?
  14. Originally Posted by Marcio.ciconne View Post
    I like this color level because, for me, the raw file has very dull colors. However, I don't know how to change the levels that much without causing so many quality problems.
    Whenever you make color adjustments to something, you are usually limited by the noise and compression artifacts that are already in the source. The more you alter something in the grade, the more artifacts and garbage get enhanced. You are increasing contrast, so the artifacts' contrast gets increased and they become more visible as well. This is the main issue in this scenario in terms of artifacts - they are already present, and in some frames quite bad

    Another factor is 8bit color manipulation - you introduce banding. You can see gaps in the waveform/histograms as you make manipulations. You can work at a higher bit depth and dither on the downconversion, and/or add noise or grain to reduce the visible problem. But that is a relatively minor issue compared to the above
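    Those histogram gaps are easy to demonstrate with a toy Python example - applying a gain directly at 8 bits leaves output codes that can never be produced, which is exactly what shows up as gaps in the waveform/histogram:

```python
# Applying a gain directly at 8 bits: because the step between
# consecutive outputs is > 1, some output codes are never produced.
gain = 1.3
produced = {min(255, round(v * gain)) for v in range(256)}
gaps = [code for code in range(256) if code not in produced]
print(len(gaps))  # dozens of unreachable codes -> visible as histogram gaps / banding
```

    Doing the same multiply at 10 or 16 bits and only rounding once at the end (with dithering) is what avoids most of these gaps.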

    Another factor is deblending - blurring and blending can help obscure artifacts. When you deblend and align images, the image becomes clearer, but some artifacts also become clearer

    In that DVD source, the scenes with higher motion tend to have more problems from compression artifacts - this is expected. But if you denoise/deblock/deband using 1 set of settings, you will degrade the "good" sections more than you have to, predisposing you to the "waterpainting" effect. Ideally you would filter different sections differently: in the cleaner sections you don't want to denoise so heavily; for the frames with heavy artifacts, apply stronger filters


    I'm very new to Avisynth techniques, but from what I've noticed, if I'm not mistaken, converting the colors to RGB makes the image lose a lot of quality - or is it just me?
    It depends on how you convert YUV to RGB. A standard conversion (limited range YUV to full range RGB) will clip IRE values <0, >100 - or code values Y<16, Y>235, such as in "3,Colors.VOB", which has values in that range. ie. you lose 0-15 and 236-255. Working with full range YUV to full range RGB in 10bit/16bit will avoid some of the range clipping and quality issues. You need to use 32bit float for the YUV to RGB conversion to be reversible and lossless. Many filters/programs are not compatible with 32bit float, and many programs do not do the conversion correctly



    Is there a better adjustment script to get to a level close to the "3,Colors.VOB" sample without causing so many side effects, or would it really be impossible?
    You always get "side effects" from manipulating color. The more manipulation, the more problems. The more source artifacts and noise already present, the more problems. And you end up denoising more, degrading more, then trying to resharpen = waterpainting. It's a vicious cycle.

    If matching "3,Colors.VOB" colors is your main goal, I already suggested doing it in other programs like Resolve in your other thread. But you should be aware the "reference" is technically "wrong" because it has "illegal" levels . Also, "3,Colors.VOB" reference does not match photos of the concert - maybe that's what you want - but it looks less authentic to me compared to the photos. The stage lighting is colored but not as severe as "3,Colors.VOB"
    https://en.wikipedia.org/wiki/Re-Invention_World_Tour

    You can export a LUT from whatever program (e.g. Resolve, an NLE of your choice, etc.) and apply it in avisynth with full range steps, then convert to limited for the final one - it will be closer than trying to use avisynth filters directly (at least I find it more difficult to match colors in avisynth compared to other programs)

    Apply this LUT before whatever you used to denoise or process the prores step (something happened with your prores sample - highlight compression compared to the source), but after double rate deinterlacing and restorefps. You might have to adjust your denoising / other filters a bit, and the end pixel format (I used YUV420P8 for the demo). If you don't have an Nvidia GPU, you can use AVSCube instead of DGCube.

    Code:
    .
    .
    .
    z_convertformat(pixel_type="RGBP16", colorspace_op="170m:709:709:f=>rgb:709:709:f")
    DGCube("PATH\roughgrade.cube", in="full", lut="full", out="full", interp="tetrahedral")
    z_convertformat(pixel_type="YUV420P8", colorspace_op="rgb:709:709:f=>170m:709:709:f")
    .
    .
    .
    This is not AR corrected, and no denoiser or other processing was applied - the main purpose of this comparison was to match the colors more closely to "3,Colors.VOB". (Personally I would not do it this way, especially because of the "illegal" range)
    Image Attached Files
  15. Originally Posted by poisondeathray View Post
    Originally Posted by Marcio.ciconne View Post

    I'm very new to Avisynth techniques, but from what I've noticed, if I'm not mistaken, when converting colors to RGB the image loses a lot of quality, or is it just me?
    It depends on how you convert YUV to RGB. A standard conversion (limited range YUV to full range RGB) will clip IRE values <0 , >100. Or code values Y <16, Y>235, ....
    For what it's worth, here is a demo of what gets damaged when applying a standard 8bit limited YUV -> full range RGB conversion to the original .VOB source. The cyan pixels in the right panel indicate the damaged pixels, meaning those which had one or several (R,G,B) components clipped by the YUV->RGB conversion.
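    The per-pixel check behind such a demo can be sketched in Python - a BT.601 limited-YUV to full-range-RGB conversion, flagging pixels where any component falls outside 0-255 (approximate coefficients; the helper name is made up):

```python
def yuv_to_rgb_check(y, u, v):
    """BT.601 limited-range YUV -> full-range RGB (approximate
    coefficients). Returns the clipped RGB plus a flag marking
    whether any component fell out of range ("damaged")."""
    r = 1.164 * (y - 16) + 1.596 * (v - 128)
    g = 1.164 * (y - 16) - 0.392 * (u - 128) - 0.813 * (v - 128)
    b = 1.164 * (y - 16) + 2.017 * (u - 128)
    damaged = any(c < -0.5 or c > 255.5 for c in (r, g, b))
    rgb = tuple(min(255, max(0, round(c))) for c in (r, g, b))
    return rgb, damaged

# A mid gray survives; a dark but heavily saturated pixel - with legal
# YUV values (Y=30, V=240) - still lands outside the RGB cube and
# gets a component clipped:
print(yuv_to_rgb_check(126, 128, 128))    # ((128, 128, 128), False)
print(yuv_to_rgb_check(30, 128, 240)[1])  # True
```

    The second case illustrates the point discussed below: YUV triplets can be inside the legal 16-235/16-240 ranges yet still map outside the RGB block.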
    Image Attached Files
  16. Originally Posted by Sharc View Post
    For what it's worth, here is a demo of what gets damaged when applying a standard 8bit limited YUV -> full range RGB conversion to the original .VOB source.
    Keep in mind that that particular source has lots of illegal YUV combinations that are otherwise within range - Y within 16-235, and U/V within 16-240.
  17. Originally Posted by jagabo View Post
    Keep in mind that that particular source has lots of illegal YUV combinations that are otherwise within range - Y within 16-235, and U/V within 16-240.
    Absolutely. It has both some illegal YUV violating the limited YUV range, plus many YUV which are well within the limited range (see the histogram) but outside the inner RGB block in the YUV cube. It would be less dramatic for a limited YUV -> limited RGB conversion (for editing, all in the 8bit integer realm).
    (Sometimes there is confusion about the usage and meaning of the term "legal", I think)
    Last edited by Sharc; 9th Mar 2025 at 13:52.
  18. Originally Posted by poisondeathray View Post
    You can export a LUT from whatever program (e.g. Resolve, NLE of your choice etc..) and apply it in avisynth with full range steps, then convert to limited for the final one [...] If you don't have Nvidia GPU, you can use AVSCube instead of DGCube.

    Thank you for all the clarification.

    I don't know how to run the LUT in Avisynth. I extracted the roughgrade.cube file to the desktop.
    I am sending two images with the errors.
    Image Attached Thumbnails: 1.png, 2.png
  19. "Cannot init CUDA"

    Do you have a supported Nvidia GPU ? Maybe you need to update drivers

    If no supported Nvidia GPU, you can use AVSCube
    http://avisynth.nl/index.php/AVSCube
  20. Originally Posted by poisondeathray View Post
    "Cannot init CUDA"

    Do you have a supported Nvidia GPU ? Maybe you need to update drivers

    If no supported Nvidia GPU, you can use AVSCube
    http://avisynth.nl/index.php/AVSCube
    I don't have a GPU at the moment. I downloaded AVSCube to use the processor instead, but I'm lost on how to run the filter
    Image Attached Thumbnails: 3.png
  21. Instead of DGCube, call it with Cube. The other default settings are the same and can be left out

    Code:
    Cube("PATH\roughgrade.cube")
  22. Originally Posted by poisondeathray View Post
    Instead of DGCube, call it with Cube. The other default settings are the same and can be left out

    Code:
    Cube("PATH\roughgrade.cube")

    The LUT colors are incredibly close,
    but the 8-bit manipulation on this source really gives me a lot of gaps in the histograms, as you said. I just don't technically understand the part where you said "You can work at the higher bit depth and dither on the downconversion". Is there any filter I can use to fill these gaps, or would it just be debanding filters like "GradFun3()" or grain?

    There is another source that manipulated the colors and managed to even out these gaps, and even added artificial details. I don't know what technique was used in this case, especially for the added details
    Image Attached Files
  23. Originally Posted by Marcio.ciconne View Post

    The LUT colors are incredibly close, but the 8-bit manipulation on this source really gives me a lot of gaps in the histograms, as you said. I just don't technically understand the part where you said "You can work at the higher bit depth and dither on the downconversion". Is there any filter I can use to fill these gaps, or would it just be debanding filters like "GradFun3()" or grain?
    Most of the problems probably occur after your other filters, like denoising

    If you can, use higher bit depth filters, and the downconversion can use dithering such as error diffusion (Floyd-Steinberg)

    Depending on your other filters used, the downconversion and pixel format conversion steps can use dithering for the bit depth conversion. For the demo I used YUV420P8, but you should use a higher bit depth for the other steps such as denoising, if it is supported by those filters. In general, 10 or 16bit filtering will have fewer additional problems with banding introduced by the filters and calculations (you started with a crappy 8bit source; working at a higher bit depth won't magically make the problems disappear, it just reduces the additional problems caused by 8bit manipulations)

    So RGBP16 (16bit RGB) gets converted to 10bit 4:2:0 YUV, using error diffusion for the dithering. You might use 16bit if your other filter steps accept that pixel format.
    Code:
    z_convertformat(pixel_type="YUV420P10", colorspace_op="rgb:709:709:f=>170m:709:709:f", dither_type="error_diffusion")

    But yes, moderate to heavy denoising would require additional debanding such as the GradFun family, or F3KDb, or grain at the very end


    There is another source that manipulated the colors and managed to even out these gaps, and even added artificial details. I don't know what technique was used in this case, especially for the added details
    "Source.mkv" has blends, ghosting, duplicates, and the contrast is lower with legalized levels. The black level is slightly too high, so it looks washed out. It has some grain, so it doesn't look as "plasticky". It's likely a machine learning application, likely VEAI - you can tell by the distinctive eye artifacts in some of the side shots
  24. Originally Posted by poisondeathray View Post
    Most of the problems probably occur after your other filters, like denoising [...] moderate to heavy denoising would require additional debanding such as the GradFun family, or F3KDb, or grain at the very end

    Sorry to be giving you so much trouble - could you suggest a denoiser and debanding method that is ideal for this situation?

    I couldn't work out the syntax for the "Neo_f3kdb" filter, and when I try to run any of the "GradFun" family with YUV420P10, Avisynth doesn't seem to support it and closes by itself
  25. Originally Posted by Marcio.ciconne View Post
    Sorry to be giving you so much trouble - could you suggest a denoiser and debanding method that is ideal for this situation?
    No - because all the filtering - denoising, debanding , sharpening etc... and/or using machine learning filters - depends on subjective personal taste.

    e.g You like certain color manipulations, but I dislike them. You might like the look of some filters, I might dislike them

    The only step that I would consider mandatory is the deblending step


    I couldn't work out the syntax for the "Neo_f3kdb" filter, and when I try to run any of the "GradFun" family with YUV420P10, Avisynth doesn't seem to support it and closes by itself
    http://avisynth.nl/index.php/Neo_f3kdb
    http://avisynth.nl/index.php/F3kdb

    You call them with neo_f3kdb or F3kdb, and adjust the settings as in the description

    For GradFun, there is a GradFun3DbMod based on GradFun3 that supports high bit depth
    https://github.com/Asd-g/AviSynthPlus-Scripts/blob/master/GradFun3DBmod.avsi
  26. Originally Posted by poisondeathray View Post
    You call them with neo_f3kdb or F3kdb, and adjust the settings as in the description [...] For GradFun, there is a GradFun3DbMod based on GradFun3 that supports high bit depth

    I don't understand how to configure neo_f3kdb or F3kdb as explained on the page,

    And I don't know how to run "GradFun3DbMod" - I downloaded the avsi file and added it to the folder, but I don't know what I might be doing wrong
    Image Attached Thumbnails: 1.png, 3.png
    In avisynth the input clip argument can use "implied last", ie whatever clip preceded it. Otherwise you can use "last", or some other clip variable

    Code:
    .
    .
    neo_f3kdb(range=15, Y=64)

    ex_luts is found in dogway's ex_tools
    https://github.com/Dogway/Avisynth-Scripts/blob/master/ExTools.avsi
  28. Originally Posted by poisondeathray View Post
    In avisynth the input clip argument can use "implied last", ie whatever clip preceded it [...]
    Thank you very much


