I bought a new Nvidia GTX 980 graphics card and it's pretty powerful.
Is there a setting or way to have Avisynth use the graphics card's power along with the computer's CPU to give it a speed boost when encoding?
Only a few filters have partial GPU acceleration, e.g. NLMeansCL, FFT3DGPU, InterFrame/SVPflow... maybe a few others.
Some source filters can use the GPU for decoding and may free up CPU resources to speed up encoding (but this might also decrease speed, depending on the situation and setup, because the decoder's FPS is capped). E.g., if you have a license, the DGDecNV tools can use compatible Nvidia cards -
think of them as DGIndex or DGAVCIndex that runs on Nvidia cards.
It supports Maxwell, Kepler, and older models as far back as the GTX 2xx series. It doesn't use the CUDA cores for compute; it uses the VPx (VP5, VP4, VP3, etc.) engine to offload decoding. So a $100 card with the same VPx engine will be as fast as a $1000 card in this respect.
Here's a list of the GPU-based filters for Avisynth that I know of:
- Source filter:
- Frame interpolation:
  - InterFrame - http://www.spirton.com/interframe/
- gpu25 - http://www.avisynth.info/?%E3%82%A2%E3%83%BC%E3%82%AB%E3%82%A4%E3%83%96#x44606bb
- GPU - http://www.avisynth.info/?%E3%82%A2%E3%83%BC%E3%82%AB%E3%82%A4%E3%83%96#x44606bb
- AviShader - http://www.avisynth.info/?%E3%82%A2%E3%83%BC%E3%82%AB%E3%82%A4%E3%83%96#x44606bb
- nnedi3ocl - http://forum.doom9.org/showthread.php?t=169766
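As a concrete illustration, here is a minimal sketch of a script using one of the GPU filters above (FFT3DGPU for denoising). The plugin path, source file, and sigma value are placeholders/assumptions, not a tested configuration:

```
# Sketch only: plugin path, source file and parameters are placeholders
LoadPlugin("C:\Avisynth\plugins\FFT3dGPU.dll")
AviSource("input.avi")
ConvertToYV12()          # FFT3DGPU expects a YUV planar clip
FFT3dGPU(sigma=2.0)      # denoising runs on the GPU via Direct3D shaders
return last
```

Only the filter call itself runs on the GPU; everything before and after it is still handled by the CPU, which is why the overall speedup is usually modest.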
a. Since 32-bit Avisynth is limited to 4 GB of RAM, anything more than 4 GB of free RAM can't help Avisynth at all.
b. You can't set the RAM a specific filter uses; you can only set the total RAM (via SetMemoryMax) that Avisynth itself is allowed to use for variable handling. In my experience, using more than 1024 MB normally doesn't show any benefit, so I wouldn't recommend setting SetMemoryMax to more than 1024.
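For example, a minimal sketch of point (b) above: cap Avisynth's memory at 1024 MB at the top of the script (the source filename is a placeholder):

```
SetMemoryMax(1024)       # cap Avisynth's internal cache at 1 GB; higher rarely helps
AviSource("input.avi")   # placeholder source
return last
```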
Last edited by Selur; 31st Jan 2015 at 22:44.
I'd love to try Nvidia deinterlacing, but the Nvidia deinterlacer never shows up in the settings when I create a script with Avisynth + DGDecNV (I still have no clue whether it uses the .vob file or the .m2ts).
I think with DGDecNV we could apply the Nvidia deinterlacer before piping the script to x264 (I guess that would take care of things before the script uses more CPU power), but I haven't tested it yet. Maybe if you guys investigate it more, you can find a solution.
Good luck, friends.
No problem deinterlacing using DGDecNV here,..
LoadPlugin("G:\Hybrid\avisynthPlugins\DGDecodeNV.dll")
# deinterlace using DGDecNV
DGSource(dgi="H:\Temp\4224595246mpls_d3d839640fa50b48518c68effd2d59f2_16944.dgi", deinterlace=1)
return last

deinterlace: 0/1/2 (default: 0)
Nvidia PureVideo Deinterlacer
0: no deinterlacing
1: single rate deinterlacing
2: double rate deinterlacing (bobbing)
Note that double rate deinterlacing requires Windows XP SP3!
Also note that setting deinterlace to 1 or 2 forces the field operation to be "Ignore Pulldown", regardless of the project setting.
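Putting those options together, a double-rate (bob) deinterlace only needs a different deinterlace value; this is a sketch assuming a licensed DGDecNV install, and the plugin/index paths are placeholders:

```
# Sketch: paths are placeholders, assumes a licensed DGDecNV install
LoadPlugin("C:\DGDecNV\DGDecodeNV.dll")
DGSource("clip.dgi", deinterlace=2)  # 2 = double rate (bob): output frame rate doubles
return last
```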
Last edited by Selur; 1st Feb 2015 at 10:50.
A bobber normally isn't just a 'simple' line doubler; hopefully it does line interpolation, otherwise aliasing would be unpleasant for most content.