I see that nowadays ffmpeg can be built to accept VapourSynth input directly (and the audio at the same time if you like), with no external piping commands and everything processed on one command line, e.g.
I just gave it a try and it worked.
Code:
ffmpeg.exe -f vapoursynth -i "input.vpy" -i "audio_file-or-original_source_file.ts" -map 0:v:0 -map 1:a:0 ...
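For anyone wanting the full shape of such a command, here is a hedged sketch as a Python argv list; the filenames, the 2000k bitrate, and the choice of h264_nvenc are illustrative assumptions, not taken from the post above.

```python
# Sketch of a single-command invocation: video demuxed from a VapourSynth
# script, audio mapped from the original source file, NVENC H.264 output.
# All filenames and the bitrate are placeholder assumptions.
cmd = [
    "ffmpeg",
    "-f", "vapoursynth", "-i", "input.vpy",   # video comes from the .vpy script
    "-i", "source.ts",                        # second input supplies the audio
    "-map", "0:v:0",                          # first video stream of input 0
    "-map", "1:a:0",                          # first audio stream of input 1
    "-c:v", "h264_nvenc", "-b:v", "2000k",    # hardware H.264 encode (assumes an nvenc-enabled build)
    "-c:a", "copy",                           # pass the audio through untouched
    "output.mp4",
]
print(" ".join(cmd))
```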
Even better, I have an nvidia graphics card so was able to try encoding using ffmpeg's nvenc h.264 hardware encoder and output to .mp4.
Some people grumble about relative nvenc video encoding quality, but I found for many home (non videophile) use cases it seems just fine ... and can be up to hundreds of times faster encoding than non-hardware encoding.
If one uses GPU-accelerated plugins, e.g. denoising with GPU variants of nlmeans, I found the end-to-end encoding speed went (in one 4K test case) to 339 times the non-accelerated end-to-end encode speed, which for home uses seems well worth it.
Code:
non-gpu-accelerated ~0.2 fps / 26128.659s  to  gpu-accelerated ~52 fps / 77.095s
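As a quick sanity check, the quoted speedup factor follows directly from the two wall-clock times above:

```python
# Speedup factor implied by the two end-to-end times quoted above.
slow_seconds = 26128.659   # non-gpu-accelerated run
fast_seconds = 77.095      # gpu-accelerated run
speedup = slow_seconds / fast_seconds
print(round(speedup))  # ~339x
```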
My use case is legal time-shifted home TV viewing via a Raspberry Pi powered media server to chromecast devices on various TVs around the house, so videos must be deinterlaced and "old" TV shows denoised. I currently use VideoRedo to "QuickStreamFix" a source first, and DG's gpu-accelerated vapoursynth plugins as my preferences.
Edit: IIRC, NVEncC accepts .vpy input too ?
I (re)build ffmpeg regularly using mingw64 & gcc under ubuntu with target win10x64, and use .vpy input every day without issue. (using standalone vapoursynth with ffmpeg copied into the same folder)
Whilst I don't use MABS, I do take guidance from the build options specified by MABS and also those used by rdp, when using a derivative of deadsix27's build system:
(PS please don't try that link yourself, it likely only works for me).
Thx, guys, for the info.
"Programmers are human-shaped machines that transform alcohol into bugs."
Actually, his butt-hurt response was a bit childish and unprofessional.
Also, not sure what you've got against Python, but VapourSynth is a great frame server, with excellent filters for it, and it's vastly superior to and more stable than, say, AviSynth.
vapoursynth? I haven't used either in a while, but I remember that when you had more than one filter, the official version was much slower than the -f vapoursynth_alt "stephen" patch.
more like forgot
you were there hydra, your thread, at least at the beginning
vspipe was significantly faster; some speed issue with -f vapoursynth demuxer when using any other filters
EDIT #2: tested other source filters too (lsmash, ffms2) and other file types (mpeg2, avc); pretty consistent observation: direct read with only the source filter is faster, but as soon as you add any additional filter in the script (not just QTGMC, it can be anything, like a denoiser or even a simple resize) it becomes slower than the vspipe method, at about 50% of its speed.
Some quick tests - speed-wise it seems to be working as expected.
Why isn't Stephen's implementation the official one?
and then qyot27 built ffmpeg with both, others started compiling with both, like patman IIRC. I stopped using the vpy demuxer, it was too inconsistent.
Said binaries include *both* VS demuxers. The upstream demuxer is still 'vapoursynth', and the other one is 'vapoursynth_alt'. It's easy to tell them apart because they return different data types as input: '-f vapoursynth' returns wrapped_avframe, '-f vapoursynth_alt' returns rawvideo. The upstream demuxer also doesn't honor relative file paths across directories (in just FFMS2? maybe in general?), while the alt demuxer does.
Ah, thanks again.
Sometime around back then I left doom9 due to perceived excess rudeness coming my way, even though it is a wonderful source of information and has smart people attending.
Oh well, that's life.
In my generic cross-compile, the build shows only this for vapoursynth
Code:
ffmpeg_OpenCL.exe -demuxers
 D  vapoursynth        VapourSynth demuxer
I see per your links,
"The upstream demuxer is still 'vapoursynth', and the other one is 'vapoursynth_alt'.
It's easy to tell them apart because they return different data types as input: '-f vapoursynth' returns wrapped_avframe, '-f vapoursynth_alt' returns rawvideo.
The upstream demuxer also doesn't honor relative file paths across directories (in just FFMS2? maybe in general?), while the alt demuxer does."
It seemed to imply an updated demuxer may perhaps be over at https://github.com/qyot27/FFmpeg, however a quick scan didn't spot anything of that nature.
It's also mentioned in passing in ffmpeg-user:
It doesn't seem like vapoursynth_alt made it into "ffmpeg proper" that I can see.
I notice a patch for vapoursynth_alt over at MABS:
and it is applied if vapoursynth is requested in the ffmpeg build.
Since MABS does a patch, whether it gets built and linked or even works, IDK, I guess I'll try it and see what happens with the scripts I commonly use.
Unless someone has more and better info ...
OK, yes, vspipe is an option and I note your preference and view on comparative results and reliability.
I did some tests initially and formed the impression that piping may have more of an overhead.
I should do some comparative testing.
edit: Of interest another (different) patch https://gist.github.com/Patman86/8f7ae3ef3f5a6631093548ed905f338f
however I can't readily see which already-patched source it is applied to
Last edited by hydra3333; 31st Jan 2023 at 04:44.
Sometime around back then I left doom9
I seem to recall doing some QnD tests on a non-alt and it seemed OK performance-wise, perhaps I only loaded DG's stuff and used OpenCL filters or something.
I'd never use vapoursynth for just the source filter; I'd just use ffmpeg directly and skip vapoursynth. Basically the "official" patch was broken IMO. Speed might be less of an issue if you had other bottlenecks (maybe slow CPU encoding like AV1), but if you're "GPU" encoding, you know the official patch is going to slow you down.
These are more recent Windows results; it might be different for Linux. FFmpeg binary built by Patman, ~6 months old (the only recent one I could find with both demuxers), from here.
vspipe is reference at 100%
ffmpeg pipe speed (or demuxer speed for -f vapoursynth, -f vapoursynth_alt)
1080i29.97 AVC source, DGSource + QTGMC
  103.8%  -f vapoursynth_alt
   61.5%  -f vapoursynth

UHD AVC source, DGSource + Spline16 downscale (zimg) to 1920x1080
  101.7%  -f vapoursynth_alt
   72.6%  -f vapoursynth

UHD AVC source, DGSource + Spline16 downscale (zimg) to 1920x1080 + SMDegrain
  100.0%  -f vapoursynth_alt
   67.6%  -f vapoursynth
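Put another way, those -f vapoursynth percentages correspond to roughly a 1.4-1.6x slowdown versus vspipe; a quick back-of-envelope check on the three figures above:

```python
# Relative speeds of -f vapoursynth from the three tests above (vspipe = 100%).
vapoursynth_pct = [61.5, 72.6, 67.6]
# Convert each percentage to a slowdown factor relative to vspipe.
slowdowns = [round(100 / p, 2) for p in vapoursynth_pct]
print(slowdowns)  # roughly 1.4x to 1.6x slower
```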
Unless someone fixed it in the last 6mo, the -f vapoursynth patch is basically useless for windows IMO. Maybe linux is not affected ?
OK, will give it a try with a home build.
If you don't mind, how did you do your timings? Was it from ffmpeg encode stats, or a time taken either side, or something else?
note to self, per https://ffmpeg.org/pipermail/ffmpeg-user/2021-February/051871.html
When piping a YUV format, the vspipe --y4m flag conveys the header info (pixel type, fps) from the script; but the receiving ffmpeg also has to specify -f yuv4mpegpipe, otherwise the input will be treated as a raw video pipe (in that latter case you wouldn't use --y4m).
Code:
vspipe --y4m SCRIPT.vpy - | ffmpeg -f yuv4mpegpipe -i pipe: ...
Last edited by hydra3333; 31st Jan 2023 at 17:09.
Like the doom9 post, it's just measuring the pipe speed (or demuxer speed) as read by ffmpeg. There are no "encode stats" because there is no encode being tested. But a delta that large will have a significant impact on encoding speed, if the encoder/settings aren't the bottleneck.
I found a more recent ffmpeg build that has vpy demuxer from a few days ago (again windows), same results, same "penalty" with -f vapoursynth
Note in newer vapoursynth versions vspipe uses -c y4m, not --y4m.
-f null NUL might be a windows thing too
Code:
vspipe -c y4m script.vpy - | "ffmpeg" -f yuv4mpegpipe -i - -f null NUL
"ffmpeg" -f vapoursynth -i script.vpy -f null NUL
"ffmpeg" -f vapoursynth_alt -i script.vpy -f null NUL
EDIT: And in case the artificial pipe-speed measurements didn't translate into real-world issues - I ran some nvenc encodes, and the slowdown with -f vapoursynth when using more than a source filter carries over to real encodes too. The delta is a bit larger on a real encode.
Last edited by poisondeathray; 31st Jan 2023 at 18:05.
I tried ffmpeg on it a year or so ago (no vapoursynth), however hardware-accelerated encoding was severely limited by the prevailing ffmpeg and Pi codec code/interfaces.
I guess I can look at ffmpeg/vapoursynth in a Linux VM; I wonder if it all runs under wine, or whether there are Linux-native build instructions somewhere for vapoursynth and ffmpeg.
edit: ah. https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu
Last edited by hydra3333; 31st Jan 2023 at 20:05.
Initial results based on a build of ffmpeg git head as at today including MABS's vapoursynth_alt patch.
It's sort of the same as poisondeathray says, except that the patch for vapoursynth_alt yields terrible results.
NOTE: ffmpeg/vapoursynth single-filter
Code:
import vapoursynth as vs      # this allows use of constants eg vs.YUV420P8
from vapoursynth import core  # actual vapoursynth core
core.std.LoadPlugin(r'C:\SOFTWARE\Vapoursynth-x64\DGIndex\DGDecodeNV.dll')  # do it like gonca https://forum.doom9.org/showthread.php?p=1877765#post1877765
core.avs.LoadPlugin(r'C:\SOFTWARE\Vapoursynth-x64\DGIndex\DGDecodeNV.dll')  # do it like gonca https://forum.doom9.org/showthread.php?p=1877765#post1877765
# NOTE: deinterlace=1, use_top_field=True for "Interlaced"/"TFF" "AVC"/"MPA1L2"
video = core.dgdecodenv.DGSource(r'G:\HDTV\TEST\TEST-VSPIPE-vs-DIRECT-INPUTS\test_mpeg2.ts.mpg.dgi', deinterlace=1, use_top_field=True, use_pf=False)
video = core.avs.DGSharpen(video, strength=0.2)
video = vs.core.text.ClipInfo(video)
video.set_output()
Code:
"C:\SOFTWARE\Vapoursynth-x64\VSPipe.exe" --progress --filter-time --container y4m "G:\HDTV\TEST\TEST-VSPIPE-vs-DIRECT-INPUTS\test_mpeg2.ts.mpg.vpy" - > NUL
Output 127326 frames in 77.69 seconds (1638.83 fps)
Filtername   Filter mode   Time (%)   Time (s)
DGSource     unordered     98.90      76.84
DGSharpen    parreq        64.12      49.81
ClipInfo     parallel      50.36      39.12
ffmpeg using output -f null NUL
frame=127326 fps=1568 q=-0.0 Lsize=N/A time=01:24:53.00 bitrate=N/A speed=62.7x
ffmpeg with nvenc h.264 encoding
frame=127326 fps=675 q=14.0 Lsize= 1242115kB time=01:24:52.88 bitrate=1998.0kbits/s speed= 27x
3. using -f vapoursynth -i something.vpy
ffmpeg using output -f null NUL
frame=127326 fps=840 q=-0.0 Lsize=N/A time=01:24:53.00 bitrate=N/A speed=33.6x
ffmpeg with nvenc h.264 encoding
frame=127326 fps=684 q=14.0 Lsize= 1242115kB time=01:24:52.88 bitrate=1998.0kbits/s speed=27.4x
4. using -f vapoursynth_alt -i something.vpy
ffmpeg using output -f null NUL
frame=127326 fps=126 q=-0.0 Lsize=N/A time=01:24:53.00 bitrate=N/A speed=5.05x
ffmpeg with nvenc h.264 encoding
frame=127326 fps=119 q=14.0 Lsize= 1242115kB time=01:24:52.88 bitrate=1998.0kbits/s speed=4.77x
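Normalising those null-output (no encode) fps figures against the vspipe run gives relative speeds in the same spirit as poisondeathray's percentages; a rough calculation on the numbers above, taking the fps=1568 vspipe run as the reference:

```python
# Null-output fps figures from the single-filter runs above:
# 1568 = vspipe piped into ffmpeg, 840 = -f vapoursynth,
# 126 = -f vapoursynth_alt with the MABS patch.
reference_fps = 1568
relative = {name: round(fps / reference_fps * 100, 1)
            for name, fps in [("vapoursynth", 840), ("vapoursynth_alt", 126)]}
print(relative)  # relative speed as a percentage of the vspipe run
```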
It's fair to say I won't be using that patch for vapoursynth_alt, given the apparent ffmpeg "fps" and "speed" results.
Without ffmpeg encoding, using vspipe close to doubled the apparent ffmpeg fps and speed ... I wonder how much, if anything, is due to work being shifted back to vspipe on a 12-core?
With ffmpeg encoding included, the apparent ffmpeg fps and speed were about the same.
No advantage for vspipe here (it has been reliable enough for my uses).
That may perhaps change when qtgmc or something is used ?
Last edited by hydra3333; 1st Feb 2023 at 08:01.
Initial results based on a build of ffmpeg git head as at today excluding MABS's vapoursynth_alt patch.
NOTE: ffmpeg/vapoursynth multi-filter
Code:
import vapoursynth as vs      # this allows use of constants eg vs.YUV420P8
from vapoursynth import core  # actual vapoursynth core
core.std.LoadPlugin(r'C:\SOFTWARE\Vapoursynth-x64\DGIndex\DGDecodeNV.dll')  # do it like gonca https://forum.doom9.org/showthread.php?p=1877765#post1877765
core.avs.LoadPlugin(r'C:\SOFTWARE\Vapoursynth-x64\DGIndex\DGDecodeNV.dll')  # do it like gonca https://forum.doom9.org/showthread.php?p=1877765#post1877765
#core.std.LoadPlugin(r'C:\SOFTWARE\Vapoursynth-x64\vapoursynth64\plugins\fft3dfilter.dll')
#core.std.LoadPlugin(r'C:\SOFTWARE\Vapoursynth-x64\vapoursynth64\plugins\libawarpsharp2.dll')
#core.std.LoadPlugin(r'C:\SOFTWARE\Vapoursynth-x64\vapoursynth64\plugins\assrender.dll')
# NOTE: deinterlace=1, use_top_field=True for "Interlaced"/"TFF" "AVC"/"MPA1L2"
video = core.dgdecodenv.DGSource(r'G:\HDTV\TEST\TEST-VSPIPE-vs-DIRECT-INPUTS\test_mpeg2.ts.mpg.dgi', deinterlace=1, use_top_field=True, use_pf=False)
video = core.fft3dfilter.FFT3DFilter(clip=video, sigma=2.00)
video = core.warp.AWarpSharp2(video)
video = core.assrender.Subtitle(video, r'This is a test subtitle', style="sans-serif,18,&H00FFFFFF,&H000000FF,&H00000000,&H00000000,0,0,0,0,100,100,0,0,1,0.9,0.5,3,2,2,2,1")
video = vs.core.text.ClipInfo(video)
video.set_output()
1. multifilter, vspipe chugged along
Code:
"C:\SOFTWARE\Vapoursynth-x64\VSPipe.exe" --progress --filter-time --container y4m "G:\HDTV\TEST\TEST-VSPIPE-vs-DIRECT-INPUTS\test_mpeg2.ts.mpg.multifilter.vpy" - > NUL
Output 127326 frames in 602.07 seconds (211.48 fps)
Filtername          Filter mode   Time (%)   Time (s)
AWarpSharp2         parallel      282.58     1701.35
FFT3DFilterInv0     parreq        99.46      598.79
FFT3DFilterMain0    parallel      92.45      556.61
FFT3DFilterTrans0   parreq        65.54      394.59
DGSource            unordered     36.41      219.19
FFT3DFilterMain1    parallel      26.62      160.27
FFT3DFilterMain2    parallel      26.53      159.75
FFT3DFilterInv2     parreq        21.09      126.98
FFT3DFilterInv1     parreq        20.87      125.65
FFT3DFilterTrans2   parreq        15.94      95.95
FFT3DFilterTrans1   parreq        15.93      95.92
ClipInfo            parallel      5.99       36.07
Subtitle            parreq        5.57       33.52
ShufflePlanes       parallel      0.04       0.22
2. using vspipe
multifilter, vspipe&ffmpeg using output -f null NUL
frame=127326 fps=213 q=-0.0 Lsize=N/A time=01:24:53.00 bitrate=N/A speed=8.51x
This is our vspipe&ffmpeg reference speed, with no gpu encoding, to compare with 3. below.
I think it represents the speed at which vspipe can process the filters and pipe the frames and ffmpeg can accept them and get rid of them without encoding.
Windows caching ... possibly kicks in somewhere, maybe around here.
multifilter, vspipe&ffmpeg with nvenc h.264 encoding
frame=127326 fps=205 q=11.0 Lsize= 1241804kB time=01:24:52.88 bitrate=1997.5kbits/s speed=8.19x
This is our vspipe&ffmpeg reference speed, with gpu encoding, to compare with 3. below.
It's close enough to "with no gpu encoding" above not to matter.
3. using -f vapoursynth -i something.vpy
multifilter, ffmpeg using output -f null NUL
frame=127326 fps= 38 q=-0.0 Lsize=N/A time=01:24:53.00 bitrate=N/A speed=1.53x
Wow that's horribly slow.
multifilter, ffmpeg with nvenc h.264 encoding
frame=127326 fps= 35 q=11.0 Lsize= 1241800kB time=01:24:52.88 bitrate=1997.5kbits/s speed=1.41x
And that's even slower.
Run on an AMD 3900X 12-core with 32 GB memory, an nvidia 2060-Super GPU, a WD Black 7200rpm disk, Win11 Pro.
ffmpeg is built x64, vapoursynth is the x64 version.
Well well, multi-filter direct input .vpy into ffmpeg without encoding is much slower than the equivalent of vspipe into ffmpeg without encoding.
vspipe/ffmpeg at fps=213 speed=8.51x vs ffmpeg/vpy at fps= 38 speed=1.53x
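The same normalisation for the multi-filter, no-encode runs quoted just above:

```python
# Multi-filter, no encode: vspipe reference 213 fps vs direct .vpy 38 fps.
vspipe_fps = 213
direct_vpy_fps = 38
relative_pct = round(direct_vpy_fps / vspipe_fps * 100, 1)
print(relative_pct)  # -f vapoursynth runs at roughly 17.8% of vspipe speed here
```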
Goodness me, that does not appear intuitive, but it definitely aligns with poisondeathray's reports when using multiple filters, even with the latest ffmpeg.
Interestingly, Resource Monitor says ffmpeg cpu hovers around 6% and has 50 threads.
Something funny must be going on within vapoursynth underneath ffmpeg ?
One could expect vapoursynth to just deliver processed frames back to ffmpeg, but obviously not.
Doubly funny since, as poisondeathray points out, the abominable behaviour seems to occur when vapoursynth uses multiple filters.
Yes, the above is not a fair comparison; however, it reconfirms poisondeathray's findings when using "current" ffmpeg git head, the latest release of vapoursynth, and a compatible python.
So, it looks like it's time for me to change over to using vspipe instead of direct .vpy input to ffmpeg.
Thank you for the really good information, poisondeathray !
vapoursynth, I think, if it's a "CPU" filter. You tested DGSharpen for your "source filter plus one" filter test. I tested core.resize.Spline16, and for that, -f vapoursynth_alt was the fastest. I'll check some "GPU" filters and some OpenCL filters, and see if they differ from your results.
It might not just be "CPU" vs "GPU" filters; there are some vapoursynth cache functions that affect results. Maybe "auto" is not ideal for the direct ffmpeg demuxer.
OK. My -f vapoursynth_alt results weren't the best first off; I don't know why.
Anything I can do to assist ?
Interestingly, trimming to a 2-minute clip and using the same .vpy with this added to display cache stats:
video = core.text.CoreInfo(video, alignment=7, scale=2)
direct .vpy seems to use 60% of the cache used by vspipe.
Code:
vspipe       maximum framebuffer cache size 4,294,967,296   used framebuffer size 320,911,680
direct .vpy  maximum framebuffer cache size 4,294,967,296   used framebuffer size 191,699,904
and perhaps even
setmaxcpu makes no difference. I can't get setvideocache to work, but I'd anticipate probably no difference there either.
Last edited by hydra3333; 1st Feb 2023 at 17:01.
But your -f vapoursynth_alt results were quite different than mine - it doesn't really matter, I'm just curious and none of these results are going to prevent me from using vspipe
-f vapoursynth_alt seems pretty fast in some quick tests for me; but as I mentioned above, I remember some other inconsistent results with -f vapoursynth_alt (I'm trying to find the details), and that's why I've been using vspipe for the last few years (since that thread).
Maybe I'll ask wiiaboo to look at it, hahaha. I never called him an idiot or anything bad. It would be nice to have a vpy demuxer in ffmpeg that works well and reliably.
I guess DGSharpen + core.Text would be considered 2 filters. Deinterlace=1 for DGSource isn't really a filter, but it's slower than not deinterlacing
I can't reproduce the "slowness" you experienced with -f vapoursynth_alt with several variations of test scripts. I used GPU filters, OpenCL filters, core.Text... It was always similar to vspipe, usually a bit faster.
For me -f vapoursynth was always the slowest by a significant margin, except when using the source filter only. Even for source filter + 1 fast GPU filter (DGSource on a progressive source + DGSharpen), -f vapoursynth is penalized.
I just spent a couple of hours converting all of my stuff across to use vspipe instead of direct .vpy input to ffmpeg.
Had some fun debugging errors caused by new pipe symbols in ECHO and REM statements as generated by a bunch of vbscript ... REM doesn't necessarily REM out a full line. Yes, I'm a retired dinosaur.
I suspect wiiaboo won't want to debug ffmpeg interfacing with vapoursynth, though, on the basis MABS "only" provides an ffmpeg build system.
You never know your luck though
I skimmed the doom9 thread you mentioned, and the issue has been around for a while. A couple of people have had a go, i.e. qyot27 and patman, however nothing was finalised that I can find from googling "vapoursynth_alt", even though it is mentioned in ffmpeg-user https://ffmpeg.org/pipermail/ffmpeg-user/2021-February/051871.html
I'm slightly tempted to look at this https://gist.github.com/Patman86/8f7ae3ef3f5a6631093548ed905f338f "Last active 10 months ago", which appears to be a patch on top of a vapoursynth_alt patch; however its status is unclear (it may never have been built, for all I know).
@poisondeathray would it be possible for you to ask patman about it over on doom9 ?
Nothing to do with portable vapoursynth on top of portable python, surely.