I see that nowadays ffmpeg can be built to accept VapourSynth script input directly (and audio from a second input at the same time if you like); no external piping commands needed, it's all on one command line, eg
ffmpeg.exe -f vapoursynth -i "input.vpy" -i "audio_file-or-original_source_file.ts" -map 0:v:0 -map 1:a:0 ...
I just gave it a try and it worked.

Even better, I have an nvidia graphics card so was able to try encoding using ffmpeg's nvenc h.264 hardware encoder and output to .mp4.
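To make that concrete, here's a sketch of the sort of full command line I mean (filenames, bitrate and preset are placeholders, and it assumes an ffmpeg build with both VapourSynth and nvenc support; audio is re-encoded to AAC since raw TS audio streams don't always sit happily in .mp4):

```shell
# Video comes from the VapourSynth script, audio from the original .ts,
# encoded with the h264_nvenc hardware encoder into an .mp4 container.
ffmpeg -f vapoursynth -i "input.vpy" -i "original_source.ts" \
  -map 0:v:0 -map 1:a:0 \
  -c:v h264_nvenc -preset p5 -b:v 8M \
  -c:a aac -b:a 192k \
  "output.mp4"
```

Tweak `-preset` and `-b:v` to taste; the p1..p7 presets are the newer nvenc preset names (older ffmpeg builds used names like slow/medium/fast).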
Some people grumble about nvenc's relative video encoding quality, but for many home (non-videophile) use cases I found it just fine ... and end-to-end it can be up to hundreds of times faster than software-only encoding.

If one also uses GPU-accelerated plugins, eg denoising with a GPU variant of nlmeans, the end-to-end encoding speed in one 4k test case went from:
non-GPU-accelerated	~0.2 fps / 26128.659 s
to GPU-accelerated	~52 fps / 77.095 s
... about 339 times the end-to-end encode speed on that 4k test clip ... which for home use seems well worth it.
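For what it's worth, that speed-up figure falls straight out of the two wall-clock times:

```python
# Speed-up computed from the two measured wall-clock times quoted above.
software_seconds = 26128.659  # non-GPU-accelerated run
gpu_seconds = 77.095          # GPU-accelerated run

speedup = software_seconds / gpu_seconds
print(f"end-to-end speed-up: ~{speedup:.0f}x")  # ~339x
```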

My use case is legal time-shifted home TV viewing, served from a Raspberry Pi powered media server to Chromecast devices on various TVs around the house, so videos must be deinterlaced and "old" TV shows denoised. My current preference is to run a source through VideoRedo's "QuickStreamFix" first, then filter with DG's GPU-accelerated VapourSynth plugins.
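A minimal .vpy along those lines might look something like this (a sketch only: it assumes DG's DGDecodeNV plugin package is installed and the source already indexed, and the filename and parameter values are illustrative, not gospel):

```python
# Sketch of a deinterlace + denoise script fed to ffmpeg's vapoursynth demuxer.
# Assumes the GPU-accelerated DGDecodeNV plugins are loaded; parameters are
# placeholders to adjust per source.
import vapoursynth as vs

core = vs.core

# GPU decode of an indexed source; deinterlace=1 asks DGSource to
# deinterlace on the GPU as it decodes
clip = core.dgdecodenv.DGSource("show.dgi", deinterlace=1)

# GPU denoise for "old" TV material (strength to taste)
clip = core.dgdecodenv.DGDenoise(clip, strength=0.15)

clip.set_output()
```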

Edit: IIRC, NVEncC accepts .vpy input too?