So I have Vegas 12 and I was testing out some video rendering using the MainConcept MP4 (AVC) encoder, because that's apparently the only one that allows CUDA/OpenCL to be utilized.
Vegas definitely knows my card is there because it's available to be selected in the GPU acceleration setting.
I rendered out 1:00 of 1440x900 video at 30 FPS three different times:
"Use OpenCL if possible" - 2:24 render time
"Use CUDA if possible" - 2:24 render time
"Use CPU only" 2:14 render time
Should this be happening? I feel like my GTX 960 should be at least marginally faster at rendering video than the CPU I have, or that they should at least be working together a little better.
Am I fundamentally misunderstanding this whole thing? Is there a codec I should try instead?
Didn't Nvidia drop support for CUDA recently?
https://forum.videohelp.com/threads/366488-NVIDIA-CUDA-is-no-more
You probably need an update to Vegas to use the newer NVENC encoder.
Hrm, well that's bothersome.
Is there any way I can utilize GPU acceleration with a different program (like VirtualDub), at least just for simple video conversion? All I'm doing presently is converting FRAPS videos into x264 for size purposes.
You can use NVEnc; the commonly used encoder is Rigaya's NVEncC, and some GUIs that support it are StaxRip and Hybrid. There is an NVEnc plugin for Premiere, but not for Vegas yet.
It should only take maybe a few seconds at that resolution (you should be getting a few hundred FPS, provided there are no other bottlenecks like decoding speed). But the compression efficiency is quite low; at typical bitrate ranges you might need ~2x the filesize for similar quality to x264.
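For reference, a minimal NVEncC invocation looks something like the sketch below. The filenames are placeholders, and exact flag names vary between NVEncC versions, so check `NVEncC --help` for your build.

```shell
# Sketch of a basic NVEncC hardware H.264 encode at a target bitrate.
# "input.avi" and "output.mp4" are placeholder names.
NVEncC -i input.avi --codec h264 --vbr 8000 -o output.mp4
```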
Is it not possible to frameserve out of Vegas to a GUI that supports NVEnc?
That would be a beautiful thing, but I'm too much of a noob to know how to do that. There seems to be DebugMode FrameServer, which if anyone could explain how to use I'd be forever grateful.
Hmm, I still seem to be getting roughly 40 FPS out of Hybrid. Is there some kind of command-line argument I'm supposed to include to invoke my GPU?
I think it's a decoding bottleneck with the native Fraps decoder, which it sounds like you are using. Fraps is optimized for low CPU usage during recording (so gameplay isn't negatively affected), but its decoding speed is slower.
Some quick 1920x1080p30 Fraps (lossless RGB mode) tests show about 30-40 FPS decoding with 10-15% CPU usage. In contrast, something like UT Video gets 90-100 FPS decoding with 40-50% CPU usage; UT Video is more optimized for decoding speed and video editing. Libavcodec/ffmpeg, however, is more optimized than the native Fraps decoder: Fraps decoding through it is about 140-150 FPS with 90-100% CPU usage. Fraps in non-lossless (non-RGB) mode should be faster.
In non-bottlenecked tests (e.g. AVC recording, GPU decoding, NVEnc encoding), you typically get about 350-450 FPS encoding at 1920x1080 on first-generation Maxwell cards. You should get almost 2x that on Maxwell 2 with default settings, with <10% CPU usage.
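If you want to run a decode-speed test yourself, ffmpeg's null muxer with `-benchmark` makes a quick one. Here's a sketch; the generated UT Video clip is just a stand-in for a real capture file, since a Fraps source isn't assumed to be on hand.

```shell
# Create a short UT Video (lossless) test clip as a stand-in for a capture,
# then decode it to the null muxer to measure pure decoding speed.
ffmpeg -y -f lavfi -i testsrc2=size=1280x720:rate=30:duration=2 -c:v utvideo clip.avi
ffmpeg -benchmark -i clip.avi -f null -
```

The second command decodes as fast as possible without writing output, so the reported time/speed reflects decode (plus any disk) performance only.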
Is there something I'm doing wrong with Hybrid? I don't see any options in the GUI that specifically relate to utilizing the GPU, unless I'm just totally missing something. Or are you saying that because I'm using FRAPS to record these videos I'm kind of screwing myself?
That would likely be a disk I/O bottleneck. Assuming 8-bit RGB at 30 fps: 1920 x 1080 x 3 bytes x 30 ≈ 187 MB/s uncompressed, or about 62 MB/s at Fraps' typical ~3:1 compression. That's in the typical range for a modern hard drive. And it explains the low CPU usage -- the CPU is waiting for the hard drive most of the time.
The Fraps native decoder (at least in RGB mode) is going to be a bottleneck; that is consistent with your observations. You didn't say whether you were using the non-lossless mode.
I actually don't use Hybrid, but I know NVEnc is an available option there, along with StaxRip. I use NVEncC.
If you force ffmpeg/libav decoding and NVEnc encoding, that should be the fastest way to encode a Fraps source, because the higher decoding FPS avoids the decoding bottleneck. One way to do that is to pipe ffmpeg into NVEncC; another is to create an .avs (AviSynth) or .vpy (VapourSynth) script. I'm not sure how to configure that in the GUI, but you should be able to make an .avs with FFmpegSource2 or L-SMASH.
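The pipe approach described above can be sketched roughly like this, assuming ffmpeg and NVEncC are both on your PATH; "fraps.avi" and "out.mp4" are placeholder filenames.

```shell
# Decode with ffmpeg (libavcodec's faster Fraps decoder) and pipe raw
# video in y4m format into NVEncC for GPU encoding.
ffmpeg -i fraps.avi -pix_fmt yuv420p -f yuv4mpegpipe - | NVEncC --y4m -i - --codec h264 -o out.mp4
```

The AviSynth route would instead be a one-line script such as `FFmpegSource2("fraps.avi")` saved as an .avs file and fed to the encoder or GUI; either way, libavcodec does the decoding instead of the Fraps VFW decoder.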
That test was on an SSD with ~450/450 MB/s R/W. Besides, the UT Video file was roughly the same size (~3-4% smaller). It is likely the Fraps VFW decoder that's the problem, since ffmpeg/libav decodes Fraps 4x as fast. If it were an I/O bottleneck, then ffmpeg/libav shouldn't be able to decode that fast either.
EDIT: Were you suggesting an I/O bottleneck for the OP? An SSD still doesn't help, because it still maxes out with the Fraps decoder at about 40 FPS with <15% CPU. (It should be faster at 1440x900, since those tests were at 1920x1080, but the computer is different.)