Unfortunately, I do not yet have a CUDA-enabled system and am running CS 5.5 on Windows 7 with an i7-2600 (2nd generation) processor.
My problem is that VirtualDub currently takes a very long time (multiple hours!) to render a 45-minute project!
Can I speed up rendering by changing the default "processing thread priority" from normal to something higher before the render starts?
Will it improve matters if that's changed?
Are there any other things to try?
Changing thread priority will make a negligible difference unless you are multitasking (it diverts CPU cycles preferentially to the higher-priority task). But you shouldn't be playing games or doing other work while rendering if you want it to go faster in the first place.
What are you doing in vdub? Any processing or filtering? Some filters are archaic, very slow, or single-threaded.
I agree with poisondeathray: if you are doing any filtering, you shouldn't be using vdub. Avisynth would be a better choice, and if you are doing any heavy filtering, even Avisynth will be slow in most cases. And I wouldn't use CUDA by any means, that is, if you are trying to retain quality.
You haven't told us what you are trying to do. Are you encoding to x264?
Are you using the x264 VFW codec in vdub?
What does your filter chain in vdub look like?
You mentioned CS 5.5; what are you using it for?
We need more information than "I'm trying to encode a video in vdub."
Murphy's law taught me everything I know.
Well, I confess:
I'm still in the experimental, trial-and-error stage, trying to optimize results when downsizing HD to SD video, with or without Avisynth.
So far I've only tried resizing, for the purposes just described.
It seems that whatever path I follow, VirtualDub takes hours.
Sounds like I may have to live with it!
It's not necessarily vdub's fault; it's just a GUI front end. Some filters you are using might be the bottleneck. When you have a bottleneck, you use only a small percentage of your CPU's power while it sits idle, waiting for the filter to finish so it can proceed to render whatever export format you chose.
If you have a single-threaded filter bottleneck, you can run multiple instances, since you are exporting a lossless intermediate (i.e. divide up your script with Trim(), spawn, for example, 4 separate instances, then join them in a final script).
If your CPU usage isn't 100%, then you have slack capacity (unused resources). E.g., if your CPU usage is only 20-25%, you can get almost a 4x speedup by parallel processing.
There may be other bottlenecks, like HDD I/O. You need to provide more information.
I'm encoding a fully edited and finalized NTSC 16:9 file to MPEG-2 DVD spec for delivery.
I rarely encode to x264, as circumstances haven't required it so far.
To date, I either feed in an .avs HD-to-SD downsize script or set up VirtualDub's downsize filter settings manually.
Output is interlaced Lagarith (LAGS) YV12. That's it.
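For what it's worth, a minimal interlaced-aware HD-to-SD downsize script of the kind described might look like the sketch below (the filename and the choice of Spline36Resize are assumptions for illustration, not the poster's actual settings):

```
# Hypothetical HD -> SD downsize, keeping the output interlaced YV12
AVISource("hd_master.avi")    # assumed 1920x1080 interlaced Lagarith source
AssumeTFF()                   # set field order to match the source
SeparateFields()              # resize each field separately so they don't mix
Spline36Resize(720, 240)      # each field is half of the 480 target lines
Weave()                       # recombine the fields into 720x480 frames
```

Resizing fields separately like this avoids blending the two fields together, which is what causes the combing artifacts you get from a naive interlaced resize.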
I followed everything you said except for paragraph 2 where you lost me.
What other information do you need?
How would I incorporate the contents of para 2 into a solution?
All I'm saying is, if your workflow has a bottleneck, either remove the bottleneck or divide up the work so you can process the video in multiple sections. Since you are using lossless intermediates, this has no impact on quality.
Monitor your CPU usage with Windows Task Manager; if it's not 100%, you may benefit. If it is near 100%, then disregard the following.
e.g. let's say your CPU usage is only 50% with the current workflow.
Process the 1st half and 2nd half with 2 instances of avisynth/vdub
Let's say your 1080i60 intermediate exported from Premiere is called "1.avi", and it has 1000 frames.
The first half you might call 1a.avs
The 2nd half you might call 1b.avs
Here Trim(x,y) returns frames "x" through "y". So the 1st script would trim 1.avi to frames 0-499, the 2nd to frames 500-999 (with 1000 frames, the last frame is number 999).
So the 1st half is processed in parallel with the 2nd half
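The first half-script might look something like this (the placeholder comment stands in for whatever processing the workflow actually uses; with 1000 frames, valid frame numbers run 0-999):

```
# 1a.avs -- first half of the job
AVISource("1.avi")    # the 1080i60 Lagarith intermediate
Trim(0, 499)          # first half; the other script uses Trim(500, 999)
# (your HD -> SD resize / filtering goes here)
```

The second script is identical apart from the Trim() range; each one is loaded into its own vdub instance and exported to its own Lagarith AVI.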
To join them back together (you now have 1a.avi and 1b.avi, which are the SD Lagarith versions):
a = AVISource("1a.avi")
b = AVISource("1b.avi")
a ++ b
Here, a ++ b just re-joins the split videos (Avisynth variable names can't start with a digit, hence "a" and "b"), and you can feed that into HCEnc, for example.
vdub?!?! To do what exactly? Why not frame serve directly to an mpeg2 encoder?
A single-core proc is capable of real-time encodes in the scenario you have laid out. Here are a few things that could potentially help.
1. Use separate source and destination drives (could increase speed by as much as 10%).
2. Defrag your drive (a heavily fragmented drive can impact encode time).
3. Monitor proc temps; if the proc is overheating, it will slow down.