I'm trying to figure out why the throughput in Sony Vegas is so low.
For example, when rendering from CineForm 720p to another CineForm 720p (to transcode with MeGUI via Avisynth) with all the modifications/FX included, my system only seems to get less than 20MB through (approx. 10-20MB input, and 10-20MB output). I use 2x 1TB WD Black drives in RAID0 as the source/work disk and render to a separate 3TB Seagate drive, so each drive is only performing reads or writes, not both.
My PC specs are listed in my sig. and CPU usage rarely goes over 20%.
I'm completely confused about this and can't seem to find where the bottleneck/performance weakness is.
Any advice?
-
The FX applied are usually Studio to PC color conversion, cross fades between shots, trimming clips, some audio FX. Nothing really serious.
I even tried to render the same output clip without any modifications to see if it would make any difference, but there is little to no difference.
I'm still using Vegas 8, which, after many tests and trying each successive version of Vegas, is still the fastest for rendering material in various formats.
Also, for the record, I tried rendering the same, unedited clip with Premiere Pro CS6 and the results were very similar.
Perhaps it may be some other issue, such as the codec being used.
Just to be more specific, a 45 min, 1280x720 video takes about 10-12 minutes to render (is this normal?). Are my expectations too high?
I use MeGUI with avs scripts to convert to x264 (using MeGUI is quite a different story: CPU usage is nearly always over 95% and it's very quick), and it's also much more flexible than Handbrake and others. -
-
Actually, I've had it up to 4.75 GHz and stable. At 4.6 GHz, encoding x264 can get the temps up to around 78-82 deg C, well below the 95 deg C threshold.
Software tested:
Prime95
AIDA64
Various synthetic benchmarks
Custom PC Suite found here: http://www.bit-tech.net/hardware/2014/08/29/intel-core-i7-5960x-review/6
Just for the record, the i7-5960X scored an overall mark of 3071 pts, and that's an 8-core, 16-thread CPU. I got 3005 pts on my 6-core, 12-thread. -
my system only seems to get less than 20MB through (approx. 10-20MB input, and 10-20MB output).
Just to be more specific, a 45 min, 1280x720 video takes about 10-12 minutes to render (is this normal?). Are my expectations too high?
If you want to determine the theoretical fastest times with CineForm on that specific setup, with the least overhead, then do the test in VirtualDub: Video => Fast recompress, Video => Compression (select CineForm with the same settings), Audio => No audio (so you test video compression only). The FPS (and thus total export time) will be better than in an NLE, which has greater overhead and will be slower. That will give you a rough idea of the "fastest" on your specific system setup -
Another option is Debugmode Frameserver; you can skip the CineForm step and save that extra export time. Since you tested disabling FX and it made minimal difference, those aren't the bottleneck. But if you had very slow filters, or were maybe doing 2-pass encodes with x264, then you might want to use an intermediate file
-
OK, that's a glaring mistake I missed. I meant to say 10-20 MB/second (with the current video, that's about 82-87 fps).
Another good point is the framerate, which I left out. I should've said HDV 720-24p intermediate (according to the Vegas templates).
I'll test the theoretical fastest times on this specific setup with cineform as you mentioned and get back here.
I rendered the video only, as follows:
Opened the video .avi in VirtualDub
Rendered from RAID0 to an SSD to use the fastest drives I have available
Video length: 0:45:24 @ 23.976fps (65,294 frames)
CineForm compression setting -> High -> Time: 0:05:39 (192.61 fps)
I used Debugmode Frameserver years ago when encoding Xvid in SD (640x480 @ 23.976fps) and I really liked it. However, with HD video and x264 features like "rc_lookahead=40", among others, I wasn't sure how it works or how well. If you have experience with that combination (Vegas -> Frameserver -> MeGUI), I'd appreciate more input on its pros and cons. MeGUI encodes the audio first (or muxes files like AC3 last), so I'll have to look into how it all works together for x264. Any suggestions? -
Yes, those numbers are more descriptive. FPS and dimensions are more appropriate for describing compressed video
Personally, I typically use a digital intermediate. If you have slower filters or 2-pass rate control, it will end up being faster overall
Debugmode can serve both video & audio, and the longer the lookahead, the slower it is overall (even slower than you would expect versus encoding directly from a digital intermediate with a larger x264 lookahead value). I haven't measured the speed or done tests objectively, but it "seems" slower than it should be. My guess is that the overhead from the NLE compounds, not just linearly
But if you want to do it faster than MeGUI, have a look at ffmpeg (or some GUI for ffmpeg), which can encode video and audio and mux at the same time, so you don't waste time doing audio, then video, then muxing as separate steps. Most commonly distributed ffmpeg binaries have avs support compiled in, so you have the avisynth options available
Storage I/O rarely makes a difference when you have separate read and write drives, unless the HDD is very fragmented or you're dealing with uncompressed data rates. An SSD to SSD setup might make a few seconds difference, essentially negligible. The difference you are seeing from the vdub test is primarily from the NLE overhead -
I don't mind the way MeGUI works; it gives me great results at the end and that's what is important to me.
What I was looking for was some potential performance increase in Sony Vegas (or at least answers to some questions about the I/O throughput). Perhaps it simply is the software itself; laggy, as you put it. However, I've been keeping up with each new release of Vegas and the performance simply does not improve, at least not in the areas I need. I read somewhere that the Sony Vegas code hasn't been changed in a long time, which would explain the little to no performance change, and is why I still haven't purchased a newer version (current version 8.0c)
I cannot think of anything else I could do to improve the performance. Maybe some other intermediate codecs; I'm using CineForm at the moment. HuffYUV and Lagarith both perform much worse. -
These are pretty typical findings for an NLE. Your vdub test was about 2x faster, right? You're not CPU, I/O, or codec limited (CineForm is fast and multithreaded); the bottleneck is the NLE itself. If you don't need to do NLE-type projects, there is other software that is faster
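To put rough numbers on that, using the figures posted above: 65,294 frames / 192.6 fps ≈ 339 s (the 5:39 vdub pass), versus roughly 600-720 s for the 10-12 min Vegas render (≈ 90-110 fps), so the NLE is taking about twice as long for the same encode.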
For lossless codecs, you'll find Ut Video faster than huffyuv or lagarith, for both encoding and decoding (decoding speed is especially important for editing/scrubbing in NLE's). x264 also encodes 1-2% faster from Ut Video than from, say, Lagarith in typical usage scenarios. MagicYUV is even a bit faster than Ut Video, but it's relatively new and its stability hasn't been as thoroughly tested across different platforms and software.
Newer Vegas versions do have incremental additions each release, so the newest versions are faster for some GPU-accelerated effects, scaling, those sorts of transformations. So if your project uses those types of things it will certainly be faster (same with Adobe and the Mercury Playback Engine, GPU effects). A heavy FX project, with GPU-accelerated filters like denoising, can easily be 10-20x faster. -
After doing some research I've come to an interesting conclusion.
I fixed and adjusted the colors in Vegas to suit more of a PC RGB range, only to be disappointed in the results.
All the videos I've been trying with newer versions of Vegas have always left me with color issues (looking washed out) after rendering. This was a major reason for me keeping my older version of Vegas.
I thought it had something to do with Vegas until I disabled the ITU BT.709 colorspace option in the CineForm codec options.
It has now come to my attention that not all my clips are in the same "color space" (I think that's what most people call it). I thought it had more to do with video presentation (ie. flags for the final output video player to read) and not anything to do with editing.
Must pay attention to color spaces!
If anyone can clarify this further, by all means feel free. I'm always willing to learn new/better ways to improve the quality of my work.
Also, I rendered the following as a test:
Video length: 720-24p 0:45:24 @ 23.976fps (65,294 frames)
Sony Vegas -> Frameserver -> MeGUI -> Render Time: 0:16:42 (63.73 fps)
MeGUI had all default values except a CRF of 19.0
CPU never went over 70% usage, but hey, it's a hex-core and the reason I bought it. -
Most native camera formats get studio RGB treatment in Vegas. If you frameserve out, you should do it in RGB, then use a PC matrix in avisynth to bring the levels back to "normal". Even if you use an intermediate, I would use an RGB lossless intermediate (CineForm will do the conversion to YUV 4:2:2), so you can control the colorspace conversions in avisynth. If you still have problems, you need to provide more specific information about the input clips and levels
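A minimal avisynth sketch of that route (the filename is just a placeholder for the DMFS signpost file or your RGB intermediate, and it assumes the clip actually decodes as RGB):
AviSource("vegas_rgb_out.avi")    # RGB as served/exported from Vegas
ConvertToYV12(matrix="PC.709")    # PC (full range) matrix = no level scaling, so studio-RGB 16-235 ends up as normal Y' 16-235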
Earlier you said you were "happy" with the MeGUI quality. MeGUI is just a front end for x264. If you use the same settings, the same libx264 version, and the same filters/processing, you will get the same thing with ffmpeg's libx264
Another way you could speed things up is to use 2 instances of Vegas and render out 2 lossless intermediates (e.g. AVI01.avi, AVI02.avi), because the bottleneck here is Vegas. An avs script can be used to join them (see the sketch below). I don't know if you can frameserve 2 instances through DMFS
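The join script is trivial; a sketch, assuming both intermediates were rendered with identical settings:
v1 = AviSource("AVI01.avi")
v2 = AviSource("AVI02.avi")
v1 ++ v2    # aligned splice; the clips must match in resolution, fps and colorspace
-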
One thing I don't get in Vegas is the video cross-fade, especially with Studio RGB, which I believe is 16-235. However, when you fade in/out, the levels go all the way to 0 instead of 16.
I'll try to explain it.
Sometimes a clip will start playing immediately without any lead time, so I add an empty clip (usually 1-2 seconds) of black video and no audio. Now, when you play it from the beginning, the video starts off black, and as soon as you hit the transition to the actual video there is a sudden color change (which is usually fixed using the color corrector preset -> Studio RGB to PC RGB). When using the video scopes, the color range goes all the way from 0-255 (PC RGB, I think)
I'm so confused by all these color spaces: BT.709, "ConvertToRGB32(matrix="Rec601")", YV12, YUY2, etc. This has been the hardest part of video editing for me to learn, and I still don't understand it fully.
I hope this makes sense. All the video I edit fully utilizes the 0-255 range. Is this the correct way of doing it? The results I see on my TV or PC are very good.
I really appreciate your patience in explaining some things to me, thank you.
As for MeGUI, yeah, I know it's only a front end for x264, but it's the one I learned to encode with, so I tend to refer to the front end and the encoder as the same thing. -
A lot of this has been discussed over and over; you can use search for more information. I don't expect you to "get it" right away; you'll probably have to come back and read bits here and there before you finally understand. I know I had to
"video" black is Y' 16, video white is 235 . 0-255 Y' is full range and "illegal" . End delivery formats are always Y' 16-235, CbCr 16-240 (you can have small offshoots, but the majority of the data is in that range). RGB is always 0-255 . "studio rgb" essentially means Y' 0-255 gets "mapped" to RGB 0,0,0-255,255,255 when you import a YUV input . The problem here is vegas handles different input formats differently. Some get studio RGB treatment, yet others get "computer range" RGB treatment. Also it handles different exports differently. Some get full, some get limited because some are "expected" to be using Studio RGB, others Computer RGB. And the behaviour changes if you use 32bit mode. It can get confusing in a hurry.... If you haven't already, read the "Glenn Chan" articles on Vegas colorspaces, it's basically required reading for any non casual Vegas user.
Vegas users have their own personal ways of doing things; some prefer to work in computer RGB. Bottom line - you have to know what each event is using, because some get studio RGB treatment and some get computer RGB treatment, and you have to convert one to the other so they match. You already figured this out, but apply either studio => computer RGB or vice versa to the event. Same thing upon export: if you use a format that "expects" studio RGB, you might have to use that conversion either for preview or before exporting
709 and 601 are the matrices; they are the math equations that determine the YUV <=> RGB conversions. There are exceptions, but by default 709 is used for all HD material and 601 for all SD material. Essentially, a PC matrix is full range and "maps" RGB 0,0,0-255,255,255 <=> Y' 0-255. A Rec matrix is standard range and maps RGB 0,0,0-255,255,255 <=> Y' 16-235, CbCr 16-240 in 8-bit values
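In avisynth terms, a sketch (placeholder filename, assuming a full-range RGB source):
AviSource("rgb_source.avi")
ConvertToYV12(matrix="Rec709")    # Rec matrix: scales RGB 0-255 into Y' 16-235 / CbCr 16-240
# ConvertToYV12(matrix="PC.709")  # PC matrix: keeps the full 0-255 range, no scaling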
All the video I edit fully utilizes the 0-255 range. Is this the correct way of doing it? The results I see on my TV or PC are very good. -
The video industry is very conservative: 'stuffy old bearded engineers' who keep insisting that 220 luminance levels (and it isn't even luminance, just to make it more complicated) must obviously be far superior to 256 levels. I am sure there are still engineers who get heart palpitations when they are confronted with content having text one pixel over title safe. I suppose some still believe that millions of people have this in their homes:
One would have thought that with the introduction of HD all this would have been fixed, but no, it lingers on.
-
For HD video, why don't you try skipping those RGB-to-studio effects and just adding a line in the Avisynth script (exporting RGB):
ConvertToYV12(matrix="PC.709")
for interlaced HD video: ConvertToYV12(interlaced=true, matrix="PC.709")
for interlaced SD video: ConvertToYV12(interlaced=true, matrix="PC.601") -
I think, from his description, his problem was the "empty clip" generated media. So apply computer-to-studio to those events and everything will be in "studio" levels, then you can use a PC matrix in avisynth. Or transform everything to "computer RGB" and use a Rec matrix. Either way, you need to match the levels for all events/media
-
I just read this: http://www.glennchan.info/articles/vegas/colorspaces/colorspaces.html
Never mind all the confusion between color spaces and how Vegas handles them, Vegas decided to change how they handle them between Vegas 8 (which I've been using) and the newer versions I've been avoiding for this very reason.
Correct me if I'm wrong, but editing in Computer RGB seems to be the way to go, then, when exporting to other formats (DVD, HD, web, etc.) do the appropriate color space conversions.
@ _Al_
I'm assuming that the lines are added to Avisynth scripts for the final encode (ie. x264)?
@ newpball
I think you meant to say that, with the advent of HDTV, the "super-powers" could've come up with one single "standard" across the board and fixed the age-old issues of different frame rates, colors, resolutions, etc. If so, I totally agree.
I'll try to upload a screenshot of what I'm working on. -
-
Never mind all the confusion between color spaces and how Vegas handles them, Vegas decided to change how they handle them between Vegas 8 (which I've been using) and the newer versions I've been avoiding for this very reason.
Correct me if I'm wrong, but editing in Computer RGB seems to be the way to go, then, when exporting to other formats (DVD, HD, web, etc.) do the appropriate color space conversions.
Usually you would test the end result's levels, then work backwards through the workflow to see where / if / at what stage something went wrong.
-
My end result looks like the last image; both on my computer and on TV using x264 in an mkv.
-
It's a bit more complicated than that. You posted an RGB screenshot. The method of taking the screenshot matters, because there is a YUV => RGB conversion.
So the actual levels and colors might not be what you think they are, because of a different matrix, renderer, or decoder
e.g. Chances are it's not, but..... it's possible that both your computer/gfx card and TV are calibrated in the same incorrect way. This actually occurs quite frequently. What you "see" might be different from what other people are seeing -
I'd put it right after loading the clip:
AviSource("RGB_dmfs_from_Vegas.avi")
ConvertToYV12(interlaced=true, matrix="PC.709")
#rest of script
This way you can load black clips on the timeline, but watch for losing detail in the highlights. For example, check some shots with electric lines against the sky, if you can find some, and watch the result: if those wire lines kind of start to disappear, you're basically losing detail in the highlights. If that is not acceptable, you go with that Studio-to-RGB in Vegas, but then, using a black clip for example, you have to fix that blank clip as well, like giving it 16,16,16 or applying the studio-to-RGB effect to it too -
-
You can if you want, but you should learn to check it yourself. One way to do this is in avisynth
Compare the input source and the final output (mkv). Histogram() will return the Y' values; it's really a waveform monitor. The "brown" areas are "illegal": 0-15 and 236-255
Load them into avisynth; one way is AvsPmod, which has tabs that you can swap back & forth between with the number keys
AVISource("input.avi")
Histogram()
ConvertToRGB(matrix="rec709")
FFVideoSource("output.mkv")
Histogram()
ConvertToRGB(matrix="rec709")
This way you control more variables and take things like the matrix used and the renderer out of the equation. If you take a screenshot with some video player, it might be using a different renderer, for example
If they match, and they "look" OK both to your eyes and on the waveform, with no clipping, then your current workflow is fine in terms of levels / colorspace issues
-
It depends on what the difference is or what the actual problem is
There are still some "gotchas" and things "not controlled" in that mini test - for example, AVISource() should be using the CineForm VFW decoder, but depending on how your system and the CineForm settings are set up, it might be returning something else, not YUV 4:2:2
At this point it might be appropriate to upload a sample of the input and corresponding sample of the output -
I'm not sure what you're asking. Are you asking for the input/output CineForm clips?