I've only been at this for about ten days, but I've run into some questions and a couple of hitches I could use advice on, and I'd appreciate those more experienced than myself performing a sanity-check on my workflow. Any consideration is greatly appreciated.
Goal
Our primary goal is the archival and preservation of video game footage captured from an actual playthrough, without any added watermarks, commentary, or lower-thirds, in such a way that the video could be referenced decades into the future as a source of reasonable-quality, unmolested material. At the same time, size is certainly a concern: a sixty-hour game can easily consume 6TB of lossless video, and $600 for enough drives to store and back up a copy of it *per game* is out of the question. Even encoded copies will be enormous at 300GB (double that for backup) for such a game encoded at 720p and a 10,240 kbps bitrate.
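For reference, the back-of-envelope math behind those figures; a sketch, using the video and audio bitrates quoted above:

```shell
# Rough storage estimate for one hour of encoded footage at the
# bitrates discussed here (10,240 kbps video + 448 kbps audio).
video_kbps=10240
audio_kbps=448
total_kbps=$((video_kbps + audio_kbps))            # 10688 kbps combined
bytes_per_hour=$((total_kbps * 1000 / 8 * 3600))   # kbps -> bytes/sec -> bytes/hour
gb_tenths=$((bytes_per_hour / 100000000))          # tenths of a GB
echo "$((gb_tenths / 10)).$((gb_tenths % 10)) GB per hour of content"
```

Which lands just under 5GB per hour, matching the result further down.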
I'm currently only recording from the PC, though I'd like to do it with consoles from all generations at a later point. Doing this properly seems more involved, and there is a lot of gimmicky garbage out there. Ideally, I'd do everything (PC and consoles) via a dedicated PCI card like the Blackmagic Intensity Pro, except that I believe it has a number of limitations as well as massive disk I/O requirements. And I'm not sure how HDCP might play into it come the next generation of consoles.
Gaming and Encoding System
- i7-3770K
- 16GB 1600MHz (9-9-9-24)
- 2 x GTX 670 4GB
- ASUS P8Z77-WS
- Samsung 830 256GB SSD
- 2560x1600 (16:10) Apple Cinema Display
Utilities
- Dxtory
- MeGUI
- Vegas Movie Studio Platinum
- FAAC codec
- x264 CLI codec
Workflow
- Record from the full 2560x1600 (~60fps) down to 1152x720 30fps (16:10 720p) using Dxtory.
  - Video: Lagarith codec using RGB24.
  - Audio: PCM 48kHz 16bit Stereo.
  - Result is approximately 1.35GB/minute.
- Minimal editing in Sony Vegas Movie Studio Platinum (cutting, merging, splitting for continuity and length).
- Render out as Lagarith and PCM and keep fingers crossed that it uses Smart Rendering so as not to re-encode and lose data.
  - ALTERNATE: You could use VirtualDub for this process and rely on "direct stream copy" mode to avoid re-encoding on export, though the editing process is more limited and VirtualDub has its own issues in general.
- Convert to x264 with MP4 container via MeGUI.
  - Video: 2-pass at 10,240 kbps with High AVC profile and the Very Slow preset. CABAC enabled, closed GOP with size based on FPS, B-frames set to 3, adaptive B-frames Optimal, B-pyramid disabled, 5 reference frames, 40 extra I-frames, subpixel refinement 10 (QP-RD), and Trellis set to Always.
  - Audio: 448 kbps ABR FAAC.
  - Result is 75MB/minute.
The result is an 1152x720 30fps file just under 5GB/hr with a 600-800% encode time (so an hour of content takes six to eight hours to encode).
Here is an example of output rendered from this workflow, on YouTube, with all of YouTube's inherent changes applied. It should give you an idea of what I'm accomplishing with my effort (or what I'm failing to accomplish).
QUESTIONS / ACTION ITEMS
1. MKV, MP4, OTHER...?
MKV Benefits: More options, including many that one may not use right now, but could in the future. Handles almost all video and audio codecs. Multiple tracks, dubs, subs, better chapter handling. And it's an open standard. It may be the best/only choice if you wanted to later add commentary so that you could have one track that is purely gameplay, another that is gameplay commentary, another that is review and historical information about the game, etc. The chapter features may be great for keeping files large instead of chopping them into hour-long pieces and then adding jump-points for levels/acts/chapters/whatever in a game. Oh, one good thing about MKV is that I am given to understand it handles combining the video and audio better, without any issues with audio and video interleaving (though I could be wrong on this).
MKV Liabilities: Poor support and constantly evolving standard. In a decade or two, will it be as readily playable as MP4 and other containers, today? Or will it be an obscure has-been leaving users/files of it totally stuck?
MP4 Benefits: Wider support. Fewer features and functions. Usually handled natively on most devices. AAC/FAAC used to be a problem, which could undermine the choice of MP4, but even XBOX 360 should handle the MP4 with AAC/FAAC, today. Will almost certainly remain accessible in a decade or two, though it may be tomorrow's VFW when something else comes along.
MP4 Liabilities: Fewer options and functions. Primarily addresses business concerns over community concerns (DRM, etc). Seems less likely to grow significantly in terms of functionality.
While features are important, so is longevity and ease of playability. That is, portable devices should work with whatever is chosen. Standard players should work with it. And consoles and other boxes should play it, rather than require transcoding on-the-fly via PLEX, PS3 Media Server, and other external media servers.
Of course, it is my understanding that both MKV and MP4 should be easy to demux in the future, so if something much better came along in twenty years, I could easily demux from MP4 or MKV and apply another container with no loss. Correct?
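As a sketch of what that future container swap would look like, assuming ffmpeg is available (the filenames here are hypothetical): a remux copies the compressed bitstreams untouched, so nothing is re-encoded and nothing is lost.

```shell
# Rewrap streams into a different container without re-encoding:
# -c copy passes the video/audio bitstreams through as-is.
ffmpeg -i gameplay.mp4 -c copy gameplay.mkv   # MP4 -> MKV
ffmpeg -i gameplay.mkv -c copy gameplay.mp4   # and back again
```

Either direction takes seconds rather than hours, since no encoding occurs.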
2. COLOR SPACE
I want to maintain RGB24, but during encoding (actually, right after indexing), MeGUI always reports "Successfully converted to YV12" and if I look at the temporary AviSynth script MeGUI has created, it has "ConvertToYV12()" added to the end of the script. I don't understand where it is picking this up from and nothing I do seems to prevent it. I understand that it may be automatically added just before launching any plugins that can only work in YV12, but I don't believe that the above workflow necessitates any AviSynth plugins to be used...?
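For what it's worth, x264 can encode without the 4:2:0 conversion in its High 4:4:4 Predictive profile, which is presumably why MeGUI inserts ConvertToYV12() when targeting the normal High profile (4:2:0 input only). A hedged sketch with hypothetical filenames; note that player support for Hi444PP is far rarer than for plain High profile:

```shell
# Encode without chroma subsampling (Hi444PP profile) instead of
# letting the script convert to YV12 first.
x264 --preset veryslow --crf 17 --output-csp i444 --profile high444 \
     -o gameplay_444.mkv input.avs
```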
3. ROOM FOR IMPROVEMENT?
What could I do to improve my workflow and produce better finished results?
I could move to 1080p (well, to maintain 16:10, it'd be 1728x1080), but that begins to incur a performance hit while recording. Lagarith still produces a fine file, but in-game FPS drops below 60fps in many situations -- in the 50s and even down to the 40s. Also, 720p bitrates don't flat-line until after 30000 kilobits, so there's still a lot of data space room at 720p to play with (though, if I were to increase the amount of storage I allow per hour of content, I'm not sure if it would be better to go 20,480 bitrate at 720 or at 1080, if 1080 were feasible at all).
Additionally, we'd have to make a compromise to avoid consuming much more storage space. So, higher resolution, but lower quality with the same bitrate. There's probably an argument to be made for "resolution is more important than everything", but in the long run, I don't think it's going to matter.
I say this, because the idea is that when watching on a higher resolution device, 720p is going to appear degraded. However, when we move to 2k displays, the same is going to be true of 1080p encodings. And when we move to 4k displays, it'll be even worse. And 8k screens... So while 1080p will always provide more than 720p, the difference between the two at the coming massive display resolutions will be almost negligible.
Is Sony Vegas Movie Studio Platinum worth the price? The Pro version seems unnecessary at $600. I don't plan on mixing twenty video channels and twenty audio channels, which appears to be the primary difference. For the most part, I will probably only be using it for cutting, joining, and splitting content. Though things could change down the road, I don't plan to add sound effects, lower-thirds, or watermarks. If VirtualDub didn't have so many gotchas and could edit more than one video stream at a time, it would probably even suffice.
Any other thoughts on my process would be welcome.
-
MKV Liabilities: Poor support and constantly evolving standard.
evolving standard: it evolves about as much as MP4 (might even be less, but hey, both are meant to be STANDARDs); mkvtoolnix did change some stuff, but nothing that wasn't in the specification
In a decade or two, will it be as readily playable as MP4 and other containers, today?
Or will it be an obscure has-been leaving users/files of it totally stuck?
Will almost certainly remain accessible in a decade or two, though it may be tomorrow's VFW when something else comes along.
Of course, it is my understanding that both MKV and MP4 should be easy to demux in the future, so if something much better came along in twenty years, I could easily demux from MP4 or MKV and apply another container with no loss. Correct?
MP4 Liabilities: ... Seems less likely to grow significantly in terms of functionality.
Color space: ... I don't understand where it is picking this up from and nothing I do seems to prevent it.
side notes:
Using 2-pass 10240 bitrate
10 (QP-R) subpixel refinement, and Trellis set to Always
Audio: PCM 48khz 16bit Stereo. -> Audio: 448 ABR FAAC.
Render out as Lagarith and PCM and keep fingers crossed that it uses Smart Rendering so as not to re-encode and lose data.
Standard players should work with it.
I say this, because the idea is that when watching on a higher resolution device, 720p is going to appear degraded. However, when we move to 2k displays, the same is going to be true of 1080p encodings.
Cu Selur -
The main difference seems to be that, for the most part, anything that can play MP4 will play MP4, whereas MKV's constant evolution means a device that isn't regularly updated may play some MKVs and not others.
Will almost certainly remain accessible in a decade or two, though it may be tomorrow's VFW when something else comes along.
Color space: ... I don't understand where it is picking this up from and nothing I do seems to prevent it.
Using 2-pass 10240 bitrate
The one thing that does suck about 2-pass, however, is that the first pass takes ridiculously long. It seems to be an efficiency issue with how they handle multi-threading, because it'll only consume ~10% CPU.
10 (QP-R) subpixel refinement, and Trellis set to Always
Audio: PCM 48khz 16bit Stereo. -> Audio: 448 ABR FAAC.
Render out as Lagarith and PCM and keep fingers crossed that it uses Smart Rendering so as not to re-encode and lose data.
But, again, that could be meaningless. I've tried to look up information on this to no avail. I don't currently have the knowledge to know if this is insignificant.
Standard players should work with it.
I say this, because the idea is that when watching on a higher resolution device, 720p is going to appear degraded. However, when we move to 2k displays, the same is going to be true of 1080p encodings.
Thank you for your insight and the time you spent in reply. I've spent countless hours trying to test things and educate myself on them and to do things the right way, so as not to waste anyone's time with questions with obvious answers. I really appreciate your response.
Regards. -
4:4:4
unless there is a good reason to convert it.
A second pass should more usefully distribute the bits I'm allocated to the video, so that the overall quality in parts that need it is improved, right?
The one thing that does suck about 2-pass, however, is that the first pass takes ridiculously long. It seems to be an efficiency issue with however they handle multi-threading, because it'll only consume ~10% CPU.
I think 9 or even 7 would probably be enough, but when testing, this didn't seem to really impact the encoding time.
which doesn't make sense, because I'm using FAAC with ABR 448... *NOT* VBR... unless ABR (as opposed to CBR) also causes them to appear or even be detected as VBR.
That should ensure that it avoids actually doing any encoding
my understanding is that YV12 (4:2:0?) strips out a lot of light information...?
Cu Selur -
YV12 is subsampled chroma; you lose a lot of the color information, but it's usually not perceivable in motion. Human eyes aren't as sensitive to color as to greyscale; that's why 4:2:0 is used for virtually all distribution formats (Blu-ray, Flash, DVD, portable video, everything...). 4:2:0 1280x720 would contain 1280x720 in the Y' channel (greyscale), but the CbCr information would only have 640x360 pixels of color information. If you zoom in frame by frame you will notice the loss, especially on color borders, and more so with graphics, games, and CGI than with live action content.
http://en.wikipedia.org/wiki/Chroma_subsampling
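The sample counts above can be checked directly; 4:2:0 carries exactly half the samples per frame that RGB24 does (one full-resolution luma plane plus two quarter-resolution chroma planes):

```shell
# Samples per 1280x720 frame: RGB24 vs 4:2:0 (YV12).
w=1280; h=720
rgb_samples=$((w * h * 3))                  # R, G, B sample per pixel
y_samples=$((w * h))                        # full-res luma (Y') plane
chroma_samples=$((2 * (w / 2) * (h / 2)))   # half-res Cb + Cr planes
yv12_samples=$((y_samples + chroma_samples))
echo "RGB24: ${rgb_samples}  YV12: ${yv12_samples}"   # YV12 is exactly half
```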
Using 2-pass 10240 bitrate
The one thing that does suck about 2-pass, however, is that the first pass takes ridiculously long. It seems to be an efficiency issue with however they handle multi-threading, because it'll only consume ~10% CPU.
An arbitrary 2-pass bitrate generally isn't a good way to do encoding. You might allocate too much or too little depending on the content. How did you choose 10240? For some types of video it's too much, for others too little. Quality-based encoding will deliver the quality desired (file size changes proportionally to content complexity)
10 (QP-R) subpixel refinement, and Trellis set to Always
Render out as Lagarith and PCM and keep fingers crossed that it uses Smart Rendering so as not to re-encode and lose data.
Just a guess, but it might be larger because you rendered out with an alpha channel (RGBA) but imported RGB. Even a dummy alpha channel requires bitrate (also assuming you didn't edit the video, add overlays, that sort of thing; just an in/out operation).
Last edited by poisondeathray; 20th Sep 2012 at 16:29.
-
Do a search on CRF vs 2pass encoding. This has been discussed to death.
-
Lagarith is a "pig" (i.e. very slow) and not a good choice for recording or editing (very high latencies in encoding and decoding speed), but the compression is fairly good. It's more suitable as an archive or storage format
Personally I would use a higher resolution, and using a different, less compressed codec should enable you to do that without dropping in-game fps too much
e.g. ut video codec, amv2 -
I hadn't understood this before and assumed RGB24 was the default, unless you had to make a sacrifice for sake of storage or performance. So support for playing RGB is uncommon? Is support for 4:2:2 just as uncommon? Any insight as to the reason we're still using 4:2:0 for everything? The space savings can't be that significant compared to sacrifice of fidelity, with the processing speed and bandwidth we have now.
I'm primarily going to deal with games (CGI, graphics, animation, etc) and though I suppose YV12 should be good enough for me if it's enough for BluRay, I definitely do dislike the fuzz of 4:2:0 that can sometimes be seen. Of course, if compatibility is going to be a problem far into the future if I choose RGB, then it's a moot point.
Thanks for that clarification. I had consistently read that you should always choose multi-pass, because it would force better quality due to more efficient bit allocation. The only reason I've been using it in my testing is the belief that single-pass was just wasting bits, even though most CRF could result in a smaller file overall. I was also concerned with how this could impact streaming, should I choose to do that at some point. It seems that Youtube and the like prefer consistent b-frames, bitrates, etc.
I originally understood multi-pass to be the *solution* to over/under-allocating bitrate, which is one reason I chose it. As for the choice of bitrate, it was somewhat arbitrary. I started out with what Google advises for most of their upload settings (H.264, MP4, AAC-LC, 2 B-frames, and so on; they advise a minimum of 8,000 kbps for 1080 and 5,000 kbps for 720 "standard", and 50,000 and 30,000 respectively for "professional"). Then I took into account the amount of storage I could reasonably allocate per hour of content and came out with about 10 mbps. Checking for visual clarity and performance, it seemed identical to the eye compared to the content I was recording (taking the transition from 2560x1600 to 1152x720 into account). Though I'm not necessarily aiming to make this stuff YouTube content, it seemed like a good place to start, and being compatible with their recommendations would ensure that when/if I ever wanted to offload content to YouTube, it would come out as good as possible.
I also figured that "quality setting 17 or 23" seemed really arbitrary and preferred the fine control of specifically saying what the average bitrate should be in my files, but now that you pose the question to me, I realize that "quality setting 17 or 23" is no more arbitrary than "10 mbps", other than for file size.
And as the following comment from the x264 dev enlightens me, CRF and multipass achieve the same results for the same amount of space. Somehow I didn't clearly understand this, until now. With CRF, the file size is an uncertain and unlimited outcome based on the wanted quality. With multipass, the first pass is used to help determine what CRF quality to use during the second pass that will not exceed the filesize/bitrate you have chosen. So, if I have a 500MB file from multipass, there is a CRF quality value that will produce a 500MB file with the (essentially, but not literally) same quality as multipass. However, that 500MB file may be largely unnecessary, because it could equate to CRF 15 or something, while a CRF 17 would be more than adequate and only a bit more than half the size.
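So a CRF encode simply drops the first pass. As a sketch, the single-pass equivalent of my 2-pass command further down, keeping the same manual tweaks (filenames hypothetical):

```shell
# Quality-targeted encode: file size floats with content complexity,
# quality stays constant, and there is no first pass at all.
x264 --preset veryslow --crf 17 --bframes 3 --b-pyramid none --ref 5 \
     --output "output.264" "input.avs"
```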
Not unless using the 'veryslow' preset secretly forces it to behave the same way as --slow-firstpass, which I don't think it does. I selected the 'veryslow' preset and then manually tweaked a few things, such as lowering the bframes, etc.
"CRF, 1pass, and 2pass all use the same bit distribution algorithm. 2-pass tries to approximate CRF by using the information from the first pass to decide on a constant quality factor. 1-pass tries to approximate CRF by guessing a quality factor over time and varying it to reach the target bitrate." -- Dark Shikari
# program --preset veryslow --pass 2 --bitrate 10240 --stats ".stats" --bframes 3 --b-pyramid none --ref 5 --output "output" "input"
I really don't think I'm bottlenecking anywhere. Final pass consumes available CPU resources (50-100%), but the first pass of any multi-pass process always consumes only around 10-30% CPU and very minimal memory and disk IO. Single pass encoding (CRF, etc) also uses 50-100% CPU. Selecting multi-pass presets (medium, faster, superfast, ultrafast) result in faster second pass encodes, but the first pass remains the same as does resource utilization.
Here's a snip from my ever-growing spreadsheet:
FPS on First Pass, Second Pass, Size, Settings...
27.13 | 13.13 | 570MB | subpixel refinement: 11, trellis 2, adaptive b-frames 2, preset -- very slow
27.77 | 13.21 | 570MB | subpixel refinement: 09, trellis 0, adaptive b-frames 2, preset -- very slow
27.72 | 13.09 | 570MB | subpixel refinement: 07, trellis 0, adaptive b-frames 2, preset -- very slow
28.31 | 21.44 | 570MB | subpixel refinement: 07, trellis 0, adaptive b-frames 1, preset -- very slow
28.31 | 19.30 | 570MB | subpixel refinement: 09, trellis 0, adaptive b-frames 1, preset -- very slow
28.01 | 25.11 | 570MB | preset -- medium
28.31 | 27.00 | 570MB | preset -- faster
28.06 | 28.34 | 570MB | preset -- superfast
28.42 | 28.28 | 570MB | preset -- ultrafast
25.99 | XX.XX | 221MB | crf const quality 20, preset medium
25.23 | XX.XX | 343MB | crf const quality 17, preset medium
22.54 | XX.XX | 334MB | crf const quality 17, preset slow
16.07 | XX.XX | 331MB | crf const quality 17, preset slower
9.64 | XX.XX | 306MB | crf const quality 17, preset veryslow
25.05 | XX.XX | 395MB | crf const quality 16, preset medium
24.64 | XX.XX | 521MB | crf const quality 14, preset medium
I should clarify: if I've chosen to encode as ABR, shouldn't the metadata for the file show it as ABR? For example, original PCM 48kHz 16bit stereo at 1,652 kbps is encoded, per my selection, as FAAC 48kHz 16bit stereo at 448 kbps, but MediaInfo shows the finished product as "AAC LC Variable 165kbps 48kHz stereo with a *max* bitrate of 224 kbps". I understand the average bitrate may not be what I selected in my settings, due to the original source and useful manipulation of it by the codec, but would the selection of ABR still show up as VBR in said metadata?
I'm just concerned that I may not be getting what I think I'm getting/selecting.
My misunderstanding, then. I thought if the program (Vegas, Virtualdub, etc) had "smart rendering / direct stream copy" functionality that it would employ that always, as long as you were using the same codec and settings going out as the original file has. So that by selecting Lagarith as the output format of this original lagarith file with the same settings it came in with, it would "smart render" it.
I leave the Lagarith configuration the same when rendering from Vegas as I do originally (RGB and not RGBA). So it *should* be retaining that and not adding anything of its own. However, I'm just not familiar enough with Vegas to know this for certain. Hell, it never even occurred to me that I might ever want to use a full-fledged media editor until just a few days ago when attempting to cut videos one at a time and then merging them separately proved to be unwieldy.
The only other thing I could think of is that it is somehow doing something with audio interleaving, though I can't imagine changing it from about 998ms or 1001ms (the default when viewing the raw Lagarith file) to 250ms would account for a 300MB increase on a 4.5+GB file. And when I tested using both no interleaving and quarter-second interleaving, it came out with a similar increase of file size, no matter what.
Anyway, as long as it's essentially not touching the content or impacting the later encode, I won't fret.
I've actually had the opposite experience. I found FRAPS provided horrendous in-game performance and since it shouldn't be very CPU intensive, it appeared to be IO-related. I don't really want to have a dedicated RAID0 just for recording some games, so it's writing to single HDDs with about 70-85MBps consistent write speed. (Also, it doesn't provide the needed resolution options).
I also tried Dxtory's own codec. It's interesting, in that it provides lossless with non-RAID setups by alternating frame writes between drives and recombining them at the end. However, even this encounters significant problems. In-game performance is just fine, but the write speed just bites it, so when you play back the raw Dxtory file, it quickly becomes choppy and is unwatchable. It doesn't make sense, because two drives offering 70-85MBps writes *each* should not cause an IO bottleneck. Last time I tested, I got ~14fps written to file with Dxtory compression both on and off.
However, with Lagarith, I'm able to record to 1152x720 at 30fps while playing at 2560x1600 without ever dipping below 60fps in-game. If I switch to 1728x1080 in Lagarith, in-game frequently drops to 40s and 50s. As you see at the top of my original post, I'm running a fairly robust system, so this really shouldn't be an issue. But it is. And I suspect it's largely due to the resolution I'm actually playing at. Most people don't seem to be playing at 2560x1600 while recording and that's a hell of a lot of pixels.
I've thought about buying another SSD, which should give me 300-400% write-speed or more, but if I can only fit around 200GB at a time, I'm going to have to stop playing and recording and spend quite some time moving the data off to a storage drive every hour or two.
If I could figure this out, I would gladly stick with recording at 1728x1080.
Here are a few examples of what I'm seeing, performance-wise:
Codec | color | mbps | resolution | result
dxtory | yuv410 | 319mbps | 1728x1080 | fine
dxtory | yuv420 | 382mbps | 1728x1080 | fine
dxtory | YUV410 | 114mbps | 1152x720 | fine
dxtory | yuv24 | 481mbps | 1728x1080 | choppy
dxtory w/ compression | rgb24 | 533mbps | 1728x1080 | extremely choppy - ingame writing to file at ~14fps, but playback is literally at 0.06FPS
dxtory w/o compression | rgb24 | 597mbps | 1728x1080 | extremely choppy
lagarith | YV12 | 206mbps | 1728x1080 | very choppy (but it's smooth while playing...?)
lagarith | RGB24 | 193mbps | 1152x720 | fine
ut | RGB | 436mbps | 1728x1080 | very choppy
ut | RGB | 219mbps | 1152x720 | fine
I'm a little baffled by some of these results because, for example, Lagarith YV12 at 1728x1080 is very choppy, but while it's recording, CPU usage isn't more than about 50% or so and disk usage is only around 35MB/s. So . . . I have no idea where the bottleneck is that is causing the end product to be so choppy.
Even with dxtory codec at RGB24 and 481mbps or higher at 1728x1080 giving me incredibly choppy behavior... I don't get it. During recording, CPU is no more than 50-60% and disk usage is around 35-45MB/s.
I tried ut RGB and the video stream it produced was 436mbps at 1728x1080. It was very choppy, even though we only saw 50-60% CPU utilization and 60MB/s.
In either of these cases, I don't seem to be hitting a CPU or disk IO bottleneck. So . . . I have no idea what I could focus on that would improve performance of recording. I think my system should certainly be capable of playing at high FPS at 2560x1600 while writing 1080p to disk.
One thing to note is that VLC can't play dxtory files at all and while it can play lagarith files, it can only do so in RGB. If they use YV12, for example, it has a lot of trippy colorized blocking snapping all over the place. This is important, because it *seems* like the stuttering occurs only when being played in Windows Media Player, though all of the rainbow warping stuff in VLC while playing these particular files does make it hard to tell if the choppiness is happening there, too.
In closing:
1) My use of a fixed bitrate is unnecessary and I should switch to CRF, and unless I have a very particular reason to explicitly set something in the configuration, using a preset in combination with CRF should be just fine.
2) How much of the encoding process and its choices are subjective? By nature, I tend to want to know how to precisely measure the quality of something, and relying on my eyes as the deciding factor seems ripe for failure. Others may see something differently than I do, and different hardware and environments now and into the future may create great uncertainty. Is this all really just a case of eye-balling it, saying "works for me", and moving on?
3) I'm still totally lost for why I'm seeing poor performance in many recording situations (in fact, this is what drove me to come to dxtory after FRAPS, to begin with).
Finally, thanks to everyone participating in this discussion. I've been able to learn a lot already and appreciate each reply. I've benefited from several "Oh, I get it!" moments. Thanks! -
Is this all really just a case of eye-balling and saying "works for me" and moving on?
I'm primarily going to deal with games (CGI, graphics, animation, etc) and though I suppose YV12 should be good enough for me if it's enough for BluRay, I definitely do dislike the fuzz of 4:2:0 that can sometimes be seen. Of course, if compatibility is going to be a problem far into the future if I choose RGB, then it's a moot point.
I was also concerned with how this could impact streaming, should I choose to do that at some point. It seems that Youtube and the like prefer consistent b-frames, bitrates, etc.
Not unless using the 'veryslow' preset secretly forces it to behave the same way as --slow-firstpass, which I don't think it does. I selected the 'veryslow' preset and then manually tweaked a few things, such as lowering the bframes, etc.
# program --preset veryslow --pass 2 --bitrate 10240 --stats ".stats" --bframes 3 --b-pyramid none --ref 5 --output "output" "input"
If I've chosen to encode as ABR, shouldn't the meta-data for the file show it as ABR? -
I think you have I/O issues.
Do you need lossless recording? Do you need RGB ?
Since you're going to YV12 and lossy encoding later anyway maybe a less demanding lossy codec is the solution for higher res, better game performance -
Though my intent is to archive gameplay footage far into the future, the constraints of storage require that I only keep the encoded copy, which is why I figured that if I could at least retain all of the color space data, it would be beneficial down the road. But as it will remain the only copy of the content that is kept, it won't be very useful if its compatibility for playback is also limited.
The ideal situation would obviously be archival of lossless data, but as I threw out many posts above, the length of a single game playthrough would make such an ambition painfully expensive. Though some games are 8-15 hours, many are 50-100. For long-term lossless archival, I probably wouldn't use lagarith (limited support, windows only, etc), so I'd probably be looking at 200gb/hr. That's 10-20 terabytes. Throw in the RAID overhead. Then double that for at least one backup copy and I'd be looking at potentially 50 terabytes for a single game. Not counting other hardware, just the drives for that would be more than a couple thousand bucks.
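The arithmetic behind that, as a sketch (the 200GB/hr figure is my own rough estimate from above):

```shell
# Lossless archival cost for a long game, per the estimate above.
hours=100
gb_per_hour=200
raw_tb=$((hours * gb_per_hour / 1000))   # 20 TB of lossless video
with_backup_tb=$((raw_tb * 2))           # 40 TB with one backup copy
echo "${raw_tb} TB raw, ${with_backup_tb} TB with a backup (plus RAID overhead)"
```

Add RAID overhead on top of that and the ~50 terabyte figure falls out.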
Which makes this all sort of a half-assed endeavor, I guess. But I don't know that anyone else is really doing anything more grand at the moment, either, so . . . *grumble*
Anyway, I can live with 4:2:0, if that's what everything uses. I just thought I was making some sort of sacrifice compared to other content we consume; not realizing almost everything is 4:2:0.
I was also concerned with how this could impact streaming, should I choose to do that at some point. It seems that Youtube and the like prefer consistent b-frames, bitrates, etc.
Not unless using the 'veryslow' preset secretly forces it to behave the same way as --slow-firstpass, which I don't think it does. I selected the 'veryslow' preset and then manually tweaked a few things, such as lowering the bframes, etc.
# program --preset veryslow --pass 2 --bitrate 10240 --stats ".stats" --bframes 3 --b-pyramid none --ref 5 --output "output" "input"
If I've chosen to encode as ABR, shouldn't the meta-data for the file show it as ABR?
Thanks again! -
Lossless recording would be ideal for minimal editing before encoding and the non-lossless I've tried (straight to x264) seems to incur a substantial performance hit (for obvious reasons). Going straight to YV12 (for the codecs which have that option rather than something like YUV420 which seems to be something slightly different) does give some improvement, but not really enough. This goes against what I would have expected, though, as going from RGB to something else should require more processing power without saving much in the way of disk-writes...?
At any rate, I agree that it seems like there is something going on IO-wise and I'll be damned if I can figure it out. Just doing a flat-out write-speed test on the two SATA drives I'm using gives me about ~80MBps which should be plenty, since the highest resolution recorded only seems to be going at around 580Mbps/75MBps. And the other ones I'm having problems with as demonstrated in my pasted table of results in my previous post are at a much lower rate than that.
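Sanity-checking the units, since bits and bytes are easy to mix up here:

```shell
# Convert a recorded video bitrate (Mbps) into the required sustained
# write speed (MB/s): divide by 8 bits per byte.
mbps=580
mb_per_s=$((mbps / 8))   # ~72 MB/s, right at the limit of an ~80 MB/s drive
echo "${mbps} Mbps needs ~${mb_per_s} MB/s sustained write"
```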
Even the dxtory codec, which shares the load across multiple drives has problems, when the combined write-speed of both drives should be around 150MBps.
Of course, these particular drives are 3tb and 1.5tb 5400RPM WD Caviar Green drives, but I didn't see anything better using a Samsung 500GB 7200RPM drive, either. In fact, the 7200RPM drive had slightly worse write-speeds.
(Edited the following due to most recent testing):
Recording to my primary OS drive (Samsung 830 256GB SSD with 2000+mbps/250+MBps write-speed) is much improved. Using lagarith to record to 1728x1080 with RGB causes an in-game FPS hit (not a lot, but around 50-60fps), but the playback of the final product seems to be decent. The video ends up with a 427mbps/57MBps bitrate.
Recording to the same drive with the same everything, but YV12, produces a 205mbps/25MBps bitrate video file that also seems fine.
I don't understand. Why is recording to my SSD (the same SSD I'm running the OS from, at that) performing better, when tests of my SATA drives show they should be capable of far more than a single drive needs, much less when using dxtory's built-in multi-drive writing (which would give about six times what the YV12 test above requires)?
Presumably, a separate SSD dedicated just to recording would be great (though I'd be concerned about long-term wear), but even dropping another couple hundred bucks on a 256GB SSD wouldn't be enough: write speeds would almost certainly degrade severely as capacity fills, and even if it were fine for 200GB at a time, I'd have to stop recording every hour and then sit for a couple of hours moving data off to another storage drive.
I don't think RAID would help, either. I'm getting 75-85MBps per SATA drive, and that should already be more than enough for a video with a 25MBps bitrate.
On the other hand, the lagarith 1152x720 RGB files I *am* recording end up using around 189mbps/23MBps, so . . .
Now, on the recordings where everything becomes incredibly choppy, the disk usage is very low, as I said: around 35MB/s, which should leave plenty of overhead on SATA drives getting 75-85MBps write speeds. However, those recordings also sometimes seem to incur queue lengths of 1+, while the non-choppy recordings stay around 0.5 or less.
Last edited by Cronjob; 21st Sep 2012 at 18:51.
-
Right, not necessarily the single or multi-pass, but the constant bitrate, choice of b-frames, constant GOP size, etc.
-
Because you're probably measuring maximum sequential transfer rates. Minimum transfer rate is what you need to know. Not only that, the block size for a large file transfer is going to be different from what is typically used for low-level benchmarks and measurements.
Mechanical HDDs slow down as they fill up; you might be getting somewhere around 20-30MB/s near the end, because the outer tracks at the beginning of the drive transfer faster than the inner tracks at the end, so the sequential transfer rate tapers off as the drive fills. (SSDs decrease in performance as you fill capacity too, though for different reasons, and the minimum transfer rate is going to be more than enough for most SSDs, early generations excluded.) That's why RAID-0 will help.
Last edited by poisondeathray; 21st Sep 2012 at 21:35.
-
Aren't these recording utilities essentially performing sequential writes? I thought they were, and therefore I've been emphasizing sequential write tests. If this is incorrect, then I've been fundamentally focusing on the wrong things while benchmarking.
Mechanical HDD's also slow down as they fill up, you might be getting 10-20MB/s near the end
RAID-0 will help
I've no doubt that RAID-0 would improve maximum write speed, but I thought I should be getting far more out of a single drive (or even two drives, using the dxtory multi-drive-write process), too. And if I wasn't, something must have been wrong. Of course, the only way I'm going to know is to finally get off my ass and test that (I have never dealt with RAID under Windows, but I imagine it'll be fairly simple). -
Yes, I added some comments before you posted ^
Yes, these are sequential transfers, but common benchmarking software might use a different block size than a large file transfer, so it's not necessarily applicable.
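One way to see the block-size effect described here is to benchmark the same drive with the chunk sizes a recorder would actually use. A minimal sketch (hypothetical code, synchronous I/O only; without the fsync the OS write cache would make the numbers meaningless):

```python
import os
import time

def seq_write_mbps(path: str, total_mb: int = 256, block_kb: int = 4096) -> float:
    """Sustained sequential write speed in decimal MB/s for a given block size."""
    block = b"\0" * (block_kb * 1024)
    target = total_mb * 1024 * 1024
    written = 0
    start = time.perf_counter()
    with open(path, "wb") as f:
        while written < target:
            f.write(block)
            written += len(block)
        f.flush()
        os.fsync(f.fileno())  # force data to disk so the timing is honest
    elapsed = time.perf_counter() - start
    os.remove(path)           # clean up the test file
    return written / 1e6 / elapsed

# Compare a benchmark-style 4KB block against a frame-sized multi-MB block
# (paths are illustrative):
# print(seq_write_mbps("D:\\test.bin", 1024, 4))
# print(seq_write_mbps("D:\\test.bin", 1024, 2432))
```

If the two numbers differ a lot, that would explain why a synthetic sequential benchmark and the recorder disagree.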
The other possibility is that you have some sort of controller problem. You might try switching ports, or trying another controller (e.g. the Marvell ports instead of the Intel ones).
I'm not familiar with dxtory's multi-drive write process. Maybe, since it's some sort of software RAID, it adds some other overhead? Surely hardware RAID off the chipset would be less demanding.
I don't know; some of it doesn't make sense. But maximum transfer rates are pretty much useless. You need to know minimum transfer rates. (Same with gameplay FPS: who cares about maximum FPS? It's minimum FPS that's important; that's when it gets laggy and you get fragged.) Why else would recording to the OS SSD make a difference? If transfer rates weren't important, recording to the OS drive should be worse, not better. It shouldn't be a latency issue, because you're doing sequential writes of a large file.
Last edited by poisondeathray; 21st Sep 2012 at 21:57.
-
True. The dxtory program has an option for testing the write speed of each drive you enable (this only applies when using the dxtory codec with the dxtory program, not other codecs), and you can say how much data to test with: 1GB by default, though I usually have it try around 10GB. I figured this would be most representative, at least, of the application's real-world use of those drives.
The other possibility is that you have some sort of controller problem. You might try switching ports, or trying another controller.
I'm not familiar with dxtory's multi-drive write process. Maybe, since it's some sort of software RAID, it adds some other overhead? Surely hardware RAID off the ICHR would be less demanding.
As I understand it, you tell it which hard drives you have, then enable them for use after running write-speed tests. When you're actually recording, it alternates drives with each frame written, up to eight drives (i.e. frame 1 is written to the first drive, frame 2 to the second, frame 3 to the third . . . frame 9 to the first again). It seems like a clever workaround for actually using RAID.
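The alternating scheme described above can be sketched as a simple round-robin over output directories (this is a hypothetical illustration of the idea; Dxtory's actual on-disk format is its own):

```python
import os
from itertools import cycle

def write_frames_round_robin(frames, drive_dirs):
    """Distribute frames across drives: frame 0 -> drive 0, frame 1 -> drive 1, ...
    wrapping back to the first drive when the list is exhausted."""
    targets = cycle(drive_dirs)       # endless round-robin over the drives
    paths = []
    for i, (frame, d) in enumerate(zip(frames, targets)):
        path = os.path.join(d, f"frame_{i:06d}.raw")  # hypothetical naming
        with open(path, "wb") as f:
            f.write(frame)
        paths.append(path)
    return paths

# With three drives, frames 0, 3, 6 land on the first drive,
# frames 1, 4, 7 on the second, and frames 2, 5, 8 on the third.
```

Each drive then only needs to sustain 1/N of the total frame rate, which is the same bandwidth-splitting effect RAID-0 striping gives you, just at frame granularity instead of block granularity.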
Your input has been very helpful. Despite my background, it can be difficult when dealing with a whole new area like this, where you don't have the experience or instinct to know if you're seeing a problem or it's how things generally are. It sounds like this definitely isn't what I should be expecting and that further diagnostics on disk IO are the next step.
These last 36hrs have been very enlightening. I hate to think how much time I would have wasted or quality I would have sacrificed if not for the guidance of you guys in this forum. -
By the way, it isn't much, but I wanted to show my appreciation for your help these couple of days, Selur and poisondeathray, by making a small $5 donation to Engineers Without Borders on behalf of you guys (receipt attached).
Thank you. -
I'm not even going to pretend I know what you're going on about, but I have an idea for you: how about hooking up an HDD recorder or DVD recorder to the output signal of your video card? Video card output to the recorder's input: no impact on gaming and perfect recording. I don't know if this would work, but in my mind it makes sense. It'd also not be a bad idea if someone came up with a box that simply records to a memory stick, say 32MB or so; no need for a large box with an HDD, just a recording circuit with input from the video card and a USB slot for a memory stick. In my mind it works perfectly, lol. There I go dreaming again, but not a bad idea, methinks.