VideoHelp Forum
Results 1 to 19 of 19
  1. Member
    Join Date
    Sep 2012
    Location
    Denver
    I've only been at this for about ten days, but I've run into some questions and a couple of hitches that I could use advice on, and I'd welcome a sanity check of my workflow from those more experienced than myself. Any consideration is greatly appreciated.

    Goal
    Our primary goal is the archival and preservation of video game footage captured from an actual playthrough, without any added watermarks, commentary, or lower-thirds, in such a way that the video could be referenced decades into the future as a source of reasonable-quality, unaltered material. At the same time, size is certainly a concern: a sixty-hour game can easily consume 6 TB of lossless video, and $600 for enough drives to store and back up a copy *per game* is out of the question. Even encoded copies will be enormous at 300 GB (double that for backup) for such a game encoded at 720p and a 10,240 kbps bitrate.
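The storage figures above can be sanity-checked with a little arithmetic. This is a rough sketch in decimal units; the ~1.35 GB/minute Lagarith capture rate comes from the workflow described later in this post, and real footage will vary:

```python
# Rough storage math for a 60-hour playthrough (decimal GB/TB).
def stream_gb(bitrate_kbps, hours):
    """Size in GB of a stream at the given bitrate over the given duration."""
    return bitrate_kbps * 1000 / 8 * 3600 * hours / 1e9

hours = 60
lossless_tb = 1.35 * 60 * hours / 1000      # Lagarith capture at ~1.35 GB/min
encoded_gb = stream_gb(10240 + 448, hours)  # 10,240 kbps video + 448 kbps audio

print(f"lossless capture: ~{lossless_tb:.1f} TB")  # ~4.9 TB; ~6 TB with retakes/overhead
print(f"encoded copy:     ~{encoded_gb:.0f} GB")   # ~289 GB, in line with the ~300 GB estimate
```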

    I'm currently only recording from the PC, though I'd like to do it with consoles from all generations at a later point. Doing this properly seems more involved, and there is a lot of gimmicky garbage out there. Ideally, I'd do everything (PC and consoles) via a dedicated capture card like the Blackmagic Intensity Pro, except that I believe it has a number of limitations as well as massive disk I/O requirements. And I'm not sure how HDCP might play into it come the next generation of consoles.

    Gaming and Encoding System
    • i7-3770K
    • 16GB 1600 MHz (9-9-9-24)
    • 2 x GTX 670 4GB
    • ASUS P8Z77-WS
    • Samsung 830 256GB SSD
    • 2560x1600 (16:10) Apple Cinema Display

    Utilities
    • Dxtory
    • MeGUI
    • Vegas Movie Studio Platinum
    • FAAC codec
    • x264 CLI codec

    Workflow
    1. Record from the full 2560x1600 (~60fps) screen down to 1152x720 30fps (16:10 720p) using Dxtory.
      Video: Lagarith codec using RGB24.
      Audio: PCM 48 kHz 16-bit stereo.
      Result is approximately 1.35 GB/minute.
    2. Minimal editing in Sony Vegas Movie Studio Platinum (cutting, merging, splitting for continuity and length).
      Render out as Lagarith and PCM and keep my fingers crossed that it uses smart rendering so as not to re-encode and lose data.
      ALTERNATE: You could use VirtualDub for this step and rely on its "direct stream copy" mode to avoid re-encoding on export, though the editing process is more limited and VirtualDub has its own issues in general.
    3. Convert to H.264 (via the x264 CLI) in an MP4 container using MeGUI.
      Video: 2-pass, 10,240 kbps bitrate, High AVC profile, Very Slow preset. CABAC enabled, closed GOP based on FPS, B-frames set to 3, adaptive B-frames Optimal, B-pyramid disabled, 5 reference frames, 40 extra I-frames, subpixel refinement 10 (QP-RD), and Trellis set to Always.
      Audio: 448 kbps ABR FAAC.
      Result is 75 MB/minute.

    The result is a 1152x720 30fps file just under 5 GB/hr, with a 600-800% encode time (so an hour of content takes six to eight hours to encode).
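For context on the capture side, uncompressed RGB24 at these settings works out to roughly 4.5 GB/minute, so the ~1.35 GB/minute Lagarith figure implies a bit over 3:1 compression. A quick sketch (actual ratios depend heavily on the game's content):

```python
# Raw RGB24 data rate at the capture settings above (1152x720, 30 fps, 3 bytes/pixel).
width, height, fps, bytes_per_px = 1152, 720, 30, 3
raw_gb_per_min = width * height * bytes_per_px * fps * 60 / 1e9

lagarith_gb_per_min = 1.35  # observed figure from this workflow
print(f"raw RGB24:      {raw_gb_per_min:.2f} GB/min")                      # ~4.48 GB/min
print(f"Lagarith ratio: {raw_gb_per_min / lagarith_gb_per_min:.1f}:1")     # ~3.3:1
```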

    Here is an example of output rendered from this workflow, on YouTube - with all of YouTube's inherent changes applied. It should give you an idea of what I'm accomplishing with my effort (or failing to accomplish).



    QUESTIONS / ACTION ITEMS

    1. MKV, MP4, OTHER...?

    MKV Benefits: More options, including many that one may not use right now, but could in the future. Handles almost all video and audio codecs. Multiple tracks, dubs, subs, better chapter handling. And it's an open standard. It may be the best/only choice if you wanted to later add commentary, so that you could have one track that is purely gameplay, another that is gameplay commentary, another that is review and historical information about the game, etc. The chapter features may be great for keeping files whole instead of chopping them into hour-long pieces, adding jump-points for levels/acts/chapters/whatever in a game. Oh, one good thing about MKV is that I'm given to understand it handles muxing video and audio better, without issues with audio and video interleaving (though I could be wrong on this).

    MKV Liabilities: Poor support and constantly evolving standard. In a decade or two, will it be as readily playable as MP4 and other containers, today? Or will it be an obscure has-been leaving users/files of it totally stuck?

    MP4 Benefits: Wider support. Fewer features and functions. Usually handled natively on most devices. AAC/FAAC used to be a problem, which could undermine the choice of MP4, but even XBOX 360 should handle the MP4 with AAC/FAAC, today. Will almost certainly remain accessible in a decade or two, though it may be tomorrow's VFW when something else comes along.

    MP4 Liabilities: Fewer options and functions. Primarily addresses business concerns over community concerns (DRM, etc). Seems less likely to grow significantly in terms of functionality.

    While features are important, so is longevity and ease of playability. That is, portable devices should work with whatever is chosen. Standard players should work with it. And consoles and other boxes should play it, rather than require transcoding on-the-fly via PLEX, PS3 Media Server, and other external media servers.

    Of course, it is my understanding that both MKV and MP4 should be easy to demux in the future, so if something much better came along in twenty years, I could easily demux from MP4 or MKV and apply another container with no loss. Correct?

    2. COLOR SPACE

    I want to maintain RGB24, but during encoding (actually, right after indexing), MeGUI always reports "Successfully converted to YV12", and if I look at the temporary AviSynth script MeGUI has created, it has "ConvertToYV12()" added to the end. I don't understand where it is picking this up from, and nothing I do seems to prevent it. I understand it may be automatically added just before launching plugins that can only work in YV12, but I don't believe the above workflow requires any AviSynth plugins at all...?


    3. ROOM FOR IMPROVEMENT?

    What could I do to improve my workflow and produce better finished results?

    I could move to 1080p (well, to maintain 16:10, it'd be 1728x1080), but that begins to incur a performance hit while recording. Lagarith still produces a fine file, but in-game FPS drops below 60 in many situations -- into the 50s and even down to the 40s. Also, 720p quality doesn't flat-line until after 30,000 kbps, so there's still a lot of bitrate headroom at 720p to play with (though, if I were to increase the amount of storage I allow per hour of content, I'm not sure whether it would be better to go 20,480 kbps at 720p or at 1080p, if 1080p were feasible at all).
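One rough way to frame the 720p-vs-1080p question is bits per pixel -- how thinly a given bitrate is spread over the pixels per second. This is only a crude heuristic (codec efficiency does not scale linearly with resolution), but it makes the trade-off concrete:

```python
# Bits per pixel: bitrate spread over pixels per second, using the frame
# sizes discussed above (1152x720 and 1728x1080 at 30 fps).
def bpp(bitrate_kbps, width, height, fps=30):
    return bitrate_kbps * 1000 / (width * height * fps)

print(f"10,240 kbps @ 1152x720:  {bpp(10240, 1152, 720):.2f} bpp")   # ~0.41
print(f"10,240 kbps @ 1728x1080: {bpp(10240, 1728, 1080):.2f} bpp")  # ~0.18
print(f"20,480 kbps @ 1728x1080: {bpp(20480, 1728, 1080):.2f} bpp")  # ~0.37
```

By this crude measure, even doubling the bitrate to 20,480 kbps at 1080p would still carry fewer bits per pixel than 10,240 kbps does at 720p.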

    Additionally, we'd have to make a compromise to avoid consuming much more storage space: higher resolution, but lower quality at the same bitrate. There's probably an argument to be made that "resolution is more important than everything", but in the long run, I don't think it's going to matter.

    I say this, because the idea is that when watching on a higher resolution device, 720p is going to appear degraded. However, when we move to 2k displays, the same is going to be true of 1080p encodings. And when we move to 4k displays, it'll be even worse. And 8k screens... So while 1080p will always provide more than 720p, the difference between the two at the coming massive display resolutions will be almost negligible.

    Is Sony Vegas Movie Studio Platinum worth the price? The Pro version seems unnecessary at $600. I don't plan on mixing twenty video channels and twenty audio channels, which appears to be the primary difference. For the most part, I will probably only be using it for cutting, joining, and splitting content. Though things could change down the road, I don't plan to add sound effects, lower-thirds, or watermarks. If VirtualDub didn't have so many gotchas and could edit more than one video stream at a time, it might even suffice.

    Any other thoughts on my process would be welcome.
  2. MKV Liabilities: Poor support and constantly evolving standard.
    poor support: yes, only a fraction of what is possible is supported by media players (btw. the same goes for mp4)
    evolving standard: about as much as mp4 (might even be less, but hey, both are meant to be STANDARDs); mkvtoolnix did change some stuff, but nothing that wasn't in the specification

    In a decade or two, will it be as readily playable as MP4 and other containers, today?
    probably not

    Or will it be an obscure has-been leaving users/files of it totally stuck?
    that neither, since there will always be the possibility to repack to mp4 through ffmpeg&co

    Will almost certainly remain accessible in a decade or two, though it may be tomorrow's VFW when something else comes along.
    doesn't make any sense, mp4 is a container, vfw is a programming interface

    Of course, it is my understanding that both MKV and MP4 should be easy to demux in the future, so if something much better came along in twenty years, I could easily demux from MP4 or MKV and apply another container with no loss. Correct?
    assuming your container supports the formats and there are tools that can multiplex them: yes

    MP4 Liabilities: ... Seems less likely to grow significantly in terms of functionality.
    Probably as likely as mkv changing.

    Color space: ... I don't understand where it is picking this up from and nothing I do seems to prevent it.
    contact the current author of MeGUI or use another tool. Also, if you want to stay with a 4:4:4 color space, you need to use the High 4:4:4 profile, which doesn't have really wide support atm.


    side notes:

    Using 2-pass 10240 bitrate
    looking at the scenario you present, it sounds to me more like a 1-pass crf scenario

    10 (QP-R) subpixel refinement, and Trellis set to Always
    doubt that it is worth it

    Audio: PCM 48khz 16bit Stereo. -> Audio: 448 ABR FAAC.
    448 seems to be overkill

    Render out as Lagarith and PCM and keep fingers crossed that it uses Smart Rendering so as not to re-encode and lose data.
    you convert lossless to lossless; there should be no smart rendering involved, since there is no quality loss during re-encoding,...

    Standard players should work with it.
    -> forget 4:4:4 color space

    I say this, because the idea is that when watching on a higher resolution device, 720p is going to appear degraded. However, when we move to 2k displays, the same is going to be true of 1080p encodings.
    when higher display resolutions are mainstream, you will probably be using H.265 (HEVC) or another codec.

    Cu Selur
  3. Member
    Originally Posted by Selur View Post
    MKV Liabilities: Poor support and constantly evolving standard.
    poor support: yes, only a fraction of what is possible is supported by media players (btw. the same goes for mp4)
    evolving standard: about as much as mp4 (might even be less, but hey, both are meant to be STANDARDs); mkvtoolnix did change some stuff, but nothing that wasn't in the specification
    The main difference seems to be that, for the most part, anything that can play MP4 will play MP4, whereas MKV's constant evolution means a device that isn't regularly updated may play some MKVs and not others.


    Will almost certainly remain accessible in a decade or two, though it may be tomorrow's VFW when something else comes along.
    doesn't make any sense, mp4 is a container, vfw is a programming interface
    I was making a comparison to the deprecated status.


    Color space: ... I don't understand where it is picking this up from and nothing I do seems to prevent it.
    contact the current author of MeGUI or use another tool. Also, if you want to stay with a 4:4:4 color space, you need to use the High 4:4:4 profile, which doesn't have really wide support atm.
    It's not that I must stay with RGB24, so much as that I don't want to lose color data unless there is a good reason to convert it. My assumption was that I must have been doing something dumb with MeGUI and/or AviSynth that was leading to the insertion of the YV12 conversion, but I haven't been successful in determining what I may be doing wrong.


    Using 2-pass 10240 bitrate
    looking at the scenario you present, it sounds to me more like a 1-pass crf scenario
    Could I impose on you and ask for an explanation of why you feel this way? I'm not questioning your advice, just wondering what the thought process is and how you arrived there. A second pass should more usefully distribute the bits I'm allocating to the video, so that the overall quality in parts that need it is improved, right? Or is that something that only makes a real difference at a higher bitrate to begin with?

    The one thing that does suck about 2-pass, however, is that the first pass takes ridiculously long. It seems to be an efficiency issue with however they handle multi-threading, because it'll only consume ~10% CPU.



    10 (QP-R) subpixel refinement, and Trellis set to Always
    doubt that it is worth it
    I think 9 or even 7 would probably be enough, but when testing, this didn't seem to really impact the encoding time.


    Audio: PCM 48khz 16bit Stereo. -> Audio: 448 ABR FAAC.
    448 seems to be overkill
    Agreed. However, with a 5 GB/hr file, it should come out to be a very small percentage, so it seemed reasonable. I've actually had a hard time refining this, simply because when I'm encoding from PCM to FAAC, it goes from a 1,536 kbps two-channel audio stream to a 165 kbps variable two-channel stream with a 224 kbps max bitrate... which doesn't make sense, because I'm using FAAC with ABR 448... *NOT* VBR... unless ABR (as opposed to CBR) also causes streams to appear or even be detected as VBR. Frankly, it has me a little confused, and I'm still at the stage where it's difficult to determine whether confusion is ignorance or a utility not working right.

    Render out as Lagarith and PCM and keep fingers crossed that it uses Smart Rendering so as not to re-encode and lose data.
    you convert lossless to lossless there should be none smart rendering involved, since there should be no quality loss during reencoding,...
    Since I've been playing with Vegas Movie Studio (it seems to be the best option for multi-track split/merge/join work without having to work on one file at a time), I've been rendering back out to Lagarith with the same settings. That should ensure it avoids actually doing any encoding. However, the resulting file ends up *larger* than the original, with a higher bits-per-pixel rate. A 7GB Lagarith file loaded into Vegas and then rendered back out with no editing changes comes out about 300MB larger, and the "Bits/(Pixel*Frame)" comes out at about 5.209 when the original is something like 4.612.

    But, again, that could be meaningless. I've tried to look up information on this to no avail. I don't currently have the knowledge to know if this is insignificant.

    Standard players should work with it.
    -> forget 4:4:4 color space
    As mentioned, I don't have any real need to stick with RGB24 or RGB 8/8/8 or whatever. I just figured I might as well retain as much color data as possible, since I have no idea what the future holds and my understanding is that YV12 (4:2:0?) strips out a lot of color information...?

    I say this, because the idea is that when watching on a higher resolution device, 720p is going to appear degraded. However, when we move to 2k displays, the same is going to be true of 1080p encodings.
    when higher display resolutions are main stream, you are probably are using H.265 (HEVC) or another codec.
    True, but that doesn't help anything being encoded *today* when it comes to the higher-resolution displays of the future, and I suspect we're quite some time from H.265 having mature codec and tool support on the level that H.264 enjoys at the moment. Perhaps I'm wrong and video standards may go from the standards process to regular utilization much quicker, but that's definitely not the case with the standards I deal with day to day.


    Thank you for your insight and the time you spent in reply. I've spent countless hours trying to test things, educate myself, and do things the right way, so as not to waste anyone's time with questions that have obvious answers. I really appreciate your response.

    Regards.
  4. 4:4:4
    unless there is a good reason to convert it.
    compatibility; hardware players will probably not support the High 4:4:4 profile for quite some time,...

    A second pass should more usefully distribute the bits I'm allocating to the video, so that the overall quality in parts that need it is improved, right?
    Nope. crf and 2pass use exactly the same rate control. -> for the same size both create visually identical files (minor differences in frame-by-frame comparison, where sometimes the 2pass and sometimes the crf result is better), but in general the results are visually identical

    The one thing that does suck about 2-pass, however, is that the first pass takes ridiculously long. It seems to be an efficiency issue with however they handle multi-threading, because it'll only consume ~10% CPU.
    If you are not using '--slow-firstpass', 1st pass should be a lot faster than the second pass,...

    I think 9 or even 7 would probably be enough, but when testing, this didn't seem to really impact the encoding time.
    that's strange,... might be because of trellis, which really slows things down

    which doesn't make sense, because I'm using FAAC with ABR 448... *NOT* VBR... unless ABR (as opposed to CBR) also causes them to appear or even be detected as VBR.
    normally you can't really tell whether a stream was encoded ABR or VBR, since in both cases the data rate should fluctuate,... btw. you might want to try fdk-aac instead of faac. (for Windows builds I use https://github.com/rdp/ffmpeg-windows-build-helpers in a Linux VM to build 32/64-bit ffmpeg, which supports it)

    That should ensure that it avoids actually doing any encoding
    I doubt that there's a tool that actually does smart rendering for lagarith,...

    my understanding is that YV12 (4:2:0?) strips out a lot of color information...?
    that is correct; for archival, 4:4:4 probably makes sense. The main problem with it is decoder support, but that should not be a problem with software decoders

    Cu Selur
  5. Originally Posted by Cronjob View Post


    Color space: ... I don't understand where it is picking this up from and nothing I do seems to prevent it.
    contact the current author of MeGUI or use another tool. Also, if you want to stay with a 4:4:4 color space, you need to use the High 4:4:4 profile, which doesn't have really wide support atm.
    It's not that I must stay with RGB24, so much as that I don't want to lose color data unless there is a good reason to convert it. My assumption was that I must have been doing something dumb with MeGUI and/or AviSynth that was leading to the insertion of the YV12 conversion, but I haven't been successful in determining what I may be doing wrong.
    YV12 is subsampled chroma; you lose a lot of the color information, but it's usually not perceivable in motion. Human eyes aren't as sensitive to color as greyscale, which is why 4:2:0 is used for virtually all distribution formats (Blu-ray, Flash, DVD, portable video, everything...). 4:2:0 1280x720 would contain 1280x720 in the Y' channel (greyscale), but the CbCr information would only have 640x360 pixels of color information. If you zoom in frame by frame you will notice the loss, especially on color borders, and more so with graphics, games, and CGI than live-action content

    http://en.wikipedia.org/wiki/Chroma_subsampling
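The sample counts in that explanation can be sketched out directly. This assumes 8-bit samples and compares raw per-frame data only (encoded savings are smaller, since chroma compresses well; RGB has no separate luma plane, but its per-pixel sample count matches 4:4:4):

```python
# Per-frame raw sample counts for 4:4:4 (three full-resolution planes, like
# RGB24) vs 4:2:0 (YV12) at 1280x720.
w, h = 1280, 720
luma = w * h                          # Y' plane: full 1280x720
chroma_444 = 2 * w * h                # Cb + Cr at full resolution
chroma_420 = 2 * (w // 2) * (h // 2)  # Cb + Cr subsampled to 640x360

print(f"4:4:4 frame: {luma + chroma_444:,} samples")  # 2,764,800
print(f"4:2:0 frame: {luma + chroma_420:,} samples")  # 1,382,400
print(f"raw savings: {1 - (luma + chroma_420) / (luma + chroma_444):.0%}")  # 50%
```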



    Using 2-pass 10240 bitrate
    looking at the scenario you present, it sounds to me more like a 1-pass crf scenario
    Could I impose on you and ask for an explanation of why you feel this way? I'm not questioning your advice, just wondering what the thought process is and how you arrived there. A second pass should more usefully distribute the bits I'm allocating to the video, so that the overall quality in parts that need it is improved, right? Or is that something that only makes a real difference at a higher bitrate to begin with?

    The one thing that does suck about 2-pass, however, is that the first pass takes ridiculously long. It seems to be an efficiency issue with however they handle multi-threading, because it'll only consume ~10% CPU.
    Do a search on CRF vs 2pass encoding. This has been discussed to death.

    An arbitrary 2pass bitrate generally isn't a good way to do encoding. You might allocate too much or too little depending on the content. How did you choose 10240? For some types of video it's too much, for others too little. Quality-based encoding will deliver the quality desired (filesize changes in proportion to content complexity)


    10 (QP-R) subpixel refinement, and Trellis set to Always
    doubt that it is worth it
    I think 9 or even 7 would probably be enough, but when testing, this didn't seem to really impact the encoding time.
    It most certainly does impact encoding time. If you're not seeing an impact, you probably have a bottleneck elsewhere or aren't doing controlled testing.


    Render out as Lagarith and PCM and keep fingers crossed that it uses Smart Rendering so as not to re-encode and lose data.
    you convert lossless to lossless; there should be no smart rendering involved, since there is no quality loss during re-encoding,...
    Since I've been playing with Vegas Movie Studio (it seems to be the best option for multi-track split/merge/join work without having to work on one file at a time), I've been rendering back out to Lagarith with the same settings. That should ensure it avoids actually doing any encoding. However, the resulting file ends up *larger* than the original, with a higher bits-per-pixel rate. A 7GB Lagarith file loaded into Vegas and then rendered back out with no editing changes comes out about 300MB larger, and the "Bits/(Pixel*Frame)" comes out at about 5.209 when the original is something like 4.612.
    Vegas doesn't smart render lagarith. The point is moot anyways (in quality terms, not processing speed terms), because you are working in the lossless domain.

    Just a guess, but it might be larger because you rendered out with an alpha channel (RGBA) but imported RGB. Even a dummy alpha requires bitrate (this also assumes you didn't edit the video, add overlays, that sort of thing -- just an in/out operation)
    Last edited by poisondeathray; 20th Sep 2012 at 16:29.
  6. Do a search on CRF vs 2pass encoding. This has been discussed to death.
    especially over at doom9
    Lagarith is a "pig" (i.e. very slow) and not a good choice for recording or editing (very high encoding latency, slow decoding), but the compression is fairly good. It's more suitable as an archive or storage format

    personally I would use a higher resolution; using a different, less-compressed codec should enable you to do that without dropping in-game fps too much
    e.g. the Ut Video codec or AMV2
  8. Member
    Originally Posted by Selur View Post
    4:4:4
    unless there is a good reason to convert it.
    compatibility; hardware players will probably not support the High 4:4:4 profile for quite some time,...

    my understanding is that YV12 (4:2:0?) strips out a lot of color information...?
    that is correct; for archival, 4:4:4 probably makes sense. The main problem with it is decoder support, but that should not be a problem with software decoders
    Originally Posted by poisondeathray View Post
    YV12 is subsampled chroma; you lose a lot of the color information, but it's usually not perceivable in motion. Human eyes aren't as sensitive to color as greyscale, which is why 4:2:0 is used for virtually all distribution formats (Blu-ray, Flash, DVD, portable video, everything...). 4:2:0 1280x720 would contain 1280x720 in the Y' channel (greyscale), but the CbCr information would only have 640x360 pixels of color information. If you zoom in frame by frame you will notice the loss, especially on color borders, and more so with graphics, games, and CGI than live-action content
    I hadn't understood this before and assumed RGB24 was the default unless you had to make a sacrifice for the sake of storage or performance. So support for playing RGB is uncommon? Is support for 4:2:2 just as uncommon? Any insight as to why we're still using 4:2:0 for everything? The space savings can't be that significant compared to the sacrifice in fidelity, given the processing speed and bandwidth we have now.

    I'm primarily going to deal with games (CGI, graphics, animation, etc.), and though I suppose YV12 should be good enough for me if it's enough for Blu-ray, I definitely dislike the fuzz of 4:2:0 that can sometimes be seen. Of course, if compatibility is going to be a problem far into the future if I choose RGB, then it's a moot point.



    Originally Posted by Selur View Post
    A second pass should more usefully distribute the bits I'm allocating to the video, so that the overall quality in parts that need it is improved, right?
    Nope. crf and 2pass use exactly the same rate control. -> for the same size both create visually identical files (minor differences in frame-by-frame comparison, where sometimes the 2pass and sometimes the crf result is better), but in general the results are visually identical
    Thanks for that clarification. I had consistently read that you should always choose multi-pass because it would force better quality due to more efficient bit allocation. The only reason I've been using it in my testing is the belief that single-pass was just wasting bits, even though CRF could often result in a smaller file overall. I was also concerned with how this could impact streaming, should I choose to do that at some point. It seems that YouTube and the like prefer consistent b-frames, bitrates, etc.


    Originally Posted by poisondeathray View Post
    Do a search on CRF vs 2pass encoding. This has been discussed to death.

    An arbitrary 2pass bitrate generally isn't a good way to do encoding. You might allocate too much or too little depending on the content. How did you choose 10240? For some types of video it's too much, for others too little. Quality-based encoding will deliver the quality desired (filesize changes in proportion to content complexity)
    I originally understood multi-pass to be the *solution* to not over/under-allocating bitrate, which is one reason I chose it. As for the choice of bitrate, it was somewhat arbitrary. I started out with what Google advises for most of their upload settings (H.264, MP4, AAC-LC, 2 b-frames, and so on -- they advise a minimum of 8,000 kbps for 1080p and 5,000 kbps for 720p "standard", and 50,000 and 30,000 respectively for "professional"). Then I took into account the amount of storage I could reasonably allocate per hour of content and came out with about 10 mbps. Checking for visual clarity and performance, it seemed identical to the eye compared to the content I was recording (taking the transition from 2560x1600 to 1152x720 into account). Though I'm not necessarily aiming to make this stuff YouTube content, it seemed like a good place to start, and being compatible with their recommendations would ensure that when/if I ever wanted to offload content to YouTube, it would come out as well as possible.

    I also figured that "quality setting 17 or 23" seemed really arbitrary and preferred the fine control of specifically saying what the average bitrate should be in my files, but now that you pose the question to me, I realize that "quality setting 17 or 23" is no more arbitrary than "10 mbps", other than for file size.

    And as the following comment from an x264 developer enlightens me, CRF and multipass achieve the same results for the same amount of space. Somehow I didn't clearly understand this until now. With CRF, the file size is an uncertain, unbounded outcome based on the desired quality. With multipass, the first pass is used to help determine what quality factor to use during the second pass so that the output will not exceed the filesize/bitrate you have chosen. So, if I have a 500MB file from multipass, there is a CRF value that will produce a 500MB file with (essentially, but not literally) the same quality as multipass. However, that 500MB file may be largely unnecessary, because it could equate to CRF 15 or something, while CRF 17 would be more than adequate and only a bit more than half the size.
    CRF, 1pass, and 2pass all use the same bit distribution algorithm. 2-pass tries to approximate CRF by using the information from the first pass to decide on a constant quality factor. 1-pass tries to approximate CRF by guessing a quality factor over time and varying it to reach the target bitrate. -- Dark Shikari
    Originally Posted by Selur View Post
    The one thing that does suck about 2-pass, however, is that the first pass takes ridiculously long. It seems to be an efficiency issue with however they handle multi-threading, because it'll only consume ~10% CPU.
    If you are not using '--slow-firstpass', 1st pass should be a lot faster than the second pass,...
    Not unless the 'veryslow' preset secretly forces it to behave the same way as --slow-firstpass, which I don't think it does. I selected the 'veryslow' preset and then manually tweaked a few things, such as lowering the b-frames, etc.

    # program --preset veryslow --pass 2 --bitrate 10240 --stats ".stats" --bframes 3 --b-pyramid none --ref 5 --output "output" "input"


    Originally Posted by Selur View Post
    I think 9 or even 7 would probably be enough, but when testing, this didn't seem to really impact the encoding time.
    that's strange,... might be because of trellis, which really slows things down
    Originally Posted by poisondeathray View Post
    It most certainly does impact encoding time. If you're not seeing an impact, you probably have a bottleneck elsewhere or aren't doing controlled testing.
    I really don't think I'm bottlenecking anywhere. The final pass consumes available CPU resources (50-100%), but the first pass of any multi-pass process always consumes only around 10-30% CPU and very minimal memory and disk IO. Single-pass encoding (CRF, etc.) also uses 50-100% CPU. Selecting faster presets (medium, faster, superfast, ultrafast) results in faster second-pass encodes, but the first pass remains the same, as does its resource utilization.

    Here's a snip from my ever-growing spreadsheet:

    FPS on First Pass, Second Pass, Size, Settings...
    27.13 | 13.13 | 570MB | subpixel refinement: 11, trellis 2, adaptive b-frames 2, preset -- very slow
    27.77 | 13.21 | 570MB | subpixel refinement: 09, trellis 0, adaptive b-frames 2, preset -- very slow
    27.72 | 13.09 | 570MB | subpixel refinement: 07, trellis 0, adaptive b-frames 2, preset -- very slow
    28.31 | 21.44 | 570MB | subpixel refinement: 07, trellis 0, adaptive b-frames 1, preset -- very slow
    28.31 | 19.30 | 570MB | subpixel refinement: 09, trellis 0, adaptive b-frames 1, preset -- very slow
    28.01 | 25.11 | 570MB | preset -- medium
    28.31 | 27.00 | 570MB | preset -- faster
    28.06 | 28.34 | 570MB | preset -- superfast
    28.42 | 28.28 | 570MB | preset -- ultrafast
    25.99 | XX.XX | 221MB | crf const quality 20, preset medium
    25.23 | XX.XX | 343MB | crf const quality 17, preset medium
    22.54 | XX.XX | 334MB | crf const quality 17, preset slow
    16.07 | XX.XX | 331MB | crf const quality 17, preset slower
    9.64 | XX.XX | 306MB | crf const quality 17, preset veryslow
    25.05 | XX.XX | 395MB | crf const quality 16, preset medium
    24.64 | XX.XX | 521MB | crf const quality 14, preset medium
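
    One possible explanation for the flat first-pass numbers: in 2-pass mode, x264 applies a reduced-complexity first pass by default (the old "turbo" behavior) unless --slow-firstpass is given, so the preset barely matters on pass 1. Here's a sketch of the two invocations side by side (filenames are placeholders, and I'm only constructing the commands here, not running them):

    ```python
    # Sketch: building the two x264 command lines for a 2-pass encode.
    # x264 runs a fast first pass by default; --slow-firstpass disables
    # that shortcut. Input/output filenames are hypothetical.
    def two_pass_cmds(bitrate_kbps, slow_firstpass=False):
        base = ["x264", "--preset", "veryslow",
                "--bitrate", str(bitrate_kbps), "--stats", "game.stats"]
        if slow_firstpass:
            base.append("--slow-firstpass")  # full analysis on pass 1 too
        pass1 = base + ["--pass", "1", "--output", "NUL", "input.avi"]
        pass2 = base + ["--pass", "2", "--output", "out.264", "input.avi"]
        return pass1, pass2

    p1, p2 = two_pass_cmds(10240, slow_firstpass=True)
    print(" ".join(p1))
    ```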

    Originally Posted by Selur View Post
    which doesn't make sense, because I'm using FAAC with ABR 448... *NOT* VBR... unless ABR (as opposed to CBR) also causes them to appear or even be detected as VBR.
    Normally you can't really tell whether a stream was encoded ABR or VBR, since in both cases the data rate should fluctuate. BTW, you might want to try fdk-aac instead of faac (for Windows builds I use https://github.com/rdp/ffmpeg-windows-build-helpers in a Linux VM to build 32/64-bit ffmpeg, which supports it).
    I should clarify: if I've chosen to encode as ABR, shouldn't the metadata for the file show it as ABR? For example, the original PCM 48khz 16bit stereo at 1,652 kbps is encoded, at my selection, as FAAC 48khz 16bit stereo 448, but MediaInfo shows the finished product as "AAC LC Variable 165kbps 48khz stereo with a *max* bitrate of 224 kbps". I understand the average bitrate may not be what I selected in my settings, due to the original source and the codec's handling of it, but would selecting ABR still show up as VBR in said metadata?

    I'm just concerned that I may not be getting what I think I'm getting/selecting.



    Originally Posted by Selur View Post
    That should ensure that it avoids actually doing any encoding
    I doubt that there's a tool that actually does smart rendering for lagarith...
    Originally Posted by poisondeathray View Post
    Vegas doesn't smart render lagarith. The point is moot anyway (in quality terms, not processing speed terms), because you are working in the lossless domain.
    My misunderstanding, then. I thought that if a program (Vegas, VirtualDub, etc.) had "smart rendering / direct stream copy" functionality, it would always employ it as long as the output codec and settings matched those of the original file. So by selecting Lagarith as the output format for this original Lagarith file, with the same settings it came in with, it would "smart render" it.



    Originally Posted by poisondeathray View Post
    Just a guess, but it might be larger because you rendered out with an alpha channel (RGBA) but imported RGB. Even a dummy alpha channel requires bitrate (also assuming you didn't edit the video, add overlays, that sort of thing; just an in/out operation)
    I leave the Lagarith configuration the same when rendering from Vegas as I do originally (RGB and not RGBA). So it *should* be retaining that and not adding anything of its own. However, I'm just not familiar enough with Vegas to know this for certain. Hell, it never even occurred to me that I might ever want to use a full-fledged media editor until just a few days ago when attempting to cut videos one at a time and then merging them separately proved to be unwieldy.

    The only other thing I can think of is that it's somehow doing something with audio interleaving, though I can't imagine that changing it from about 0.998 or 1.001 seconds (the default when viewing the raw Lagarith file) to 0.250 seconds would account for a 300MB increase on a 4.5+GB file. And when I tested with both no interleaving and quarter-second interleaving, it came out with a similar increase in file size either way.

    Anyway, as long as it's essentially not touching the content or impacting the later encode, I won't fret.



    Originally Posted by poisondeathray View Post
    lagarith is a "pig" (i.e. very slow), and not a good choice for recording or editing (very high latencies for encoding and decoding), but the compression is fairly good. It's more suitable as an archive or storage format

    personally I would use a higher resolution, and using a different, less-compressed codec should enable you to do that without dropping in-game fps too much
    e.g. ut video codec, amv2
    I've actually had the opposite experience. I found FRAPS provided horrendous in-game performance, and since it shouldn't be very CPU-intensive, the problem appeared to be IO-related. I don't really want a dedicated RAID0 just for recording some games, so it's writing to single HDDs with about 70-85MBps consistent write speed. (Also, FRAPS doesn't provide the resolution options I need.)

    I also tried Dxtory's own codec. It's interesting in that it provides lossless capture on non-RAID setups by alternating frame writes between drives and recombining them at the end. However, even this runs into significant problems. In-game performance is just fine, but the write speed just bites it, so when you play back the raw Dxtory file, it quickly becomes choppy and is unwatchable. It doesn't make sense, because two drives offering 70-85MBps writes *each* should not cause an IO bottleneck. The last time I tested, I got ~14fps written to file with Dxtory compression on.

    However, with Lagarith I'm able to record 1152x720 at 30fps while playing at 2560x1600 without ever dipping below 60fps in-game. If I switch to 1728x1080 in Lagarith, in-game FPS frequently drops into the 40s and 50s. As you can see at the top of my original post, I'm running a fairly robust system, so this really shouldn't be an issue. But it is. And I suspect it's largely due to the resolution I'm actually playing at. Most people don't seem to be playing at 2560x1600 while recording, and that's a hell of a lot of pixels.

    I've thought about buying another SSD, which should give me 300-400% write-speed or more, but if I can only fit around 200GB at a time, I'm going to have to stop playing and recording and spend quite some time moving the data off to a storage drive every hour or two.

    If I could figure this out, I would gladly stick with recording at 1728x1080.


    Here are a few examples of what I'm seeing, performance-wise:

    Codec | Color | Bitrate | Resolution | Result
    dxtory | yuv410 | 319Mbps | 1728x1080 | fine
    dxtory | yuv420 | 382Mbps | 1728x1080 | fine
    dxtory | yuv410 | 114Mbps | 1152x720 | fine
    dxtory | yuv24 | 481Mbps | 1728x1080 | choppy
    dxtory w/ compression | rgb24 | 533Mbps | 1728x1080 | extremely choppy (in-game writing to file at ~14fps, but playback is literally at 0.06fps)
    dxtory w/o compression | rgb24 | 597Mbps | 1728x1080 | extremely choppy
    lagarith | yv12 | 206Mbps | 1728x1080 | very choppy (but it's smooth while playing...?)
    lagarith | rgb24 | 193Mbps | 1152x720 | fine
    ut | rgb | 436Mbps | 1728x1080 | very choppy
    ut | rgb | 219Mbps | 1152x720 | fine

    I'm a little baffled by some of these results because, for example, Lagarith YV12 at 1728x1080 is very choppy, yet while it's recording, CPU usage is no more than about 50% and disk usage is only around 35MB/s. So I have no idea where the bottleneck is that's making the end product so choppy.

    Even with dxtory codec at RGB24 and 481mbps or higher at 1728x1080 giving me incredibly choppy behavior... I don't get it. During recording, CPU is no more than 50-60% and disk usage is around 35-45MB/s.

    I tried UT Video with RGB and the stream it produced was 436Mbps at 1728x1080. It was very choppy, even though CPU utilization was only 50-60% and disk usage was 60MB/s.

    In neither of these cases do I seem to be hitting a CPU or disk IO bottleneck. So I have no idea what I could focus on to improve recording performance. I think my system should certainly be capable of playing at high FPS at 2560x1600 while writing 1080p to disk.

    One thing to note is that VLC can't play Dxtory files at all, and while it can play Lagarith files, it can only do so in RGB. If they use YV12, for example, there's a lot of trippy colorized blocking snapping all over the place. This matters because the stuttering *seems* to occur only in Windows Media Player, though all the rainbow warping in VLC while playing these particular files makes it hard to tell whether the choppiness is happening there, too.



    In closing:

    1) My use of a fixed target bitrate is unnecessary and I should switch to CRF; unless I have a very particular reason to explicitly set something in the configuration, a preset in combination with CRF should be just fine.
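
    For reference, the arithmetic behind the ~300GB figure in my first post is just average bitrate times duration; targeting a bitrate (2-pass) buys you a predictable size, which CRF gives up in exchange for consistent quality:

    ```python
    # Rough size estimate for an encode with a given average bitrate.
    def encode_size_gb(bitrate_kbps, hours):
        bytes_per_sec = bitrate_kbps * 1000 / 8   # kilobits/s -> bytes/s
        return bytes_per_sec * hours * 3600 / 1e9  # decimal GB

    print(round(encode_size_gb(10240, 60), 1))  # ~276.5 GB for a 60-hour game
    ```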

    2) How much of the encoding process and its choices is subjective? By nature, I tend to want to measure the quality of something precisely, and relying on my eyes as the deciding factor seems ripe for failure. Others may see things differently than I do, and different hardware and environments, now and into the future, create great uncertainty. Is this all really just a case of eyeballing it, saying "works for me", and moving on?

    3) I'm still totally lost as to why I'm seeing poor performance in many recording situations (in fact, this is what drove me from FRAPS to Dxtory in the first place).

    Finally, thanks to everyone participating in this discussion. I've already learned a lot and appreciate each reply; I've benefited from several "Oh, I get it!" moments. Thanks!
    Quote Quote  
  9. Is this all really just a case of eye-balling and saying "works for me" and moving on?
    quality metrics are only useful for giving a general direction, nothing more, so yes, in the end "works for me" is the best you can do.

    I'm primarily going to deal with games (CGI, graphics, animation, etc) and though I suppose YV12 should be good enough for me if it's enough for BluRay, I definitely do dislike the fuzz of 4:2:0 that can sometimes be seen. Of course, if compatibility is going to be a problem far into the future if I choose RGB, then it's a moot point.
    4:4:4 and 4:2:2 are common in video editing, so there's nothing wrong with them for archiving, but don't count on hardware support or devices that can play such content on their own. -> If it's for archiving, keep all the quality you can; if it's a backup of content that will be shared, don't use 4:4:4.

    I was also concerned with how this could impact streaming, should I choose to do that at some point. It seems that Youtube and the like prefer consistent b-frames, bitrates, etc.
    Streaming compatibility is due to VBV restrictions and not due to choosing 2pass or 1pass.

    Not unless using the 'veryslow' preset secretly forces it to behave the same way as --slow-firstpass, which I don't think it does. I selected the 'veryslow' preset and then manually tweaked a few things, such as lowering the bframes, etc.

    # program --preset veryslow --pass 2 --bitrate 10240 --stats ".stats" --bframes 3 --b-pyramid none --ref 5 --output "output" "input"
    That doesn't say anything about your first-pass settings...

    If I've chosen to encode as ABR, shouldn't the meta-data for the file show it as ABR?
    That assumes there is such metadata... *gig*
    Quote Quote  
  10. I think you have I/O issues.

    Do you need lossless recording? Do you need RGB ?

    Since you're going to YV12 and lossy encoding later anyway, maybe a less demanding lossy codec is the solution for higher res and better game performance
    Quote Quote  
  11. Member
    Join Date
    Sep 2012
    Location
    Denver
    Search PM
    Originally Posted by Selur View Post
    I'm primarily going to deal with games (CGI, graphics, animation, etc) and though I suppose YV12 should be good enough for me if it's enough for BluRay, I definitely do dislike the fuzz of 4:2:0 that can sometimes be seen. Of course, if compatibility is going to be a problem far into the future if I choose RGB, then it's a moot point.
    4:4:4 and 4:2:2 are common in video editing, so there's nothing wrong with them for archiving, but don't count on hardware support or devices that can play such content on their own. -> If it's for archiving, keep all the quality you can; if it's a backup of content that will be shared, don't use 4:4:4.
    Though my intent is to archive gameplay video far into the future, storage constraints require that I keep only the encoded copy, which is why I figured that if I could at least retain all of the color space data, it would be beneficial down the road. But since it will remain the only copy of the content, it won't be very useful if its playback compatibility is also limited.

    The ideal situation would obviously be archival of lossless data, but as I noted many posts above, the length of a single game playthrough makes that ambition painfully expensive. Though some games are 8-15 hours, many are 50-100. For long-term lossless archival I probably wouldn't use Lagarith (limited support, Windows-only, etc.), so I'd probably be looking at 200GB/hr. That's 10-20 terabytes. Throw in the RAID overhead, then double it for at least one backup copy, and I'd be looking at potentially 50 terabytes for a single game. Not counting other hardware, just the drives for that would cost more than a couple thousand bucks.
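
    For reference, the back-of-envelope math behind that 50-terabyte figure (the 25% RAID overhead is just my ballpark assumption):

    ```python
    # Rough storage requirement for lossless archival, figures from the
    # discussion above. raid_overhead=1.25 is a ballpark assumption.
    def archive_tb(gb_per_hour, hours, copies=2, raid_overhead=1.25):
        return gb_per_hour * hours * copies * raid_overhead / 1000.0

    print(archive_tb(200, 100))  # 100-hour game, one backup copy -> 50.0 TB
    ```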

    Which makes this all sort of a half-assed endeavor, I guess. But I don't know that anyone else is really doing anything more grand at the moment, either, so . . . *grumble*

    Anyway, I can live with 4:2:0, if that's what everything uses. I just thought I was making some sort of sacrifice compared to other content we consume; not realizing almost everything is 4:2:0.

    I was also concerned with how this could impact streaming, should I choose to do that at some point. It seems that Youtube and the like prefer consistent b-frames, bitrates, etc.
    Streaming compatibility is due to VBV restrictions and not due to choosing 2pass or 1pass.
    Right, not necessarily the single or multi-pass, but the constant bitrate, choice of b-frames, constant GOP size, etc.

    Not unless using the 'veryslow' preset secretly forces it to behave the same way as --slow-firstpass, which I don't think it does. I selected the 'veryslow' preset and then manually tweaked a few things, such as lowering the bframes, etc.

    # program --preset veryslow --pass 2 --bitrate 10240 --stats ".stats" --bframes 3 --b-pyramid none --ref 5 --output "output" "input"
    That doesn't say anything about your first-pass settings...
    No, the above is just the generated command line for the multi-pass encode with a set bitrate that I was using, where the first pass was always very light on resource usage. After running tons of encoding tests, I'm led to believe that something in the first pass isn't really multi-thread optimized. Possibly not the entire process; I'm betting it depends on one specific setting among the many I'm using that may be particularly bad. I just haven't sorted it out yet. But since you guys have helped clarify CRF vs. multi-pass with a set bitrate, I no longer really need to worry about it (the low resource utilization only occurred during the first pass of a multi-pass encode; CRF/single-pass uses all available resources).

    If I've chosen to encode as ABR, shouldn't the meta-data for the file show it as ABR?
    That assumes there is such metadata... *gig*
    Ah. Fair enough. That's also what I was unsure of: whether MediaInfo was pulling the variable bitrate and the average/max bitrates out of metadata in the file, or analyzing the audio stream and providing its best guess as to what it is.

    Thanks again!
    Quote Quote  
  12. Member
    Join Date
    Sep 2012
    Location
    Denver
    Search PM
    Originally Posted by poisondeathray View Post
    I think you have I/O issues.

    Do you need lossless recording? Do you need RGB ?

    Since you're going to YV12 and lossy encoding later anyway, maybe a less demanding lossy codec is the solution for higher res and better game performance
    Lossless recording would be ideal for minimal editing before encoding, and the non-lossless approach I've tried (straight to x264) seems to incur a substantial performance hit (for obvious reasons). Going straight to YV12 (for the codecs that have that option, rather than something like YUV420, which seems to be slightly different) does give some improvement, but not really enough. This goes against what I would have expected, though, as going from RGB to something else should require more processing power without saving much in the way of disk writes...?

    At any rate, I agree that something seems to be going on IO-wise, and I'll be damned if I can figure it out. A flat-out write-speed test on the two SATA drives I'm using gives me about ~80MBps, which should be plenty, since the highest-resolution recording only seems to run at around 580Mbps/~72MBps. And the other cases I'm having problems with, as shown in the table of results in my previous post, are at much lower rates than that.
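
    The Mbps vs. MBps numbers I keep quoting are just the same figure divided by 8, for reference:

    ```python
    # Mbps (megabits/s, what the codec reports) vs MB/s (megabytes/s, what
    # disk benchmarks report) differ by a factor of 8.
    def mbps_to_MBps(mbps):
        return mbps / 8.0

    print(mbps_to_MBps(580))  # -> 72.5 MB/s
    ```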

    Even the dxtory codec, which shares the load across multiple drives has problems, when the combined write-speed of both drives should be around 150MBps.

    Of course, these particular drives are 3tb and 1.5tb 5400RPM WD Caviar Green drives, but I didn't see anything better using a Samsung 500GB 7200RPM drive, either. In fact, the 7200RPM drive had slightly worse write-speeds.

    (Edited the following due to most recent testing):

    Recording to my primary OS drive (Samsung 830 256GB SSD with 2000+Mbps/250+MBps write speed) is much improved. Using Lagarith to record 1728x1080 in RGB causes an in-game FPS hit (not a big one; it drops to around 50-60fps), but playback of the final product seems decent. The video ends up with a 427Mbps/~53MBps bitrate.

    Recording to the same drive with the same everything, but YV12, produces a 205mbps/25MBps bitrate video file that also seems fine.
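
    Incidentally, that roughly 2:1 gap between the RGB (427Mbps) and YV12 (205Mbps) captures tracks the raw pixel sizes, since YV12 is 12 bits/pixel against RGB24's 24. A quick sketch:

    ```python
    # Why YV12 roughly halves the bitrate: bytes per frame before compression.
    def frame_bytes(w, h, bytes_per_px):
        return w * h * bytes_per_px

    rgb24 = frame_bytes(1728, 1080, 3.0)   # RGB24: 24 bits/px
    yv12  = frame_bytes(1728, 1080, 1.5)   # 4:2:0: 12 bits/px
    print(rgb24 / yv12)  # -> 2.0
    ```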

    I don't understand. Why is recording to my SSD (the same SSD I'm running the OS from, at that) performing better, when tests of my SATA drives show they should be capable of far more than is necessary on a single drive, much less when using Dxtory's built-in multi-drive writing (which would provide about six times what's needed for the YV12 test above)?

    Presumably a separate SSD dedicated just to recording would be great (though I'd be concerned about long-term wear), but even dropping another couple hundred bucks on a 256GB SSD wouldn't be enough: write speeds would almost certainly degrade severely as capacity fills, and even if it were fine for 200GB at a time, I'd have to stop recording every hour and then sit there for a couple of hours moving data off to a storage drive.

    I don't think RAID would help, either. I'm already getting 75-85MBps per SATA drive, and that should be more than enough for a video stream that only needs 25MBps.

    On the other hand, the lagarith 1152x720 RGB files I *am* recording end up using around 189mbps/23MBps, so . . .

    Now, in the recordings where everything becomes incredibly choppy, the disk usage is very low, as I said: around 35MB/s, which should leave plenty of headroom on SATA drives capable of 75-85MBps writes. However, those recordings also sometimes seem to incur disk queue lengths of 1+, while the non-choppy recordings stay around 0.5 or less.
    Last edited by Cronjob; 21st Sep 2012 at 18:51.
    Quote Quote  
  13. Right, not necessarily the single or multi-pass, but the constant bitrate, choice of b-frames, constant GOP size, etc.
    Constant bitrate is not needed if you understand what the VBV restrictions do. Constant GOP size etc. only restricts seeking accuracy and should be no problem for streaming...
    Quote Quote  
    Because you're probably measuring maximum sequential transfer rates. Minimum transfer rate is what you need to know. Not only that, the block size for a large file transfer is going to be different from what is typically used for low-level benchmarks and measurements.

    Mechanical HDDs slow down as they fill up; you might be getting somewhere around 20-30MB/s near the end (the platter density differs between the beginning and the end of the drive, so the sequential transfer rate tapers off as the drive fills). SSDs decrease in performance as you fill capacity too, but for different reasons, and their minimum transfer rate is going to be more than enough in most cases, early generations excluded. That's why RAID-0 will help.
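
    A rough way to measure minimum rather than just average sequential write throughput might look like this (a hypothetical sketch; the chunk and file sizes here are tiny so it runs anywhere, and you'd want to scale them up to large chunks over tens of GB on the actual capture drive):

    ```python
    # Sketch: write fixed-size chunks, fsync each one to defeat the OS write
    # cache, and track the slowest per-chunk rate alongside the average.
    import os, time, tempfile

    def min_write_MBps(path, chunk_mb=1, chunks=8):
        buf = os.urandom(chunk_mb * 1024 * 1024)
        rates = []
        with open(path, "wb") as f:
            for _ in range(chunks):
                t0 = time.perf_counter()
                f.write(buf)
                f.flush()
                os.fsync(f.fileno())  # force the bytes to the device
                rates.append(chunk_mb / (time.perf_counter() - t0))
        return min(rates), sum(rates) / len(rates)

    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        path = tmp.name
    worst, avg = min_write_MBps(path)
    os.remove(path)
    print(f"min {worst:.1f} MB/s, avg {avg:.1f} MB/s")
    ```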
    Last edited by poisondeathray; 21st Sep 2012 at 21:35.
    Quote Quote  
  15. Member
    Join Date
    Sep 2012
    Location
    Denver
    Search PM
    Originally Posted by poisondeathray View Post
    Because you're probably measuring maximum sequential transfer rates. Minimum transfer rate is what you need to know
    Aren't these recording utilities essentially performing sequential writes? I thought they were, and have therefore been emphasizing sequential write tests. If that's incorrect, then I've been fundamentally focusing on the wrong things while benchmarking.

    Mechanical HDD's also slow down as they fill up, you might be getting 10-20MB/s near the end

    RAID-0 will help
    I'm performing most of the testing (both the sequential-write benchmarking and the recording process itself) on nearly or completely empty drives, precisely to avoid that.

    I've no doubt that RAID-0 would improve maximum write speed, but I thought I should be getting far more out of a single drive (or even two drives, using the Dxtory multi-drive-write process), too. And if I wasn't getting that, something must be wrong. Of course, the only way I'm going to know is to finally get off my ass and test it (I've never dealt with RAID under Windows, but I imagine it'll be fairly simple).
    Quote Quote  
  16. Yes , I added some comments before you posted ^

    Yes, these are sequential transfers, but common benchmarking software might use a different block size than a large file transfer. It's not necessarily applicable.

    The other possibility is that you have some sort of controller problem. You might try switching ports, or using another controller (e.g. try the Marvell ports instead of the Intel ones).

    I'm not familiar with Dxtory's multi-drive write process. Maybe, since it's some sort of software RAID, it adds other overhead? Surely hardware RAID off the chipset would be less demanding.

    I don't know, some of it doesn't make sense. But maximum transfer rates are pretty much useless; you need to know minimum transfer rates. (Same with gameplay FPS: who cares about maximum FPS? It's minimum FPS that's important; that's when it gets laggy and you get fragged.) Why else would recording to the OS SSD make a difference? If transfer rates weren't important, recording to the OS drive should be worse, not better. It shouldn't be a latency issue, because you're doing sequential writes of a large file.
    Last edited by poisondeathray; 21st Sep 2012 at 21:57.
    Quote Quote  
  17. Member
    Join Date
    Sep 2012
    Location
    Denver
    Search PM
    Originally Posted by poisondeathray View Post
    Yes , I added some comments before you posted ^

    Yes, these are sequential transfers, but common benchmarking software might use a different block size than a large file transfer. It's not necessarily applicable.
    True. Dxtory has an option for testing the write speed of each drive you enable (this only applies when using the Dxtory codec with the Dxtory program, not other codecs), and you can choose how much data to test with: 1GB by default, though I usually have it try around 10GB. I figured this would at least be most representative of the application's real-world use of those drives.

    The other possibility is that you have some sort of controller problem. You might try switching ports, or using another controller

    I'm not familiar with Dxtory's multi-drive write process. Maybe, since it's some sort of software RAID, it adds other overhead? Surely hardware RAID off the ICHR would be less demanding
    Unfortunately, I don't think anyone is too familiar with it. The author of Dxtory is Japanese, and the information directly available is sparse and very much "Engrish" (no slight intended toward the developer; I can't speak Japanese even *poorly*!).

    As I understand it, you tell it what hard drives you have, then enable them for use after running write-speed tests. When you're actually recording, it alternates drives with each frame written, up to eight drives (i.e., frame 1 is written to the first drive, frame 2 to the second, frame 3 to the third... and frame 9 to the first again). It seems like a clever workaround to actually using RAID.
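
    The round-robin scheme described above can be modeled in a few lines (this is just an illustration of the behavior as I understand it, not anything from Dxtory itself):

    ```python
    # Model of round-robin frame distribution: frame n (1-indexed) goes to
    # drive ((n - 1) mod N) + 1 for N drives.
    def drive_for_frame(frame, n_drives):
        return (frame - 1) % n_drives + 1

    # With 8 drives: frame 1 -> drive 1, frame 2 -> drive 2, ... frame 9 -> drive 1
    print([drive_for_frame(f, 8) for f in range(1, 10)])
    ```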

    Your input has been very helpful. Despite my background, it can be difficult in a whole new area like this, where you don't have the experience or instinct to know whether you're seeing a problem or that's just how things are. It sounds like this definitely isn't what I should be expecting, and that further diagnostics on disk IO are the next step.

    These last 36hrs have been very enlightening. I hate to think how much time I would have wasted or quality I would have sacrificed if not for the guidance of you guys in this forum.
    Quote Quote  
  18. Member
    Join Date
    Sep 2012
    Location
    Denver
    Search PM
    By the way, it isn't much, but I wanted to show my appreciation for your help these past couple of days, Selur and poisondeathray, by making a small $5 donation to Engineers Without Borders on behalf of you guys (receipt attached).

    Thank you.
    Image Attached: ewb_videohelp_donation.JPG

    Quote Quote  
  19. Member Trippedout's Avatar
    Join Date
    Aug 2012
    Location
    scotland
    Search Comp PM
    I'm not even going to pretend I know what you're going on about, but I have an idea for you: how about hooking up an HDD recorder or DVD recorder to the output signal of your video card? No impact on gaming and perfect recording. I don't know if this would work, but in my mind it makes sense. It's not a bad idea if someone came up with a box that simply records to a memory stick, say 32 meg or so; no need for a large box with an HDD, only a recording circuit with input from the video card and a USB slot for a memory stick. In my mind it works perfectly, lol. There I go dreaming again, but not a bad idea, methinks.
    Quote Quote  


