VideoHelp Forum
  1. Any suggestions?


    Summary:

    i7 (860)
    ASUS P55 mobo
    EVGA GTX 260
    2x2GB DDR3 1600 (PC3 12800) w/ XMP (7-7-7-20 @ 1.90V)
    2x WD Caviar Black 1TB 7200 RPM 32MB @ RAID 0
    X-Fi Titanium Fatal1ty Champion Series 7.1 (24-bit 192KHz)
    CORSAIR TX 750W PSU
    Antec 900-2 case

    Detail:

    Intel Core i7-860 Lynnfield 2.8GHz 8MB L3 Cache LGA 1156 95W Quad-Core Processor
    http://www.newegg.com/Product/Product.aspx?Item=N82E16819115214
    ASUS SABERTOOTH 55i LGA 1156 Intel P55 ATX Intel Motherboard
    http://www.newegg.com/Product/Product.aspx?Item=N82E16813131601

    EVGA 896-P3-1257-AR GeForce GTX 260 Core 216 896MB 448-bit GDDR3 x16 HDCP SLI
    http://www.newegg.com/Product/Product.aspx?Item=N82E16814130433

    Patriot Viper 4GB (2 x 2GB) 240-Pin DDR3 SDRAM DDR3 1600 (PC3 12800) Dual Channel
    http://www.newegg.com/Product/Product.aspx?Item=N82E16820220285

    Western Digital Caviar Black WD1001FALS 1TB 7200 RPM 32MB Cache SATA 3.0Gb/s 3.5"
    http://www.newegg.com/Product/Product.aspx?Item=N82E16822136284

    Sound Blaster X-Fi Titanium Fatal1ty Champion Series 70SB088600007 7.1 Channels 24-bit 192KHz
    http://www.newegg.com/Product/Product.aspx?Item=N82E16829102021

    CORSAIR CMPSU-750TX 750W ATX12V / EPS12V SLI Ready CrossFire Ready 80 PLUS Certified
    http://www.newegg.com/Product/Product.aspx?Item=N82E16817139006


    Purpose:

    All around - CS4/gaming/1080p video/Minesweeper

    Future:

    2x GTX 260 SLI
    2x RAM for 8GB total
    PSU supports the above

    Doubts:

    2x 1TB Caviar Black @ RAID 0 vs. 1x VRaptor (both options at matching prices... which wins?)

    Does HDD space impact performance? Partition allocation? Logical drives?

    Would an 80GB HDD with equal specs outperform its 1TB twin? (no SCSI or SSD here)

    Can 2x 7200 RPM 32MB cache @ RAID 0 match (or outmatch) a single VelociRaptor 10,000 RPM 16MB cache?

    What else on this build can be improved?
  2. Originally Posted by Engineering

    2x 1TB Caviar Black @ RAID 0 vs. 1x VRaptor (both options at matching prices... which wins?)
    Throughput will be better on the RAID0 drives, but the VelociRaptor will have shorter latency and access times. Things will load quicker with the VR. Neither "wins" because it is very task dependent.

    Throughput is not very useful unless your tasks consist of copying and pasting or transferring files, or dealing with uncompressed video.
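
    To make the throughput-vs-access-time point concrete, here is a rough back-of-the-envelope model in Python. Every number in it (seek count, access times, transfer rates) is an illustrative assumption, not a measured spec:

    ```python
    # Toy model: loading an app means many small scattered reads (seeks)
    # plus some sequential transfer. Access time dominates the total.
    def load_time(n_seeks, access_ms, total_mb, throughput_mbs):
        """Seconds to load an app touching n_seeks scattered blocks totalling total_mb."""
        return n_seeks * access_ms / 1000.0 + total_mb / throughput_mbs

    # Hypothetical figures: RAID0 roughly doubles sequential throughput
    # but does nothing for access time; the VelociRaptor seeks faster.
    raid0   = load_time(n_seeks=2000, access_ms=12.0, total_mb=300, throughput_mbs=200)
    vraptor = load_time(n_seeks=2000, access_ms=7.0,  total_mb=300, throughput_mbs=120)

    print(f"RAID0 array:  {raid0:.1f} s")    # ~25.5 s - the seeks dominate
    print(f"VelociRaptor: {vraptor:.1f} s")  # ~16.5 s despite lower throughput
    ```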

    Does HDD space impact performance? Partition allocation? Logical drives?
    Yes, filled capacity does. Transfer rates drop roughly linearly as you fill up capacity, because drives fill the faster outer tracks first. Have a look at various review sites for the graphs.

    For setup, I would get a VR for the primary system drive and apps, and the 1TB for storage. For video editing in CS4, separating your video files onto the 2nd hard drive significantly improves performance. The primary drive will still hold application data and temp files. If you had a 3rd drive, putting temp data and the page file on that would be even better.

    Would an 80GB HDD with equal specs outperform its 1TB twin? (no SCSI or SSD here)
    Was that a typo? 80GB and 1000GB in the same class?

    The transfer speeds depend on the platter density. Usually the higher capacity models have denser platters as well as more platters (not always though).

    Can 2x 7200 RPM 32MB cache @ RAID 0 match (or outmatch) a single VelociRaptor 10,000 RPM 16MB cache?
    Again it depends on the task and scenario. Only for throughput, not for access times. For app loading, startup, and most tasks, the VR will be faster. For file transfers, the RAID0 drives may be faster (depends on the model).
  3. Originally Posted by poisondeathray
    For setup, I would get a VR for the primary system drive and apps, and the 1TB for storage. For video editing in CS4, separating your video files onto the 2nd hard drive significantly improves performance. The primary drive will still hold application data and temp files. If you had a 3rd drive, putting temp data and the page file on that would be even better.
    So,

    Primary drive (boot): [2x VR @ RAID 0 and 128 KB stripe, or a single VR] APPs dir only

    Secondary drive: [2x 1TB @ RAID 0 and 128 KB stripe] APPs temp data dir + swap/page file

    Third drive: [single drive] video files (or change these around to the 2nd drive, or combine?)


    Is that the ideal setup? Let's say in a CS4-driven scenario, with unlimited drives and any RAID config possible.

    And is RAID 0 the absolute ideal for performance? I've read some things about RAID 1+0 vs. RAID 0+1.

    And what about the stripe size? I imagine a smaller stripe for the boot drive and a 128 KB stripe for CS4 work?

    Any ideas?
    If data loss is OK with you (i.e. not important data) and you are diligent with backups, RAID0 is fine. Otherwise there is a greater risk of failure than with JBOD, and it's more than double for 2x RAID0.

    RAID0 is only good for increasing throughput. It does nothing for access times unless you short-stroke the RAID array (i.e. make it a smaller capacity, so only the fastest, outer part of the platters is used). "Performance" is such a generic term; there are many aspects of it. Performance under what conditions?

    Throughput is often a useless benchmark except for file transfers. You see review sites using HD Tach and HD Tune - these are basically useless and only measure transfer speeds. Increasing throughput won't (or only very negligibly will) load apps faster, it won't encode faster, it won't do anything faster except things like copying and pasting files. Low access times are the primary reason why everything perceptibly "feels" faster to the user - this is why most SSDs "feel" so quick.

    Ideal stripe size will also vary by what controller is used and what application. Some might be better with 128 KB, some with 64 KB. You have to do some testing to find the best.
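
    If you do want to test stripe sizes yourself, a quick-and-dirty read benchmark like the Python sketch below can be run after rebuilding the array with each candidate stripe size. The test-file path is a hypothetical placeholder, and note that the OS file cache will inflate repeat runs, so use a file much larger than your RAM:

    ```python
    import os, random, time

    TEST_FILE = r"E:\bench\testfile.bin"  # placeholder: a big file on the array under test
    BLOCK = 1024 * 1024                   # read in 1 MB chunks

    def sequential_read():
        total = 0
        start = time.time()
        with open(TEST_FILE, "rb") as f:
            while True:
                chunk = f.read(BLOCK)
                if not chunk:
                    break
                total += len(chunk)
        secs = time.time() - start
        print(f"sequential: {total / secs / 1e6:.1f} MB/s")

    def random_read(n=500):
        size = os.path.getsize(TEST_FILE)
        start = time.time()
        with open(TEST_FILE, "rb") as f:
            for _ in range(n):
                f.seek(random.randrange(0, max(1, size - BLOCK)))
                f.read(BLOCK)
        secs = time.time() - start
        print(f"random: {n / secs:.0f} x 1 MB reads/s")

    sequential_read()
    random_read()
    ```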

    If you could afford it, I would use a single decent-quality SSD (like an X25-M or a Vertex) for Boot/APPS/page (or even 2 smaller-capacity ones in RAID0), 2x 1TB RAID0 for video files, and 1x 1TB for miscellaneous storage. NAND-based SSDs encounter degradation as their capacity fills, just like HDDs, so it's important to keep them "empty" as well for maximum performance.
    In terms of data reliability (less data loss), would an SSD beat any RAID configuration?

    If performance were not an issue, what RAID configuration (for HDD, not SSD) would you recommend? For important backups or server data, etc.?
  6. Originally Posted by Engineering
    In terms of data reliability (less data loss), would an SSD beat any RAID configuration?
    If you mean a RAID0 array of HDDs, in theory yes, because n x RAID0 has (n) times the chance of mechanical failure. It's actually slightly greater than that because of issues with RAID controllers, but I digress... Also, SSDs have no mechanical moving parts. But commercial NAND SSDs have only been around for 1-2 years. Not enough reliability information, IMHO. I've built several PCs with SSDs without issues; the longest-running one is ~1 year old. But keep in mind there are low-quality SSDs as well. The 1st-generation ones based on JMicron controllers were abysmal!
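
    The arithmetic behind that, assuming each drive fails independently with the same annual probability (a simplifying assumption; the controller issues mentioned above push the real risk higher), looks like this:

    ```python
    # Any single drive failure kills a RAID0 array, so the array survives
    # only if every drive survives: P(array fails) = 1 - (1 - p)^n
    def raid0_failure_prob(p, n):
        return 1 - (1 - p) ** n

    p = 0.03  # assumed 3% annual failure rate per drive (illustrative, not a spec)
    for n in (1, 2, 3):
        print(f"{n} drive(s): {raid0_failure_prob(p, n):.4f}")
    # 1 drive(s): 0.0300
    # 2 drive(s): 0.0591   (just under 2x; controller issues add to this)
    # 3 drive(s): 0.0873
    ```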

    If performance were not an issue, what RAID configuration (for HDD, not SSD) would you recommend? For important backups or server data, etc.?
    A RAID configuration is not a replacement for regular backups! Period. You should always do regular backups! Trust me, or ask anyone in IT or storage.

    For non-important stuff I would still use the config I suggested above. If you have more important stuff then, in addition to backups, you might use a RAID1 array. Performance for other RAID levels like RAID5 or RAID6 with onboard controllers is usually poor. Most of the bundled motherboard controllers can only do RAID0 and RAID1, and maybe 5. If you want a good controller you need a dedicated PCIe card like an Areca.
    Outstanding input. Can't thank you enough; the insight is much appreciated.
    Hmm. Know anything about SSD architectures? Does SLC or MLC impact data safety on an SSD?

    SLC vs. MLC? Is one more reliable than the other? I've yet to understand the distinctions.

    What else can be done to prevent data loss, aside from avoiding RAID and going SSD?
  9. Glad to help (and I feel a bit guilty for contributing to your other thread hijack)

    There are 100s of review sites and forums that have benchmarks on tech items like hard drives, CPUs, etc., and they may help with your purchase decisions.

    e.g. anandtech.com, techreport.com, xbitlabs.com, tomshardware.com, pcper.com, etc.
  10. Originally Posted by Engineering
    Hmm. Know anything about SSD architectures? Does SLC or MLC impact data safety on an SSD?

    SLC vs. MLC? Is one more reliable than the other? I've yet to understand the distinctions.
    In theory, SLC is more reliable and can withstand 10x the writes (so endurance is higher). Most SLC drives are "enterprise" class, have gone through stricter testing, and come with longer warranty periods. They also cost more.
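
    A rough endurance calculation shows where that 10x figure comes into play. All the inputs here (P/E cycle ratings, daily write volume, write-amplification factor) are illustrative assumptions, not datasheet values:

    ```python
    # Idealized SSD lifetime under perfect wear-levelling:
    # total writable data = capacity x program/erase cycles, divided by
    # daily writes inflated by a write-amplification factor.
    def years_of_life(capacity_gb, pe_cycles, gb_per_day, write_amp):
        return capacity_gb * pe_cycles / (gb_per_day * write_amp) / 365.0

    mlc = years_of_life(capacity_gb=80, pe_cycles=10_000,  gb_per_day=20, write_amp=10)
    slc = years_of_life(capacity_gb=80, pe_cycles=100_000, gb_per_day=20, write_amp=10)
    print(f"MLC: ~{mlc:.0f} years, SLC: ~{slc:.0f} years")  # ~11 vs ~110
    ```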

    There are several articles on SLC vs. MLC and the intricacies of SSDs on anandtech.com if you look in their storage subsection.

    What else can be done to prevent data loss, aside from avoiding RAID and going SSD?
    BACKUP, BACKUP, BACKUP. Multiple redundancies; make sure of adequate cooling (CPU and case temps), good ventilation fans and such, and a low-vibration environment.
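
    In the spirit of that advice, here is a minimal one-way mirror sketch in Python. The paths are hypothetical placeholders, and this is a bare-bones illustration, not a substitute for real backup software (on Windows you would more likely use a dedicated tool):

    ```python
    import os, shutil

    SRC = r"D:\video_projects"          # placeholder source folder
    DST = r"F:\backup\video_projects"   # placeholder backup destination

    # Copy any file that is missing from the backup or newer than the backup copy.
    for root, dirs, files in os.walk(SRC):
        target_dir = os.path.join(DST, os.path.relpath(root, SRC))
        os.makedirs(target_dir, exist_ok=True)
        for name in files:
            src_file = os.path.join(root, name)
            dst_file = os.path.join(target_dir, name)
            if (not os.path.exists(dst_file)
                    or os.path.getmtime(src_file) > os.path.getmtime(dst_file)):
                shutil.copy2(src_file, dst_file)  # copy2 preserves timestamps
    ```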

    Well, obviously RAID0 is the least secure and worst for data loss. With the other RAID levels you can have various partial failures and still reconstruct the data.
  11. Originally Posted by poisondeathray
    e.g. anandtech.com, techreport.com, xbitlabs.com, tomshardware.com, pcper.com, etc.
    bookmarked
  12. rallynavvie
    I never recommend RAID 0 for boot volumes; I only recommend redundant RAID levels like 1, 10, or 5. While backups can help save your data in the event of a failure, you are still faced with the dilemma of rebuilding the array (if you wait until you receive another drive to replace the dead one) or re-installing to a single drive in the interim. In the case of the former you might be down for a week waiting on an RMA, so is it worth it? At least if a drive fails in one of the redundant levels, you'd be able to run until your new drive comes in and in most cases rebuild the array on the fly with no downtime (assuming you don't lose another drive in that time).

    Another factor that most consumers fail to realize is that most onboard RAID controllers are rubbish. Even the ones on some of the expensive workstation boards aren't that trustworthy. Most anyone serious about their array is going to get a good controller, which is going to run $300-800. These almost always have a large cache, and many have battery backups in case of power loss (which helps prevent array failure when you power your system on again). Some onboard "controllers" also use CPU cycles to supplement the performance of their arrays, so while you're increasing HDD performance you're possibly losing some CPU performance.

    I'd recommend just keeping those drives separate. Having separate drives is very handy when encoding, as you can read the source from one drive and write to a second drive. Even with RAID 0, doing a read/write to the same volume would probably be slower. And you don't really want to clutter up your boot drive with documents. Using the smaller Raptor drive forces you to keep your boot volume a little less cluttered. I always recommend installing your OS and applications to this drive only and remapping all of your Documents folders to a second drive. A third drive is even nicer when you're working with video projects, as you can dedicate one to being your final storage drive and the other to work in progress. I run 5x 1TB drives separately in my workstation, each with a specific role.

    Like I said in the other thread, I wouldn't recommend SSDs just yet. The cost is too high, and with most of the consumer items not supporting TRIM it's not worth the hassle. I have two friends with SSDs who stopped using them for anything but a secondary drive to install some games on, not even bothering to install their OS on them. I'm not going to play with any until I get some in at work to test in the VM environment, certainly none for home.
    RAID0 for boot is a bad idea; any RAID on a mobo controller is a bad idea. The cost/benefit is just not there until you get into expensive, large arrays with hot spares, OR the array is just temp storage with no data-loss concerns. Better performance requires a dedicated card, and GOD HELP YOU if that card fails.

    IMO, SSDs are not yet ready for prime time.

    Just a note on the Raptors. I have 8 drives in regular usage, most 3 to 5 years old; one is a Raptor drive, which was the newest of the bunch. Now, a couple of these drives are a bit flaky and require extra cooling, occasional maintenance, etc. But only one of them suffered a complete failure with no recovery possible. Three guesses which one that was?

    Higher performance often brings a higher failure rate. Yes, it was fast, but not really that noticeable in most situations.

    Also, in building a new system, there is no need to get everything right away; many parts can be added at a later date. As a bonus, almost all will get less expensive over time.
  14. rallynavvie
    Originally Posted by Nelson37
    Higher performance often brings a higher failure rate. Yes, it was fast, but not really that noticeable in most situations.
    I certainly hope the 15krpm SAS drives I use in my enterprise environment, which are chosen for their superior performance (and reliability), do not suffer from a higher failure rate.

    I don't think your statement is quite accurate. High-performance equipment should cost more not only because it offers higher performance but also because its build quality is a lot higher and its tolerances are much stricter. I would trust any of the 15krpm drives I've used over the years (U160, U320, SAS) to last longer than a 7200rpm SATA or IDE drive. The 10krpm Raptors are used in enterprise environments (in fact our HP workstations use them) and I've not seen any failures with them so far. I've yet to see any of my 5 current Raptors at home fail either. An older 74GB one got a bad sector (I think from a brown-out) and it was RMAd before it had a chance to fail, but it had lasted 3 years prior to that. And since it was only one drive, not hundreds of them that all failed, I can't assume 100% of their drives fail. There's no good sample set on which to base that assumption.
    The really high-end drives are in a completely different price range. I was speaking more of consumer-grade equipment, where the build quality is often not as high. I have read of higher-than-average failure rates for the Raptor I series.

    My personal Raptor experience was a one-off, but the general statement of high-performance parts OFTEN having higher failure rates is based on multiple hardware experiences.

    Perhaps I should have qualified it more with "consumer-grade hardware, particularly early in the product's life-cycle".

    Now the Raptor II series has fewer problem reports from what I have read; however, for the extra money and the fairly minimal performance improvement I perceived, I would invest the money elsewhere. Video card, CPU, RAM, more HD space, almost anything else.