I'm trying to decide how best to set up my new HDD configuration in my primary workstation. Here are the vital stats:
5x 150GB WD Raptor 10krpm SATA
4x 1TB WD Green 7200rpm SATA
6 onboard ports (Intel SATA)
4-port PCI-E x8 controller (Adaptec 5405 SAS/SATA)
Currently only one of those five Raptors is installed. I plan on leaving the storage drives as-is on the onboard controller in JBOD. The 5405 can run up to 256 drives but I don't have backplanes for all that, just a 4-port SATA/SAS backplane with dedicated cooling. The case holds 6 other HDDs (where I plan on leaving the storage drives).
So with the five Raptors would I be better off doing:
RAID 1+0 with four
RAID 5 across four
RAID 6 across four
RAID 5 across three with a hot spare
There are advantages and disadvantages to each, but since I have a 5th drive available I'm thinking the RAID 10 option would give me the best performance. I generally use RAID 5 for systems at work (like my VMware View system) since they're running on 15k SAS drives, but these are only 10k SATA so I'm not sure I want that parity stripe running in tandem on them. This system is mostly dedicated to hosted VMs now since I don't do much video work, but I've been wanting to better utilize the Adaptec controller so I thought I'd give it a try. The RAID would be used for my boot/apps volume. I'm hoping to have the drives here this week and re-install Windows next weekend.
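For a rough feel of the tradeoffs, here's a back-of-the-envelope sketch (plain Python, nothing controller-specific; 150GB per Raptor assumed, real arrays lose a little to metadata) of usable space and fault tolerance for the four layouts above:
[code]
# Rough usable-capacity / fault-tolerance comparison for the candidate layouts.
# Assumes identical 150 GB drives; real arrays lose a little to metadata.
DRIVE_GB = 150

layouts = {
    "RAID 10 (4 drives)":            {"usable": 2 * DRIVE_GB, "survives": "1 drive (2 if in different mirrors)"},
    "RAID 5 (4 drives)":             {"usable": 3 * DRIVE_GB, "survives": "1 drive"},
    "RAID 6 (4 drives)":             {"usable": 2 * DRIVE_GB, "survives": "2 drives"},
    "RAID 5 (3 drives) + hot spare": {"usable": 2 * DRIVE_GB, "survives": "1 drive, auto-rebuild onto spare"},
}

for name, l in layouts.items():
    print(f"{name:32s} usable: {l['usable']} GB   tolerates: {l['survives']}")
[/code]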
-
You haven't mentioned your goals or software you want to run.
Why all the Raptors? Are you trying to edit uncompressed video? Expect high noise.
-
Why?
-
Honestly, I would use (2) RAID0 arrays for super fast processing from 1 array to the other (I did that once on a Gigabyte mobo w/ (2) RAID controllers, w/ the OS on a partition @ the head of 1 of the arrays).
-
Anyone who mentions simple RAID 0 can GTFO
Seriously, I don't want to increase the risk to my data which is exactly what RAID 0 does. It is a cheap, risky method of increasing performance which I do not condone for anything other than simple scratch volumes.
I mentioned my use case was with VMs. I also mentioned I wanted to make use of my Adaptec controller since the dual-core processor it has on board is doing nothing right now. Noise is going to be neutral or perhaps less since my old U320 SCSI drives are coming out and the additional Raptors are going in.
I'm looking for server admins and engineers who have used systems with each of the above-mentioned setups to give feedback on their experiences. -
Have only used a few of the different RAID levels and can never keep the numbers straight without looking it up.
Love Adaptec cards. Hate non-standard controllers. Mirroring just seems silly; the secondary drive has the same wear and tear when the primary fails, plus it's slow. The hot spare is the way to go.
Striping benefit falls off with more drives; 2 or 3 seems most effective, four didn't seem to be much faster. Hot spare is a gift from God.
Just to use all 5, I would go with a 3-way stripe, separate parity drive, and hot spare. Then buy 2 or 3 replacement drives for the shelf. Or a 2-way stripe, hot spare, and the other two for the shelf.
Did I mention how much I really, really like the hot spare?
Also do not forget to check, double-check, and then check again that there are NO compatibility problems between the chosen controller and the drives. This is a nightmare you do NOT want to go through. Had an array that chewed through 6-8 drives over two months before one of the makers admitted to such a problem. Much unhappiness. -
I would do RAID 5 and save one for hot swap... However, are you sure your onboard controller can do hot swapping?
Also, assuming that the one Raptor is for your OS, why would you do that?
I would put 2 Raptors as RAID1 for OS
I would put 3 Raptors as RAID5 for whatever
I would put the 4 1TBs as RAID5 for whatever -
Originally Posted by stiltman
Unfortunately the breakout cable I have is 4-device, thus using the 4-drive backplane. The 5th Raptor would be kept around as a spare or sold to a friend. 4-drive RAID 5 with a hot spare would be ideal but I just can't seem to find the mSAS expanders that multiply out to more SATA devices. I thought the only way for that controller to support 256 drives was with backplane expanders?
I think I've eliminated RAID 6 from the options since I don't need that much security. My expectation was that drive failures would be infrequent enough that if a failure happened the rest would last until the RMA returned. With the hot spare I don't think there's any performance hit since it takes over for the dead drive, right? Then when I get the RMA return it essentially becomes the new hot spare when I install it? -
[quote="rallynavvie"]Anyone who mentions simple RAID 0 can GTFO
Seriously, I don't want to increase the risk to my data which is exactly what RAID 0 does. It is a cheap, risky method of increasing performance which I do not condone for anything other than simple scratch volumes.
I mentioned my use case was with VMs. I also mentioned I wanted to make use of my Adaptec controller since the dual-core processor it has on board is doing nothing right now. Noise is going to be neutral or perhaps less since my old U320 SCSI drives are coming out and the additional Raptors are going in.
I'm looking for server admins and engineers who have used systems with each of the above-mentioned setups to give feedback on their experiences.[/quote
If you do regular backups what is the harm?
duh!
properly cooled, properly powered systems don't lose drives anyway, unless the drives are chit
ps. @ ANY point did you indicate that your drives would be ANYTHING other than scratch drives, which is all I do other than OS w/ sub-terabyte drives?!
pfffffff........
-
Originally Posted by ocgw
With a proper RAID solution this risk is greatly decreased and there are still performance gains over a single drive solution. In fact with all of the solutions I mentioned there is zero downtime and the rebuilds happen in-flight so that my OS and everything never realizes anything is amiss.
Everyone jumps instantly on the RAID 0 bandwagon because it's a cheap solution. You can start with only 2 drives and most controllers support it. More people should be looking into RAID 5 instead since it only requires one more drive and a controller to support it, but at much less risk than the RAID 0 solution. Performance is more than just benchmark numbers; uptime plays a big factor in the performance of my machines.
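To put rough numbers on that risk argument, here's a quick sketch comparing the chance of losing a RAID 0 array versus a RAID 5 array within a year; the per-drive annual failure rate and rebuild window are made-up illustrative assumptions, not measured figures:
[code]
# Crude annual data-loss estimate: RAID 0 dies on any single failure,
# RAID 5 dies only if a second drive fails during the rebuild window.
# AFR and rebuild time are illustrative assumptions.
afr = 0.03              # assumed annual failure rate per drive
rebuild_days = 1.0      # assumed time to rebuild onto a spare/replacement

def p_raid0_loss(n):
    # probability that at least one of n drives fails within a year
    return 1 - (1 - afr) ** n

def p_raid5_loss(n):
    # first failure, then any of the remaining n-1 drives failing during rebuild
    p_second = 1 - (1 - afr) ** ((n - 1) * rebuild_days / 365)
    return p_raid0_loss(n) * p_second

for n in (3, 4):
    print(f"{n} drives: RAID 0 loss ~{p_raid0_loss(n):.1%}, RAID 5 loss ~{p_raid5_loss(n):.3%}")
[/code]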
-
Originally Posted by rallynavvie
And why would you need 150GB for your OS, let alone multiple 150GB drives?
cheap? bandwagon? I have more than a dozen drives too rally, I use multiple RAID0 arrays to quickly process (demux-remux) data
You forget who you are talkin' to? I have 15K SAS drives, 10K Raptors and an LSI Logic controller too, and a big ass pile of TB+ drives
btw, how often do you lose drives? if you lose drives on a regular basis you need to get to the root of the flaw in your workstation design theory
You know what? You are a freakin' big ass mofo "big shot", buy 1 or 2 more Raptors and go RAID5 to RAID5 for superior throughput and redundancy
-
I don't know that you'll get much of a boost. In fact, in my experience, RAID slows down apps.
-
Originally Posted by lordsmurf
Cluster and stripe size have to be tailored for the job the RAID array is performing
OK, let's get down to "nuts and bolts"
I have done "exhaustive experimentation" w/ stripe and cluster sizes in RAID arrays
You want to format w/ large clusters and use large stripes for video files, and small clusters and stripes for the many small files of an OS. If you use large clusters and stripes w/ an OS you lose performance and storage space because of "slack space".
32-64KB stripes w/ 4KB clusters for OS, 128-1024KB stripes for video
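A quick sketch of the slack-space arithmetic behind that: every file occupies a whole number of clusters, so on average roughly half a cluster per file is wasted, and that adds up across the tens of thousands of small files in an OS install (the file mix below is invented purely for illustration):
[code]
# Slack space: each file occupies a whole number of clusters, so on average
# roughly half a cluster per file is wasted. Illustrative file sizes only.
def slack_bytes(file_size, cluster):
    # bytes wasted in the last, partially filled cluster
    return (-file_size) % cluster

files = [1_500, 4_096, 12_000, 700, 65_000] * 20_000   # fake mix of small OS files
for cluster in (4 * 1024, 64 * 1024):
    wasted = sum(slack_bytes(s, cluster) for s in files)
    print(f"{cluster // 1024:>2} KB clusters: ~{wasted / 2**20:,.0f} MB of slack across {len(files):,} files")
[/code]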
You never said what apps. What apps need a 750GB volume?
So you are seriously going to use 5 Raptors in a big 750GB RAID array for your "boot and apps"? That is a lot of apps.
In my opinion it would be just ridiculous to use 5 Raptors for "boot/apps"; once programs are loaded into memory they run off the memory and CPU, not the drives.
You will get an extremely fast boot, but it won't really make your system feel that much faster. Now if you are processing large video files going from 1 high performance RAID array to another, that makes sense to me.
-
If you're ingesting SMPTE-292M HD video off an SDI card you should put all the Raptors on the SDI capture as RAID 0 or RAID 5/6 and test transfer. Maybe you have enough Raptors for two realtime RAIDs*. The OS/apps will be happy with a single ATA/SATA drive (backed up). Why is fast boot a need? A proper install will be up a week or more.
This is a video forum not a bank transaction server forum.
* This will require realtime software.
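As a sanity check on the ingest math, a minimal sketch of the sustained rate uncompressed 1080i over SMPTE-292M implies versus a small Raptor stripe; the per-drive sustained throughput is an assumed ballpark, not a benchmark of these drives:
[code]
# SMPTE-292M carries a 1.485 Gbit/s serial stream; the video payload for
# 8-bit 4:2:2 1080i at ~29.97 frames/s is roughly what has to hit disk.
width, height, fps = 1920, 1080, 30000 / 1001
bytes_per_pixel = 2                      # 8-bit 4:2:2 averages 2 bytes/pixel
payload_mb_s = width * height * bytes_per_pixel * fps / 2**20

raptor_mb_s = 70                         # assumed sustained write per 150 GB Raptor
drives_needed = payload_mb_s / raptor_mb_s

print(f"uncompressed 1080i payload: ~{payload_mb_s:.0f} MB/s")
print(f"needs a stripe of roughly {drives_needed:.1f} Raptors to keep up (ignoring overhead)")
[/code]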
-
Originally Posted by edDV
LOL, it will be a HORRENDOUSLY slow boot, but that probably doesn't matter if it's a workstation or he's using workstation-type workflows; he might be booting once a month.
I had an Adaptec 5805 with a similar dual-core setup; the initialization alone was 30-40 sec. Most server boards have 20-30 sec initialization as well, in addition to the controller initialization. Not sure about the Tyan he has; I use Supermicro for my builds, and they all boot slow. So if you are doing this for "boot reasons", forget about it and ditch the controller.
@rallynavvie - What types of workloads and apps will you be using? -
I still don't understand what work this server/workstation is doing. As said above by someone, you design the system to the problem you want to solve.
-
So his goal is to load several virtual OSes? Strange requirement.
-
Originally Posted by rallynavvie
I know boot speed is out the window with this controller; it takes several seconds per drive for init, just like the onboard Intel controller does. This system only reboots when required, and I do tend to batch my system updates monthly since stupid Windows updates seem to require rebooting more often than not these days.
My single 150GB is getting full, running at about 15GB remaining, and I don't like running drives over 90% capacity (which it's obviously past now). I could free up several GB here and there by cleaning off some games that I don't play as much anymore, but I tend to only clean them off when I'm sick of them, so I did need something bigger than 150GB at some point. Adobe Master Collection takes up about a tenth of that disk, too. I don't need 750GB, which is another reason why I'm looking at a RAID 10 or RAID 5 solution. 300GB should be enough for quite a while, but 450GB makes me consider moving my vdisks to this volume as well and seeing how it handles.
Right now the vdisks are on the storage drives, but I have more vdisks than drives, so essentially there are multiple OSes and their applications running on each 7200rpm SATA drive. There are many times you can hear them winding up for intensive I/O as the VMs vie for bandwidth. I'm thinking of moving the most active VMs to the RAID volume and seeing what happens. Most of those use 40GB vdisks (though I may not pre-allocate them if the new volume offers up enough performance).
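For what it's worth, a quick sketch of how many pre-allocated 40GB vdisks would fit next to the boot/apps footprint on each candidate array size; the ~150GB OS/apps figure simply mirrors the current drive, and everything here is approximate:
[code]
# How many 40 GB pre-allocated vdisks fit alongside the boot/apps footprint?
# Array sizes from 150 GB Raptors; OS/apps footprint assumed ~150 GB (current drive).
os_apps_gb, vdisk_gb = 150, 40

arrays = {
    "RAID 10, 4 drives": 300,
    "RAID 5, 4 drives": 450,
    "RAID 5, 3 drives + spare": 300,
}

for name, usable in arrays.items():
    free = usable - os_apps_gb
    print(f"{name:26s}: {free} GB free -> room for ~{free // vdisk_gb} vdisks "
          f"(before leaving headroom below 90% full)")
[/code]
-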
Originally Posted by edDV
I will pose the question as simply as humanly possible
1. Why are you hosting VM's?
2. What programs do your VM's run?
I admit to being curious for some time
-
I don't need granular RAID settings, just pros and cons between the array configurations I have listed. Fine tuning comes later but the array configuration is the more important consideration now.
Many of those VMs sit idle or have little to no disk usage, but they get rebooted or recompiled on occasion, which can slow down any others running on the same logical disk. Most of the performance impact is mitigated by memory since that's usually where VM performance lies (at least with VMware products).
Ideally I would offload these to an ESX box like I run at work, but I don't really want to run more than one machine at home if I don't have to. I also prefer Workstation's host integration over console access via VI Client; it makes testing without a SAN so much easier. I tend to take my work home with me sometimes because of these enhancements over the ESX environment and then repackage the VMs to take back to work.
As to the specific workload on these I cannot go into much detail for the reason I just listed. I can say that the two Windows Server builds run SQL and JBoss for JRE application hosting. There are also equivalent versions running on Ubuntu and RedHat. These communicate with clients running Windows XP, Windows Vista, Windows 7, and Ubuntu desktops. I'm keeping this high-level for a reason. I do run copies of Windows and Linux for my own enjoyment as well, such as an Ubuntu web browser that reverts to snapshot on reboot for more secure browsing, a Windows 7 machine to get familiar with the upcoming OS (I learned Linux running it as a VM as well), and a thin build called Untangle which is one of the best standalone firewalls I have come across. I'm also chipping away at an OSX VM (OSX is unsupported in VMware) which requires some custom KEXTs for graphics acceleration, similar to what VMware Tools does for most other guest OSes. -
I think enough has been explained to get the general idea. Speed, size, and redundancy. Clearly some cash has been spent, with extra money for extra speed.
Three drive stripe with hot spare and cold spare on the shelf. Put the V-disks on the array, you paid the money for the hi-speed, might as well use it. I would not sell the fifth drive, unless you intend to upgrade the array again in the next few years. Keep your current drive out of the system after cloning it to the array. Keep it as a fallback for a while.
I would also run some live tests so as to get familiar with replacing a failed drive and rebuilding the array with both hot and cold spares. Good to have this down pat before you Need to Know; it also gives you a real nice warm fuzzy. The software has options both to re-create the array with all data safe and sound, and to re-create a nice, clean, blank array; that's a distinction you want to be real clear on.
Most hi-end Adaptecs have significant caching RAM as well as their own processor. Depending on the type of HD access, the performance improvement can be dramatic. Almost gives me wood.
For those who don't know, comparing such a controller to most on-board RAID controllers is like comparing a racehorse to a plowhorse. Some mobo RAID in a simple performance stripe are actually slower than the standalone drives. Many expensive ones are not much better. Adaptecs are almost always tops in class. -
Originally Posted by Nelson37
I wanted more than the 6 onboard SATA devices and got a great price on the Adaptec (with the intention of moving it to a storage array when the 5396 retires), but I just didn't feel right using it to simply host 4 more SATA drives. And FWIW I got a killer deal on the four Raptors as well, otherwise I wouldn't be entertaining such a lofty idea. At first I was simply going to replace the 150GB with a 300GB Raptor but then I saw these others for sale for less than the 300GB version and thought it was the perfect opportunity. And they're all still under warranty for another 2 years.