VideoHelp Forum




  1. contrarian rallynavvie
    Join Date: Sep 2002
    Location: Minnesotan in Texas
    I'm trying to decide how best to set up my new HDD configuration in my primary workstation. Here are the vital stats:
    5x 150GB WD Raptor 10krpm SATA
    4x 1TB WD Green 7200rpm SATA
    6-ports onboard (Intel SATA)
    4-ports PCI-E x8 controller (Adaptec 5405 SAS/SATA)

    Currently only one of those five Raptors is installed. I plan on leaving the storage drives as-is on the onboard controller in JBOD. The 5405 can run up to 256 drives, but I don't have backplanes for all that, just a 4-port SATA/SAS backplane with dedicated cooling. The case holds 6 other HDDs (where I plan on leaving the storage drives).

    So with the five Raptors would I be better off doing:
    RAID 1+0 with four
    RAID 5 across four
    RAID 6 across four
    RAID 5 across three with a hot spare

    There are advantages and disadvantages to each, but since I have a 5th drive available I'm thinking the RAID 10 option would give me the best performance. I generally use RAID 5 for systems at work (like my VMware View system) since they're running on 15k SAS drives, but these are only 10k SATA so I'm not sure I want to have that parity stripe running in tandem on them. This system is mostly dedicated to hosted VMs now since I don't do much video work, but I've been wanting to better utilize the Adaptec controller so I thought I'd give it a try. The RAID would be used for my boot/apps volume. I'm hoping to have the drives here this week and re-install Windows next weekend.
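A rough side-by-side of the four options can be sketched in a few lines (illustrative only: it assumes the 150GB Raptors, ignores array metadata overhead, and "tolerates" means guaranteed survivable failures):

```python
# Usable capacity and guaranteed fault tolerance for the candidate layouts.
# Assumes identical 150GB drives; real arrays lose a little to metadata.
DRIVE_GB = 150

def raid10(n):  # striped mirrors: half the drives hold copies
    return (n // 2) * DRIVE_GB, 1   # always survives at least 1 failure

def raid5(n):   # one drive's worth of capacity goes to parity
    return (n - 1) * DRIVE_GB, 1

def raid6(n):   # two drives' worth goes to dual parity
    return (n - 2) * DRIVE_GB, 2

options = {
    "RAID 10 (4 drives)":        raid10(4),
    "RAID 5 (4 drives)":         raid5(4),
    "RAID 6 (4 drives)":         raid6(4),
    "RAID 5 (3 drives + spare)": raid5(3),
}
for name, (gb, tol) in options.items():
    print(f"{name}: {gb}GB usable, tolerates {tol} failure(s)")
```

RAID 10 can survive a second failure only if it lands in the other mirror pair, which is why its guaranteed tolerance is listed as 1.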
    FB-DIMM are the real cause of global warming
  2. Member
    Join Date: Feb 2009
    Location: United States
    Originally Posted by rallynavvie
    I'm trying to decide how best to set up my new HDD configuration in my primary workstation. Here are the vital stats:
    5x 150GB WD Raptor 10krpm SATA
    4x 1TB WD Green 7200rpm SATA
    6-ports onboard (Intel SATA)
    4-ports PCI-E x8 controller (Adaptec 5405 SAS/SATA)

    Currently only one of those five Raptors are installed. I plan on leaving the storage drives as is on the onboard controller in JBOD. The 5405 can run up to 256 drives but I don't have backplanes for all that, just a 4-port SATA/SAS backplane with dedicated cooling. The case holds 6 other HDDs (where I plan on leaving the storage drives).

    So with the five Raptors would I be better off doing:
    RAID 1+0 with four
    RAID 5 across four
    RAID 6 across four
    RAID 5 across three with a hot spare

    There are advantages and disadvantages to both, but since I have a 5th drive available I'm thinking the RAID 10 option would give me the best performance. I generally use RAID 5 for systems at work (like my VMware View system) since they're running on 15k SAS drives, but these are only 10k SATA so I'm not sure I want to have that parity stripe running in tandem on them. This system is mostly dedicated to hosted VMs now since I don't do much video work, but I've been wanting to better utilize the Adaptec controller so I thought I'd give it a try. The RAID would be used for my boot/apps volume. I'm hoping to have the drives here this week and re-install Windows next weekend.
    Do a 4-disk RAID 0 for fun and post some benches lol

    ocgw

    peace
    i7 2700K @ 4.4Ghz 16GB DDR3 1600 Samsung Pro 840 128GB Seagate 2TB HDD EVGA GTX 650
    https://forum.videohelp.com/topic368691.html
  3. Member edDV
    Join Date: Mar 2004
    Location: Northern California, USA
    You haven't mentioned your goals or software you want to run.

    Why all the Raptors? Are you trying to edit uncompressed video? Expect high noise.
    Recommends: Kiva.org - Loans that change lives.
    http://www.kiva.org/about
  4. Video Restorer lordsmurf
    Join Date: Jun 2003
    Location: dFAQ.us/lordsmurf
    Why?
    Want my help? Ask here! (not via PM!)
    FAQs: Best Blank DiscsBest TBCsBest VCRs for captureRestore VHS
  5. Member
    Join Date: Feb 2009
    Location: United States
    Honestly, I would use (2) RAID0 arrays for super fast processing from 1 array to the other (I did that once on a Gigabyte mobo w/ (2) RAID controllers)

    w/ the OS on a partition @ the head of 1 of the arrays

    ocgw

    peace
    i7 2700K @ 4.4Ghz 16GB DDR3 1600 Samsung Pro 840 128GB Seagate 2TB HDD EVGA GTX 650
    https://forum.videohelp.com/topic368691.html
  6. contrarian rallynavvie
    Join Date: Sep 2002
    Location: Minnesotan in Texas
    Anyone who mentions simple RAID 0 can GTFO

    Seriously, I don't want to increase the risk to my data which is exactly what RAID 0 does. It is a cheap, risky method of increasing performance which I do not condone for anything other than simple scratch volumes.

    I mentioned my use case was with VMs. I also mentioned I wanted to make use of my Adaptec controller since the dual-core processor it has on board is doing nothing right now. Noise is going to be neutral or perhaps less since my old U320 SCSI drives are coming out and the additional Raptors are going in.

    I'm looking for server admins and engineers who have used systems with each of the above-mentioned setups to give feedback on their experiences.
    FB-DIMM are the real cause of global warming
  7. Originally Posted by edDV
    You haven't mentioned your goals
    Bragging rights.
  8. Have only used a few of the different RAID levels and can never keep the numbers straight without looking it up.

    Love Adaptec cards. Hate non-standard controllers. Mirroring just seems silly: the secondary drive has the same wear and tear by the time the primary fails, plus it's slow. The hot spare is the way to go.

    Striping benefit falls off with more drives; 2 or 3 seems most effective, and four didn't seem to be much faster. Hot spare is a gift from God.

    Just to use all 5, I would go with a 3-way stripe, separate parity drive, and hot spare. Then buy 2 or 3 replacement drives for the shelf. Or a 2-way stripe, hot spare, and the other two for the shelf.

    Did I mention how much I really, really like the hot spare?

    Also do not forget to check, double-check, and then check again that there are NO compatibility problems between the chosen controller and the drives. This is a nightmare you do NOT want to go through. Had an array that chewed through 6-8 drives over two months before one of the makers admitted to such a problem. Much unhappiness.
  9. I would do RAID 5 and save one as a hot spare... However, are you sure your onboard controller can do hot swapping?

    Also assuming that the one raptor is for your OS, why would you do that?
    I would put 2 raptors as RAID1 for OS
    I would put 3 raptors as RAID5 for whatever
    I would put the 4 1TB's as RAID5 for whatever
    tgpo famous MAC commercial, You be the judge?
    Originally Posted by jagabo
    I use the FixEverythingThat'sWrongWithThisVideo() filter. Works perfectly every time.
  10. contrarian rallynavvie
    Join Date: Sep 2002
    Location: Minnesotan in Texas
    Originally Posted by stiltman
    However, are you sure your onboard controller can do hot swapping?
    I'm not planning on using the onboard for any RAID other than the JBOD I use for the storage drives. The Adaptec controller does just about anything you could ever want it to do.

    Unfortunately the breakout cable I have is 4-device, thus using the 4-drive backplane. The 5th Raptor would be kept around as a spare or sold to a friend. 4-drive RAID 5 with a hot spare would be ideal but I just can't seem to find the mSAS expanders that multiply out to more SATA devices. I thought the only way for that controller to support 256 drives was with backplane expanders?

    I think I've exempted RAID 6 from the options since I don't need that much security. My expectation was that drive failures would be infrequent enough that, if one happened, the rest would last until the RMA returned. With the hot spare I don't think there's any performance hit since it takes over for the dead drive, right? Then when I get the RMA return it essentially becomes the new hot spare when I install it?
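That spare-rotation flow (the spare is promoted into the array on failure, and the RMA return becomes the new spare) can be sketched as a toy model; the class and method names are illustrative, not any controller's actual API:

```python
# Toy model of a RAID 5 set with a hot spare: on a failure the spare joins
# the array (and is rebuilt onto), and the replacement drive from the RMA
# simply becomes the new hot spare.
class Raid5WithSpare:
    def __init__(self, members, spare):
        self.members = list(members)
        self.spare = spare

    def fail(self, drive):
        # promote the hot spare into the failed drive's slot
        self.members[self.members.index(drive)] = self.spare
        self.spare = None

    def insert_replacement(self, drive):
        # the RMA return takes over as the new hot spare
        self.spare = drive

array = Raid5WithSpare(["d1", "d2", "d3"], spare="d4")
array.fail("d2")                 # d4 is now an array member
array.insert_replacement("d5")   # d5 is the new hot spare
```

One caveat: the array does run degraded while the rebuild onto the spare is in progress, so there is a temporary performance hit during the rebuild even though there is no downtime.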
    FB-DIMM are the real cause of global warming
  11. Member
    Join Date: Feb 2009
    Location: United States
    Originally Posted by rallynavvie
    Anyone who mentions simple RAID 0 can GTFO

    Seriously, I don't want to increase the risk to my data which is exactly what RAID 0 does. It is a cheap, risky method of increasing performance which I do not condone for anything other than simple scratch volumes.

    I mentioned my use case was with VMs. I also mentioned I wanted to make use of my Adaptec controller since the dual-core processor it has on board is doing nothing right now. Noise is going to be neutral or perhaps less since my old U320 SCSI drives are coming out and the additional Raptors are going in.

    I'm looking for server admins and engineers who have used systems with each of the above-mentioned setups to give feedback on their experiences.

    If you do regular backups, what is the harm??????

    duh!!!!!!!!!

    properly cooled, properly powered systems don't lose drives anyway, unless the drives are chit

    ps. @ ANY point did you indicate that your drives would be ANYTHING other than scratch drives? Which is all I do, other than OS, w/ sub-terabyte drives

    pfffffff........

    ocgw

    peace
    i7 2700K @ 4.4Ghz 16GB DDR3 1600 Samsung Pro 840 128GB Seagate 2TB HDD EVGA GTX 650
    https://forum.videohelp.com/topic368691.html
  12. contrarian rallynavvie
    Join Date: Sep 2002
    Location: Minnesotan in Texas
    Originally Posted by ocgw
    If you do regular backups what is the harm??????????????????????????????????????????????

    duh!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
    Backups aren't the problem. On average the backup of my OS disk (currently just a single 150GB drive) takes roughly an hour to complete. It also takes roughly an hour to re-install. That isn't a big deal. The major impact is downtime and in-flight corruption. That system runs 24/7, so if it goes down I lose whatever I'm currently working on (in-flight) and I also lose the time until I return to it and get the system restored (downtime). Worst case, the DT occurs right as I'm leaving for work or right as I'm headed to bed. That's roughly 8 hours of possible DT, a third of a day wasted.

    With a proper RAID solution this risk is greatly decreased and there are still performance gains over a single drive solution. In fact with all of the solutions I mentioned there is zero downtime and the rebuilds happen in-flight so that my OS and everything never realizes anything is amiss.

    Everyone jumps instantly on the RAID 0 bandwagon because it's a cheap solution. You can start with only 2 drives and most controllers support it. More people should be looking into RAID 5 instead, since it only requires one more drive and a controller that supports it, but at much less risk than the RAID 0 solution. Performance is more than just benchmark numbers; uptime plays a big factor in the performance of my machines.
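The risk argument can be put in rough numbers, assuming independent failures and a purely illustrative 3% annual per-drive failure rate (real rebuild windows make RAID 5 somewhat worse than this back-of-the-envelope figure):

```python
# Rough annual data-loss probability under independent drive failures.
# p is an illustrative per-drive annual failure rate, not a measured one.
def raid0_loss(p, n):
    # any single failure loses the whole stripe set
    return 1 - (1 - p) ** n

def raid5_loss_approx(p, n):
    # data loss needs 2+ failures; ignores the rebuild window, which
    # makes the real number somewhat worse
    survive = (1 - p) ** n + n * p * (1 - p) ** (n - 1)
    return 1 - survive

p = 0.03
print(f"RAID 0, 4 drives: {raid0_loss(p, 4):.2%} chance of loss per year")
print(f"RAID 5, 4 drives: {raid5_loss_approx(p, 4):.2%} chance of loss per year")
```

Under these assumptions RAID 0 across four drives is roughly 20x more likely to lose data in a year than RAID 5 across the same drives.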
    FB-DIMM are the real cause of global warming
  13. Originally Posted by ocgw
    properly cooled, properly powered systems don't lose drives anyway, unless the drives are chit
    Tell that to Google. http://labs.google.com/papers/disk_failures.pdf
  14. Member
    Join Date: Feb 2009
    Location: United States
    Originally Posted by rallynavvie
    Originally Posted by ocgw
    If you do regular backups what is the harm??????????????????????????????????????????????

    duh!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
    Backups aren't the problem. On average the backup of my OS disk (currently just a single 150GB drive) takes roughly an hour to complete. They also take roughly an hour to re-install. That isn't a big deal. The major impact is downtime and in-flight corruptions. That system runs 24/7 so if it goes down I lose whatever I'm currently working on (in-flight) and I also lose the time until I return to it and get the system restored (downtime). Worst case the DT occurs right as I'm leaving for work or right as I'm headed to bed. That's roughly 8 hours of DT that are possible, a third of a day wasted.

    With a proper RAID solution this risk is greatly decreased and there are still performance gains over a single drive solution. In fact with all of the solutions I mentioned there is zero downtime and the rebuilds happen in-flight so that my OS and everything never realizes anything is amiss.

    Everyone jumps instantly to the RAID 0 bandwagon because it's a cheap solution. You can start with only 2 drives and most of the controllers support it. More people should be looking into RAID 5 instead since it only requires on more drive and a controller to support it, but at much less risk as the RAID 0 solution. Performance is more than just benchmark numbers, uptime plays a big factor in performance of my machines.
    Once again you did NOT say what you were going to do w/ the drives

    And why would you need 150GB for your OS, let alone multiple 150GB drives?

    Cheap? Bandwagon? I have more than a dozen drives too, rally. I use multiple RAID0 arrays to quickly process (demux-remux) data

    You forget who you are talkin' to? I have 15K SAS drives, 10K Raptors, and an LSI Logic controller too, and a big ass pile of TB+ drives

    btw, how often do you lose drives? If you lose drives on a regular basis you need to get to the root of the flaw in your workstation design theory

    You know what? You are a freakin' big ass mofo "big shot"; buy 1 or 2 more Raptors and go RAID5 to RAID5 for superior thruput and redundancy

    ocgw

    peace
    i7 2700K @ 4.4Ghz 16GB DDR3 1600 Samsung Pro 840 128GB Seagate 2TB HDD EVGA GTX 650
    https://forum.videohelp.com/topic368691.html
  15. contrarian rallynavvie
    Join Date: Sep 2002
    Location: Minnesotan in Texas
    Originally Posted by rallynavvie
    The RAID would be used for my boot/apps volume.
    From my first post
    FB-DIMM are the real cause of global warming
  16. Video Restorer lordsmurf
    Join Date: Jun 2003
    Location: dFAQ.us/lordsmurf
    I don't know that you'll get much of a boost. In fact, in my experience, RAID slows down apps.
    Want my help? Ask here! (not via PM!)
    FAQs: Best Blank DiscsBest TBCsBest VCRs for captureRestore VHS
  17. contrarian rallynavvie
    Join Date: Sep 2002
    Location: Minnesotan in Texas
    Originally Posted by lordsmurf
    I don't know that you'll get much of a boost. In fact, in my experience, RAID slows down apps.
    I've heard that sometimes with concurrent writes but most engineers I talk to say that's usually due to implementation or the controller hardware. I'd be curious to see CPU cycles on systems like that to see if the system isn't being taxed (thus taking away from the applications).
    FB-DIMM are the real cause of global warming
  18. Member
    Join Date: Feb 2009
    Location: United States
    Originally Posted by lordsmurf
    I don't know that you'll get much of a boost. In fact, in my experience, RAID slows down apps.
    If RAID slows down your apps you have the wrong cluster and stripe sizes

    Cluster and stripe size have to be tailored for the job the RAID array is performing

    OK, let's get down to "nuts and bolts"

    I have done "exhaustive experimentation" w/ stripe and cluster sizes in RAID arrays

    You want to format w/ large clusters and use large stripes for video files, and small clusters and stripes for the many small files of an OS. If you use large clusters and stripes w/ an OS you lose performance and storage space because of "slack space"

    32-64KB stripes w/ 4KB clusters for OS; 128KB-1024KB stripes for video
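The slack-space point can be illustrated with a quick calculation (the file and cluster sizes here are examples, not tuned recommendations):

```python
# Slack space: every file occupies a whole number of clusters, so part of
# its last cluster is wasted. Small clusters suit many small OS files;
# large clusters only pay off for a few huge video files.
import math

def slack_bytes(file_size, cluster_size):
    return math.ceil(file_size / cluster_size) * cluster_size - file_size

# e.g. 10,000 small OS files of ~6KB each:
n_files = 10_000
for cluster in (4_096, 65_536):
    wasted = n_files * slack_bytes(6_000, cluster)
    print(f"{cluster // 1024}KB clusters: ~{wasted / 1e6:.0f}MB wasted")
```

With 4KB clusters those files waste about 22MB; with 64KB clusters the same files waste nearly 600MB, which is the "slack space" cost described above.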

    You never said what apps. What apps need a 750GB volume?

    So you are going to seriously use 5 Raptors in a big 750GB RAID array for your "boot and apps"? That is a lot of apps

    In my opinion it would be just ridiculous to use 5 Raptors for "boot/apps"; once programs are loaded into memory they run off the memory and CPU, not the drives

    You will get an extremely fast boot, but it won't really make your system feel that much faster. Now, if you are processing large video files going from 1 high-performance RAID array to another, that makes sense to me

    ocgw

    peace
    i7 2700K @ 4.4Ghz 16GB DDR3 1600 Samsung Pro 840 128GB Seagate 2TB HDD EVGA GTX 650
    https://forum.videohelp.com/topic368691.html
  19. Member edDV
    Join Date: Mar 2004
    Location: Northern California, USA
    If you're ingesting SMPTE-292M HD video off an SDI card you should put all the Raptors on the SDI as RAID 0 or RAID 5/6 and test transfer. Maybe you have enough Raptors for two realtime RAIDs*. The OS/apps will be happy with a single ATA/SATA drive (backed up). Why is a fast boot a need? A proper install will stay up a week or more.

    This is a video forum not a bank transaction server forum.

    * This will require realtime software.
    Recommends: Kiva.org - Loans that change lives.
    http://www.kiva.org/about
  20. Originally Posted by edDV
    Why is fast boot a need?

    LOL it will be a HORRENDOUSLY slow boot, but that probably doesn't matter if it's a workstation, or he's using workstation type workflows, he might be booting once a month

    I had an Adaptec 5805 with a similar dually setup; the initialization alone was 30-40 sec. Most server boards have 20-30 sec initialization as well, in addition to the controller initialization. Not sure about the Tyan he has; I use Supermicro for my builds, and they all boot slow. So if you are doing this for "boot reasons", forget about it and ditch the controller

    @rallynavvie - What types of workloads and apps will you be using?
  21. Member edDV
    Join Date: Mar 2004
    Location: Northern California, USA
    I still don't understand what work this server/workstation is doing. As said above by someone, you design the system to the problem you want to solve.
    Recommends: Kiva.org - Loans that change lives.
    http://www.kiva.org/about
  22. Originally Posted by edDV
    I still don't understand what work this server/workstation is doing.
    That is a VERY relevant question and it has been asked a few times
  23. Member
    Join Date: Feb 2009
    Location: United States
    Originally Posted by poisondeathray
    Originally Posted by edDV
    Why is fast boot a need?

    LOL it will be a HORRENDOUSLY slow boot, but that probably doesn't matter if it's a workstation, or he's using workstation type workflows, he might be booting once a month

    I had an Adaptec 5805 with similar dually setup, the initialization alone was 30-40sec. Most server boards have 20-30 sec initalization as well in addition to the controller initialization. Not sure about the Tyan he has, I use Supermicro for my builds, and they all boot slow. So if you are doing this for "boot reasons", forget about it and ditch the controller

    @rallynavie - What types of workloads and apps will you be using?
    Host bus controller cards are notorious for slow initialization, I should know, I have an LSI Logic MegaRAID SAS 8204ELP. But if he is loading multiple VMs into 16-32GB of RAM it might make for overall fast booting; still a waste of drives if you ask me

    ocgw

    peace
    i7 2700K @ 4.4Ghz 16GB DDR3 1600 Samsung Pro 840 128GB Seagate 2TB HDD EVGA GTX 650
    https://forum.videohelp.com/topic368691.html
  24. Member edDV
    Join Date: Mar 2004
    Location: Northern California, USA
    So his goal is to load several virtual OSes? Strange requirement.
    Recommends: Kiva.org - Loans that change lives.
    http://www.kiva.org/about
    Quote Quote  
  25. contrarian rallynavvie
    Join Date: Sep 2002
    Location: Minnesotan in Texas
    Originally Posted by rallynavvie
    This system is mostly dedicated to hosted VMs now since I don't do much video work, but I've been wanting to better utilize the Adaptec controller so I thought I'd give it a try.
    Also from my first post, folks. Maybe I shouldn't use paragraphs for things anymore

    I know boot speed is out the window with this controller; it takes several seconds per drive for init, just like the onboard Intel controller does. This system only reboots when required, and I tend to batch my system updates monthly, since stupid Windows updates seem to require rebooting more often than not these days

    My single 150GB is getting full, running at about 15GB remaining, and I don't like running drives over 90% capacity (which it's obviously past now). I could free up several GB here and there by cleaning off some games that I don't play as much anymore, but I tend to only clean them off when I'm sick of them, so I did need something bigger than 150GB at some point. Adobe Master Collection takes up about a tenth of that disk, too. I don't need 750GB, which is another reason why I'm looking at a RAID 10 or RAID 5 solution.

    300GB should be enough for quite a while, but 450GB makes me consider moving my vdisks to this volume as well to see how it handles them. Right now the vdisks are on the storage drives, but I have more vdisks than drives, so essentially there are multiple OSes and their applications running on each 7200rpm SATA drive. There are many times you can hear them winding up for intensive I/O as the VMs vie for bandwidth. I'm thinking of moving the most active VMs to the RAID volume to see what happens. Most of those use 40GB vdisks (though I may not pre-allocate them if the new volume offers up enough performance).
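As a quick sanity check of the sizing question, here's how many pre-allocated 40GB vdisks each candidate volume could hold alongside the OS; the 135GB OS footprint is an assumption based on the nearly full 150GB drive:

```python
# How many pre-allocated 40GB vdisks fit after reserving room for the OS?
# The 135GB OS/apps footprint is an assumed figure, not a measured one.
VDISK_GB = 40
OS_GB = 135

def vdisks_that_fit(volume_gb):
    return max(0, (volume_gb - OS_GB) // VDISK_GB)

# 300GB = RAID 10 of four 150GB drives; 450GB = RAID 5 of four
for vol in (300, 450):
    print(f"{vol}GB volume: room for {vdisks_that_fit(vol)} vdisks")
```

So the RAID 5 layout leaves room for roughly seven pre-allocated vdisks next to the OS, versus four on the RAID 10 layout, under these assumptions.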
    FB-DIMM are the real cause of global warming
  26. Member
    Join Date: Feb 2009
    Location: United States
    Originally Posted by edDV
    So his goal is to load several virtual OS? Strange requirement.
    We still have no idea what his VMs are doing and what apps they run. He insinuates we can't read when he in fact "doesn't get it": the apps his VMs run and the file sizes they use dictate what type of RAID is best and how it should be "fine tuned" by cluster size and stripe size

    I will pose the question as simply as humanly possible

    1. Why are you hosting VM's?

    2. What programs do your VM's run?

    I admit to being curious for some time

    ocgw

    peace
    i7 2700K @ 4.4Ghz 16GB DDR3 1600 Samsung Pro 840 128GB Seagate 2TB HDD EVGA GTX 650
    https://forum.videohelp.com/topic368691.html
  27. contrarian rallynavvie
    Join Date: Sep 2002
    Location: Minnesotan in Texas
    I don't need granular RAID settings, just pros and cons between the array configurations I have listed. Fine tuning comes later but the array configuration is the more important consideration now.

    Many of those VMs sit idle or have little to no disk usage, but they get rebooted or recompiled on occasion, which can slow down any others running on the same logical disk. Most of the performance impact is mitigated by memory since that's usually where VM performance lies (at least with VMware products).

    Ideally I would offload these to an ESX box like I run at work, but I don't really want to run more than one machine at home if I don't have to. I also prefer Workstation's host integration over console access via VI Client; it makes testing without a SAN so much easier. I tend to take my work home with me sometimes because of these enhancements over the ESX environment, and then repackage the VMs to take back to work.

    As to specific workload on these I cannot go into much detail, for a reason I just listed. I can say that the two Windows Server builds run SQL and JBoss for JRE application hosting. There are also equivalent versions running on Ubuntu and RedHat. These communicate with clients running Windows XP, Windows Vista, Windows 7, and Ubuntu desktops. I'm keeping this high-level for a reason. I do run copies of Windows and Linux for my own enjoyment as well, such as an Ubuntu web browser VM that reverts to a snapshot on reboot for more secure browsing, a Windows 7 machine to get familiar with the upcoming OS (I learned Linux running it as a VM as well), and a thin build called Untangle, which is one of the best standalone firewalls I have come across. I'm also chipping away at an OSX VM (OSX is unsupported in VMware), which requires some custom KEXTs for graphics acceleration similar to what VMware Tools does for most other guest OSes.
    FB-DIMM are the real cause of global warming
  28. Member
    Join Date: Feb 2009
    Location: United States
    Originally Posted by rallynavvie
    I don't need granular RAID settings, just pros and cons between the array configurations I have listed. Fine tuning comes later but the array configuration is the more important consideration now.

    Many of those VMs sit idle or have little to no disk usage, but they get rebooted or recompiled on occasion which can slow down any others running on the same logical disk. Most of the performance is mitigated by memory since that's usually where VM performance lies (at least with VMware products).

    Ideally I would offload these to an ESX box like I run at work but I don't really want to run more than one machine at home if I don't have to. I also prefer Workstation's host integration over console access via VI Client, it makes testing without a SAN so much easier. I tend to take my work home with me sometimes because of these enhancements over the ESX environment and then repackage the VMs to take back to work.

    As to specific workload on these I cannot go into much detail due to a reason I just listed. I can say that the two Windows Server builds run SQL and JBoss for JRE application hosting. There are also very equivalent versions running on Ubuntu and RedHat. These communicate with clients running Windows XP, Windows Vista, Windows 7, and Ubuntu desktops. I'm keeping this high-level for a reason. I do run copies of the Windows and Linux for my own enjoyment as well such as an Ubuntu web browser that reverts to snapshot on reboot for more secure browsing, a Windows 7 machine to get familiar with the upcoming OS (I learned Linux running it as a VM as well), and a thin build called Untangle which is one of the best standalone firewalls I have come across. I'm also chipping away at an OSX VM (OSX is unsupported in VMware) which requires some custom KEXTs for graphics acceleration similar to what VMware Tools does for most other guest OS.
    I give up, maybe if he told us what he is doing he would have to kill us lol

    ocgw

    peace
    i7 2700K @ 4.4Ghz 16GB DDR3 1600 Samsung Pro 840 128GB Seagate 2TB HDD EVGA GTX 650
    https://forum.videohelp.com/topic368691.html
  29. I think enough has been explained to get the general idea. Speed, size, and redundancy. Clearly some cash has been spent, with extra money for extra speed.

    Three-drive stripe with hot spare and cold spare on the shelf. Put the vdisks on the array; you paid the money for the high speed, might as well use it. I would not sell the fifth drive unless you intend to upgrade the array again in the next few years. Keep your current drive out of the system after cloning it to the array. Keep it as a fallback for a while.

    I would also run some live tests so as to get familiar with replacing a failed drive and rebuilding the array, with both hot and cold spares. Good to have this down pat before you Need to Know; it also gives you a real nice warm fuzzy. The software has options both to re-create the array with all data safe and sound, and to re-create a nice, clean, blank array; that's a distinction you want to be real clear on.

    Most hi-end Adaptecs have significant caching RAM as well as their own processor. Depending on the type of HD access, the performance improvement can be dramatic. Almost gives me wood.

    For those who don't know, comparing such a controller to most on-board RAID controllers is like comparing a racehorse to a plowhorse. Some mobo RAIDs in a simple performance stripe are actually slower than the standalone drives. Many expensive ones are not much better. Adaptecs are almost always tops in class.
  30. contrarian rallynavvie
    Join Date: Sep 2002
    Location: Minnesotan in Texas
    Originally Posted by Nelson37
    Most hi-end Acaptecs have significant caching RAM as well as their own processor. Depending on the type of HD access, the performance improvement can be dramatic.
    256MB of RAM, BBU, and dual-core CPU on the 5405. I had read reviews of the stock heatsink not being quite up to the task without significant airflow so the one I picked up had a Zalman passive MCH cooler attached. I wasn't able to use the internal chipset fan that came with my Lian Li V1010 but I'm thinking I could drop it down to push a little more air over that card if it needs it.

    I wanted more than the 6 onboard SATA devices and got a great price on the Adaptec (with the intention of moving it to a storage array when the 5396 retires), but I just didn't feel right using it to simply host 4 more SATA drives. And FWIW I got a killer deal on the four Raptors as well, otherwise I wouldn't be entertaining such a lofty idea. At first I was simply going to replace the 150GB with a 300GB Raptor, but then I saw these others for sale for less than the 300GB version and thought it was the perfect opportunity. And they're all still under warranty for another 2 years
    FB-DIMM are the real cause of global warming