
What is your preferred RAID level?

I'd say it depends, all things considered - but cost can be a huge factor in how much storage space one is willing to give up for gains in either performance or redundancy.

For example, for a nice balance between sheer performance on the one hand and high availability and failover on the other, I might be inclined to choose RAID 10 as a matter of course, given its increased read and write throughput. But once cost is factored in, alternatives like RAID 6 yield much more usable storage at a sacrifice in performance.
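To put rough numbers on that trade-off, here's a minimal sketch of the usable-capacity math, assuming identical disks and the textbook capacity rules (it ignores hot spares, controller overhead, and filesystem formatting):

```python
# Illustrative sketch: usable capacity for RAID 10 vs. RAID 6.
# Simplified - real arrays lose a bit more to metadata and spares.

def usable_tb(level: str, disks: int, size_tb: float) -> float:
    """Usable capacity in TB for a simple array of identical disks."""
    if level == "raid10":
        # Mirrored pairs: half the raw capacity survives.
        return disks // 2 * size_tb
    if level == "raid6":
        # Dual parity: two disks' worth of capacity goes to parity.
        return (disks - 2) * size_tb
    raise ValueError(f"unknown level: {level}")

# Eight 4 TB disks (32 TB raw):
print(usable_tb("raid10", 8, 4.0))  # 16.0 TB usable
print(usable_tb("raid6", 8, 4.0))   # 24.0 TB usable
```

Same shelf of disks, 50% more usable space on RAID 6 - which is exactly why cost pushes people away from RAID 10 despite its performance edge.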

When deploying HA storage schemes in server clusters, we've already invested in greater redundancy and removed the RAID controller - and indeed the server itself - as a single point of failure. But on a single machine, where do your preferences tend to lie after you've weighed cost vs. availability vs. storage volume? Do you strive for a happy medium? Maximum storage? Maximum performance? And which scheme do you find yourself leaning towards more often than not?

Your thoughts?

  • I'm going to go back to the good-ole-days and channel some of my time as a Messaging Administrator.  In the days of Exchange 2013 (yes, those days), Microsoft recommended that you use physical machines with JBOD disk arrays thereby letting the Database Availability Groups (DAGs) be your failover mechanism.  I, personally, loved this idea for a few reasons.

    1. Easier management on the storage side - no RAID to worry about, no costly controllers, and the disks could be whatever size
    2. No 'single point of failure' because the load was shared among different servers.
    3. Better raw-disk-to-data ratio.  What you normally lost in building a RAID 10 (or 1+0 or 0+1) you could use to set up another server hosting copies of those databases, reducing your possible failure domains.
    4. In the event of disk failure, all 'transactions' just moved to another server and you could replace the failed disk and re-sync the changes.
    5. Power savings - mass storage arrays (like those from HP, Dell, and others) are power hogs and throw off a considerable amount of heat.
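A rough sketch of the raw-disk math behind point 3 (the scheme labels and per-copy model here are my illustrative assumptions, not Exchange sizing guidance):

```python
# Illustrative sketch: raw TB consumed per usable TB of data for
# RAID 10 on a single server vs. JBOD servers each holding a full
# DAG database copy.

def raw_per_usable_tb(scheme: str, copies: int = 2) -> float:
    """Raw disk consumed for each usable TB of data."""
    if scheme == "raid10":
        return 2.0            # mirrored pairs, all on one server
    if scheme == "jbod_dag":
        return float(copies)  # one full copy per server, no parity
    raise ValueError(f"unknown scheme: {scheme}")

# Two DAG copies cost the same raw disk as RAID 10, but the
# redundancy now spans two servers instead of one chassis.
print(raw_per_usable_tb("raid10"))       # 2.0
print(raw_per_usable_tb("jbod_dag", 2))  # 2.0
print(raw_per_usable_tb("jbod_dag", 3))  # 3.0
```

At two copies the overhead is a wash, so the JBOD design effectively converts the RAID 10 "tax" into server-level redundancy for free.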

    Still channeling those days, there are some other considerations worth weighing in today's world.

    1. IOPS and latency - if you need incredibly high IOPS, nothing beats a good RAID controller (with a battery- or flash-backed cache) and a bunch of identical disks.
    2. Flash is here to stay - you can still get incredibly high performance using flash (or flash supplemented) arrays.  At my house, I have a NAS with a pair of flash disks to cache for a RAID 5 array and the speeds are nothing to sneeze at for a prosumer-level product.
    3. NVMe has been a game-changer (at least from my personal computing perspective), and I assume the same is true in data center storage as well.
    4. Data center-to-data center replication - people seem to downplay the idea that backups (and/or replication) are also heavy read workloads.  Be sure to scale for those as well.

    "The only constant in IT is change" has been said over and over again - and storage technologies are no exception.  This is where I used to lean heavily on our storage admins.  In our server provisioning process, we didn't fill out a form saying we needed 40GB Boot and 128 GB Data on RAID10, we said, we needed 40GB Boot and 128 GB with 7200 IOPS.  It was up to the storage admins to handle the math side of it.

    All that being said: if money is no object (we all know it is) and you are the one-person IT shop (many of us work with and rely upon other informed, intelligent, trustworthy professionals), then you can't really go wrong with RAID 10.  Just don't think that throwing IOPS at badly written code will fix all of your problems, because it won't - any more than throwing RAM or CPUs at it will.

    Thanks for listening to a non-storage admin speak a little bit about his glory days learning from and working with good storage admins.

Reply
