
What is your preferred RAID level?

I'd say it's all things considered, but cost can be a huge factor in determining how much storage space one is willing to give up in exchange for gains in performance or redundancy.

For example, for a nice balance between sheer performance on the one hand, and high availability and failover on the other, I might be inclined to choose RAID 10 as a matter of course, given its increased read and write capabilities; but once cost is factored in, alternatives like RAID 6 will yield much more storage at a sacrifice in performance.

When deploying HA storage schemes in server clusters, we've already invested in greater redundancy and removed the RAID controller - and, to be sure, even the server - as a single point of failure. But on a single machine, where do your preferences tend to lie after you've weighed cost vs. availability vs. storage volume? Do you strive for a happy medium? Maximum storage? Maximum performance? And which scheme do you find yourself leaning towards more often than not?

Your thoughts?

Parents
  • Different RAID levels for different volumes on the server, depending on the function of each. For example, you might use RAID 1 on the OS partition and RAID 5 on a data partition...and then take things from there if you decide that you want/need higher levels of protection.
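
    A minimal sketch of that mixed-level idea, purely as an illustration - the drive counts and sizes below are assumptions, not recommendations:

    ```python
    # Hypothetical layout: a small RAID 1 mirror for the OS plus a RAID 5 set
    # for data. All figures here are illustrative assumptions.
    OS_DRIVES, OS_SIZE_TB = 2, 0.5        # RAID 1: usable capacity equals one drive
    DATA_DRIVES, DATA_SIZE_TB = 4, 8      # RAID 5: one drive's worth lost to parity

    os_usable = OS_SIZE_TB
    data_usable = (DATA_DRIVES - 1) * DATA_SIZE_TB

    print(f"OS volume   (RAID 1): {os_usable} TB usable of {OS_DRIVES * OS_SIZE_TB} TB raw, survives 1 drive loss")
    print(f"Data volume (RAID 5): {data_usable} TB usable of {DATA_DRIVES * DATA_SIZE_TB} TB raw, survives 1 drive loss")
    ```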

Children
  • I think I would be somewhat hesitant to incorporate RAID 5 on modern storage architectures. In the olden days it was good, but RAID 5 runs into real trouble as volumes approach and exceed the 2TB mark, which would lead me to suggest one of the dual-parity RAID 6 variants instead at those sizes.
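
    To put a rough number on that risk, here's a back-of-the-envelope sketch of the chance of hitting an unrecoverable read error (URE) during a RAID 5 rebuild; the URE spec, drive size, and member count are illustrative assumptions only:

    ```python
    # Chance of hitting at least one unrecoverable read error (URE) while
    # rebuilding a degraded RAID 5 set. All figures below are assumptions.
    URE_RATE = 1e-14          # errors per bit read (a common consumer-drive spec)
    DRIVE_TB = 4              # capacity of each member drive, in TB
    MEMBERS  = 6              # total drives in the RAID 5 set

    # A rebuild must read every surviving drive end to end.
    bits_read = (MEMBERS - 1) * DRIVE_TB * 1e12 * 8

    # Probability that at least one URE occurs during that read.
    p_failure = 1 - (1 - URE_RATE) ** bits_read
    print(f"Chance of a URE during rebuild: {p_failure:.0%}")
    ```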

    Another user above offered a good treatment of larger volumes, and the following resources can also help inform such decisions when deploying infrastructure :)

    https://wintelguy.com/raidmttdl.pl

    https://www.baarf.dk/BAARF/RAID5_versus_RAID10.txt (The BAARF site itself is somewhat amusing, with other articles too).

    Note that in the latter article, the author distinguishes between RAID 0+1 and RAID 1+0 - many admins don't pay much attention to the difference because, functionally, they deliver much the same performance and redundancy as each other, but when it comes to rebuilds, one definitely outshines the other - I'll leave it to the reader to digest that part of the article.
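
    As a teaser for why that distinction matters, here's a toy calculation of how exposed each layout is to a second drive failure, and how much data a rebuild has to resync; the drive count and size are assumptions:

    ```python
    # Toy comparison of RAID 1+0 vs RAID 0+1 once one drive has already failed.
    # Drive count and capacity are illustrative assumptions.
    N = 8                  # total drives, split evenly in both layouts
    DRIVE_TB = 4           # per-drive capacity

    # RAID 1+0 (stripe of mirrors): only the failed drive's mirror partner is
    # fatal, and a rebuild resyncs just that one drive.
    fatal_10, resync_10 = 1 / (N - 1), DRIVE_TB

    # RAID 0+1 (mirror of stripes): the failed drive takes its whole stripe set
    # offline, so any drive in the other stripe set is fatal, and many
    # implementations recopy the entire stripe set on rebuild.
    fatal_01, resync_01 = (N // 2) / (N - 1), DRIVE_TB * (N // 2)

    print(f"RAID 1+0: {fatal_10:.0%} of second failures fatal, ~{resync_10} TB to resync")
    print(f"RAID 0+1: {fatal_01:.0%} of second failures fatal, ~{resync_01} TB to resync")
    ```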

    We're definitely moving away from a model where hardware RAID is the go-to choice for data integrity. Large-capacity SSDs and file systems like ZFS and Btrfs (amongst others) want direct access to the drives, and putting a hardware RAID controller between them introduces a real likelihood of data corruption (software RAID is fine with these file systems). Moreover, as storage continues to become more affordable - with multi-terabyte SSDs commonplace even in consumer hardware and 100TB-class drives appearing at the enterprise end - EC (Erasure Coding) has become the darling of larger infrastructure implementations.

    Here are a couple of links on object- and file-level Erasure Coding:

    https://www.computerweekly.com/feature/Erasure-coding-vs-RAID-Data-protection-in-the-cloud-era

    https://blog.westerndigital.com/jbod-vs-raid-vs-erasure-coding/ 
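
    To make the RAID-vs-EC trade-off concrete, here's an illustrative comparison of usable capacity and failure tolerance; the drive counts and the 8+3 EC profile are assumptions, not recommendations:

    ```python
    # Usable capacity and failure tolerance: RAID 6 and RAID 10 vs a k+m
    # erasure-coded layout. Drive counts and the 8+3 profile are assumptions.
    def raid6(drives):
        return {"usable": (drives - 2) / drives, "tolerates": 2}

    def raid10(drives):
        # Guaranteed to survive one loss; more only if failures hit different mirror pairs.
        return {"usable": 0.5, "tolerates": 1}

    def erasure_coded(k, m):
        # k data shards + m coding shards: any m shards can be lost.
        return {"usable": k / (k + m), "tolerates": m}

    layouts = [("RAID 6, 12 drives", raid6(12)),
               ("RAID 10, 12 drives", raid10(12)),
               ("EC 8+3", erasure_coded(8, 3))]
    for name, layout in layouts:
        print(f"{name:20s} usable {layout['usable']:.0%}, tolerates {layout['tolerates']} failures")
    ```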

    For my two cents, when it comes to RAID, I'm a RAID 10 kinda guy - to me, the loss of available space is [nowadays] negligible when one considers the cost of drives in the marketplace. But I can remember an era when I opted for RAID 5 because I wanted every ounce of available space and couldn't justify RAID 10 (1+0) in the budget, lolz... then again, the term "fool's errand" comes to mind, because I also remember doing low-level RLL formats on MFM drives to squeeze a few extra MBytes out of them too (that's your cue to start flaming me).

    Bottom line: when we consider the MTBF of hardware, we should factor it into rebuild times for arrays - and at some point abandon RAID completely, since it becomes that "fool's errand" at some not-so-arbitrary capacity where rebuild times start to exceed the interval to the next drive failure in the array. When approaching those sizes, EC starts to really shine, IMNSHO :) The first link I posted above can help a lot with such advance planning.
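
    As a rough sketch of that planning exercise - with rebuild throughput, MTBF, and drive count as purely illustrative assumptions - something like this gives a first-order feel for the exposure window:

    ```python
    # First-order estimate of rebuild time and of additional drive failures
    # expected while the rebuild runs. All inputs are illustrative assumptions.
    DRIVE_TB = 20              # capacity of the failed (and replacement) drive
    REBUILD_MBPS = 150         # sustained rebuild throughput
    MTBF_HOURS = 1_200_000     # manufacturer MTBF figure for one drive
    SURVIVING_DRIVES = 11      # drives still exposed during the rebuild

    rebuild_hours = DRIVE_TB * 1e12 / (REBUILD_MBPS * 1e6) / 3600

    # Simple exponential/MTBF model: expected further failures during the window.
    expected_failures = SURVIVING_DRIVES * rebuild_hours / MTBF_HOURS

    print(f"Rebuild window: ~{rebuild_hours:.0f} hours")
    print(f"Expected additional failures in that window: {expected_failures:.4f}")
    ```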

    I would love to hear everyone's thoughts on where they feel those cross-over points might exist for them conceptually.

  • Absolutely agree. I was intending to start with a very basic response and then add more detail, but it looks like a big chunk of that didn't copy/paste correctly, and I was on my phone, so I didn't notice it. I would delete that waste of a post if THWACK would let me! Haha