Mixing RAID0 and RAID1 on the same set of drives

Below is a question about mixing RAID0 and RAID1 on the same set of drives, along with the answers it received. It may be useful if you hit the same problem managing a Linux server. Topics: linux, software-raid, raid1, raid0.

I will soon be installing a new pair of drives in one of our machines, which acts as a VMware host (currently running VMware Server). They will be used as a RAID0 array for a couple of specific VMs that impose a high I/O load when actively in use. The machine hosts a number of reasonably large VMs used for testing purposes.

As there is little point (cost-wise) in getting drives smaller than 500G, the resulting array would be 1000G in size, which is far more than is needed for this purpose, so I am considering using a chunk of the disks as a RAID1 array for storing VM backups and reference copies (freeing some space on the existing RAID1 array).

Would there be any harm in:

  1. splitting the drives into, say, 5 partitions
  2. setting one pair of partitions as the initial RAID0 array
  3. creating an LVM group using this new physical volume
  4. setting one pair of partitions as the initial RAID1 array
  5. creating an LVM group using this new physical volume
  6. when either volume group needs to expand
    1. creating a new R0/R1 array in a free partition pair
    2. expanding the relevant LVM group to include this new physical volume
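Assuming the two new drives appear as /dev/sdb and /dev/sdc (hypothetical names) and have already been split into matching partitions, the steps above might look roughly like this with mdadm and LVM. This is a sketch under those assumptions, not a tested recipe; the md device numbers and volume group names are invented:

```shell
# Step 2: RAID0 array from the first partition pair
mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1

# Step 3: LVM volume group on top of the RAID0 array
pvcreate /dev/md2
vgcreate vg_fastvms /dev/md2        # "vg_fastvms" is an invented name

# Step 4: RAID1 array from the second partition pair
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdb2 /dev/sdc2

# Step 5: LVM volume group on top of the RAID1 array
pvcreate /dev/md3
vgcreate vg_backups /dev/md3

# Step 6: when a volume group needs to grow, build a new array on a
# free partition pair and extend the matching group to include it
mdadm --create /dev/md4 --level=0 --raid-devices=2 /dev/sdb3 /dev/sdc3
pvcreate /dev/md4
vgextend vg_fastvms /dev/md4
```

Remember to record the new arrays in your mdadm configuration file (e.g. via `mdadm --detail --scan`) so they assemble at boot.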

I suspect that all of the above will work perfectly fine, but I was wondering if there are any issues that I’m not aware of. For instance, would splitting the drive into multiple arrays affect the kernel’s ability to cache I/O effectively at all?

I have considered instead rebuilding the machine with a single RAID10 array over all the drives it will end up with, but taking the machine offline for as long as that would take is not an option, and it would not allow the same separation of I/O load that having separate arrays does.

I see no problem in doing this. It’s cost-effective and it solves the issues you’re having, as long as the RAID1 I/O load is only in effect during controlled maintenance windows.

However, a big RAID1 might provide you with a simpler and more efficient setup, depending on the type of load.

> For instance, would splitting the drive into multiple arrays affect the kernel’s ability to cache I/O effectively at all?

As the cache is functioning at the block level, you’ll effectively destroy any cache hits you’re getting by the constant churn. I don’t recommend using same-drive partitions for RAID-anything, unless you’re just doing it to learn about how to set it up (i.e. it’s for experimentation and learning, and is temporary). You’ll regret doing this on a production machine.

> I have considered instead rebuilding the machine with single RAID10 array over all the drives it will end up with…

The merits of RAID 10 are discussed elsewhere on Server Fault.

> …but taking the machine offline for as long as that will take is not an option and it would not allow the same separation of I/O load that having separate arrays does.

Sometimes, it’s “no pain, no gain”; frankly, this is your best option. Make sure your RAID arrangement is bootable. In a 4-drive setup (the minimum for RAID10), you’ll end up with decent protection and decent performance. Otherwise, if you have just 2 or 3 drives, just do RAID1 (mirror) only. If you have the 3-drive arrangement, look at making drive 3 a hot spare.
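The 3-drive mirror-plus-hot-spare arrangement mentioned above can be sketched with mdadm as follows (the device and partition names are assumptions for illustration):

```shell
# Two-disk RAID1 with the third partition as a hot spare;
# mdadm will rebuild onto the spare automatically if a member fails
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      --spare-devices=1 /dev/sda1 /dev/sdb1 /dev/sdc1

# Confirm the spare is registered
mdadm --detail /dev/md0
```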

I have set up RAID0 and RAID1 on the same disks with separate partitions and it works perfectly.
I haven’t measured the performance, though, so I don’t know whether it causes any performance issues.

Why?

RAID is meant to be set up ACROSS DRIVES, not across partitions. Taking a single drive, making partitions, and then RAIDing them provides zero benefit: if the drive goes, so does all your RAID “protection”.

Even playing with RAID across 2 drives (unless you just parallel the two drives) is wasting time and energy for no real benefit.

Drives are cheap, so if you really want/need RAID, then buy more drives and RAID them.

Cheers,

-R
