What is RAID and why should you want it?

From Linux Raid Wiki
Revision as of 15:42, 1 April 2019 by Timothy W. Gravier, Jr.

Forward to Choosing your hardware, and what is a device?


What is(n't) RAID

RAID stands for either Redundant Array of Independent Disks, or Redundant Array of Inexpensive Disks, depending on who you talk to. The general consensus is that "Inexpensive" came first, and "Independent" followed. The intention of RAID is to spread your data across several disks, such that a single disk failure will not lose that data.

What RAID is not, is a substitute for decent backups and a proper monitoring regime. Not all modern RAID layouts will protect you against even a single disk failure, and with today's large drives, failures often cascade from one drive to another. Especially if you buy "inexpensive" disks, which are often not suitable for use in a RAID.

Why should you want RAID

Some forms of RAID allow you to combine disks together to increase the apparent contiguous size. They do not protect your data - indeed, they increase the risk because failure of one drive will lose the data on all the drives. In the author's opinion, unless you combine these with some other form of RAID, you're better off just buying a bigger disk.

Some forms of RAID store multiple copies of the data, so if you lose a disk, you have an identical copy elsewhere. This facility is sometimes used for backups - remove one of the disks from the array and store it safely, replacing it with another disk.

Because storing multiple copies can be very wasteful of space, other forms of raid store parity along with the data, so that if a drive fails, the contents of that drive can be calculated from the other drives.

For most versions of RAID you will see a performance boost. Obviously this depends on a lot of things, but this is another reason for going down the RAID route.

What is available besides RAID?

Modern filesystems such as btrfs have set out to obsolete traditional RAID. They're slowly getting there. At time of writing (2016) btrfs supports combining disks, mirroring, and snapshots (backups). Other features are available but are experimental. The idea is that the more the filesystem knows about the underlying hardware the more it can optimise access to the hardware. The downside is the loss of abstraction this entails.

The RAID levels

Here's a short description of what is supported in the Linux RAID drivers. Some of this is absolutely basic RAID information, but a few notices have been added about what's special in the Linux implementation of the levels. You can safely skip this section if you know RAID already.

The current RAID drivers in Linux support the following levels:

Linear Mode

  • Also known as "span" or "JBOD (Just a Bunch of Disks)".
  • Two or more disks are combined into one volume. The disks are "appended" to each other, so writing linearly to the RAID device will fill up disk 0 first, then disk 1 and so on. The disks do not have to be of the same size. In fact, size doesn't matter at all here. :) The size of the volume will be (roughly) equal to the total size of all the disks.
  • There is no redundancy in this level. If one disk fails, you will most likely lose all your data. You may be able to recover some data if the failed disk is in the middle of the volume, since the filesystem will just be missing one large consecutive chunk of data. Most filesystems save their structure at the beginning and/or end of the volume, so if the first and/or last disk fails, the whole volume is almost assuredly lost.
  • The read and write performance will probably not increase for sequential or random reads/writes. If several users access the volume at the same time and happen to read/write data that is on different disks, the volume may be able to field these requests simultaneously. However, this would be completely by luck.
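
The "appended" behaviour above can be sketched as a simple address mapping. This is an illustrative toy model (names and sizes invented for the example), not how the md driver is actually implemented:

```python
# Toy model of linear ("append") mode: map a logical block number to
# the member disk that holds it. Sizes are in blocks and need not match.
def linear_map(block, disk_sizes):
    """Return (disk_index, block_within_disk) for a logical block."""
    for i, size in enumerate(disk_sizes):
        if block < size:
            return (i, block)
        block -= size
    raise ValueError("block beyond end of array")

# A 3-disk linear array of 100, 50 and 200 blocks:
sizes = [100, 50, 200]
print(linear_map(0, sizes))    # (0, 0)  - first block lands on disk 0
print(linear_map(120, sizes))  # (1, 20) - falls into disk 1
print(linear_map(160, sizes))  # (2, 10) - falls into disk 2
```

Note how a run of consecutive blocks almost always sits on a single disk, which is why concurrent accesses only hit different disks by luck.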

RAID0/Stripe Mode

  • Two or more disks are combined into one volume. Reads/writes are broken into blocks, and these blocks are alternated/cycled through all the disks in the volume.
  • There is no redundancy in this level. The failure of any one disk in a volume will absolutely result in the loss of all data on the volume (unless the failed disk is fixed and the blocks on it are recovered).
  • The devices should (but don't HAVE to) be the same size. Operations on the array will be split across the devices; for example, a large write could be split up as 64 kiB to disk 0, 64 kiB to disk 1, 64 kiB to disk 2, then 64 kiB to disk 0 again, and so on. Writes to each disk will go on at the same time. If one device is much larger than the others, that extra space is still utilized in the RAID device, but you will be accessing this larger disk alone during writes to the high end of your RAID device, which of course hurts performance.
  • Reads/writes to the volume will happen much faster because they are done in parallel across all the disks. This is usually the main reason for running RAID-0. Read/write speed will be very close to the speed of a single disk x the number of disks in the volume. Put another way, if it takes 10 seconds to read/write data to a single disk, then it would only take 2 seconds to read/write the same data to a RAID-0 volume across 5 disks.
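
The alternating/cycling of blocks amounts to a round-robin address mapping. Again a toy model rather than the md implementation, assuming equal-sized disks and a fixed chunk size:

```python
# Toy model of RAID-0 striping: logical chunk c lands on disk c mod N,
# at position c div N within that disk.
def stripe_map(chunk, ndisks):
    """Return (disk_index, chunk_within_disk) for a logical chunk."""
    return (chunk % ndisks, chunk // ndisks)

# With 3 disks, consecutive chunks cycle through disks 0, 1, 2, 0, 1, 2:
layout = [stripe_map(c, 3) for c in range(6)]
print(layout)  # [(0, 0), (1, 0), (2, 0), (0, 1), (1, 1), (2, 1)]
```

Because consecutive chunks land on different disks, a large sequential read or write keeps all spindles busy at once, which is where the speed-up comes from.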

RAID1/Mirror Mode

  • Two or more disks are combined into one volume. Reads/writes are still broken into blocks, but ALL blocks are copied to ALL disks in the volume.
  • This is the first mode which actually has redundancy. This mode maintains an exact copy of the information across all the disks. If any disk in the volume fails (indeed, all disks but one), the data is still available with no performance degradation.
  • The size of the volume will be the size of one of the disks in the volume. If one disk is smaller than another, your volume will be the size of the smallest disk.
  • Write performance is often worse than on a single device, because identical copies of the data must be written to every disk in the array; write speed will be equal to the speed of the slowest disk. There are also performance implications regarding the HDD controller hardware. If the controller is a discrete device (i.e., not packaged into the motherboard), it will do the work of copying all the data and issuing it to all the drives. However, if the controller is packaged on the motherboard (or just leans heavily on a device driver), large writes to a large volume may cause it to slow the whole system down.
  • Read performance is no slower than the slowest disk. Some RAID controllers employ a good read-balancing algorithm that simply lets the disk whose heads are closest to the wanted position perform the read operation. Since seek operations are relatively expensive on modern disks (a seek time of 8 ms equals a read of 640 kB at 80 MB/s), picking the disk with the shortest seek time gives a noticeable performance improvement. Additionally, some RAID drivers/controllers can be configured to read large amounts of data in parallel, as in RAID-0. This is possible because the data is still read in blocks; particularly good controllers can read different blocks from multiple disks at the same time.
  • This mode also allows for a hot-spare disk. In this way, if a disk fails, the HDD controller will immediately drop the failed disk from the volume and bring the hot-spare online. It will then begin the process of "rebuilding" (copying) all data from the good disk to the "new" disk. Once the rebuild process is complete, RAID-1 redundancy is restored.

RAID-4

  • This RAID level is not used very often. It can be used on three or more disks. Instead of completely mirroring the information, it keeps parity information on one drive, and writes data to the other disks in a RAID-0 like way. Because one disk is reserved for parity information, the size of the array will be (N-1)*S, where S is the size of the smallest drive in the array. As in RAID-1, the disks should either be of equal size, or you will just have to accept that the S in the (N-1)*S formula above will be the size of the smallest drive in the array.
  • If one drive fails, the parity information can be used to reconstruct all the data. If two drives fail, all the data is lost.
  • The reason this level is not more frequently used is that the parity information is kept on one drive. This information must be updated every time one of the other disks is written to. Thus the parity disk will become a bottleneck if it is not a lot faster than the other disks. However, if you just happen to have a lot of slow disks and a very fast one, this RAID level can be very useful.
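
The parity in question is a plain XOR across the data disks, which is what makes reconstruction possible: XORing the surviving data blocks with the parity block regenerates the missing one. A minimal sketch (block contents invented for the example):

```python
def xor_blocks(*blocks):
    """XOR equal-sized byte blocks together."""
    out = bytes(len(blocks[0]))
    for b in blocks:
        out = bytes(x ^ y for x, y in zip(out, b))
    return out

d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"   # data disks
parity = xor_blocks(d0, d1, d2)           # stored on the parity disk

# Disk 1 fails: XOR the survivors with the parity to rebuild its contents.
rebuilt = xor_blocks(d0, d2, parity)
assert rebuilt == d1
```

The same XOR scheme is what RAID-5 uses; RAID-5 just spreads the parity blocks around instead of dedicating one disk to them.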

RAID-5

  • This is perhaps the most useful RAID mode when one wishes to combine a larger number of physical disks, and still maintain some redundancy. RAID-5 can be (usefully) used on three or more disks, with zero or more spare disks. The resulting RAID-5 device size will be (N-1)*S, just like RAID-4. The big difference between RAID-5 and -4 is that the parity information is distributed evenly among the participating drives, avoiding the bottleneck problem in RAID-4, and also getting more performance out of the disk when reading, as all drives will then be used.
  • If one of the disks fails, all the data can be recovered, thanks to the parity information. If spare disks are available, reconstruction will begin immediately after the device failure. If two disks fail simultaneously, or a second disk fails before the array has been reconstructed, all the data is lost - RAID-5 can survive one disk failure, but not two or more.
  • Both read and write performance usually increase, but it can be hard to predict by how much. Reads are similar to RAID-0 reads; writes can be either rather expensive (requiring a read-in prior to the write, in order to calculate the correct parity information, as in database operations), or similar to RAID-1 writes (when larger sequential writes are performed, and the parity can be calculated directly from the other blocks to be written). The write efficiency depends heavily on the amount of memory in the machine and the usage pattern of the array. Heavily scattered writes are bound to be more expensive.
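
The "read-in prior to write" mentioned above is the small-write (read-modify-write) path: for a write smaller than a full stripe, the driver reads the old data block and the old parity, XORs the old data out and the new data in, and writes both back, without touching the rest of the stripe. A minimal sketch with invented one-byte blocks:

```python
def update_parity(old_parity, old_data, new_data):
    """Small-write parity update: XOR the old data out of the old
    parity and XOR the new data in, without reading the other disks."""
    return bytes(p ^ o ^ n for p, o, n in zip(old_parity, old_data, new_data))

# A three-disk stripe with one-byte blocks (values invented):
d0, d1, d2 = b"\x01", b"\x02", b"\x04"
parity = bytes(a ^ b ^ c for a, b, c in zip(d0, d1, d2))

# Rewrite d1 without reading d0 or d2:
new_d1 = b"\x0f"
parity = update_parity(parity, d1, new_d1)

# The result matches parity recomputed from scratch over the whole stripe.
assert parity == bytes(a ^ b ^ c for a, b, c in zip(d0, new_d1, d2))
```

This is why scattered small writes are expensive (two reads plus two writes each), while full-stripe sequential writes can compute parity directly from the data being written.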

RAID-6

  • This is an extension of RAID-5 to provide more resilience. RAID-6 can be (usefully) used on four or more disks, with zero or more spare-disks. The resulting RAID-6 device size will be (N-2)*S. The big difference between RAID-5 and -6 is that there are two different parity information blocks, and these are distributed evenly among the participating drives.
  • Since there are two parity blocks, all the data can be recovered if one or two of the disks fail. If spare disks are available, reconstruction will begin immediately after the device failure(s).
  • Read performance is similar to RAID-5, but write performance is worse.

RAID-10

  • RAID-10 is an "in-kernel" combination of RAID-1 and RAID-0 that is more efficient than simply layering RAID levels.
  • RAID-10 has a layout ("far") which can provide sequential read throughput that scales with the number of drives, rather than the number of RAID-1 pairs. You can get about 95% of the performance of a RAID-0 with the same number of drives.
  • RAID-10 allows spare disk(s) to be shared amongst all the RAID-1 pairs.

RAID-1+0, 5+0, 6+0

These are all striped arrays, built on lower arrays. As your arrays get bigger, a layout like this will reduce the stress when a disk fails, as only the lower array needs to be rebuilt. For even greater protection, you could go 1+1, 5+1 or 6+1, which would protect you against a 2 or 3 disk failure, allowing the lower array to be rebuilt if it's destroyed by such a failure.

Just remember that while these are commonly abbreviated as raid-10, raid-50 and raid-60, they are not to be confused with the Linux md raid-10 described above. And you could build, say, a mirror over a stripe, which would be raid-0+1. When discussing complex raid setups, make sure you know which one you are discussing.

FAULTY

  • This is a special debugging RAID level. It only allows one device and simulates low level read/write failures.
  • Using a FAULTY device in another RAID level allows administrators to practice dealing with things like sector failures, as opposed to whole-drive failures.

Which raid is for me?

This really depends on the number, size and type of your drives, and whether speed matters to you.

Two Drives

With two drives, raids 0 and 1 are your only real options. If you want to make maximal use of your drives, go for raid-0, accepting that this has no redundancy and that losing one drive will lose all your data. If you want redundancy, go for raid-1, accepting that this will only give you drive space equal to the smaller drive, but meaning you can lose either drive without losing any data.

Raid-1 will read from whichever disk it thinks will respond fastest, so it provides fast reads. Writes are slower because it has to write to both disks.

Raid-0 striped can read very quickly, because it's reading from two drives. Linear on the other hand may write faster because it's just streaming to one drive. But a lot depends on the size of your individual files.

Three to maybe Eight Drives

This is where it's hardest to decide which raid is best. All the simple raids make sense (well, maybe not raid-0), and raid-10 as well.

With just three drives, you can have a three-way raid-1, which adds the ability to detect corruption (not implemented in md, but with three copies a corrupted disk could in principle be outvoted by majority vote). Raid-5 gives you both increased capacity and the ability to recover from a drive failure at the same time.

Once you add a fourth drive, raid-1 no longer really makes sense. Raid-5 remains a good choice, and raid-6 adds the ability to recover from corruption. Raid-10 is a good choice, and raid-1+0 becomes an option.

Raid-10 is a good option for speed, especially for large files, because the read can be split across several drives.

This is also the point at which defining a drive as a spare starts making sense. Before this, you're better off adding redundancy rather than a spare. And don't add a spare to a raid-5 - go raid-6 instead. Or if one of the drives is an SSD, you might want to define it as your journal, to improve read and write speed.

As you increase the number of drives, raid-6 becomes more and more the only sensible option. The more drives you have, the greater the likelihood that the array will suffer a second failure during a rebuild, and the greater the need for two redundant disks to protect you.
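
The growing risk of a second failure during a rebuild can be illustrated with a toy probability model. Assuming independent failures and a made-up per-drive failure probability over the rebuild window (real drives are neither independent nor identical, which only makes things worse), the chance that at least one survivor fails is:

```python
def second_failure_risk(n_drives, p_rebuild):
    """Probability that at least one of the n_drives - 1 surviving
    disks fails during the rebuild window, assuming independent
    failures with per-drive probability p_rebuild (an assumption)."""
    return 1 - (1 - p_rebuild) ** (n_drives - 1)

# Illustrative per-drive failure chance of 2% over a long rebuild:
for n in (4, 8, 16):
    print(n, round(second_failure_risk(n, 0.02), 3))
```

The risk climbs steadily with the drive count, which is the arithmetic behind preferring two redundant disks as arrays grow.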

At some point a simple raid loses its ability to protect you, the likelihood of a cascade of failures just becomes too much. It's a judgement call at which point this occurs for you, but the need for extra parity disks and spares probably cuts in at about ten drives.

Note that having spare drives will markedly reduce the load on an array during reconstruction provided the drive in question hasn't failed completely. If the drive is accessible but faulty, md will just copy the failing drive to the new drive, hammering the array only when the failing drive is unreadable. If the failed drive is dead, md needs to reconstruct it by hammering the entire array, massively increasing the risk of failure.

More than Eight Drives

At this point you really need a complex raid, i.e. raid-1+0, raid-5+0 or raid-6+0. And spare drives!

The problem here is how do you group your drives. Your best bet is probably raid-60, so you'll want your drives in groups of six to eight in a raid-6. And you won't want to change this after you've set them up! You can then group your raid-6s into a raid-0. The question there is which raid-0 - linear or striped? You're probably better off spreading the load over all the drives, so it's likely to be striped. And you do not want any of your arrays to fail, so you will need several spares configured to take over if a drive dies. You can configure the arrays in a pool so a spare can be automatically allocated as required.
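
The usable capacity of such a layout is easy to calculate: each raid-6 group gives up two drives' worth of space to parity, and striping the groups together adds no further overhead. A quick sketch (the drive counts and sizes are invented for the example):

```python
def raid60_capacity(groups, disks_per_group, disk_size):
    """Usable capacity of a raid-6+0: each raid-6 group loses two
    disks to parity; the raid-0 stripe just concatenates the groups."""
    return groups * (disks_per_group - 2) * disk_size

# Three raid-6 groups of eight 4 TB drives: 24 drives in total.
print(raid60_capacity(3, 8, 4))  # 72 TB usable
```

Growing the array by one matching raid-6 group means buying disks_per_group drives in one hit, which is the "lot of disks at a time" cost mentioned below.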

The big difficulty with a setup like this is that the array will be very hard to grow - you really need to add a complete new matching raid-6 to your stripe set, which means adding a lot of disks in one hit every time. But if you really need an array that big, chances are it is a commercial setup and the cost of the drives is negligible when compared to everything else.

Forward to Choosing your hardware, and what is a device?