RAID setup

General setup

This is what you need for any of the RAID levels:

  • A kernel with the appropriate md support either as modules or built-in. Preferably a kernel from the 2.6 series. Alternatively a stable 2.4 kernel (pre 2.4 kernels are no longer covered in this document).
  • The mdadm tool
  • Patience, Pizza, and your favorite caffeinated beverage.

The first two items are included as standard in most GNU/Linux distributions today.

If your system has RAID support, you should have a file called /proc/mdstat. Remember it; that file is your friend. If that file does not exist, your kernel probably lacks RAID support.

If you are sure your kernel has RAID support, you may need to run modprobe raid[RAID mode] to load the appropriate module. For example, to support RAID-5:

modprobe raid5

See what the file contains by doing a

cat /proc/mdstat

It should tell you that you have the right RAID personality (eg. RAID mode) registered, and that no RAID devices are currently active. See the /proc/mdstat page for more details.
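
With, for example, the raid5 personality loaded and no arrays configured yet, the output might look roughly like this (the exact contents vary between kernel versions):

   Personalities : [raid5]
   unused devices: <none>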

Preparing and partitioning your disk devices

Arrays can be built on top of entire disks or on partitions.

This leads to two frequent questions:

  • Should I use the entire device or a partition?
  • What partition type should I use?

Both are discussed in Partition Types.

Downloading and installing mdadm - the RAID management tool

mdadm is now the standard RAID management tool and should be found in any modern distribution.

You can download the most recent mdadm tarball at http://www.cse.unsw.edu.au/~neilb/source/mdadm/.

Alternatively, install it with your distribution's normal package-management method:

Debian, Ubuntu:

 apt-get install mdadm

Gentoo:

 emerge mdadm

RedHat:

 yum install mdadm

Mdadm modes of operation

mdadm is well documented in its manpage - well worth a read.

   man mdadm

mdadm has 7 major modes of operation. Normal operation just uses the 'Create', 'Assemble' and 'Monitor' commands - the rest come in handy when you're messing with your array; typically fixing it or changing it.

1. Create

Create a new array with per-device superblocks (normal creation).

2. Assemble

Assemble the parts of a previously created array into an active array. Components can be explicitly given or can be searched for. mdadm checks that the components do form a bona fide array, and can, on request, fiddle superblock information so as to assemble a faulty array. Typically you do this in the init scripts after rebooting.

3. Follow or Monitor

Monitor one or more md devices and act on any state changes. This is only meaningful for raid1, 4, 5, 6, 10 or multipath arrays as only these have interesting state. raid0 or linear never have missing, spare, or failed drives, so there is nothing to monitor. Typically you do this after rebooting too.
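
For example, a monitoring run that mails root and polls every half hour could look like this (the mail address and interval are just placeholders):

   mdadm --monitor --mail=root --delay=1800 /dev/md0

mdadm will then report events such as a device failing or a spare being activated.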

4. Build

Build an array that doesn't have per-device superblocks. For these sorts of arrays, mdadm cannot differentiate between initial creation and subsequent assembly of an array. It also cannot perform any checks that appropriate devices have been requested. Because of this, the Build mode should only be used together with a complete understanding of what you are doing.

5. Grow

Grow (or shrink) an array, or otherwise reshape it in some way. Currently supported growth options include changing the active size of component devices in RAID levels 1/4/5/6 and changing the number of active devices in RAID-1.
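
As a rough sketch (device names are only examples), growing the component size to the maximum available, or turning a two-disk RAID-1 into a three-way mirror once a spare is present, might look like:

   mdadm --grow /dev/md0 --size=max         # use all available space on the component devices
   mdadm --grow /dev/md0 --raid-devices=3   # RAID-1: make a third device an active mirror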

6. Manage

This is for doing things to specific components of an array such as adding new spares and removing faulty devices.
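
For example (device names are illustrative), adding a new spare and then retiring a failing disk might look like:

   mdadm /dev/md0 --add /dev/sde1                         # add a new device to the array
   mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1     # mark a disk faulty, then remove it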

7. Misc

This is an 'everything else' mode that supports operations on active arrays, operations on component devices such as erasing old superblocks, and information gathering operations.
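
A few typical Misc-mode calls, again with example device names:

   mdadm --detail /dev/md0              # information about an active array
   mdadm --examine /dev/sdb1            # read the superblock of a component device
   mdadm --zero-superblock /dev/sdb1    # erase an old superblock (be careful!)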


Create RAID device

Below we'll see how to create arrays of various types; the basic approach is:

   mdadm --create /dev/md0 <blah>
   mdadm --monitor /dev/md0

Linear mode

Ok, so you have two or more partitions which are not necessarily the same size (but of course can be), which you want to append to each other.

Spare-disks are not supported here. If a disk dies, the array dies with it. There's no information to put on a spare disk.

Using mdadm, a single command like

    mdadm --create --verbose /dev/md0 --level=linear --raid-devices=2 /dev/sdb6 /dev/sdc5

should create the array. The parameters speak for themselves. The output might look like this:

   mdadm: chunk size defaults to 64K
   mdadm: array /dev/md0 started.

Have a look in /proc/mdstat. You should see that the array is running.

Now, you can create a filesystem, just like you would on any other device, mount it, include it in your /etc/fstab and so on.

RAID-0

You have two or more devices, of approximately the same size, and you want to combine their storage capacity and also combine their performance by accessing them in parallel.

    mdadm --create --verbose /dev/md0 --level=stripe --raid-devices=2 /dev/sdb6 /dev/sdc5

Like in Linear mode, spare disks are not supported here either. RAID-0 has no redundancy, so when a disk dies, the array goes with it.

Having run mdadm you have initialised the superblocks and started the raid device. Have a look in /proc/mdstat to see what's going on. You should see that your device is now running.

/dev/md0 is now ready to be formatted, mounted, used and abused.

RAID-1

You have two devices of approximately the same size, and you want the two to be mirrors of each other. You may also have more devices, which you want to keep as stand-by spare disks that will automatically become part of the mirror if one of the active devices breaks.

    mdadm --create --verbose /dev/md0 --level=mirror --raid-devices=2 /dev/sdb1 /dev/sdc1

If you have spare disks, you can add them to the end of the device specification like

    mdadm --create --verbose /dev/md0 --level=mirror --raid-devices=2 /dev/sdb1 /dev/sdc1 --spare-devices=1 /dev/sdd1

Ok, now we're all set to start initializing the RAID. The mirror must be constructed, i.e. the contents (however unimportant now, since the device is still not formatted) of the two devices must be synchronized.

Check out the /proc/mdstat file. It should tell you that the /dev/md0 device has been started, that the mirror is being reconstructed, and an ETA of the completion of the reconstruction.
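
During reconstruction the file might look something like this (the device names, sizes and speeds are purely illustrative):

   Personalities : [raid1]
   md0 : active raid1 sdc1[1] sdb1[0]
         4192896 blocks [2/2] [UU]
         [==>..................]  resync = 12.6% (529024/4192896) finish=3.1min speed=19593K/sec
   unused devices: <none>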

Reconstruction is done using idle I/O bandwidth. So, your system should still be fairly responsive, although your disk LEDs should be glowing nicely.

The reconstruction process is transparent, so you can actually use the device even though the mirror is currently under reconstruction.

Try formatting the device while the reconstruction is running; it will work. You can also mount it and use it while the reconstruction is running. Of course, if the wrong disk breaks during reconstruction, you're out of luck.

RAID-5

You have three or more devices of roughly the same size, and you want to combine them into a larger device while still maintaining a degree of redundancy for data safety. You may also have a number of devices to use as spare disks, which will not take part in the array until another device fails.

If you use N devices where the smallest has size S, the size of the entire array will be (N-1)*S. The "missing" space is used for parity (redundancy) information. Thus, if any single disk fails, all data stays intact; but if two disks fail, all data is lost.

A chunk size of 32 kB is a good default for many general-purpose filesystems of this size. As an example, an array of seven 6 GB disks gives (N-1)*S = (7-1)*6 GB = 36 GB of usable space, holding an ext2 filesystem with a 4 kB block size. You could go higher with both the array chunk-size and the filesystem block-size if your filesystem is either much larger or holds mostly very large files. A commonly recommended chunk-size for large files is 128 kB.

Ok, enough talking. Let's see if raid-5 works. Run your command:

    mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1 --spare-devices=1 /dev/sde1

and see what happens. Hopefully your disks start working like mad, as they begin the reconstruction of your array. Have a look in /proc/mdstat to see what's going on.

If the device was successfully created, the reconstruction process has now begun. Your array is not consistent until this reconstruction phase has completed. However, the array is fully functional (except for the handling of device failures of course), and you can format it and use it even while it is reconstructing.

Now, you can create a filesystem. See the section on special options to mke2fs before formatting the filesystem. You can now mount it, include it in your /etc/fstab and so on.


Create and mount filesystem

Have a look in /proc/mdstat. You should see that the array is running.

Now, you can create a filesystem, just like you would on any other device, mount it, include it in your /etc/fstab, and so on.

Common filesystem creation commands are mke2fs and mkfs.ext3. Please see Options for mke2fs below for an example and details.
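
A minimal example, assuming an ext3 filesystem on /dev/md0 and a mount point of /mnt/raid (both just illustrative choices):

   mkfs.ext3 /dev/md0                                        # create the filesystem
   mkdir /mnt/raid                                           # create a mount point
   mount /dev/md0 /mnt/raid                                  # mount it
   echo '/dev/md0  /mnt/raid  ext3  defaults  0  2' >> /etc/fstab   # mount at boot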


Using the Array

At this point you should be able to create a simple array of any flavour (hint: --level is your friend).

Ok, now when you have your RAID device running, you can always stop it:

   mdadm --stop /dev/md0

Starting is a little more complex; you may think that:

   mdadm --run /dev/md0

would work - but it doesn't.

Linux raid devices don't really exist on their own; they have to be assembled each time you want to use them. Assembly is like creation insofar as it pulls the component devices together into an active array, except that it reads the existing superblocks instead of writing new ones.

If you earlier ran:

mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

then

mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

would work.

However, the easy way to do this if you have a nice simple setup is:

  mdadm --assemble --scan 

For complex cases (e.g. you pull in disks from other machines that you're trying to repair), this has the potential to start arrays you don't really want started. A safer mechanism is to use the UUID parameter and run:

  mdadm --scan --assemble --uuid=a26bf396:31389f83:0df1722d:f404fe4c

This will only assemble the array that you want - but it will work no matter what has happened to the device names. This is particularly cool if, for example, you add a new SATA controller card and all of a sudden /dev/sda becomes /dev/sde!
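
If you don't know the UUID, you can read it from a running array or from any component's superblock; for example:

   mdadm --detail /dev/md0 | grep UUID     # UUID of an assembled array
   mdadm --examine --scan                  # prints ARRAY lines, with UUIDs, for all components found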

The Persistent Superblock

Back in "The Good Old Days" (TM), the raidtools would read your /etc/raidtab file and then initialize the array. However, this required that the filesystem on which /etc/raidtab resided was mounted, which was unfortunate if you wanted to boot from RAID.

Also, the old approach led to complications when mounting filesystems on RAID devices. They could not be put in the /etc/fstab file as usual, but would have to be mounted from the init-scripts.

The persistent superblocks solve these problems. When an array is created with the persistent-superblock option (the default now), a special superblock is written to a location (different for different superblock versions) on all disks participating in the array. This allows the kernel to read the configuration of RAID devices directly from the disks involved, instead of reading from some configuration file that may not be available at all times.

It's not a bad idea to maintain a consistent /etc/mdadm.conf file, since you may need this file for later recovery of the array.
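
One common way to (re)generate the ARRAY lines for that file, assuming your arrays are currently assembled, is:

   mdadm --detail --scan >> /etc/mdadm.conf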

The persistent superblock is mandatory if you want auto-detection of your RAID devices upon system boot. This is described in the Autodetect section.

Superblock physical layouts are listed on the RAID superblock formats page.

Advanced Options

Chunk sizes

The chunk-size deserves an explanation. You can never write completely parallel to a set of disks. If you had two disks and wanted to write a byte, you would have to write four bits on each disk. Actually, every second bit would go to disk 0 and the others to disk 1. Hardware just doesn't support that. Instead, we choose some chunk-size, which we define as the smallest "atomic" mass of data that can be written to the devices. A write of 16 kB with a chunk size of 4 kB will cause the first and the third 4 kB chunks to be written to the first disk and the second and fourth chunks to be written to the second disk, in the RAID-0 case with two disks. Thus, for large writes, you may see lower overhead by having fairly large chunks, whereas arrays that are primarily holding small files may benefit more from a smaller chunk size.

Chunk sizes must be specified for all RAID levels, including linear mode. However, the chunk-size does not make any difference for linear mode.

For optimal performance, you should experiment with the chunk-size, as well as with the block-size of the filesystem you put on the array.

The --chunk option to mdadm specifies the chunk-size in kilobytes, so --chunk=4 means 4 kB. (In the old /etc/raidtab, the chunk-size option served the same purpose.)
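
For example, creating a RAID-0 with a 128 kB chunk-size might look like this (the device names are only examples):

   mdadm --create --verbose /dev/md0 --chunk=128 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1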

RAID-0

Data is written "almost" in parallel to the disks in the array. Actually, chunk-size bytes are written to each disk, serially.

If you specify a 4 kB chunk size, and write 16 kB to an array of three disks, the RAID system will write 4 kB to disks 0, 1 and 2, in parallel, then the remaining 4 kB to disk 0.

A 32 kB chunk-size is a reasonable starting point for most arrays. But the optimal value depends very much on the number of drives involved, the content of the file system you put on it, and many other factors. Experiment with it, to get the best performance.


RAID-0 with ext2

The following tip was contributed by michael@freenet-ag.de:

NOTE: this tip is no longer needed since the ext2 fs supports dedicated options: see "Options for mke2fs" below

There is more disk activity at the beginning of ext2fs block groups. On a single disk, that does not matter, but it can hurt RAID0, if all block groups happen to begin on the same disk.

Example:

With a raid using a chunk size of 4k (also called stride-size), and filesystem using a block size of 4k, each block occupies one stride. With two disks, the #disk * stride-size product (also called stripe-width) is 2*4k=8k. The default block group size is 32768 blocks, which is a multiple of the stripe-width of 2 blocks, so all block groups start on disk 0, which can easily become a hot spot, thus reducing overall performance. Unfortunately, the block group size can only be set in steps of 8 blocks (32k when using 4k blocks), which also happens to be a multiple of the stripe-width, so you can not avoid the problem by adjusting the blocks per group with the -g option of mkfs(8).

If you add a disk, the stripe-width (#disk * stride-size product) is 12k, so the first block group starts on disk 0, the second block group starts on disk 2 and the third on disk 1. The load caused by disk activity at the block group beginnings spreads over all disks.

In case you can not add a disk, try a stride size of 32k. The stripe-width (#disk * stride-size product) is then 64k. Since you can change the block group size in steps of 8 blocks (32k), using 32760 blocks per group solves the problem.

Additionally, the block group boundaries should fall on stride boundaries. The examples above get this right.
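
A minimal sketch of the last workaround, assuming a 32 kB chunk-size array on /dev/md0 (-g sets the blocks per group):

   mke2fs -b 4096 -g 32760 /dev/md0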

RAID-1

For writes, the chunk-size doesn't affect the array, since all data must be written to all disks no matter what. For reads however, the chunk-size specifies how much data to read serially from the participating disks. Since all active disks in the array contain the same information, the RAID layer has complete freedom in choosing from which disk information is read - this is used by the RAID code to improve average seek times by picking the disk best suited for any given read operation.

RAID-4

When a write is done on a RAID-4 array, the parity information must be updated on the parity disk as well.

The chunk-size affects read performance in the same way as in RAID-0, since reads from RAID-4 are done in the same way.


RAID-5

On RAID-5, the chunk size has the same meaning for reads as for RAID-0. Writing on RAID-5 is a little more complicated: When a chunk is written on a RAID-5 array, the corresponding parity chunk must be updated as well. Updating a parity chunk requires either

  • The original chunk, the new chunk, and the old parity chunk
  • Or, all chunks (except for the parity chunk) in the stripe

The RAID code will pick the easiest way to update each parity chunk as the write progresses. Naturally, if your server has lots of memory and/or if the writes are nice and linear, updating the parity chunks will only impose the overhead of one extra write going over the bus (just like RAID-1). The parity calculation itself is extremely efficient, so while it does of course load the main CPU of the system, this impact is negligible. If the writes are small and scattered all over the array, the RAID layer will almost always need to read in all the untouched chunks from each stripe that is written to, in order to calculate the parity chunk. This will impose extra bus-overhead and latency due to extra reads.

A reasonable chunk-size for RAID-5 is 128 kB, but as always, you may want to experiment with this.

Also see the section on special options to mke2fs. This affects RAID-5 performance.


Options for mke2fs

There are special options available when formatting RAID-4 or -5 devices with mke2fs or mkfs. The -E stride=nn,stripe-width=mm options will allow mke2fs to better place different ext2/ext3 specific data-structures in an intelligent way on the RAID device.

Note: mke2fs, mkfs.ext2 and mkfs.ext3 are all versions of the same program with the same options (mkfs is simply a front-end that calls them); use whichever is available, and decide whether you want ext2 or ext3 (non-journalled vs journalled).

Here is an example, with its explanation below:

   mke2fs -v -m .1 -b 4096 -E stride=32,stripe-width=64 /dev/md0
   or
   mkfs.ext3 -v -m .1 -b 4096 -E stride=32,stripe-width=64 /dev/md0
   Options explained:
     -v verbose
     -m .1 leave .1% of disk to root (so it doesn't fill up and cause problems)
     -b 4096 block size of 4kb (recommended above for large-file systems)
     -E stride=32,stripe-width=64 see below calculation

Calculation

  • chunk size = 128 kB (set by the mdadm command; see the chunk-size advice above)
  • block size = 4 kB (recommended for large files, and in most cases)
  • stride = chunk size / block size = 128 kB / 4 kB = 32
  • stripe-width = stride * ( (n disks in raid5) - 1 ) = 32 * ( (3) - 1 ) = 32 * 2 = 64

If the chunk-size is 128 kB, it means that 128 kB of consecutive data will reside on one disk. If we build an ext2 filesystem with a 4 kB block-size, there will be 32 filesystem blocks in one array chunk.

stripe-width=64 is calculated by multiplying the stride=32 value with the number of data disks in the array.

A raid5 with n disks has n-1 data disks, one being reserved for parity. (Note: the mke2fs man page incorrectly states n+1; this is a known bug in the man-page docs that is now fixed.) A raid10 (1+0) with n disks is actually a raid 0 of n/2 raid1 subarrays with 2 disks each.

Performance

RAID-{4,5,10} performance is severely influenced by the stride and stripe-width options. It is uncertain how the stride option affects other RAID levels. If anyone has information on this, please add it here.

The ext2fs blocksize severely influences the performance of the filesystem. You should always use 4kB block size on any filesystem larger than a few hundred megabytes, unless you store a very large number of very small files on it.

Changing after creation

It is possible to change the parameters with

   tune2fs -E stride=n,stripe-width=m /dev/mdx