Overview

An Overview of Software-RAID

This area of the wiki is based on "The Software RAID HowTo" by Jakob Østergaard jakob@unthought.net and Emilio Bueso bueso@vives.org

This HOWTO describes how to use Software RAID under Linux. It addresses a specific version of the Software RAID layer, namely the 0.90 RAID layer maintained by Neil Brown. This is the RAID layer that is the standard in Linux-2.6 and Linux-2.4, and it is also the version used by the Linux-2.2 kernels shipped by some vendors. The 0.90 RAID support is available as patches to Linux-2.0 and Linux-2.2, and is considered by many to be far more stable than the older RAID support already in those kernels.

Introduction

This Wiki focuses on the "new-style" RAID present in the 2.6 kernel series only. It does not describe the "old-style" RAID functionality present in the 2.0 and 2.2 kernels, although much of that functionality is available in the later 2.4 series kernels.

Disclaimer

The mandatory disclaimer:

All information herein is presented "as-is", with no warranties expressed or implied. If you lose all your data, your job, get hit by a truck, whatever, it's not my fault, nor the developers'. Be aware that you use the RAID software and this information at your own risk! There is no guarantee whatsoever that any of the software, or this information, is in any way correct, nor suited for any use whatsoever. Back up all your data before experimenting with this. Better safe than sorry.

What is RAID?

In 1987, David A. Patterson, Garth Gibson and Randy H. Katz of the University of California, Berkeley, published a paper titled A Case for Redundant Arrays of Inexpensive Disks (RAID).[1] This paper described various types of disk arrays, referred to by the acronym RAID. The basic idea of RAID was to combine multiple small, independent disk drives into an array of disk drives, yielding performance exceeding that of a Single Large Expensive Drive (SLED). Additionally, this array of drives appears to the computer as a single logical storage unit or drive.

The Mean Time Between Failure (MTBF) of the array will be equal to the MTBF of an individual drive, divided by the number of drives in the array. Because of this, the MTBF of an array of drives would be too low for many application requirements. However, disk arrays can be made fault tolerant by redundantly storing information in various ways.

Five types of array architectures, RAID-1 through RAID-5, were defined by the Berkeley paper, each providing disk fault tolerance and each offering different trade-offs in features and performance. In addition to these five redundant array architectures, it has become popular to refer to a non-redundant array of disk drives as a RAID-0 array.

Some of the original RAID levels, namely levels 2 and 3, are now only used in very specialized systems, and are in fact not even supported by the Linux Software RAID drivers. Another level, "linear", has emerged, and RAID level 0 is now often combined with RAID level 1 (RAID-1+0 or "RAID-10").

Terms

In this HOWTO the word "RAID" means "Linux Software RAID". This HOWTO does not treat any aspects of Hardware RAID. Furthermore, it does not treat any aspects of Software RAID in other operating system kernels.

When describing RAID setups, it is useful to refer to the number of disks and their sizes. At all times the letter N is used to denote the number of active disks in the array (not counting spare-disks). The letter S denotes the size of the smallest drive in the array, unless otherwise mentioned. The letter P denotes the performance of one disk in the array, in MB/s. When used, we assume that the disks are equally fast, which may not always be true in real-world scenarios.

Note that the words "device" and "disk" mean roughly the same thing here. Usually the devices used to build a RAID device are partitions on disks, not necessarily entire disks. But combining several partitions on one disk usually does not make sense, so the words "devices" and "disks" simply mean "partitions on different disks".

The RAID levels

Here's a short description of what is supported in the Linux RAID drivers. Some of this is absolutely basic RAID information, but a few notices have been added about what's special in the Linux implementation of the levels. You can safely skip this section if you know RAID already.

The current RAID drivers in Linux support the following levels:

Linear mode

  • Two or more disks are combined into one large virtual device. The disks are "appended" to each other, so writing linearly to the RAID device will fill up disk 0 first, then disk 1 and so on. The disks do not have to be of the same size. In fact, size doesn't matter at all here. :)
  • There is no redundancy in this level. If one disk crashes you will most probably lose all your data. You may, however, be lucky and recover some data, since the filesystem will just be missing one large consecutive chunk of data.
  • The read and write performance will not increase for single reads/writes. But if several users use the device, you may be lucky that one user effectively uses only the first disk while the other user accesses files which happen to reside on the second disk. If that happens, you will see a performance gain.

RAID-0

  • Also called "stripe" mode. The devices should (but need not) have the same size. Operations on the array will be split on the devices; for example, a large write could be split up as 64 kiB to disk 0, 64 kiB to disk 1, 64 kiB to disk 2, then 64 kiB to disk 0 again, and so on. Writes to each disk will go on at the same time. If one device is much larger than the other devices, that extra space is still utilized in the RAID device, but you will be accessing this larger disk alone, during writes in the high end of your RAID device. This of course hurts performance.
  • Like linear, there is no redundancy in this level either. Unlike linear mode, you will not be able to rescue any data if a drive fails. If you remove a drive from a RAID-0 set, the RAID device will not just miss one consecutive block of data, it will be filled with small holes all over the device. e2fsck or other filesystem recovery tools will probably not be able to recover much from such a device.
  • The read and write performance will increase, because reads and writes are done in parallel on the devices. This is usually the main reason for running RAID-0. If the busses to the disks are fast enough, you can get very close to N*P MB/sec.

RAID-1

  • This is the first mode which actually has redundancy. RAID-1 can be used on two or more disks with zero or more spare-disks. This mode maintains an exact mirror of the information on one disk on the other disk(s). Of course, the disks must be of equal size. If one disk is larger than another, your RAID device will be the size of the smallest disk.
  • If up to N-1 disks are removed (or crash), all data are still intact. If there are spare disks available, and if the system (e.g. SCSI drivers or IDE chipset etc.) survived the crash, reconstruction of the mirror will immediately begin on one of the spare disks, after detection of the drive fault.
  • Write performance is often worse than on a single device, because identical copies of the data written must be sent to every disk in the array. With large RAID-1 arrays this can be a real problem, as you may saturate the PCI bus with these extra copies. This is in fact one of the very few places where Hardware RAID solutions can have an edge over Software solutions - if you use a hardware RAID card, the extra write copies of the data will not have to go over the PCI bus, since it is the RAID controller that will generate the extra copy. Read performance is good, especially if you have multiple readers or seek-intensive workloads. The RAID code employs a rather good read-balancing algorithm that will simply let the disk whose heads are closest to the wanted disk position perform the read operation. Since seek operations are relatively expensive on modern disks (a seek time of 8 ms equals a read of 640 kB at 80 MB/sec), picking the disk that will have the shortest seek time does actually give a noticeable performance improvement.

RAID-4

  • This RAID level is not used very often. It can be used on three or more disks. Instead of completely mirroring the information, it keeps parity information on one drive, and writes data to the other disks in a RAID-0 like way. Because one disk is reserved for parity information, the size of the array will be (N-1)*S, where S is the size of the smallest drive in the array. As in RAID-1, the disks should either be of equal size, or you will just have to accept that the S in the (N-1)*S formula above will be the size of the smallest drive in the array.
  • If one drive fails, the parity information can be used to reconstruct all data. If two drives fail, all data is lost.
  • The reason this level is not more frequently used is that the parity information is kept on one drive. This information must be updated every time one of the other disks is written to. Thus, the parity disk will become a bottleneck if it is not a lot faster than the other disks. However, if you just happen to have a lot of slow disks and a very fast one, this RAID level can be very useful.

RAID-5

  • This is perhaps the most useful RAID mode when one wishes to combine a larger number of physical disks and still maintain some redundancy. RAID-5 can be (usefully) used on three or more disks, with zero or more spare-disks. The resulting RAID-5 device size will be (N-1)*S, just like RAID-4. The big difference between RAID-5 and -4 is that the parity information is distributed evenly among the participating drives, avoiding the bottleneck problem in RAID-4, and also getting more performance out of the disks when reading, as all drives will then be used.
  • If one of the disks fails, all data are still intact, thanks to the parity information. If spare disks are available, reconstruction will begin immediately after the device failure. If two disks fail simultaneously, or if a second disk fails before the array has been reconstructed, all data are lost. RAID-5 can survive one disk failure, but not two or more.
  • Both read and write performance usually increase, but it can be hard to predict by how much. Reads are almost similar to RAID-0 reads; writes can be either rather expensive (requiring read-in prior to write, in order to be able to calculate the correct parity information, such as in database operations), or similar to RAID-1 writes (when larger sequential writes are performed, and parity can be calculated directly from the other blocks to be written). The write efficiency depends heavily on the amount of memory in the machine, and the usage pattern of the array. Heavily scattered writes are bound to be more expensive.

RAID-6

  • This is an extension of RAID-5 to provide more resilience. RAID-6 can be (usefully) used on four or more disks, with zero or more spare-disks. The resulting RAID-6 device size will be (N-2)*S. The big difference between RAID-5 and -6 is that there are two different parity information blocks, and these are distributed evenly among the participating drives.
  • Since there are two parity blocks, all data remains intact even if one or two of the disks fail. If spare disks are available, reconstruction will begin immediately after the device failure(s).
  • Read performance is similar to RAID-5, but write performance is worse.

RAID-10

  • RAID-10 is an "in-kernel" combination of RAID-1 and RAID-0 that is more efficient than simply layering RAID levels.
  • RAID-10 has a layout ("far") which can provide sequential read throughput that scales with the number of drives, rather than the number of RAID-1 pairs. You can get about 95% of the performance of a RAID-0 array with the same number of drives.
  • RAID-10 allows spare disk(s) to be shared amongst all the raid1 pairs.

FAULTY

  • This is a special debugging RAID level. It only allows one device and simulates low level read/write failures.
  • Using a FAULTY device in another RAID level allows administrators to practice dealing with things like sector failures, as opposed to whole-drive failures.

Requirements

This HOWTO assumes you are using Linux 2.6 or later and the latest tool set.

If you use a recent GNU/Linux distribution based on the 2.4 kernel or later, your system most likely already has a matching version of mdadm for your kernel.

Note: According to its homepage, http://people.redhat.com/mingo/raidtools/, raidtools hasn't been updated since January 2003 and is deprecated in favour of mdadm.


Why RAID?

There can be many good reasons for using RAID. A few are: the ability to combine several physical disks into one larger virtual device, performance improvements, and redundancy.

It is, however, very important to understand that RAID is not a general substitute for good backups. Some RAID levels will make your systems immune to data loss from one or two disk failures, but RAID will not allow you to recover from an accidental rm -rf /. RAID will also not help you preserve your data if the server holding the RAID itself is lost in one way or another (theft, flooding, earthquake, Martian invasion, etc.).

RAID is meant to allow you to keep systems up and running, in case of common hardware problems (disk failure). It is not in itself a complete data safety solution. This is very important to realize.

With modern huge SATA drives, however, it is still very easy to lose an array to a single drive failure - through the wrong choice of raid level, for example. It is also common for the reconstruction of an array after one drive has failed to be the direct cause of a second drive failing. Drives don't tend to fail at random: if you buy all the drives for your raid at the same time, they are all likely to fail at roughly the same time. And the most common cause of anguished emails to the mailing list is the use of unsuitable drives in a raid.

Device and filesystem support

Linux RAID can work on most block devices. It doesn't matter whether you use SATA, USB, IDE or SCSI devices, or a mixture. Some people have also used the Network Block Device (NBD) with success.

Since a Linux Software RAID device is itself a block device, the above implies that you can actually create a RAID of other RAID devices. This in turn makes it possible to support RAID-1+0 (RAID-0 of multiple RAID-1 devices), simply by using the RAID-0 and RAID-1 functionality together. Other more exotic configurations, such as RAID-5 over RAID-5 "matrix" configurations, are equally supported.

(Do not confuse RAID 1+0 with RAID-10. Although the names look identical, 1+0 is a raid array built on other raid arrays, while RAID-10 is actually a distinct linux raid level. Unfortunately, the terms are often used interchangeably. If you see a "+" sign it should mean a raid array of raid arrays - the other common one is 5+0.)

The RAID layer has absolutely nothing to do with the filesystem layer. You can put any filesystem on a RAID device, just like any other block device.

Performance

Often RAID is employed as a solution to performance problems. While RAID can indeed often be the solution you are looking for, it is not a silver bullet. There can be many reasons for performance problems, and RAID is only the solution to a few of them.

See The RAID levels section above for a mention of the performance characteristics of each level.

See the Performance section for comparison of different levels of RAID.

Swapping on RAID

Swapping on a mirrored RAID can help you survive a failing disk. If a disk fails, data for swapped processes would be inaccessible in a non-mirrored environment; if you run in a mirrored environment, the system can go on running even if a disk fails in service. You can even have more than one copy of the data with a raid10-type array, protecting against multiple disk failures.

There's not much reason to use RAID-0 for swap performance. The kernel itself can stripe swapping over several devices, if you just give them the same priority in the /etc/fstab file.

A nice /etc/fstab could look like:

 /dev/sda2       swap           swap    defaults,pri=1   0 0
 /dev/sdb2       swap           swap    defaults,pri=1   0 0
 /dev/sdc2       swap           swap    defaults,pri=1   0 0
 /dev/sdd2       swap           swap    defaults,pri=1   0 0
 /dev/sde2       swap           swap    defaults,pri=1   0 0
 /dev/sdf2       swap           swap    defaults,pri=1   0 0
 /dev/sdg2       swap           swap    defaults,pri=1   0 0


This setup lets the machine swap in parallel on seven hard drives. No need for RAID0, since this has been a kernel feature for a long time.
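
To check that the kernel is actually using all the devices, and with equal priority, list the active swap areas (a quick check, assuming the setup above):

 cat /proc/swaps
 # or, with util-linux:
 swapon --show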

A different reason to use RAID for swap is high availability. If you set up a system to boot from, e.g., a RAID-1 device, the system should be able to survive a disk crash. But if a system without mirrored swap has been swapping on the now-faulty device, it will most likely go down. Swapping on a mirrored RAID partition such as a RAID-1, raid10,n2 or raid10,f2 type would solve this problem.

There has been a lot of discussion about whether swap is stable on RAID devices. This is a continuing debate, because it depends highly on other aspects of the kernel as well. As of this writing, it seems that swapping on RAID should be perfectly stable; you should, however, stress-test the system yourself until you are satisfied with the stability.

You can put a swap file on a filesystem on your RAID device, or you can set up a RAID device as a swap partition, as you see fit. As usual, the RAID device is just a block device.

Why mdadm?

mdadm is now the standard software RAID management tool for Linux.

The mdadm tool was written by Neil Brown, who is a software engineer at the University of New South Wales and a kernel developer. See http://www.kernel.org/pub/linux/utils/raid/mdadm/ANNOUNCE for the latest version. It was based on and has obsoleted the raidtools suite.

  • mdadm can diagnose, monitor and gather detailed information about your arrays
  • mdadm is a single centralized program and provides a common syntax for every RAID management command
  • mdadm can perform almost all of its functions without having a configuration file and does not use one by default
  • Also, if a configuration file is needed, mdadm will help with management of its contents (a few example commands follow below)
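
For example, a few everyday commands covered by the points above (array and member names are placeholders):

   # summary of a running array
   mdadm --detail /dev/md0

   # examine the RAID superblock on a member device
   mdadm --examine /dev/sdb1

   # quick overview of all arrays known to the kernel
   cat /proc/mdstat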



Devices

Software RAID devices are so-called "block" devices, like ordinary disks or disk partitions. A RAID device is "built" from a number of other block devices - for example, a RAID-1 could be built from two ordinary disks, or from two disk partitions (on separate disks - please see the description of RAID-1 for details on this).

(It is recommended not to build a RAID array directly on a bare disk. This is not a problem with RAID itself, but some disk utilities assume that a drive without a GPT or MBR is blank and will happily stomp all over it.)

There are no other special requirements to the devices from which you build your RAID devices - this gives you a lot of freedom in designing your RAID solution. For example, you can build a RAID from a mix of SATA, network and other RAID devices (this is useful for RAID-1+0, where you simply construct two RAID-1 devices from ordinary disks, and finally construct a RAID-0 device from those two RAID-1 devices). It is not advisable to use USB devices, however, as these go to sleep and interact badly with the raid code.

Therefore, in the following text, we will use the word "device" as meaning "disk", "partition", or even "RAID device". A "device" in the following text simply refers to a "Linux block device". It could be anything from a SATA disk to a network block device. We will commonly refer to these "devices" simply as "disks", because that is what they will be in the common case.

However, there are several roles that devices can play in your arrays. A device could be a "spare disk", it could have failed and thus be a "faulty disk", or it could be a normally working and fully functional device actively used by the array.

In the following we describe two special types of devices; namely the "spare disks" and the "faulty disks".

It is worth mentioning the existence of the FAULTY RAID level - don't get confused - this is a special debugging level of RAID that uses a normal device and simulates faults.

Spare disks

Spare disks (often called hot spares) are disks that do not take part in the RAID set until one of the active disks fails. When a device failure is detected, that device is marked as "faulty" and reconstruction is immediately started on the first spare disk available.

Thus, spare disks add a nice extra margin of safety, especially to RAID-5 systems that are perhaps hard to get to (physically). One can allow the system to run for some time with a faulty device, since the spare disk takes the place of the faulty device and all redundancy is restored.

It is also possible to have spare disks spin-down to save energy; obviously the spin-up time for these warm spares is insignificant compared to the resync time.

You cannot be sure that your system will keep running after a disk crash though. The RAID layer should handle device failures just fine, but SCSI drivers could be broken on error handling, or the IDE chipset could lock up, or a lot of other things could happen.

Also, once reconstruction to a hot-spare begins, the RAID layer will start reading from all the other disks to re-create the redundant information. If multiple disks have built up bad blocks over time, the reconstruction itself can actually trigger a failure on one of the "good" disks. This can lead to a complete RAID failure and is the major reason for using RAID-6 in preference to RAID-5 and a hot spare. Indeed, if using the wrong sort of disk it commonly leads to a complete raid failure. (It is usually possible to recover from this situation, however.)

If you do frequent backups of the entire filesystem on the RAID array, or scrub the array regularly, then it is highly unlikely that you would ever get in this situation - this is another very good reason for taking frequent backups. Remember, RAID is not a substitute for backups.
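
A scrub can be started by hand through sysfs; a minimal sketch, assuming the array is md0 (many distributions also ship a cron job or timer that does this regularly):

   # start a check of the whole array; progress appears in /proc/mdstat
   echo check > /sys/block/md0/md/sync_action

   # when it has finished, the number of mismatches found
   cat /sys/block/md0/md/mismatch_cnt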


Faulty disks

Provided the RAID layer handles the device failure just fine, the crashed disk is marked as faulty, and reconstruction is immediately started on the first spare disk available. If no spare is available, the array runs in 'degraded' mode.

Faulty disks still appear and behave as members of the array. The RAID layer just avoids reading/writing them.

If a device needs to be removed from an array for any reason (e.g. pro-active replacement due to SMART reports) then it must be marked as faulty before it can be removed.
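
A minimal sketch of such a pro-active replacement, assuming the failing member is /dev/sdb1 in /dev/md0 and the replacement is /dev/sde1 (hypothetical names):

   mdadm /dev/md0 --fail /dev/sdb1      # mark the member as faulty
   mdadm /dev/md0 --remove /dev/sdb1    # remove it from the array
   mdadm /dev/md0 --add /dev/sde1       # add the new disk; recovery starts

Recent mdadm versions also offer a --replace/--with operation, which copies onto the new disk before kicking the old one out, so redundancy is kept during the swap.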

The section on Detecting, querying and testing provides more information.


Hardware issues

This section mentions some of the hardware concerns involved when running software RAID. References to IDE and SCSI have been deleted, since all recent drives are SATA.

If you are going after high performance, you should be using SSDs (or hybrid drives), and make sure you match the performance of the drives to the performance of the bus. Many motherboards come with 6 SATA connectors so setting up a RAID is easy and affordable.

See also the section on bottlenecks.

Drive Selection

Desktop and Enterprise drives

Disk drives now tend to come in two varieties: desktop drives, from which most of the features needed for a decent raid have been removed, and enterprise drives, which have the features and are designed to run 24/7. So if you want to run raid on a desktop system, it is rather difficult to find a drive that is suitable.

TLER and SCT/ERC

TLER (Time Limited Error Recovery) is a WD creation which means that the drive will give up on a failing sector and return an error within 7 seconds, rather than retrying for minutes on end. Having introduced it, WD subsequently disabled it on most desktop drives, although it is enabled by default on enterprise drives.

SCT/ERC is the generic specification implemented by TLER.

If this feature is available, it needs to be enabled. If it isn't enabled or available, the Linux defaults will interact badly with the drive's lengthy internal error recovery, and a single drive problem will usually take down the whole array.
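
If the drive supports SCT ERC, it can usually be switched on with smartctl; if it does not, the kernel's command timeout should be raised instead, so that the driver does not give up (and md kick the drive) while the drive is still retrying internally. A sketch, assuming the drive is /dev/sda; the ERC values are in tenths of a second, the kernel timeout in seconds:

   # enable a 7 second error-recovery timeout for reads and writes
   smartctl -l scterc,70,70 /dev/sda

   # for drives without SCT/ERC: raise the kernel command timeout instead
   echo 180 > /sys/block/sda/device/timeout

Note that the smartctl setting is normally lost on a power cycle, so it has to be reapplied at every boot.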

smartctl -x

This command will tell you what the drive is capable of. If possible, it would be wise to see its output for the drive(s) you are thinking of buying. The following is the output from my laptop's drive. Note especially where it says SCT Error Recovery Control is supported.

crappit:/home/anthony # smartctl -x /dev/sda
smartctl 6.2 2013-11-07 r3856 [x86_64-linux-4.1.27-27-default] (SUSE RPM)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Device Model:     ST2000LM003 HN-M201RAD
Serial Number:    S321J9DG805231
LU WWN Device Id: 5 0004cf 2106b38eb
Firmware Version: 2BC10001
User Capacity:    2,000,398,934,016 bytes [2.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    5400 rpm
Device is:        Not in smartctl database [for details use: -P showall]
ATA Version is:   ATA8-ACS T13/1699-D revision 6
SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 3.0 Gb/s)
Local Time is:    Tue Sep 20 00:05:59 2016 BST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
AAM feature is:   Disabled
APM feature is:   Disabled
Rd look-ahead is: Enabled
Write cache is:   Enabled
ATA Security is:  Disabled, NOT FROZEN [SEC1]
Wt Cache Reorder: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x00) Offline data collection activity
                                        was never started.
                                        Auto Offline Data Collection: Disabled.
Self-test execution status:      (   0) The previous self-test routine completed
                                        without error or no self-test has ever 
                                        been run.
Total time to complete Offline 
data collection:                (22740) seconds.
Offline data collection
capabilities:                    (0x5b) SMART execute Offline immediate.
                                        Auto Offline data collection on/off support.
                                        Suspend Offline collection upon new
                                        command.
                                        Offline surface scan supported.
                                        Self-test supported.
                                        No Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering
                                        power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine 
recommended polling time:        (   1) minutes.
Extended self-test routine
recommended polling time:        ( 379) minutes.
SCT capabilities:              (0x003f) SCT Status supported.
                                        SCT Error Recovery Control supported.
                                        SCT Feature Control supported.
                                        SCT Data Table supported.


SATA Configuration (2011)

SATA is beginning to support a feature called "port multipliers", which effectively multiplexes several SATA disks onto the same host SATA port; this can reduce cabling concerns. It is also fairly common to see multi-port SATA controllers which put 4 ports onto the connector originated by InfiniBand; this makes it possible to create 24-port SATA controllers, for instance.

Hot Swap (2011)

Note: for description of Linux RAID hotplug support, see the Hotplug page.

Hot-swapping with SATA/SAS

SATA/SAS hotplug support is required by the SATA/SAS specifications, so the SATA/SAS platform is the one where hotplug should be least problematic. But you can still fall into non-compliance pitfalls, so read on before you start experimenting!

Hotplug support in mainboard/disk controller chipsets

Newer mainboard/disk controller chipsets and their drivers usually support hotplug.

If the chipset is AHCI-compliant, it will probably be able to use the ahci kernel module, which provides hotplug and power management support. The ahci module has been present in the Linux kernel since 2.6.19.

But still, not all chipsets support hotplug. Also, some chipsets that could in theory support hotplug (but are not AHCI-compliant) don't have the necessary support in the linux kernel. For more information on SATA drivers' status, see http://ata.wiki.kernel.org/index.php/SATA_hardware_features.

Hotplug support in SATA/SAS disks

All current SATA and SAS drives that have the 15-pin SATA power connector are hotplug-ready. There might be some very old SATA disks with a 4-pin Molex power connector and no 15-pin SATA power connector; such old drives should never be hotplugged directly (without a hotswap bay), otherwise you risk damaging them.

Hotplug support by SATA/SAS cables

For protecting the disk circuitry during the hotplug, the 15-pin SATA/SAS power connector on the cable side must have 2 pins (pin nr. 4 and 12) longer than the others.

Explanation:

  • on cable/backplane connector ("receptacle") side, pins 4 and 12 are longer and are called "staggered pins". These pins bring the GND to the disk before the other pins get attached, ensuring that no sensitive circuitry is connected before there is a reliable system ground
  • on the device side, pins 3, 7 and 13 are the staggered pins. These pins bring the 3.3V, 5V and 12V power to the precharge power electronics in the disk before the other power pins are attached.

Important warning: The normal 15-pin SATA power cable receptacle, found in ordinary power supplies and computer cases, does not have pins 4 and 12 staggered! In fact, it is quite hard to find a hotplug-compatible SATA power receptacle. At first sight, the difference is subtle, so see the pictures of several SATA receptacle types here before you start playing hotplug games with your drive!

!!!! Please remember that without the staggered GND pins on the SATA power cable receptacle, you risk damaging your disk when hotplugging or hot-unplugging !!!!

The hotplug-compatible SATA power receptacle must be present in all SAS/SATA hotswap cages.

In case you don't have a hotswap cage, but you do have a 15-pin hotplug-compatible SATA power receptacle, this should be the correct sequence for plugging and unplugging the disk[2]:

For hotplug:

  1. connect the 15pin power receptacle to the disk
  2. connect the 7pin data cable

For hot-unplug:

  1. unplug the data cable from the disk
  2. unplug the power cable

Hot-swapping with SCA

With SCA, it is possible to hot-plug devices. Unfortunately, this is not as simple as it should be, but it is both possible and safe.

Replace the RAID device, disk device, and host/channel/id/lun numbers with the appropriate values in the example below:


  • Dump the partition table from the drive, if it is still readable:
    sfdisk -d /dev/sdb > partitions.sdb


  • Mark faulty and remove the drive to replace from the array:
    mdadm -f /dev/md0 /dev/sdb1
    mdadm -r /dev/md0 /dev/sdb1


  • Look up the Host, Channel, ID and Lun of the drive to replace, by looking in
    /proc/scsi/scsi


  • Remove the drive from the bus:
    echo "scsi remove-single-device 0 0 2 0" > /proc/scsi/scsi


  • Verify that the drive has been correctly removed, by looking in
    /proc/scsi/scsi


  • Unplug the drive from your SCA bay, and insert a new drive
  • Add the new drive to the bus:
    echo "scsi add-single-device 0 0 2 0" > /proc/scsi/scsi


(this should spin up the drive as well)

  • Re-partition the drive using the previously dumped partition table:


    sfdisk /dev/sdb < partitions.sdb


  • Add the drive to your array:
    mdadm -a /dev/md0 /dev/sdb1


The arguments to the "scsi remove-single-device" commands are: Host, Channel, Id and Lun. These numbers are found in the "/proc/scsi/scsi" file.

The above steps have been tried and tested on a system with IBM SCA disks and an Adaptec SCSI controller. If you encounter problems or find easier ways to do this, please discuss this on the linux-raid mailing list.


RAID setup

General setup

This is what you need for any of the RAID levels:

  • A kernel with the appropriate md support, either as modules or built-in. Preferably a kernel from the 4.x series, although most of this should work fine with later 3.x kernels too.
  • The mdadm tool
  • Patience, Pizza, and your favorite caffeinated beverage.

The first two items are included as standard in most GNU/Linux distributions today.

If your system has RAID support, you should have a file called /proc/mdstat. Remember it; that file is your friend. If you do not have that file, maybe your kernel does not have RAID support.

If you're sure your kernel has RAID support, you may need to run modprobe raid[RAID mode] to load RAID support into your kernel, e.g. to support raid5:

modprobe raid456

See what the file contains, by doing a

cat /proc/mdstat

It should tell you that you have the right RAID personality (eg. RAID mode) registered, and that no RAID devices are currently active. See the /proc/mdstat page for more details.
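
On a machine with the raid5 modules loaded but no arrays assembled yet, the output will look roughly like this (the personality list depends on which modules are loaded):

   Personalities : [raid6] [raid5] [raid4]
   unused devices: <none>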

Preparing and partitioning your disk devices

Arrays can be built on top of entire disks or on partitions.

This leads to two frequent questions:

  • Should I use the entire device or a partition?
  • What partition type?

These are discussed in Partition Types.

Downloading and installing mdadm - the RAID management tool

mdadm is now the standard RAID management tool and should be found in any modern distribution.

You can retrieve the most recent version of mdadm with

git clone git://neil.brown.name/mdadm

In the absence of any other preferences, do that in the /usr/local/src directory. As a linux-specific program there is none of this autoconf stuff - just follow the instructions as per the INSTALL file.
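
Typically that boils down to something like the following (check the INSTALL file of the version you actually fetched, as the targets may differ):

   cd /usr/local/src/mdadm
   make
   make install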

Alternatively just use the normal distribution method for obtaining the package:

Debian, Ubuntu:

 apt-get install mdadm

Gentoo:

 emerge mdadm

RedHat:

 yum install mdadm

[open]SUSE:

 zypper in mdadm

Mdadm modes of operation

mdadm is well documented in its manpage - well worth a read.

   man mdadm

mdadm has 7 major modes of operation. Normal operation just uses the 'Create', 'Assemble' and 'Monitor' commands - the rest come in handy when you're messing with your array; typically fixing it or changing it.

1. Create

Create a new array with per-device superblocks (normal creation).

2. Assemble

Assemble the parts of a previously created array into an active array. Components can be explicitly given or can be searched for. mdadm checks that the components do form a bona fide array, and can, on request, fiddle superblock information so as to assemble a faulty array. Typically you do this in the init scripts after rebooting.

3. Follow or Monitor

Monitor one or more md devices and act on any state changes. This is only meaningful for raid1, 4, 5, 6, 10 or multipath arrays as only these have interesting state. raid0 or linear never have missing, spare, or failed drives, so there is nothing to monitor. Typically you do this after rebooting too.

4. Build

Build an array that doesn't have per-device superblocks. For these sorts of arrays, mdadm cannot differentiate between initial creation and subsequent assembly of an array. It also cannot perform any checks that appropriate devices have been requested. Because of this, the Build mode should only be used together with a complete understanding of what you are doing.

5. Grow

Grow, shrink or otherwise reshape an array in some way. Currently supported growth options include changing the active size of component devices in RAID levels 1/4/5/6 and changing the number of active devices in RAID-1.

6. Manage

This is for doing things to specific components of an array such as adding new spares and removing faulty devices.

7. Misc

This is an 'everything else' mode that supports operations on active arrays, operations on component devices such as erasing old superblocks, and information gathering operations.
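
As a rough map from mode to command line (device names are placeholders; see the manpage for the full option list):

   mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1      # Create
   mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1                               # Assemble
   mdadm --monitor --scan --mail=root                                                    # Follow/Monitor
   mdadm --build /dev/md0 --level=0 --chunk=64 --raid-devices=2 /dev/sdb1 /dev/sdc1      # Build (no superblocks)
   mdadm --grow /dev/md0 --raid-devices=4                                                # Grow (after adding a member)
   mdadm --manage /dev/md0 --add /dev/sde1                                               # Manage
   mdadm --misc --examine /dev/sdb1                                                      # Misc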


Create RAID device

Below we'll see how to create arrays of various types; the basic approach is:

   mdadm --create /dev/md0 <blah>
   mdadm --monitor /dev/md0

If you want to access all the latest and upcoming features such as fully named RAID arrays so you no longer have to memorize which partition goes where, you'll want to make sure to use persistent metadata in the version 1.0 or higher format, as there is no way (currently or planned) to convert an array to a different metadata version. Current recommendations are to use metadata version 1.2 except when creating a boot partition, in which case use version 1.0 metadata and RAID-1.[3]

Booting from a 1.2 raid is only supported when booting with an initramfs, as the kernel can no longer assemble or recognise an array - it relies on userspace tools. Booting directly from 1.0 is supported because the metadata is at the end of the array, and the start of a mirrored 1.0 array just looks like a normal partition to the kernel.

NOTE: A work-around to upgrade metadata from version 0.90 to 1.0 is contained in the section RAID superblock formats.

To change the metadata version (the default is now version 1.2 metadata) add the --metadata option after the switch stating what you're doing in the first place. This will work:

   mdadm --create /dev/md0 --metadata 1.0 <blah>

This, however, will not work:

   mdadm --metadata 1.0 --create /dev/md0 <blah>

Linear mode

Ok, so you have two or more partitions which are not necessarily the same size (but of course can be), which you want to append to each other.

Spare-disks are not supported here. If a disk dies, the array dies with it. There's no information to put on a spare disk.

Using mdadm, a single command like

    mdadm --create --verbose /dev/md0 --level=linear --raid-devices=2 /dev/sdb6 /dev/sdc5

should create the array. The parameters speak for themselves. The output might look like this:

   mdadm: chunk size defaults to 64K
   mdadm: array /dev/md0 started.

Have a look in /proc/mdstat. You should see that the array is running.

Now, you can create a filesystem, just like you would on any other device, mount it, include it in your /etc/fstab and so on.

RAID-0

You have two or more devices, of approximately the same size, and you want to combine their storage capacity and also combine their performance by accessing them in parallel.

    mdadm --create --verbose /dev/md0 --level=stripe --raid-devices=2 /dev/sdb6 /dev/sdc5

Like in Linear mode, spare disks are not supported here either. RAID-0 has no redundancy, so when a disk dies, the array goes with it.

Having run mdadm you have initialised the superblocks and started the raid device. Have a look in /proc/mdstat to see what's going on. You should see that your device is now running.

/dev/md0 is now ready to be formatted, mounted, used and abused.

RAID-1

You have two devices of approximately the same size, and you want the two to be mirrors of each other. You may also have more devices, which you want to keep as stand-by spare disks, and which will automatically become part of the mirror if one of the active devices breaks.

    mdadm --create --verbose /dev/md0 --level=mirror --raid-devices=2 /dev/sdb1 /dev/sdc1

If you have spare disks, you can add them to the end of the device specification like

    mdadm --create --verbose /dev/md0 --level=mirror --raid-devices=2 /dev/sdb1 /dev/sdc1 --spare-devices=1 /dev/sdd1

Ok, now we're all set to start initializing the RAID. The mirror must be constructed, i.e. the contents (however unimportant now, since the device is still not formatted) of the two devices must be synchronized.

Check out the /proc/mdstat file. It should tell you that the /dev/md0 device has been started, that the mirror is being reconstructed, and an ETA of the completion of the reconstruction.

Reconstruction is done using idle I/O bandwidth. So, your system should still be fairly responsive, although your disk LEDs should be glowing nicely.
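
The resync rate is bounded by two kernel tunables; a quick way to inspect them, or to raise the ceiling temporarily while you watch progress (values are in KiB/s):

   cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max
   echo 200000 > /proc/sys/dev/raid/speed_limit_max   # allow up to ~200 MB/s
   watch -n1 cat /proc/mdstat                         # follow the rebuild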

The reconstruction process is transparent, so you can actually use the device even though the mirror is currently under reconstruction.

Try formatting the device while the reconstruction is running. It will work. You can also mount it and use it while reconstruction is running. Of course, if the wrong disk breaks while the reconstruction is running, you're out of luck.

RAID-4/5/6

You have three or more devices (four or more for RAID-6) of roughly the same size, you want to combine them into a larger device, but you still want to maintain a degree of redundancy for data safety. You may also have a number of devices to use as spare disks, which will not take part in the array until another device fails.

If you use N devices where the smallest has size S, the size of the entire raid-5 array will be (N-1)*S, or (N-2)*S for raid-6. This "missing" space is used for parity (redundancy) information. Thus, if any disk fails, all the data stays intact. But if two disks fail on raid-5, or three on raid-6, all data is lost.

The default chunk-size is 128 kB; that's the default I/O size on a spindle.

Ok, enough talking. Let's see if raid-5 works. Run your command:

    mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1 --spare-devices=1 /dev/sde1

and see what happens. Hopefully your disks start working like mad, as they begin the reconstruction of your array. Have a look in /proc/mdstat to see what's going on.

If the device was successfully created, the reconstruction process has now begun. Your array is not consistent until this reconstruction phase has completed. However, the array is fully functional (except for the handling of device failures of course), and you can format it and use it even while it is reconstructing.

The initial reconstruction will always appear as though the array is degraded and is being reconstructed onto a spare, even if only just enough devices were added with zero spares. This is done to optimize the initial reconstruction process. It may be confusing or worrying, but it is intentional and done for good reason. For more information, please check this source, directly from Neil Brown.

Now, you can create a filesystem. See the section on special options to mke2fs before formatting the filesystem. You can now mount it, include it in your /etc/fstab and so on.

Saving your RAID configuration (2011)

After you've created your array, it's important to save the configuration in the proper mdadm configuration file. In Ubuntu, this is file /etc/mdadm/mdadm.conf. In some other distributions, this is file /etc/mdadm.conf. Check your distribution's documentation, or look at man mdadm.conf, to see what applies to your distribution.

To save the configuration information:

Ubuntu:

mdadm --detail --scan >> /etc/mdadm/mdadm.conf

Others (check your distribution's documentation):

mdadm --detail --scan >> /etc/mdadm.conf

Note carefully that if you do this before your array has finished initialization, you may have an inaccurate spares= clause.
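
The appended line will look roughly like this; the name and UUID are of course specific to your array (the values below are only placeholders, reusing the example UUID from elsewhere on this page):

 ARRAY /dev/md0 metadata=1.2 name=myhost:0 UUID=a26bf396:31389f83:0df1722d:f404fe4c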

In Ubuntu, if you neglect to save the RAID creation information, you will get peculiar errors when you try to assemble the RAID device (described below). There will be errors generated that the hard drive is busy, even though it seems to be unused. For example, the error might be similar to this: "mdadm: Cannot open /dev/sdd1: Device or resource busy". This happens because if there is no RAID configuration information in the mdadm.conf file, the system may create a RAID device from one disk in the array, activate it, and leave it unmounted. You can identify this problem by looking at the output of "cat /proc/mdstat". If it lists devices such as "md_d0" that are not part of your RAID setup, then first stop the extraneous device (for example: "mdadm --stop /dev/md_d0") and then try to assemble your RAID array as described below.

Create and mount filesystem

Have a look in /proc/mdstat. You should see that the array is running.

Now, you can create a filesystem, just like you would on any other device, mount it, include it in your /etc/fstab, and so on.

Common filesystem creation commands are mke2fs and mkfs.ext3. Please see options for mke2fs for an example and details.
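
A minimal sketch, assuming the array is /dev/md0 and you want an ext4 filesystem mounted at /srv/data (names are placeholders):

   mkfs.ext4 /dev/md0
   mkdir -p /srv/data
   mount /dev/md0 /srv/data

   # /etc/fstab line so it is mounted at boot
   /dev/md0   /srv/data   ext4   defaults   0 2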


Using the Array

Stopping a running RAID device is easy:

   mdadm --stop /dev/md0

Starting is a little more complex; you may think that:

   mdadm --run /dev/md0

would work - but it doesn't.

Linux raid devices don't really exist on their own; they have to be assembled each time you want to use them. Assembly is like creation insofar as it pulls together the component devices into an array.

If you earlier ran:

mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

then

mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

would work.

However, the easy way to do this if you have a nice simple setup is:

  mdadm --assemble --scan 

For complex cases (i.e. you pull in disks from other machines that you're trying to repair) this has the potential to start arrays you don't really want started. A safer mechanism is to use the uuid parameter and run:

  mdadm --scan --assemble --uuid=a26bf396:31389f83:0df1722d:f404fe4c

This will only assemble the array that you want - but it will work no matter what has happened to the device names. This is particularly cool if, for example, you add in a new SATA controller card and all of a sudden /dev/sda becomes /dev/sde!!!
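
The UUID can be read from a running array or from any member's superblock; for example (device names are placeholders):

   mdadm --detail /dev/md0 | grep -i uuid
   # or, from a component device of a stopped array
   mdadm --examine /dev/sda1 | grep -i uuid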

The Persistent Superblock (2011)

Back in "The Good Old Days" (TM), the raidtools would read your /etc/raidtab file, and then initialize the array. However, this would require that the filesystem on which /etc/raidtab resided was mounted. This was unfortunate if you want to boot on a RAID.

Also, the old approach led to complications when mounting filesystems on RAID devices. They could not be put in the /etc/fstab file as usual, but would have to be mounted from the init-scripts.

The persistent superblocks solve these problems. When an array is created with the persistent-superblock option (the default now), a special superblock is written to a location (different for different superblock versions) on all disks participating in the array. This allows the kernel to read the configuration of RAID devices directly from the disks involved, instead of reading from some configuration file that may not be available at all times.

It's not a bad idea to maintain a consistent /etc/mdadm.conf file, since you may need it for later recovery of the array, although with persistent superblocks it is not strictly required.

A persistent superblock is mandatory for auto-assembly of your RAID devices upon system boot.

NOTE: In-kernel auto-assembly (which relied on persistent 0.90 superblocks) has largely been replaced by userspace assembly with mdadm and an initramfs, so parts of this section are mainly of historical interest.

Superblock physical layouts are listed on the RAID superblock formats page.

External Metadata (2011)

MDRAID has always used its own metadata format. There are two different major formats for the MDRAID native metadata, the 0.90 and the version-1. The old 0.90 format limits the arrays to 28 components and 2 terabytes. With the latest mdadm, version 1.2 is the default.

Starting with Linux kernel v2.6.27 and mdadm v3.0, external metadata is supported. These formats have long been supported by DMRAID, and they allow booting of RAID volumes from the Option ROM, depending on the vendor.

The first format is the DDF (Disk Data Format) defined by SNIA as the "Industry Standard" RAID metadata format. When a DDF array is constructed, a container is created, within which normal RAID arrays can be created.

The second format is the Intel(R) Matrix Storage Manager metadata format. This also creates a container that is managed similarly to DDF. On some platforms (depending on the vendor), this format is supported by the option-ROM in order to allow booting. [4]


To report the RAID information from the Option ROM:

   mdadm --detail-platform
 Platform : Intel(R) Matrix Storage Manager
         Version : 8.9.0.1023
     RAID Levels : raid0 raid1 raid10 raid5
     Chunk Sizes : 4k 8k 16k 32k 64k 128k
       Max Disks : 6
     Max Volumes : 2
  I/O Controller : /sys/devices/pci0000:00/0000:00:1f.2
           Port0 : /dev/sda (3MT0585Z)
           Port1 : - non-disk device (ATAPI DVD D  DH16D4S) -
           Port2 : /dev/sdb (WD-WCANK2850263)
           Port3 : /dev/sdc (3MT005ML)
           Port4 : /dev/sdd (WD-WCANK2850441)
           Port5 : /dev/sde (WD-WCANK2852905)
           Port6 : - no device attached –

To create RAID volumes with external metadata, we must first create a container:

   mdadm --create --verbose /dev/md/imsm /dev/sd[b-e] --raid-devices 4 --metadata=imsm

In this example we created an IMSM based container for 4 RAID devices. Now we can create volumes within the container.

   mdadm --create --verbose /dev/md/vol0 /dev/md/imsm --raid-devices 4 --level 5

Of course, the --size option can be used to limit the amount of disk space used by a volume during creation, in order to create multiple volumes within the container. One important note is that the various volumes within the container MUST span the same disks, e.g. a RAID10 volume and a RAID5 volume must span the same set of disks.
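
A sketch of carving two volumes out of the same container (names and sizes are hypothetical; --size takes KiB by default, and recent mdadm also accepts M/G suffixes):

   mdadm --create --verbose /dev/md/vol0 /dev/md/imsm --raid-devices 4 --level 5 --size=500G
   mdadm --create --verbose /dev/md/vol1 /dev/md/imsm --raid-devices 4 --level 10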

Advanced Options

Chunk sizes

The chunk-size deserves an explanation. You can never write completely parallel to a set of disks. If you had two disks and wanted to write a byte, you would have to write four bits on each disk. Actually, every second bit would go to disk 0 and the others to disk 1. Hardware just doesn't support that. Instead, we choose some chunk-size, which we define as the smallest "atomic" mass of data that can be written to the devices. A write of 16 kB with a chunk size of 4 kB will cause the first and the third 4 kB chunks to be written to the first disk and the second and fourth chunks to be written to the second disk, in the RAID-0 case with two disks. Thus, for large writes, you may see lower overhead by having fairly large chunks, whereas arrays that are primarily holding small files may benefit more from a smaller chunk size.

Chunk sizes must be specified for all RAID levels, including linear mode. However, the chunk-size does not make any difference for linear mode.

For optimal performance, you should experiment with the chunk-size, as well as with the block-size of the filesystem you put on the array. For other experiments and performance charts, check out our Performance page. You can get chunk-size graphs galore.
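
The chunk size is chosen at creation time with mdadm's --chunk option (in KiB); a sketch for a two-disk RAID-0 on hypothetical partitions:

   mdadm --create --verbose /dev/md0 --level=0 --chunk=128 --raid-devices=2 /dev/sdb1 /dev/sdc1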

RAID-0

Data is written "almost" in parallel to the disks in the array. Actually, chunk-size bytes are written to each disk, serially.

If you specify a 4 kB chunk size, and write 16 kB to an array of three disks, the RAID system will write 4 kB to disks 0, 1 and 2, in parallel, then the remaining 4 kB to disk 0.

A 32 kB chunk-size is a reasonable starting point for most arrays. But the optimal value depends very much on the number of drives involved, the content of the file system you put on it, and many other factors. Experiment with it, to get the best performance.


RAID-0 with ext2

The following tip was contributed by michael@freenet-ag.de:

NOTE: this tip is no longer needed since the ext2 fs supports dedicated options: see "Options for mke2fs" below

There is more disk activity at the beginning of ext2fs block groups. On a single disk, that does not matter, but it can hurt RAID0, if all block groups happen to begin on the same disk.

Example:

With a raid using a chunk size of 4k (also called stride-size), and filesystem using a block size of 4k, each block occupies one stride. With two disks, the #disk * stride-size product (also called stripe-width) is 2*4k=8k. The default block group size is 32768 blocks, which is a multiple of the stripe-width of 2 blocks, so all block groups start on disk 0, which can easily become a hot spot, thus reducing overall performance. Unfortunately, the block group size can only be set in steps of 8 blocks (32k when using 4k blocks), which also happens to be a multiple of the stripe-width, so you can not avoid the problem by adjusting the blocks per group with the -g option of mkfs(8).

If you add a disk, the stripe-width (#disk * stride-size product) is 12k, so the first block group starts on disk 0, the second block group starts on disk 2 and the third on disk 1. The load caused by disk activity at the block group beginnings spreads over all disks.

In case you can not add a disk, try a stride size of 32k. The stripe-width (#disk * stride-size product) is then 64k. Since you can change the block group size in steps of 8 blocks (32k), using 32760 blocks per group solves the problem.

Additionally, the block group boundaries should fall on stride boundaries. The examples above get this right.
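
For the 32k-stride case above, the blocks-per-group adjustment would be passed to mke2fs with its -g option; a sketch, assuming 4 kB blocks (the stride/stripe-width options described below are usually the better way to handle this today):

   mke2fs -b 4096 -g 32760 /dev/md0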

RAID-1

For writes, the chunk-size doesn't affect the array, since all data must be written to all disks no matter what. For reads however, the chunk-size specifies how much data to read serially from the participating disks. Since all active disks in the array contain the same information, the RAID layer has complete freedom in choosing from which disk information is read - this is used by the RAID code to improve average seek times by picking the disk best suited for any given read operation.

RAID-4

When a write is done on a RAID-4 array, the parity information must be updated on the parity disk as well.

The chunk-size affects read performance in the same way as in RAID-0, since reads from RAID-4 are done in the same way.


RAID-5

On RAID-5, the chunk size has the same meaning for reads as for RAID-0. Writing on RAID-5 is a little more complicated: When a chunk is written on a RAID-5 array, the corresponding parity chunk must be updated as well. Updating a parity chunk requires either

  • The original chunk, the new chunk, and the old parity block
  • Or, all chunks (except for the parity chunk) in the stripe

The RAID code will pick the easiest way to update each parity chunk as the write progresses. Naturally, if your server has lots of memory and/or if the writes are nice and linear, updating the parity chunks will only impose the overhead of one extra write going over the bus (just like RAID-1). The parity calculation itself is extremely efficient, so while it does of course load the main CPU of the system, this impact is negligible. If the writes are small and scattered all over the array, the RAID layer will almost always need to read in all the untouched chunks from each stripe that is written to, in order to calculate the parity chunk. This will impose extra bus-overhead and latency due to extra reads.

A reasonable chunk-size for RAID-5 is 128 kB. A study showed that with 4 drives (an even number of drives might make a difference), large chunk sizes of 512-2048 kB gave superior results [5]. As always, you may want to experiment with this or check out our Performance page.

Also see the section on special options to mke2fs. This affects RAID-5 performance.


ext2, ext3, and ext4 (2011)

There are special options available when formatting RAID-4 or -5 devices with mke2fs or mkfs. The -E stride=nn,stripe-width=mm options will allow mke2fs to better place different ext2/ext3 specific data-structures in an intelligent way on the RAID device.

Note: The commands mkfs or mkfs.ext3 or mkfs.ext2 are all versions of the same command, with the same options; use whichever is supported, and decide whether you are using ext2 or ext3 (non-journaled vs journaled). See the two versions of the same command below; each makes a different filesystem type.

Note that ext3 no longer exists in the kernel - it has been subsumed into the ext4 driver, although ext3 filesystems can still be created and used.

Here is an example, with its explanation below:

   mke2fs -v -m .1 -b 4096 -E stride=32,stripe-width=64 /dev/md0
   or
   mkfs.ext3 -v -m .1 -b 4096 -E stride=32,stripe-width=64 /dev/md0
   Options explained:
     The first command makes an ext2 filesystem, the second makes an ext3 filesystem
     -v verbose
     -m .1 leave .1% of the disk to root (so it doesn't fill up and cause problems)
     -b 4096 block size of 4kb (recommended above for large-file systems)
     -E stride=32,stripe-width=64 see below calculation

Calculation

  • chunk size = 128 kB (set by the mdadm command; see the chunk-size advice above)
  • block size = 4 kB (recommended for large files, and most of the time)
  • stride = chunk / block = 128 kB / 4 kB = 32
  • stripe-width = stride * ( (n disks in raid5) - 1 ) = 32 * ( (3) - 1 ) = 32 * 2 = 64

If the chunk-size is 128 kB, it means, that 128 kB of consecutive data will reside on one disk. If we want to build an ext2 filesystem with 4 kB block-size, we realize that there will be 32 filesystem blocks in one array chunk.

stripe-width=64 is calculated by multiplying the stride=32 value with the number of data disks in the array.

A raid5 with n disks has n-1 data disks, one being reserved for parity. (Note: older versions of the mke2fs man page incorrectly stated n+1; this documentation bug has since been fixed.) A raid10 (1+0) with n disks is actually a raid 0 of n/2 raid1 subarrays with 2 disks each.
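
To make the arithmetic concrete, here is a hedged example for a hypothetical 4-disk RAID-5 with a 64 kB chunk and 4 kB blocks (the device name is a placeholder): stride = 64 / 4 = 16 and stripe-width = 16 * (4 - 1) = 48.

   mkfs.ext4 -v -m .1 -b 4096 -E stride=16,stripe-width=48 /dev/md0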

Performance

RAID-{4,5,10} performance is severely influenced by the stride and stripe-width options. It is uncertain how the stride option affects other RAID levels. If anyone has information on this, please add it here.

The ext2fs blocksize severely influences the performance of the filesystem. You should always use 4kB block size on any filesystem larger than a few hundred megabytes, unless you store a very large number of very small files on it.

Changing after creation

It is possible to change the parameters with

   tune2fs -E stride=n,stripe-width=m /dev/mdx

XFS

xfsprogs and the mkfs.xfs utility automatically select the best stripe size and stripe width for underlying devices that support it, such as Linux software RAID devices. Earlier versions of xfs used a built-in libdisk and the GET_ARRAY_INFO ioctl to gather the information; newer versions make use of enhanced geometry detection in libblkid. When using libblkid, accurate geometry may also be obtained from hardware RAID devices which properly export this information.

To create XFS filesystems optimized for RAID arrays manually, you'll need two parameters:

  • chunk size: same as used with mdadm
  • number of "data" disks: number of disks that store data, not disks used for parity or spares. For example:
    • RAID 0 with 2 disks: 2 data disks (n)
    • RAID 1 with 2 disks: 1 data disk (n/2)
    • RAID 10 with 10 disks: 5 data disks (n/2)
    • RAID 5 with 6 disks (no spares): 5 data disks (n-1)
    • RAID 6 with 6 disks (no spares): 4 data disks (n-2)

With these numbers in hand, you then want to use mkfs.xfs's su and sw parameters when creating your filesystem.

  • su: Stripe unit, which is the RAID chunk size, in bytes
  • sw: Multiplier of the stripe unit, i.e. number of data disks

If you've a 4-disk RAID 5 and are using a chunk size of 64 KiB, the command to use is:

mkfs -t xfs -d su=64k -d sw=3 /dev/md0

Alternately, you may use the sunit/swidth mkfs options to specify stripe unit and width in 512-byte-block units. For the array above, it could also be specified as:

mkfs -t xfs -d sunit=128 -d swidth=384 /dev/md0

The result is exactly the same; however, the su/sw combination is often simpler to remember. Beware that sunit/swidth are inconsistently used throughout XFS' utilities (see xfs_info below).

To check the parameters in use for an XFS filesystem, use xfs_info.

xfs_info /dev/md0
meta-data=/dev/md0               isize=256    agcount=32, agsize=45785440 blks
         =                       sectsz=4096  attr=2
data     =                       bsize=4096   blocks=1465133952, imaxpct=5
         =                       sunit=16     swidth=48 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=521728, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=0
realtime =none                   extsz=196608 blocks=0, rtextents=0

Here, rather than displaying 512-byte units as used in mkfs.xfs, sunit and swidth are shown as multiples of the filesystem block size (bsize), another file system tunable. This inconsistency is for legacy reasons, and is not well-documented.

For the above example, sunit (sunit×bsize = su, 16×4096 = 64 KiB) and swidth (swidth×bsize = sw, 48×4096 = 192 KiB) are optimal and correctly reported.

While the stripe unit and stripe width cannot be changed after an XFS file system has been created, they can be overridden at mount time with the sunit/swidth options, similar to ones used by mkfs.xfs.

From Documentation/filesystems/xfs.txt in the kernel tree:

 sunit=value and swidth=value
       Used to specify the stripe unit and width for a RAID device or
       a stripe volume.  "value" must be specified in 512-byte block
       units.
       If this option is not specified and the filesystem was made on
       a stripe volume or the stripe width or unit were specified for
       the RAID device at mkfs time, then the mount system call will
       restore the value from the superblock.  For filesystems that
       are made directly on RAID devices, these options can be used
       to override the information in the superblock if the underlying
       disk layout changes after the filesystem has been created.
       The "swidth" option is required if the "sunit" option has been
       specified, and must be a multiple of the "sunit" value.
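
For example, the 4-disk RAID-5 array above (sunit=128 and swidth=384 in 512-byte units) could be mounted with explicit values; the mount point is a placeholder:

mount -o sunit=128,swidth=384 /dev/md0 /mnt/data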

Source: Samat Says: Tuning XFS for RAID


Detecting, querying and testing

This section is about life with a software RAID system: communicating with the arrays and tinkering with them.

Note that when manipulating md devices, you should always remember that you are working with entire filesystems. So, although there may be some redundancy keeping your files alive, you must proceed with caution.

Detecting a drive failure

Firstly: mdadm has an excellent 'monitor' mode which will send an email when a problem is detected in any array (more about that later).

Of course the standard log and stat files will record more details about a drive failure.

Whatever has gone wrong, you can rely on /var/log/messages to fill screens with error messages. When a disk crashes, however, huge numbers of kernel errors are reported. Some nasty examples, for the masochists,

    kernel: scsi0 channel 0 : resetting for second half of retries.
    kernel: SCSI bus is being reset for host 0 channel 0.
    kernel: scsi0: Sending Bus Device Reset CCB #2666 to Target 0
    kernel: scsi0: Bus Device Reset CCB #2666 to Target 0 Completed
    kernel: scsi : aborting command due to timeout : pid 2649, scsi0, channel 0, id 0, lun 0 Write (6) 18 33 11 24 00
    kernel: scsi0: Aborting CCB #2669 to Target 0
    kernel: SCSI host 0 channel 0 reset (pid 2644) timed out - trying harder
    kernel: SCSI bus is being reset for host 0 channel 0.
    kernel: scsi0: CCB #2669 to Target 0 Aborted
    kernel: scsi0: Resetting BusLogic BT-958 due to Target 0
    kernel: scsi0: *** BusLogic BT-958 Initialized Successfully ***

Most often, disk failures look like these,

    kernel: scsidisk I/O error: dev 08:01, sector 1590410
    kernel: SCSI disk error : host 0 channel 0 id 0 lun 0 return code = 28000002

or these

    kernel: hde: read_intr: error=0x10 { SectorIdNotFound }, CHS=31563/14/35, sector=0
    kernel: hde: read_intr: status=0x59 { DriveReady SeekComplete DataRequest Error }


And, as expected, the classic /proc/mdstat look will also reveal problems,

    Personalities : [linear] [raid0] [raid1] [translucent]
    read_ahead not set
    md7 : active raid1 sdc9[0] sdd5[8] 32000 blocks [2/1] [U_]


Later in this section we will learn how to monitor RAID with mdadm so we can receive alerts about disk failures. Now it's time to learn more about interpreting /proc/mdstat.

Querying the array status

You can always take a look at the array status with cat /proc/mdstat. It won't hurt. Take a look at the /proc/mdstat page to learn how to read the file.

Finally, remember that you can also use mdadm to check the arrays out.

         mdadm --detail /dev/mdx

This command will show spare and failed disks loud and clear.
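
A one-line summary of every running array, in a format suitable for mdadm.conf, can also be obtained with:

         mdadm --detail --scan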

Simulating a drive failure

If you plan to use RAID to get fault-tolerance, you may also want to test your setup, to see if it really works. Now, how does one simulate a disk failure?

The short story is that you can't, except perhaps by putting a fire axe through the drive you want to "simulate" the fault on. You can never know what will happen if a drive dies. It may electrically take the bus it is attached to with it, rendering all drives on that bus inaccessible. The drive may also just report a read/write fault to the SCSI/IDE/SATA layer, which, if done properly, in turn makes the RAID layer handle this situation gracefully. This is fortunately the way things often go.

Remember, that you must be running RAID-{1,4,5,6,10} for your array to be able to survive a disk failure. Linear- or RAID-0 will fail completely when a device is missing.

Force-fail by hardware

If you want to simulate a drive failure, you can just unplug the drive. If your hardware does not support disk hot-unplugging, you should do this with the power off (if you are interested in testing whether your data can survive with one disk fewer than the usual number, there is no point in being a hot-plug cowboy here; take the system down, unplug the disk, and boot it up again).

Look in the syslog, and look at /proc/mdstat to see how the RAID is doing. Did it work? Did you get an email from the mdadm monitor?

Faulty disks should appear marked with an (F) if you look at /proc/mdstat. Also, users of mdadm should see the device state as faulty.

When you have re-connected the disk (with the power off, of course, remember), you can add the "new" device to the RAID again with the mdadm --add command.
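
For example (array and device names are placeholders):

     mdadm /dev/md0 --add /dev/sdb1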

Force-fail by software

You can also simulate a drive failure without unplugging anything. Just running the command

     mdadm --manage --set-faulty /dev/md1 /dev/sdc2

should be enough to fail the disk /dev/sdc2 of the array /dev/md1.
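
The equivalent short form, which does the same thing, is:

     mdadm /dev/md1 --fail /dev/sdc2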


Now the fun begins. You should see something like the first line below in your system's log; something like the second line will appear if you have spare disks configured.

     kernel: raid1: Disk failure on sdc2, disabling device.
     kernel: md1: resyncing spare disk sdb7 to replace failed disk


Checking /proc/mdstat out will show the degraded array. If there was a spare disk available, reconstruction should have started.

Another useful command at this point is:

     mdadm --detail /dev/md1

Enjoy the view.

Now you've seen how it goes when a device fails. Let's fix things up.

First, we will remove the failed disk from the array. Run the command

     mdadm /dev/md1 -r /dev/sdc2

Note that mdadm cannot pull a working disk out of a running array. For obvious reasons, only faulty disks can be hot-removed from an array (even stopping and unmounting the device won't help - if you ever want to remove a 'good' disk, you have to tell the array to put it into the 'failed' state as above).

Now we have a /dev/md1 which has just lost a device. This could be a degraded RAID or perhaps a system in the middle of a reconstruction process. We wait until recovery ends before setting things back to normal.

So the trip ends when we send /dev/sdc2 back home.

     mdadm /dev/md1 -a /dev/sdc2


As the prodigal son returns to the array, we'll see it becoming an active member of /dev/md1 if necessary. If not, it will be marked as a spare disk. That's management made easy.

Simulating data corruption

RAID (be it hardware or software), assumes that if a write to a disk doesn't return an error, then the write was successful. Therefore, if your disk corrupts data without returning an error, your data will become corrupted. This is of course very unlikely to happen, but it is possible, and it would result in a corrupt filesystem.

RAID cannot, and is not supposed to, guard against data corruption on the media. Therefore, it doesn't make any sense either, to purposely corrupt data (using dd for example) on a disk to see how the RAID system will handle that. It is most likely (unless you corrupt the RAID superblock) that the RAID layer will never find out about the corruption, but your filesystem on the RAID device will be corrupted.

This is the way things are supposed to work. RAID is not a guarantee for data integrity, it just allows you to keep your data if a disk dies (that is, with RAID levels above or equal one, of course).

Monitoring RAID arrays

You can run mdadm as a daemon by using the follow-monitor mode. If needed, that will make mdadm send email alerts to the system administrator when arrays encounter errors or fail. Also, follow mode can be used to trigger contingency commands if a disk fails, like giving a second chance to a failed disk by removing and reinserting it, so a non-fatal failure could be automatically solved.

Let's see a basic example. Running

    mdadm --monitor --daemonise --mail=root@localhost --delay=1800 /dev/md2

should start an mdadm daemon to monitor /dev/md2. The --daemonise switch tells mdadm to run as a daemon. The --delay parameter means that polling will be done at intervals of 1800 seconds. Finally, critical events and fatal errors will be e-mailed to the system manager. That's RAID monitoring made easy.

Finally, the --program or --alert parameters specify the program to be run whenever an event is detected.
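
As a hedged sketch (the script path is hypothetical; --scan makes mdadm watch every array listed in /etc/mdadm.conf), the program is called with the event name, the md device and, when relevant, a component device as arguments:

    mdadm --monitor --scan --daemonise --delay=1800 --program=/usr/local/bin/raid-event.sh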

Note that the mdadm daemon will never exit once it decides that there are arrays to monitor; the -f (--daemonise) switch makes it fork and keep running in the background. Remember that you are running a daemon, not a shell command. If mdadm is run in monitor mode without the -f switch, it will behave like a normal shell command and wait for you to stop it.

Using mdadm to monitor a RAID array is simple and effective. However, there are fundamental problems with that kind of monitoring - what happens, for example, if the mdadm daemon stops? In order to overcome this problem, one should look towards "real" monitoring solutions. There are a number of free software, open source, and even commercial solutions available which can be used for Software RAID monitoring on Linux. A search on FreshMeat should return a good number of matches.


Tweaking, tuning and troubleshooting

Autodetection

In-kernel autodetection was a way to allow the RAID devices to be automatically recognized by the kernel at boot-time, right after the ordinary partition detection is done. Modern kernels no longer autodetect RAID arrays, so in order to boot off an array with version 1.2 metadata you must use an initramfs to assemble the array.

It is possible to boot off a raid array without an initramfs, but the following conditions must be met:

  1. You must use metadata version 0.90 or 1.0, which is stored at the end of the array
  2. The array must be raid-1 - a mirror
  3. The kernel will not realise it is an array, so boot the partition read-only, then remount / as the mirror read-write once the array has started.

Booting on RAID

LILO and (legacy) Grub 1

Pretty much all modern linux systems use Grub 2. Your install program should set it up correctly, but if you have to set it up manually, make sure that the raid driver is loaded. Also make sure when linux is loaded that the domdadm option is passed. An example boot entry is

menuentry 'Gentoo GNU/Linux, with Linux 4.4.6-gentoo' --class gentoo --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.4.6-gentoo-advanced-ab538350-d249-413b-86ef-4bd5280600b8' {
       load_video
       insmod gzio
       insmod part_gpt
       insmod diskfilter
       insmod mdraid1x
       insmod ext2
       set root='mduuid/69270eaca840f6e70199064bd5863c5d'
       if [ x$feature_platform_search_hint = xy ]; then
         search --no-floppy --fs-uuid --set=root --hint='mduuid/69270eaca840f6e70199064bd5863c5d'  ab538350-d249-413b-86ef-4bd5280600b8
       else
         search --no-floppy --fs-uuid --set=root ab538350-d249-413b-86ef-4bd5280600b8
       fi
       echo    'Loading Linux 4.4.6-gentoo ...'
       linux   /boot/vmlinuz-4.4.6-gentoo root=UUID=ab538350-d249-413b-86ef-4bd5280600b8 ro  domdadm
       echo    'Loading initial ramdisk ...'
       initrd  /boot/initramfs-genkernel-x86_64-4.4.6-gentoo
}


Converting the root filesystem to RAID

The time-honoured way of coping with this sort of thing is to have a small /boot partition at the start of the drive. This, however, means that your boot details are not protected by raid, unless you go to the trouble of manually copying them every time they change, or you mess about with old metadata formats.

And if you're doing a new install, most modern distros will set up raid for you. The ones that won't, and expect you to do your own disk setup, will still come with raid support enabled so you can create a raid device before installing.

Method 2016

This method assumes you are adding a new drive, and will set up a degraded array before converting it to a full working array. It's easier if you're adding two drives and can set up a fully working array. Note that, by default, a system will not boot from an array that has become degraded. [TODO: document how to make it boot. Hopefully it will boot from an array that has been set up in degraded mode]

  • First, make sure your kernel has raid compiled in, and that mdadm is installed. If you're not using grub2, upgrade now.
  • Add the new disk. If it's the same size as the original, and you plan to mirror everything, then make sure you can afford to lose a little disk space. Install grub2 on that drive, too.
  • Plan and create your new partitioning scheme. It doesn't have to be the same as on the old disk, but if the new disk is larger and you use the extra space, you will not be able to raid everything.
  • Create your arrays using your new disk. Use a command similar to the following - note the use of a named array - "root" - and the word "missing" which tells the create command to create a mirror with just one active device.

mdadm --create /dev/md/root --level raid1 --raid-disks 2 missing /dev/sdb1
  • Create file systems on your new arrays with a command like the following:
mkfs.ext4 /dev/md/root
  • Mount your new file system and copy the contents of your root file system to the new filesystem
mount /dev/md/root /mnt/newroot
cp -ax / /mnt/newroot

Note the -ax options, which copy everything including permissions and links but do not descend into other mount points.

  • Run grub2-mkconfig, check that everything is okay, that it has detected /mnt/newroot as a boot partition, and that when booting from it, it loads the raid driver and passes the domdadm option to linux. Add this configuration to grub (making sure it's in both /boot/grub and /mnt/newroot/boot/grub), and reboot to boot on the raid drive.

  • make sure that grub is reading its configuration at boot time from your new drive [TODO: How?]
  • Copy the data from all the old partitions to the new ones the same way as with root, and update fstab or whatever. Reboot the system to make sure you have a working system all on the new drive.
  • If the new partitions are the same size as the old ones, you can now add them in and let the mirrors rebuild.
mdadm /dev/md/root --add /dev/sda1

It would be wise to do just root first, and then reboot to make sure we have not messed up grub. If grub was not successfully switched to the new drive, it will not be able to find its configuration once the old drive is gone.

Method 1 (2011)

This method assumes you have a spare disk you can install the system on, which is not part of the RAID you will be configuring.

  • First, install a normal system on your extra disk.
  • Get the kernel you plan on running, get the raid-patches and the tools, and make your system boot with this new RAID-aware kernel. Make sure that RAID-support is in the kernel, and is not loaded as modules.
  • Ok, now you should configure and create the RAID you plan to use for the root filesystem. This is standard procedure, as described elsewhere in this document.
  • Just to make sure everything's fine, try rebooting the system to see if the new RAID comes up on boot. It should.
  • Put a filesystem on the new array (using mke2fs), and mount it under /mnt/newroot
  • Now, copy the contents of your current root-filesystem (the spare disk) to the new root-filesystem (the array). There are lots of ways to do this, one of them is
     cd /
     find . -xdev | cpio -pm /mnt/newroot

another way to copy everything from / to /mnt/newroot could be

   cp -ax / /mnt/newroot
  • You should modify the /mnt/newroot/etc/fstab file to use the correct device (the /dev/md? root device) for the root filesystem.
  • Now, unmount the current /boot filesystem, and mount the boot device on /mnt/newroot/boot instead. This is required for LILO to run successfully in the next step.
  • Update /mnt/newroot/etc/lilo.conf to point to the right devices. The boot device must still be a regular disk (non-RAID device), but the root device should point to your new RAID. When done, run
     lilo -r /mnt/newroot

complete with no errors.

  • Reboot the system, and watch everything come up as expected  :)

If you're doing this with IDE disks, be sure to tell your BIOS that all disks are "auto-detect" types, so that the BIOS will allow your machine to boot even when a disk is missing.

Method 2 (2011)

This method requires that your kernel and raidtools understand the failed-disk directive in the /etc/raidtab file - if you are working on a really old system this may not be the case, and you will need to upgrade your tools and/or kernel first.

You can only use this method on RAID levels 1 and above, as the method uses an array in "degraded mode" which in turn is only possible if the RAID level has redundancy. The idea is to install a system on a disk which is purposely marked as failed in the RAID, then copy the system to the RAID which will be running in degraded mode, and finally making the RAID use the no-longer needed "install-disk", zapping the old installation but making the RAID run in non-degraded mode.

  • First, install a normal system on one disk (that will later become part of your RAID). It is important that this disk (or partition) is not the smallest one. If it is, it will not be possible to add it to the RAID later on!
  • Then, get the kernel, the patches, the tools etc. etc. You know the drill. Make your system boot with a new kernel that has the RAID support you need, compiled into the kernel.
  • Now, set up the RAID with your current root-device as the failed-disk in the /etc/raidtab file. Don't put the failed-disk as the first disk in the raidtab, that will give you problems with starting the RAID. Create the RAID, and put a filesystem on it. If using mdadm, you can create a degraded array just by running something like
mdadm -C /dev/md0 --level raid1 --raid-disks 2 missing /dev/hdc1

note the missing parameter.

  • Try rebooting and see if the RAID comes up as it should
  • Copy the system files, and reconfigure the system to use the RAID as root-device, as described in the previous section.
  • When your system successfully boots from the RAID, you can modify the /etc/mdadm.conf file to include the previously failed-disk as a normal raid-disk. Now use mdadm /dev/md0 --add /dev/hd?? to add the disk to your RAID.
  • You should now have a system that can boot from a non-degraded RAID.

Making the system boot on RAID (2011)

For the kernel to be able to mount the root filesystem, all support for the device on which the root filesystem resides, must be present in the kernel. Therefore, in order to mount the root filesystem on a RAID device, the kernel must have RAID support.

The normal way of ensuring that the kernel can see the RAID device is to simply compile a kernel with all necessary RAID support compiled in. Make sure that you compile the RAID support into the kernel, and not as loadable modules. The kernel cannot load a module (from the root filesystem) before the root filesystem is mounted.

However, since RedHat-6.0 ships with a kernel that has new-style RAID support as modules, I here describe how one can use the standard RedHat-6.0 kernel and still have the system boot on RAID.


Booting with RAID as module

You will have to instruct LILO to use a RAM-disk in order to achieve this. Use the mkinitrd command to create a ramdisk containing all kernel modules needed to mount the root partition. This can be done as:

  mkinitrd --with=<module> <ramdisk name> <kernel>

For example:

  mkinitrd --preload raid5 --with=raid5 raid-ramdisk 2.2.5-22

This will ensure that the specified RAID module is present at boot- time, for the kernel to use when mounting the root device.

Modular RAID on Debian GNU/Linux after move to RAID

Debian users may encounter problems using an initrd to mount their root filesystem from RAID, if they have migrated a standard non-RAID Debian install to root on RAID.

If your system fails to mount the root filesystem on boot (you will see this in a "kernel panic" message), then the problem may be that the initrd filesystem does not have the necessary support to mount the root filesystem from RAID.

Debian seems to produce its initrd.img files on the assumption that the root filesystem to be mounted is the current one. This will usually result in a kernel panic if the root filesystem is moved to the raid device and you attempt to boot from that device using the same initrd image. The solution is to use the mkinitrd command but specifying the proposed new root filesystem. For example, the following commands should create and set up the new initrd on a Debian system:

 % mkinitrd -r /dev/md0 -o /boot/initrd.img-2.4.22raid
 % mv /initrd.img /initrd.img-nonraid
 % ln -s /boot/initrd.img-2.4.22raid /initrd.img


Converting a non-RAID RedHat System to run on Software RAID (2011)

This section was written and contributed by Mark Price, IBM. The text has undergone minor changes since his original work.

Notice: the following information is provided "AS IS" with no representation or warranty of any kind either express or implied. You may use it freely at your own risk, and no one else will be liable for any damages arising out of such usage.

Converting a non-RAID RedHat System to run on Software RAID

Sharing spare disks between different arrays

When running mdadm in the follow/monitor mode you can make different arrays share spare disks. That will surely make you save storage space without losing the comfort of fallback disks.

In the world of software RAID, this was a brand new, never-seen-before feature: to keep a whole group of arrays covered by fallback storage, you only have to provide a single idle disk for the entire bunch of arrays.

With mdadm running as a daemon, you have an agent polling the arrays at regular intervals. When a disk fails on an array without a spare disk, mdadm removes an available spare disk from another array in the same spare-group and inserts it into the array with the failed disk. The reconstruction process then begins in the degraded array as usual.

To declare shared spare disks, just give the arrays the same spare-group name in /etc/mdadm.conf and run mdadm in monitor mode.
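
A minimal sketch of what this looks like in /etc/mdadm.conf (the UUIDs below are placeholders); arrays that share the same spare-group name can lend each other their spare disks while mdadm --monitor is running:

    ARRAY /dev/md0 UUID=00000000:00000000:00000000:00000001 spare-group=shared
    ARRAY /dev/md1 UUID=00000000:00000000:00000000:00000002 spare-group=shared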

Pitfalls (2011)

Never NEVER never re-partition disks that are part of a running RAID. If you must alter the partition table on a disk which is a part of a RAID, stop the array first, then repartition.
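
With mdadm (rather than the old raidtools), stopping an array before repartitioning looks like this; the device name is a placeholder, and any filesystems on it must be unmounted first:

    umount /dev/md0
    mdadm --stop /dev/md0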

It is easy to put too many disks on a bus. A normal Fast-Wide SCSI bus can sustain 10 MB/s, which is less than many single disks can do today. Putting six such disks on the bus will of course not give you the expected performance boost. It is becoming equally easy to saturate the PCI bus - remember, a normal 32-bit 33 MHz PCI bus has a theoretical maximum bandwidth of around 133 MB/sec; considering command overhead etc. you will see a somewhat lower real-world transfer rate. Some disks today have a throughput in excess of 30 MB/sec, so just four of those disks will actually max out your PCI bus! When designing high-performance RAID systems, be sure to take the whole I/O path into consideration - there are boards with more PCI busses, with 64-bit and 66 MHz busses, and with PCI-X.

More SCSI controllers will only give you extra performance if the SCSI busses are nearly maxed out by the disks on them. You will not see a performance improvement from using two 2940s with two old SCSI disks, instead of just running the two disks on one controller.

If you forget the persistent-superblock option, your array may not start up willingly after it has been stopped. Just re-create the array with the option set correctly in the raidtab. Please note that this will destroy the information on the array!

If a RAID-5 fails to reconstruct after a disk was removed and re- inserted, this may be because of the ordering of the devices in the raidtab. Try moving the first "device ..." and "raid-disk ..." pair to the bottom of the array description in the raidtab file.

Reconstruction

If you have read the rest of this HOWTO, you should already have a pretty good idea about what reconstruction of a degraded RAID involves. Let us summarize:

  • Power down the system
  • Replace the failed disk
  • Power up the system once again.
  • Use mdadm /dev/mdX --add /dev/sdX (the old raidtools command was raidhotadd) to re-insert the disk in the array
  • Have coffee while you watch the automatic reconstruction running

If your system runs critical services (web server, FTP server, banking, health care, etc.), you will not want to shut down the system. In this scenario it is best to use hot-pluggable disks; check whether your chipset supports this functionality (or take the risk).

A good idea is to check this functionality prior to commissioning the critical system. If the chipset of your system does not support hot-plugging, the system could hang, or at worst you could fry something.

The SATA interface was designed with hotplug in mind; unfortunately not all chipsets support it. The PATA interface was NOT designed for hotplug. If the chipset was not designed for hot-plugging, there is no way you can force the operating system to detect the swap reliably (SATA or PATA).

  • Remove all usage of the failed disk
    • mdadm --manage /dev/mdX --remove /dev/sdX
    • umount /dev/sdX*
  • (FIRST) Remove the data cable of the failed disk
  • (SECOND) Remove the power cable of the failed disk
  • Force system to re-scan
    • echo "- - -" > /sys/class/scsi_host/hostX/scan # For all "X"
    • tail -f /var/log/syslog # is a good idea
  • Replace the failed disk
  • (FIRST) Connect the power cable of the new disk (and wait some seconds)
  • (SECOND) Connect the data cable of the new disk
  • Force system to re-scan
    • echo "- - -" > /sys/class/scsi_host/hostX/scan # For all "X"
    • tail -f /var/log/syslog # is a good idea

In some cases, the "good" disk does not have a boot block, as might happen if the degraded disk is the "first" one, e.g. the hda or sda device. In this case you might not be able to boot the system. Try to reconstruct the MBR with the boot loader of choice. The installation disk of your linux distro might have a rescue mode and assist you in this task. There is also a bootable tool available called "super grub disk" (http://www.supergrubdisk.org/) which boots stray linux installations in seconds.
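
With GRUB 2, for example, reinstalling the boot loader onto the remaining disk from a rescue environment is typically just (the device name is a placeholder; some distributions name the command grub2-install):

    grub-install /dev/sda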

And that's it.

Well, it usually is, unless you're unlucky and your RAID has been rendered unusable because more disks failed than the array has redundancy for. This can actually happen if a number of disks reside on the same bus, and one disk takes the bus with it as it crashes. The other disks, however fine, will be unreachable to the RAID layer, because the bus is down, and they will be marked as faulty. On a RAID-5, where you can only afford to lose one disk, losing two or more disks can be fatal.

See also Recovering a failed software RAID.

Recovery from a multiple disk failure

This section is the explanation that Martin Bene gave to me, and describes a possible recovery from the scary scenario outlined above. It involves using the failed-disk directive in your /etc/raidtab (so for people running patched 2.2 kernels, this will only work on kernels 2.2.10 and later).


The scenario is:

  • A controller dies and takes two disks offline at the same time,
  • All disks on one scsi bus can no longer be reached if a disk dies,
  • A cable comes loose...

In short: quite often you get a temporary failure of several disks at once; afterwards the RAID superblocks are out of sync and you can no longer init your RAID array.

If using mdadm, you could first try to run:

   mdadm --assemble --force
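
You will normally have to name the array and its member devices explicitly; the devices below are placeholders:

   mdadm --assemble --force /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1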

If that does not work, there is one thing left: rewrite the RAID superblocks with mkraid --force

To get this to work, you'll need to have an up-to-date /etc/raidtab - if it doesn't EXACTLY match the devices and ordering of the original disks, this will not work as expected, but will most likely completely obliterate whatever data you used to have on your disks.

Look at the syslog produced by trying to start the array; you'll see the event count for each superblock. Usually it's best to leave out the disk with the lowest event count, i.e. the oldest one.

If you mkraid without failed-disk, the recovery thread will kick in immediately and start rebuilding the parity blocks - not necessarily what you want at that moment.

With failed-disk you can specify exactly which disks you want to be active and perhaps try different combinations for best results. BTW, only mount the filesystem read-only while trying this out... This has been successfully used by at least two guys I've been in contact with.

recovery and resync

The following is a recollection of what Neil Brown and others have written on the linux-raid mailing list.

"resync" and "recovery" are handled very differently in raid10. "check" and "repair" are special cases of "resync".

recovery

The purpose of the recovery process is to fill a new disk with the relevant information from a running array.

The assumption is that all data on the new disk needs to be written, and that the other data on the running array is correct.

"recovery" walks addresses from the start to the end of the component drives. Thus only data for the specific component drive is adressed.

At each address, it considers each drive which is being recovered and finds a place on a different device to read the block for the current (drive,address) from. It schedules a read and when the read request completes it schedules the write.

On an f2 layout, this will read one drive from halfway to the end, then from the start to halfway, and will write the other drive sequentially.

resync

The purpose of resync is to ensure that all data on the array is synchronized.

There is an assumption that most, if not all, of the data is already OK.

For raid10 "resync" walks the addresses from the start to end of the array. (For all other raid types "resync" follows the component drives).

At each address it reads every device block which stores that array block. When all the reads complete the results are compared. If they are not all the same, the "first" block is written out to the others.

Here "first" means (I think) the block with the earliest device address, and if there are several of those, the block with the least device index.

So for f2, this will read from both the start and the middle of both devices. It will read 64K (the chunk size) at a time, so you should get at least a 32K read at each position before a seek (more with a larger chunk size).

Clearly this won't be fast.

The reason this algorithm was chosen was that it makes sense for every possible raid10 layout, even though it might not be optimal for some of them.

Were I to try to make it fast for f2, I would probably shuffle the bits in each request so that it did all the 'odd' chunks first, then all the even chunks. e.g. map

 0 1 2 3 4 5 6 7 8 ...

to

 0 1 4 5 8 9 .....  2 3 6 7 10 11 ....

(assuming a chunk size of '2').

The problem with this is that if you shutdown while part way though a resync, and then boot into a kernel which used a different sequence, it would finish the resync checking the wrong blocks. This is annoying but should not be insurmountable.

This way we leave the basic algorithm the same, but introduce variations in the sequence for different specific layouts.

Another idea would be to read a number of chunks from one part of the f2 mirror, say 10 MB, and then read the corresponding 10 MB from the other half of the f2 array. With current disk technology (80 MB/s) this would mean 125 ms spent reading, and then 8 ms spent moving heads.

raid1 does resync simply by reading one device and writing all the others, and this is conceptually easiest.

When repairing, there is no "good" block - if they are different, then all are wrong. md/raid just tries to return a consistent value, and leave it up to the filesystem to find and correct any errors. md/raid does not try to take advantage of information on failed CRC on disk hardware, should that info be available to the kernel.

If any inconsistency is found during a resync of raid4/5/6 the parity blocks are changed to remove the inconsistency. This may not be "right", but it is least likely to be "wrong".



Performance of raids with 2 disks

I have done some performance testing of different RAID types with 2 disks involved. I used my own home-grown testing methods, which are quite simple: sequential and random reading and writing of 200 files of 40 MB each. The tests were meant to see what performance I could get out of a system mostly oriented towards file serving, such as a mirror site.

My configuration was

   1800 MHz AMD Sempron(tm) Processor 3100+
   1500 MB RAM
   nVidia Corporation CK804 Serial ATA Controller
   2 x  Hitachi Ultrastar A7K1000 SATA-II 1 TB.
   Linux version 2.6.12-26mdk
   Tester: Keld Simonsen, keld@dkuug.dk

Figures are in MB/s, and the file system was ext3. The chunk size was 256 kiB. Times were measured with iostat, and an estimate for steady performance was taken. The times varied quite a lot over the different 10 second intervals, for example the estimate 155 MB/s ranged from 135 MB/s to 163 MB/s. I then looked at the average over the period when a test was running in full scale (for example all processes started, and none stopped).

   RAID type      sequential read     random read    sequential write   random write
   Ordinary disk       82                 34                 67                56
   RAID0              155                 80                 97                80
   RAID1               80                 35                 72                55
   RAID10,n2           79                 56                 69                48
   RAID10,f2          150                 79                 70                55

Random read for RAID1 and RAID10,n2 were quite unbalanced, almost only coming out of one of the disks.

The results are quite as expected:

RAID0 and RAID10,f2 reads are double speed compared to ordinary file system for sequential reads (155 vs 82) and more than double for random reads (80 vs 35).

Writes (both sequential and random) are roughly the same for ordinary disk, RAID1, RAID10 and RAID10,f2, around 70 MB/s for sequential, and 55 MB/s for random.

Sequential reads are about the same (80 MB/s) for ordinary partition, RAID1 and RAID10.

Random reads for ordinary partition and RAID1 is about the same (35 MB/s) and about 50 % higher for RAID10. I am puzzled why RAID10 is faster than RAID1 here.

All in all RAID10,f2 is the fastest mirrored RAID for both sequential and random reading for this test, while it is about equal with the other mirrored RAIDs when writing.

My kernel did not allow me to test RAID10,o2 as this is only supported from kernel 2.6.18.

New benchmarks from 2011

Remark from keld: The tests reported by Mathias B below were actually carried out in an environment with almost 100 % CPU utilization, so I am not sure how enlightening the numbers are.

Mathias B posted some benchmarks to the mailing list with this setup:

   Motherboard: Zotac ION Synergy DDR2 (Atom 330 overclocked to 2GHz, 667 FSB)
   RAM: 4GB DDR2 PC5300
   SATA controller: 05:00.0 SCSI storage controller: HighPoint Technologies, Inc. RocketRAID 230x 4 Port SATA-II             Controller (rev 02)
   SATA controller: nVidia Corporation MCP79 AHCI Controller (rev b1)
   Hard drives:
   Model=WDC WD20EARS-00MVWB0, FwRev=51.0AB51
   Model=WDC WD20EARS-00MVWB0, FwRev=50.0AB50
   Model=WDC WD20EARS-00MVWB0, FwRev=50.0AB50
   Model=SAMSUNG HD204UI, FwRev=1AQ10003
   Model=WDC WD20EARS-00MVWB0, FwRev=51.0AB51
   Model=SAMSUNG HD204UI, FwRev=1AQ10003

3 of these are connected to the PCI-E (1.0) SATA HBA and are thereby bottlenecked. The OS is Archlinux 64-bit, kernel 2.6.37.1 & mdadm 3.1.4.

md details:

   /dev/md0:
   Version : 1.2
   Creation Time : Tue Oct 19 08:58:41 2010
   Raid Level : raid5
   Array Size : 9751756800 (9300.00 GiB 9985.80 GB)
   Used Dev Size : 1950351360 (1860.00 GiB 1997.16 GB)
   Raid Devices : 6
   [...]
   Layout : left-symmetric
   Chunk Size : 64K

ext4 details:

   RAID stride:              16
   RAID stripe width:        80
   rw,noatime,barrier=1,stripe=80,data=writeback

block details:

   md0 block readahead 65536
   lvm lv block readahead 16384
   /sys/block/md0/md/stripe_cache_size 16384
   /sys/block/md0/queue/read_ahead_kb 65536

The 6 HDDs are in a RAID5 array, with LVM on top and then ext4 on top of that. This is a very low end system so many results are bottlenecked by the CPU. NCQ enabled on all drives. Here are some bonnie++ results:

   		------Sequential Output------			--Sequential Input-	    --Random-		------Sequential Create------			--------Random Create--------
   		-Per Chr-	--Block--	-Rewrite-	-Per Chr-	--Block--   --Seeks--		-Create--	--Read---	-Delete--	-Create--	--Read---	-Delete--
   Machine Size	K/sec	%CP	K/sec	%CP	K/sec	%CP	K/sec	%CP	K/sec	%CP   /sec	%CP	/sec	%CP	/sec	%CP	/sec	%CP	/sec	%CP	/sec	%CP	/sec	%CP
   ion 7G	16086	90	161172	78	69515	34	17887	97	258229   43   424	2	25336	98	+++++	+++	31240	99	26047	99	+++++	+++	31072	99
   ion 7G	17434	98	184348	89	147077	73	18282	99	382536   51   465.3	2	25401	99	+++++	+++	28185	90	26125	99	+++++	+++	31352	100
   ion 7G	17303	98	186702	90	142494	70	18310	99	356491   49   467.7	2	25447	99	+++++	+++	28446	90	25677	99	+++++	+++	31305	99
   ion 7G	17322	98	171309	89	146962	74	18348	99	365177   51   456.9	2	20419	80	+++++	+++	31455	100	25966	99	+++++	+++	31096	99
   ion 7G	17359	98	184704	91	123822	57	18282	99	375295   49   463.2	2	24908	98	+++++	+++	31465	99	25969	99	+++++	+++	31285	99
   ion 7G	17310	98	182821	90	124661	58	18347	99	385963   52   459.5	2	24710	98	+++++	+++	31840	99	26162	98	+++++	+++	31502	99

They vary a bit because I played with readahead and cache settings; ultimately I ended up with the settings posted above the results.

Other benchmarks from 2007-2010

Durval Menezes repeated the above benchmark in Oct 2008 with a different configuration (3 500GB SATA disks) and a newer kernel (2.6.24), and also included the RAID10,o2 mode, reaching very similar conclusions (nothing beats RAID10,f2 overall).

Nat Makarevitch made an extensive benchmark for database with 6 and 10 spindles.

Justin Piszcz made a comparison in March 2008 with bonnie++ test of raid10,f2 n2 o2 raid5 of 10 Raptor drives, in a vanilla and an optimized version.

Conway S. Smith made a bonnie++ comparison in March 2008 raid5 and raid6 with 4 drives and varying Chunk Sizes.

Justin Piszcz made a comparison in May 2008 with bonnie++ test of raid levels 0 1 4 5 6 10,f2 10,n2 10,o2 of 6 SATA drives.

In Dec 2007 Jon Nelson made a test of raid levels 0, 5 and 10 f2 n2 o2 for sequential read and write, also in degraded mode, on 3 SATA drives.

Bill Davidsen reported Feb 2008: This is what I measure running an E6600 CPU and 3xSeagate 320 with Recent FC7 kernel. All reads and writes to the raw array using dd, 1MB buffer, 1GB i/o to/from /dev/{zero,null} for raw speed. Units are MB/s, 64k chunks, speed as reported by dd.

RAID lvl        read        write
0               110         143
1                52.1        49.5
10               79.6        76.3
10f2            145          64.5
raw one disk     53.5        54.7

Keld's remarks: the raid0 read figure of 110 MB/s is not consistent with other benchmarks that report about cumulative performance for sequential reads for raid0. I would have expected a figure around 150 MB/s here.

In July 2008, Jon Nelson made a test of levels 5, 6, and 10, with chunk sizes ranging from 64 kiB to 2 MiB on 4 SATA drives. This was for sequential reads and writes on the raw raid devices. With a file system, the IO scheduler (elevator) would probably smooth out the differences in write performance. Results include high performance of raid10,f2 - around 3.80 times the speed of a single drive - and high performance of raid5 and raid6. Especially with bigger chunk sizes of 512 kiB - 2 MiB, raid5 obtained at most 3.44 times the speed of a single drive, and raid6 a factor of 2.96. This is probably due to the even distribution of parity chunks, which means that reads are distributed evenly over all (4) drives involved.

In July 2008, Ben Martin made a test comparing HW and SW raid with 6 disks. The HW raid was a quite expensive (USD 800) Adaptec SAS-31205 PCI-E x8 12-SATA-port hardware RAID card. Compared raid types included 5, 6 and 10,n2. Some conclusions: the difference between the expensive HW raid controller and Linux SW RAID is not big. For raid5 reads Linux was 30 % faster (440 MB/s vs 340 MB/s), while for writes Adaptec was about 25 % faster (220 MB/s vs 175 MB/s). Keld's remarks: The Adaptec controller actually slowed down disk reading; a single disk read at 90 MB/s on the motherboard controller but only 70 MB/s via the Adaptec controller. Writing was faster, though. raid10,f2 and raid10,o2 were not included in the test. Ben reported the Adaptec controller to give around 310 MB/s read performance for raid1, while a raid10,f2 would probably have given around 400 MB/s via the Adaptec controller, and around 600 MB/s with the 6 disks on the motherboard SATA controller plus a reasonable extra controller, given the other benchmarks noted in this section (raid10,f2 typically reaches about 95 % of the cumulative IO bandwidth). For raid5 the read/write difference could possibly be explained by the choice of chunk size: in Linux raid5, reading improves with bigger chunk sizes, while writing degrades.

In 2009 A Comparison of Chunk Size for Software RAID-5 was done by Rik Faith with chunk sizes of 4 KiB to 64 MiB. It was found that chunk sizes of 128 KiB gave the best overall performance. The test was done on a Supermicro AOC-SAT2-MV8 controller with 8 SATA II ports, and connected to a 32-bit PCI slot, which could explain the 130 MB/s max found.

A benchmark comparing chunk sizes from 4 to 1024 KiB on various RAID types (0, 5, 6, 10) was made in May 2010. For RAID types 5 and 6 it seems like a chunk size of 64 KiB is optimal, while for the other RAID types a chunk size of 512 KiB seemed to give the best results. The tests were done on a controller which had an upper limit of about 350 MB/s. It is unclear which layout was used with RAID-10.

Some problem solving for benchmarking

Sometimes there are apparent pauses in the stream of IO requests to the array component devices. The usual workaround is to try 'blockdev --setra 65536 /dev/mdN' and see if sequential reads improve. Also, stripe_cache_size is important for raid5 and raid6, and the controller's NCQ can interfere with the Linux kernel's optimizations.

Here are some commands to alter default settings:

# Set read-ahead.
echo "Setting read-ahead to 32 MiB for /dev/md3"
blockdev --setra 65536 /dev/md3
# Set stripe-cache_size for RAID5.
echo "Setting stripe_cache_size to 16 MiB for /dev/md3"
echo 16384 > /sys/block/md3/md/stripe_cache_size
# Disable NCQ on all disks. (for raptors it increases the speed 30-40MiB/s)
echo "Disabling NCQ on all disks..."
# DISKS should list the component drives, e.g. DISKS="sda sdb sdc" - adjust to your system.
for i in $DISKS
do
  echo "Disabling NCQ on $i"
  echo 1 > /sys/block/"$i"/device/queue_depth
done

One good way to see what is actually happening is to use either 'watch iostat -k 1 2' and look at the load on the individual MD array component devices, or use 'sysctl vm/block_dump=1' and look at the addresses being read or written.

Bottlenecks

There can be a number of bottlenecks other than the disk subsystem that hinders you in getting full performance out of your disks.

One is the PCI bus. The older PCI bus has a 33 MHz clock and a 32-bit width, giving a maximum bandwidth of about 1 Gbit/s, or 133 MB/s. This will easily cause trouble with newer SATA or PATA disks, which easily give 70-90 MB/s each. So do not put your SATA controllers on a 33 MHz PCI bus.

The 66 MHz 64-bit PCI bus is capable of handling about 4 Gbit/s, or about 500 MB/s. This can also be a bottleneck with bigger arrays, e.g. a 6-drive array will be able to deliver about 500 MB/s, and maybe you also want to feed a gigabit ethernet card - 125 MB/s - totalling potentially 625 MB/s on the PCI bus.

The PCI (and PCI-X) bus is shared bandwidth, and may operate at the lowest common denominator. Put a 33 MHz card in the PCI bus, and not only does everything operate at 33 MHz, but all of the cards compete. Grossly simplified, if you have a 133 MHz card and a 33 MHz card in the same PCI bus, then the faster card may effectively operate at something like 16 MHz. Your motherboard's embedded Ethernet chip and disk controllers may be "on" the PCI bus, so even if you have a single PCI controller card and a multiple-bus motherboard, it may make a difference which slot you put the controller in.

If this isn't bad enough, then consider the consequences of arbitration. All of the PCI devices have to constantly negotiate among themselves, and compete against the devices attached to other PCI busses, for a chance to talk to the CPU and RAM. As such, every packet your Ethernet card picks up could temporarily suspend disk I/O if you don't configure things wisely.

The PCI-Express bus v1.1 has a limit of 250 MB/s per lane per direction, and that limit can easily be hit, e.g. by a 4-drive array, or even just 2 velociraptor disks.

Many newer SATA controllers are on-board and do not use the PCI bus, but are instead connected directly to the southbridge, even on the cheapest motherboards. The bandwidth is still limited, and probably differs from motherboard to motherboard. On-board disk controllers most likely have more bandwidth than IO controllers on a 32-bit 33 MHz PCI, 64-bit 66 MHz PCI, or PCI-E x1 bus. Some motherboards are reported to have a bidirectional 20 gigabit/s bus between the southbridge and the northbridge. In any case, most PCI busses are connected via the southbridge.

Having a RAID connected over the LAN can be a bottleneck, if the LAN speed is only 1 Gbit/s - this limits the speed of the IO system to 125 MB/s by itself.

There may be some bottlenecks in the software. The software may access the media in an unbalanced way, for example mostly using just one of the drives involved, or having a bias towards certain drives. It is a good idea to monitor the performance of each of the drives, for example via iostat. Threading and asynchronous IO may also enhance performance. Related is the use of multicore CPUs - are the CPUs used in a balanced way?

Compiler optimization may not have been done properly.

Classical bottlenecks are PATA drives placed on the same DMA channel or the same PATA cable. This will of course limit performance, but it should work if you have no other means of connecting your disks. Placing more than one element of an array on the same disk also hurts performance seriously, and gives redundancy problems as well.

A classical problem is also not having DMA transfers enabled, or having lost this setting due to some problem, such as badly connected cables, or having the transfer speed set to less than optimal.

CPU usage may be a bottleneck, also combined with slow RAM.

BIOS settings may also impede your performance.

Old performance benchmark

The information is quite dated, as can be seen from both the hardware and software specifications.

This section contains a number of benchmarks from a real-world system using software RAID. There is some general information about benchmarking software too.

Benchmark samples were done with the bonnie program, and always on files twice or more the size of the physical RAM in the machine.

The benchmarks here only measure input and output bandwidth on one single large file. This is a nice thing to know, if it's maximum I/O throughput for large reads/writes one is interested in. However, such numbers tell us little about what the performance would be if the array was used for a news spool, a web-server, etc. etc. Always keep in mind that benchmark numbers are the result of running a "synthetic" program. Few real-world programs do what bonnie does, and although these I/O numbers are nice to look at, they are not ultimate real-world-appliance performance indicators. Not even close.

For now, I only have results from my own machine. The setup is:

  • Dual Pentium Pro 150 MHz
  • 256 MB RAM (60 MHz EDO)
  • Three IBM UltraStar 9ES 4.5 GB, SCSI U2W
  • Adaptec 2940U2W
  • One IBM UltraStar 9ES 4.5 GB, SCSI UW
  • Adaptec 2940 UW
  • Kernel 2.2.7 with RAID patches

The three U2W disks hang off the U2W controller, and the UW disk off the UW controller.

It seems to be impossible to push much more than 30 MB/s through the SCSI busses on this system, using RAID or not. My guess is that, because the system is fairly old, the memory bandwidth sucks and thus limits what can be sent through the SCSI controllers.


RAID-0

Read is Sequential block input, and Write is Sequential block output. File size was 1 GB in all tests. The tests were done in single-user mode. The SCSI driver was configured not to use tagged command queuing.


From this it seems that the RAID chunk-size doesn't make that much of a difference. However, the ext2fs block-size should be as large as possible, which is 4 kB (i.e. the page size) on IA-32.

       |            |              |             |              |
       |Chunk size  |  Block size  |  Read kB/s  |  Write kB/s  |
       |            |              |             |              |
       |4k          |  1k          |  19712      |  18035       |
       |4k          |  4k          |  34048      |  27061       |
       |8k          |  1k          |  19301      |  18091       |
       |8k          |  4k          |  33920      |  27118       |
       |16k         |  1k          |  19330      |  18179       |
       |16k         |  2k          |  28161      |  23682       |
       |16k         |  4k          |  33990      |  27229       |
       |32k         |  1k          |  19251      |  18194       |
       |32k         |  4k          |  34071      |  26976       |

RAID-0 with TCQ

This time, the SCSI driver was configured to use tagged command queuing, with a queue depth of 8. Otherwise, everything's the same as before.

       |            |              |             |              |
       |Chunk size  |  Block size  |  Read kB/s  |  Write kB/s  |
       |            |              |             |              |
       |32k         |  4k          |  33617      |  27215       |


No more tests were done. TCQ seemed to slightly increase write performance, but there really wasn't much of a difference at all.


RAID-5

The array was configured to run in RAID-5 mode, and similar tests were done.

       |            |              |             |              |
       |Chunk size  |  Block size  |  Read kB/s  |  Write kB/s  |
       |            |              |             |              |
       |8k          |  1k          |  11090      |  6874        |
       |8k          |  4k          |  13474      |  12229       |
       |32k         |  1k          |  11442      |  8291        |
       |32k         |  2k          |  16089      |  10926       |
       |32k         |  4k          |  18724      |  12627       |


Now, both the chunk-size and the block-size seem to actually make a difference.


RAID-1+0

RAID-1+0 is "mirrored stripes", i.e. a RAID-1 array made of two RAID-0 arrays. The chunk-size quoted is the chunk size of both the RAID-1 array and the two RAID-0 arrays. I did not do tests where those chunk-sizes differ, although that should be a perfectly valid setup.
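
With mdadm, such a nested setup might be built roughly as follows (a sketch only; the four device names and the chunk size are assumptions):

 # two striped (RAID-0) arrays...
 mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=32 /dev/sda1 /dev/sdb1
 mdadm --create /dev/md1 --level=0 --raid-devices=2 --chunk=32 /dev/sdc1 /dev/sdd1
 # ...mirrored by a RAID-1 array built on top of them
 mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/md0 /dev/md1

Note that current kernels and mdadm also offer a native "raid10" level, which achieves a similar layout with a single array.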

       | Chunk size | Block size | Read kB/s | Write kB/s |
       | 32k        | 1k         | 13753     | 11580      |
       | 32k        | 4k         | 23432     | 22249      |


No more tests were done. The file size was 900 MB, because the four partitions involved were 500 MB each, which doesn't leave room for a 1 GB file in this setup (RAID-1 on two 1000 MB RAID-0 arrays).

Fresh benchmarking tools

To check out speed and performance of your RAID systems, do NOT use hdparm. It won't do real benchmarking of the arrays.

Instead of hdparm, take a look at the tools described here: IOzone and Bonnie++.

IOzone is a small, versatile and modern tool. It benchmarks file I/O performance for read, write, re-read, re-write, read backwards, read strided, fread, fwrite, random read, pread, mmap, aio_read and aio_write operations. Don't worry, it can run on any of the ext2, ext3, reiserfs, JFS or XFS filesystems in OSDL STP.

You can also use IOzone to show throughput performance as a function of the number of processes and the number of disks used in a filesystem, which is interesting when dealing with RAID striping.
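
For example, IOzone's throughput mode runs several processes in parallel against files on the array. A sketch, where the mount point, file sizes and record size are assumptions:

 # four parallel processes, 64 kB records, 256 MB per file,
 # running only the write/rewrite (-i 0) and read/reread (-i 1) tests
 iozone -t 4 -r 64k -s 256m -i 0 -i 1 -F /mnt/md0/f1 /mnt/md0/f2 /mnt/md0/f3 /mnt/md0/f4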

Although documentation for IOzone is available in Acrobat/PDF, PostScript, nroff and MS Word formats, here is a simple example of IOzone in action:

 iozone -s 4096

This runs a test using a 4096 kB file size.

And this is an example of the output IOzone gives:

         File size set to 4096 KB
         Output is in Kbytes/sec
         Time Resolution = 0.000001 seconds.
         Processor cache size set to 1024 Kbytes.
         Processor cache line size set to 32 bytes.
         File stride size set to 17 * record size.
                                                             random  random    bkwd  record  stride
               KB  reclen   write rewrite    read    reread    read   write    read rewrite    read   fwrite frewrite   fread  freread
             4096       4   99028  194722   285873   298063  265560  170737  398600  436346  380952    91651   127212  288309   292633


Now you just need to know about the feature that makes IOzone useful for RAID benchmarking: the file operations most relevant to RAID striping are the strided reads. The example above shows 380952 kB/sec for the strided read, which gives you a number to compare your own arrays against.

Bonnie++ seems to be more targeted at benchmarking single drives than at RAID, but it can test more than 2 GB of storage on 32-bit machines, and it also tests file creat, stat and unlink operations.
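
A typical Bonnie++ invocation might look like this (a sketch; the directory, file size and user are assumptions - the file size should be at least twice the machine's RAM to defeat caching):

 bonnie++ -d /mnt/md0 -s 2048 -n 128 -u nobody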



Related tools

While not described in this HOWTO, some useful tools for Software-RAID systems have been developed.


RAID resizing and conversion

RAID resizing and conversion is now supported by the kernel md driver and is triggered via mdadm (its --grow mode); a sketch of this approach is given further down. On older systems raidreconf may be used, although it is not known to be reliable.

It is not easy to add another disk to an existing array. A tool to allow for just this operation has been developed, and is available from http://unthought.net/raidreconf. The tool will allow for conversion between RAID levels, for example converting a two-disk RAID-1 array into a four-disk RAID-5 array. It will also allow for chunk-size conversion, and simple disk adding.

Please note that this tool is not really "production ready". It seems to have worked well so far, but it is a rather time-consuming process that, if it fails, will absolutely guarantee that your data will be irrecoverably scattered over your disks. You absolutely must keep good backups prior to experimenting with this tool.
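
On current kernels, the in-kernel reshape mentioned above is driven with mdadm's --grow mode. A minimal sketch of growing a RAID-5 array by one disk and then enlarging the ext3 filesystem on it (the device names are assumptions, and a reshape of this kind requires a reasonably recent 2.6 kernel):

 # add the new disk as a spare, then reshape the array onto it
 mdadm --add /dev/md0 /dev/sde1
 mdadm --grow /dev/md0 --raid-devices=4
 # once the reshape has finished, grow the filesystem
 resize2fs /dev/md0

As with raidreconf, take a good backup first; a reshape is a long-running operation.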


Backup

Remember, RAID is no substitute for good backups. No amount of redundancy in your RAID configuration is going to let you recover week or month old data, nor will a RAID survive fires, earthquakes, or other disasters.

It is imperative that you protect your data, not just with RAID, but with regular good backups. One excellent system for such backups, is the Amanda backup system.


Partitioning RAID / LVM on RAID

RAID devices can be partitioned, just like ordinary disks. This can be a real benefit on systems where one wants to run, for example, two disks in a RAID-1, but divide the system onto several different filesystems.

The df output below shows the traditional "non-partitioned" approach, where every filesystem lives on its own md device:

 # df -h
 Filesystem            Size  Used Avail Use% Mounted on
 /dev/md2              3.8G  640M  3.0G  18% /
 /dev/md1               97M   11M   81M  12% /boot
 /dev/md5              3.8G  1.1G  2.5G  30% /usr
 /dev/md6              9.6G  8.5G  722M  93% /var/www
 /dev/md7              3.8G  951M  2.7G  26% /var/lib
 /dev/md8              3.8G   38M  3.6G   1% /var/spool
 /dev/md9              1.9G  231M  1.5G  13% /tmp
 /dev/md10             8.7G  329M  7.9G   4% /var/www/html

Partitions on a RAID device

A RAID device can only be partitioned if it was created with an --auto option given to the mdadm tool. This option is not well documented, but here is a working example that results in a partitionable device made of two disks, sda and sdb:

 mdadm --create --auto=mdp --verbose /dev/md_d0 --level=mirror --raid-devices=2 /dev/sda /dev/sdb

Issuing this command will result in a /dev/md_d0 device that can be partitioned with fdisk or parted. The partitions will be available as /dev/md_d0p1, /dev/md_d0p2 etc.
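
After that, the partitions can be used just like partitions on an ordinary disk. A sketch, where the filesystem choices are assumptions:

 # partition the array, then put filesystems on the partitions
 fdisk /dev/md_d0
 mke2fs -j /dev/md_d0p1
 mkswap /dev/md_d0p2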

LVM on RAID

An alternative solution to the partitioning problem is LVM, Logical Volume Management. LVM has been in the stable Linux kernel series for a long time now - LVM2 in the 2.6 kernel series is a further improvement over the older LVM support from the 2.4 kernel series. While LVM has traditionally scared some people away because of its complexity, it really is something an administrator could and should consider when more than a few filesystems are wanted on a server.

We will not attempt to describe LVM setup in this HOWTO, as there already is a fine HOWTO for exactly this purpose. A small example of a RAID + LVM setup will be presented though. Consider the df output below, of such a system:

 # df -h
 Filesystem            Size  Used Avail Use% Mounted on
 /dev/md0              942M  419M  475M  47% /
 /dev/vg0/backup        40G  1.3M   39G   1% /backup
 /dev/vg0/amdata       496M  237M  233M  51% /var/lib/amanda
 /dev/vg0/mirror        62G   56G  2.9G  96% /mnt/mirror
 /dev/vg0/webroot       97M  6.5M   85M   8% /var/www
 /dev/vg0/local        2.0G  458M  1.4G  24% /usr/local
 /dev/vg0/netswap      3.0G  2.1G 1019M  67% /mnt/netswap


"What's the difference" you might ask... Well, this system has only two RAID-1 devices - one for the root filesystem, and one that cannot be seen on the df output - this is because /dev/md1 is used as a "physical volume" for LVM. What this means is, that /dev/md1 acts as "backing store" for all "volumes" in the "volume group" named vg0. All this "volume" terminology is explained in the LVM HOWTO - if you do not completely understand the above, there is no need to worry - the details are not particularly important right now (you will need to read the LVM HOWTO anyway if you want to set up LVM). What matters is the benefits that this setup has over the many-md-devices setup:

  • No need to reboot just to add a new filesystem (this would otherwise be required, as the kernel cannot re-read the partition table from the disk that holds the root filesystem, and re-partitioning would be required in order to create the new RAID device to hold the new filesystem)
  • Resizing of filesystems: LVM supports hot-resizing of volumes (with RAID devices resizing is difficult and time consuming - but if you run LVM on top of RAID, all you need in order to resize a filesystem is to resize the volume, not the underlying RAID device). With a filesystem such as XFS, you can even resize the filesystem without un-mounting it first (!). Ext3 Hot-resizing is also supported (growing only).
  • Adding new disks: Need more storage? Easy! Simply insert two new disks in your system, create a RAID-1 on top of them, make your new /dev/md2 device a physical volume and add it to your volume group. That's it! You now have more free space in your volume group for either growing your existing logical volumes, or for adding new ones.
  • Ability to take LVM snapshots to enable consistent backup operations.
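
A minimal sketch of how such a setup is put together and later extended (the volume names and sizes are illustrative assumptions loosely based on the df output above):

 # make /dev/md1 a physical volume and build the volume group on it
 pvcreate /dev/md1
 vgcreate vg0 /dev/md1
 # carve out a logical volume and put a filesystem on it
 lvcreate -L 40G -n backup vg0
 mke2fs -j /dev/vg0/backup
 # later: add another RAID-1 (/dev/md2) to the volume group,
 # then grow an existing volume and its filesystem
 pvcreate /dev/md2
 vgextend vg0 /dev/md2
 lvextend -L +10G /dev/vg0/backup
 resize2fs /dev/vg0/backup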

All in all - for servers with many filesystems, LVM (and LVM2) is definitely a fairly simple solution which should be considered for use on top of Software RAID. Read on in the LVM HOWTO if you want to learn more about LVM.


Credits

The Original HowTo

A great deal of the wiki is based on "The Software RAID HowTo" by Jakob OEstergaard jakob@unthought.net and Emilio Bueso bueso@vives.org

Here is the relevant section from an email I received from Jakob OEstergaard on 31 July 2006:

Emilio and I have agreed that we will license the *current* (31st of
july 2006) version of the Software RAID HOWTO in English, under
the GNU Free Documentation License version 1.2.

You can download a copy and use it as you see fit, in accordance with
the GFDL   :) 

If you have any questions or if there's anything else you need, please
let us know.

Good luck with the project!

-- / jakob 

It's now my understanding that I've abided by the GFDL and we can now edit/format/restructure the text on the article page and use it anywhere else in the wiki. Big thanks to Jakob and Emilio. DavidGreaves 12:45, 31 July 2006 (PDT)

The following people contributed to the creation of the original HowTo documentation:

  • Mark Price and IBM
  • Steve Boley of Dell
  • Damon Hoggett
  • Ingo Molnar
  • Jim Warren
  • Louis Mandelstam
  • Allan Noah
  • Yasunori Taniike
  • Martin Bene
  • Bennett Todd
  • Kevin Rolfes
  • Darryl Barlow
  • Brandon Knitter
  • Hans van Zijst
  • Matthew Mcglynn
  • Jimmy Hedman
  • Tony den Haan
  • The Linux-RAID mailing list people
  • The ones I forgot, sorry :)

Changelog
