=Introduction=

This Wiki focuses on the "new-style" RAID present in the 2.6 kernel series only. It does not describe the "old-style" RAID functionality present in 2.0 and 2.2 kernels, although much of the functionality is available in the later 2.4 series kernels.

==Disclaimer==

The mandatory disclaimer:

All information herein is presented "as-is", with no warranties expressed or implied. If you lose all your data, your job, get hit by a truck, whatever, it's not my fault, nor the developers'. Be aware that you use the RAID software and this information at your own risk! There is no guarantee whatsoever that any of the software, or this information, is in any way correct, or suited for any use whatsoever. Back up all your data before experimenting with this. Better safe than sorry.

==What is RAID?==

In 1987, David A. Patterson, Garth Gibson and Randy H. Katz of the University of California, Berkeley, published a paper titled ''A Case for Redundant Arrays of Inexpensive Disks (RAID)''.[http://www.cs.cmu.edu/~garth/RAIDpaper/Patterson88.pdf] This paper described various types of disk arrays, referred to by the acronym RAID. The basic idea of RAID was to combine multiple small, independent disk drives into an array of disk drives, yielding performance exceeding that of a Single Large Expensive Drive (SLED). Additionally, this array of drives appears to the computer as a single logical storage unit or drive.

The Mean Time Between Failure (MTBF) of the array will be equal to the MTBF of an individual drive, divided by the number of drives in the array. For example, ten drives rated at 100,000 hours each give the array a combined MTBF of only about 10,000 hours. Because of this, the MTBF of an array of drives would be too low for many application requirements. However, disk arrays can be made fault tolerant by redundantly storing information in various ways.

Five types of array architectures, RAID-1 through RAID-5, were defined by the Berkeley paper, each providing disk fault tolerance and each offering different trade-offs in features and performance. In addition to these five redundant array architectures, it has become popular to refer to a non-redundant array of disk drives as a RAID-0 array.

Some of the original RAID levels, namely levels 2 and 3, are now used only in very specialized systems and are, in fact, not even supported by the Linux Software RAID drivers. Another level, "linear", has emerged, and in particular RAID level 0 is often combined with RAID level 1 (RAID-1+0 or "RAID-10").

==Terms==

In this HOWTO the word "RAID" means "Linux Software RAID". This HOWTO does not treat any aspects of Hardware RAID. Furthermore, it does not treat any aspects of Software RAID in other operating system kernels.

When describing RAID setups, it is useful to refer to the number of disks and their sizes. At all times the letter N is used to denote the number of active disks in the array (not counting spare-disks). The letter S is the size of the smallest drive in the array, unless otherwise mentioned. The letter P is used for the performance of one disk in the array, in MB/s. When used, we assume that the disks are equally fast, which may not always be true in real-world scenarios.

Note that the words "device" and "disk" are supposed to mean about the same thing. Usually the devices that are used to build a RAID device are partitions on disks, not necessarily entire disks. But combining several partitions on one disk usually does not make sense, so the words devices and disks just mean "partitions on different disks".

==The RAID levels==

Here's a short description of what is supported in the Linux RAID drivers. Some of this is absolutely basic RAID information, but a few notes have been added about what is special in the Linux implementation of each level. You can safely skip this section if you already know RAID.

The current RAID drivers in Linux support the following levels:

===Linear mode===

* Two or more disks are combined into one logical device. The disks are "appended" to each other, so writing linearly to the RAID device will fill up disk 0 first, then disk 1 and so on. The disks do not have to be of the same size. In fact, size doesn't matter at all here. :)
* There is no redundancy in this level. If one disk crashes you will most probably lose all your data. You may, however, be lucky and recover some data, since the filesystem will just be missing one large consecutive chunk of data.
* The read and write performance will not increase for single reads/writes. But if several users use the device, you may be lucky enough that one user is effectively using only the first disk while another user is accessing files which happen to reside on the second disk. If that happens, you will see a performance gain.
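
For example, a linear array over two partitions of different sizes could be created with mdadm roughly like this (the device names are examples only, and the exact syntax may vary with your mdadm version):

 # Append two partitions of any size into one linear md device (example device names)
 mdadm --create /dev/md0 --level=linear --raid-devices=2 /dev/sdb1 /dev/sdc1
 # The resulting device is roughly the sum of the two partition sizes
 mkfs.ext3 /dev/md0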

===RAID-0===

  • Also called "stripe" mode. The devices should (but need not) have the same size. Operations on the array will be split on the devices; for example, a large write could be split up as 64 kiB to disk 0, 64 kiB to disk 1, 64 kiB to disk 2, then 64 kiB to disk 0 again, and so on. Writes to each disk will go on at the same time. If one device is much larger than the other devices, that extra space is still utilized in the RAID device, but you will be accessing this larger disk alone, during writes in the high end of your RAID device. This of course hurts performance.
* Like linear, there is no redundancy in this level either. Unlike linear mode, you will not be able to rescue any data if a drive fails. If you remove a drive from a RAID-0 set, the RAID device will not just miss one consecutive block of data, it will be filled with small holes all over the device. e2fsck or other filesystem recovery tools will probably not be able to recover much from such a device.
* The read and write performance will increase, because reads and writes are done in parallel on the devices. This is usually the main reason for running RAID-0. If the busses to the disks are fast enough, you can get very close to N*P MB/sec.
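
As a rough sketch, a two-disk stripe with a 64 kiB chunk size, matching the example above, might be created like this (device names are examples only):

 # Stripe two partitions with 64 kiB chunks (example device names)
 mdadm --create /dev/md0 --level=0 --chunk=64 --raid-devices=2 /dev/sdb1 /dev/sdc1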

===RAID-1===

* This is the first mode which actually has redundancy. RAID-1 can be used on two or more disks with zero or more spare-disks. This mode maintains an exact mirror of the information on one disk on the other disk(s). Of course, the disks must be of equal size. If one disk is larger than another, your RAID device will be the size of the smallest disk.
* If up to N-1 disks are removed (or crash), all data is still intact. If there are spare disks available, and if the system (e.g. the SCSI drivers or IDE chipset) survived the crash, reconstruction of the mirror will begin immediately on one of the spare disks, after detection of the drive fault.
* Write performance is often worse than on a single device, because identical copies of the data written must be sent to every disk in the array. With large RAID-1 arrays this can be a real problem, as you may saturate the PCI bus with these extra copies. This is in fact one of the very few places where Hardware RAID solutions can have an edge over Software solutions - if you use a hardware RAID card, the extra write copies of the data will not have to go over the PCI bus, since it is the RAID controller that will generate the extra copy. Read performance is good, especially if you have multiple readers or seek-intensive workloads. The RAID code employs a rather good read-balancing algorithm that will simply let the disk whose heads are closest to the wanted disk position perform the read operation. Since seek operations are relatively expensive on modern disks (a seek time of 8 ms equals a read of 640 kB at 80 MB/sec), picking the disk that will have the shortest seek time does actually give a noticeable performance improvement.
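
For illustration, a two-disk mirror with one shared hot spare might be created roughly as follows (example device names); the initial synchronisation can be watched in /proc/mdstat:

 # Mirror two partitions and keep a third as a hot spare (example device names)
 mdadm --create /dev/md0 --level=1 --raid-devices=2 --spare-devices=1 /dev/sdb1 /dev/sdc1 /dev/sdd1
 # Watch the initial mirror synchronisation
 cat /proc/mdstat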

===RAID-4===

* This RAID level is not used very often. It can be used on three or more disks. Instead of completely mirroring the information, it keeps parity information on one drive, and writes data to the other disks in a RAID-0-like way. Because one disk is reserved for parity information, the size of the array will be (N-1)*S, where S is the size of the smallest drive in the array. As in RAID-1, the disks should either be of equal size, or you will just have to accept that the S in the (N-1)*S formula above will be the size of the smallest drive in the array.
* If one drive fails, the parity information can be used to reconstruct all data. If two drives fail, all data is lost.
* The reason this level is not used more frequently is that the parity information is kept on one drive. This information must be updated every time one of the other disks is written to. Thus, the parity disk will become a bottleneck if it is not a lot faster than the other disks. However, if you just happen to have a lot of slow disks and one very fast disk, this RAID level can be very useful.
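
As an illustrative sketch (example device names), a three-disk RAID-4 array, two data disks plus one dedicated parity disk, might be created like this:

 # Three partitions: usable size is (N-1)*S, one disk holds all the parity
 mdadm --create /dev/md0 --level=4 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1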

===RAID-5===

* This is perhaps the most useful RAID mode when one wishes to combine a larger number of physical disks and still maintain some redundancy. RAID-5 can be (usefully) used on three or more disks, with zero or more spare-disks. The resulting RAID-5 device size will be (N-1)*S, just like RAID-4. The big difference between RAID-5 and -4 is that the parity information is distributed evenly among the participating drives, avoiding the bottleneck problem in RAID-4 and also getting more performance out of the disks when reading, as all drives will then be used.
* If one of the disks fails, all data is still intact, thanks to the parity information. If spare disks are available, reconstruction will begin immediately after the device failure. If two disks fail simultaneously, or a second disk fails before the array has been reconstructed, all data is lost. RAID-5 can survive one disk failure, but not two or more.
* Both read and write performance usually increase, but it can be hard to predict by how much. Reads are almost similar to RAID-0 reads; writes can be either rather expensive (requiring a read before the write in order to calculate the correct parity information, as in database operations) or similar to RAID-1 writes (when larger sequential writes are performed and the parity can be calculated directly from the other blocks to be written). The write efficiency depends heavily on the amount of memory in the machine and the usage pattern of the array. Heavily scattered writes are bound to be more expensive.
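
For example, a RAID-5 array over three disks with one spare might be created roughly like this (device names are examples only); mdadm --detail should then report an array size of roughly (N-1)*S:

 # Three active disks plus one spare; usable size is (N-1)*S (example device names)
 mdadm --create /dev/md0 --level=5 --raid-devices=3 --spare-devices=1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
 # Inspect the array, including its size and the state of each member
 mdadm --detail /dev/md0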

===RAID-6===

* This is an extension of RAID-5 to provide more resilience. RAID-6 can be (usefully) used on four or more disks, with zero or more spare-disks. The resulting RAID-6 device size will be (N-2)*S. The big difference between RAID-5 and -6 is that RAID-6 keeps two different blocks of parity information, distributed evenly among the participating drives.
* Since there are two parity blocks, all data is still intact if one or two of the disks fail. If spare disks are available, reconstruction will begin immediately after the device failure(s).
* Read performance is almost the same as for RAID-5, but write performance is worse.
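
A minimal sketch, assuming four example partitions; the usable size will be (N-2)*S:

 # Four active disks; the array survives any two simultaneous disk failures
 mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1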

===RAID-10===

* RAID-10 is an "in-kernel" combination of RAID-1 and RAID-0 that is more efficient than simply layering RAID levels.
* RAID-10 has a layout ("far") which can provide sequential read throughput that scales with the number of drives, rather than the number of RAID-1 pairs. You can get about 95% of the performance of RAID-0 with the same number of drives.
* RAID-10 allows spare disk(s) to be shared amongst all the RAID-1 pairs.
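
For example, the "far" layout mentioned above is selected with the --layout option; the following is a rough sketch with example device names, where f2 means the far layout with two copies of each block:

 # Four disks in the in-kernel RAID-10 "far" layout with two copies of the data
 mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1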

===FAULTY===

* This is a special debugging RAID level. It only allows one device and simulates low-level read/write failures.
* Using a FAULTY device in another RAID level allows administrators to practice dealing with things like sector failures, as opposed to whole-drive failures.
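
A rough sketch of how such a device might be set up for testing (the underlying partition is an example; see the mdadm man page for the failure modes it can inject):

 # A one-device "faulty" array used purely for failure-injection testing
 mdadm --create /dev/md1 --level=faulty --raid-devices=1 /dev/sdb1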

==Requirements==

This HOWTO assumes you are using Linux 2.6 or later and the latest tool set.

If you use a recent GNU/Linux distribution based on the 2.4 kernel or later, your system most likely already has a matching version of mdadm for your kernel.

Note: According to its homepage, http://people.redhat.com/mingo/raidtools/, raidtools hasn't been updated since January 2003 and is deprecated in favour of mdadm.
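
A quick way to check what you already have is to look at the kernel version, the installed mdadm version, and which RAID personalities the running kernel knows about (output will of course vary from system to system):

 # Kernel version (should be 2.6 or later for this HOWTO)
 uname -r
 # Installed mdadm version
 mdadm --version
 # RAID levels ("personalities") known to the running kernel, plus any active arrays
 cat /proc/mdstat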
