=Partitioning RAID / LVM on RAID=<br />
<br />
RAID devices can be partitioned, like ordinary disks can. This can<br />
be a real benefit on systems where one wants to run, for example,<br />
two disks in a RAID-1, but divide the system onto multiple different<br />
filesystems:<br />
<br />
For comparison, this is the traditional 'non-partitioned' approach, where every filesystem lives on its own md device:<br />
<br />
# df -h<br />
Filesystem Size Used Avail Use% Mounted on<br />
/dev/md2 3.8G 640M 3.0G 18% /<br />
/dev/md1 97M 11M 81M 12% /boot<br />
/dev/md5 3.8G 1.1G 2.5G 30% /usr<br />
/dev/md6 9.6G 8.5G 722M 93% /var/www<br />
/dev/md7 3.8G 951M 2.7G 26% /var/lib<br />
/dev/md8 3.8G 38M 3.6G 1% /var/spool<br />
/dev/md9 1.9G 231M 1.5G 13% /tmp<br />
/dev/md10 8.7G 329M 7.9G 4% /var/www/html<br />
<br />
==Partitions on a RAID device==<br />
<br />
A RAID device can only be partitioned if it was created with an --auto<br />
option given to the mdadm tool. This option is not well documented, but<br />
here is a working example that would result in a partitionable device<br />
made of two disks -- sda and sdb:<br />
<br />
mdadm --create --auto=mdp --verbose /dev/md_d0 --level=mirror --raid-devices=2 /dev/sda /dev/sdb<br />
<br />
Issuing this command will result in a /dev/md_d0 device that can be partitioned<br />
with fdisk or parted. The partitions will be available as /dev/md_d0p1, /dev/md_d0p2 etc.<br />
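As a rough sketch of what follows, assuming you keep the /dev/md_d0 name from the command above (the partition layout and the /mnt/data mount point are just illustrative choices, not anything mandated by md):

 fdisk /dev/md_d0              # create one or more partitions interactively
 mkfs.ext3 /dev/md_d0p1        # put a filesystem on the first partition
 mount /dev/md_d0p1 /mnt/data  # and mount it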
<br />
==LVM on RAID==<br />
<br />
An alternative solution to the partitioning problem is LVM, Logical Volume<br />
Management. LVM has been in the stable Linux kernel series for a long<br />
time now - LVM2 in the 2.6 kernel series is a further improvement over<br />
the older LVM support from the 2.4 kernel series. While LVM has<br />
traditionally scared some people away because of its complexity, it<br />
is well worth considering for any administrator who wants to run more<br />
than a few filesystems on a server.<br />
<br />
We will not attempt to describe LVM setup in this HOWTO, as there<br />
already is a fine HOWTO for exactly this purpose. A small example of a<br />
RAID + LVM setup will be presented though. Consider the df output<br />
below, of such a system:<br />
<br />
# df -h<br />
Filesystem Size Used Avail Use% Mounted on<br />
/dev/md0 942M 419M 475M 47% /<br />
/dev/vg0/backup 40G 1.3M 39G 1% /backup<br />
/dev/vg0/amdata 496M 237M 233M 51% /var/lib/amanda<br />
/dev/vg0/mirror 62G 56G 2.9G 96% /mnt/mirror<br />
/dev/vg0/webroot 97M 6.5M 85M 8% /var/www<br />
/dev/vg0/local 2.0G 458M 1.4G 24% /usr/local<br />
/dev/vg0/netswap 3.0G 2.1G 1019M 67% /mnt/netswap<br />
<br />
<br />
"What's the difference" you might ask... Well, this system has only<br />
two RAID-1 devices - one for the root filesystem, and one that cannot<br />
be seen on the df output - this is because /dev/md1 is used as a<br />
"physical volume" for LVM. What this means is, that /dev/md1 acts as<br />
"backing store" for all "volumes" in the "volume group" named vg0.<br />
All this "volume" terminology is explained in the LVM HOWTO - if you<br />
do not completely understand the above, there is no need to worry -<br />
the details are not particularly important right now (you will need to<br />
read the LVM HOWTO anyway if you want to set up LVM). What matters is<br />
the benefits that this setup has over the many-md-devices setup:<br />
<br />
* No need to reboot just to add a new filesystem (this would otherwise be required, as the kernel cannot re-read the partition table from the disk that holds the root filesystem, and re-partitioning would be required in order to create the new RAID device to hold the new filesystem)<br />
<br />
* Resizing of filesystems: LVM supports hot-resizing of volumes (with RAID devices resizing is difficult and time consuming - but if you run LVM on top of RAID, all you need in order to resize a filesystem is to resize the volume, not the underlying RAID device). With a filesystem such as XFS, you can even resize the filesystem without un-mounting it first (!). [[Ext3 Hot-resizing]] is also supported (growing only).<br />
<br />
* Adding new disks: Need more storage? Easy! Simply insert two new disks in your system, create a RAID-1 on top of them, make your new /dev/md2 device a physical volume and add it to your volume group. That's it! You now have more free space in your volume group for either growing your existing logical volumes, or for adding new ones (see the sketch after this list).<br />
<br />
* Ability to take LVM snapshots, to enable consistent backup operations.<br />
<br />
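Here is a minimal sketch of the commands involved, assuming /dev/md1 is the RAID-1 that will carry LVM and re-using the vg0/backup names from the df output above (sizes and names are just examples; adjust to taste):

 pvcreate /dev/md1                # make the RAID device an LVM physical volume
 vgcreate vg0 /dev/md1            # create the volume group
 lvcreate -L 40G -n backup vg0    # carve out a logical volume
 mkfs.ext3 /dev/vg0/backup        # put a filesystem on it
 lvextend -L +10G /dev/vg0/backup # later: grow the volume...
 resize2fs /dev/vg0/backup        # ...and the ext3 filesystem on it
 pvcreate /dev/md2                # later still: turn a new RAID-1 into another PV
 vgextend vg0 /dev/md2            # the volume group just got bigger

These are standard LVM2 commands; see the LVM HOWTO for the full story.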
All in all - for servers with many filesystems, LVM (and LVM2) is<br />
definitely a fairly simple solution which should be considered for use<br />
on top of Software RAID. Read on in the LVM HOWTO if you want to learn<br />
more about LVM.<br />
<br />
=RAID setup=<br />
<br />
==General setup==<br />
<br />
This is what you need for any of the RAID levels:<br />
<br />
* A kernel with the appropriate md support either as modules or built-in. Preferably a kernel from the 2.6 series. Alternatively a stable 2.4 kernel (pre 2.4 kernels are no longer covered in this document).<br />
<br />
* The mdadm tool<br />
<br />
* Patience, Pizza, and your favorite caffeinated beverage.<br />
<br />
The first two items are included as standard in most GNU/Linux distributions<br />
today.<br />
<br />
If your system has RAID support, you should have a file called<br />
[[mdstat|/proc/mdstat]]. Remember that file; it is your friend. If you do not<br />
have that file, your kernel probably lacks RAID support. <br />
<br />
If you're sure your kernel has RAID support, you may need to run modprobe raid[RAID mode] to load RAID support into your kernel, <br />
e.g. to support raid5:<br />
 modprobe raid456<br />
<br />
See what the file contains, by doing a<br />
cat /proc/mdstat<br />
It should tell you that you have the right RAID personality (eg. RAID mode) registered, and that<br />
no RAID devices are currently active. See the [[mdstat|/proc/mdstat]] page for more details.<br />
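On a machine with the RAID-4/5/6 personalities loaded but no arrays assembled yet, the output typically looks something like this (exact details vary with kernel version):

 # cat /proc/mdstat
 Personalities : [raid6] [raid5] [raid4]
 unused devices: <none>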
<br />
==Preparing and partitioning your disk devices==<br />
<br />
Arrays can be built on top of entire disks or on partitions.<br />
<br />
This leads to two frequent questions:<br />
* Should I use the entire device or a partition?<br />
* What partition type should I use?<br />
Both are discussed in [[Partition Types]].<br />
<br />
==Downloading and installing mdadm - the RAID management tool==<br />
<br />
mdadm is now the standard RAID management tool and should be found in any modern distribution.<br />
<br />
You can download the most recent mdadm tarball at<br />
http://www.cse.unsw.edu.au/~neilb/source/mdadm/.<br />
<br />
Use the normal distribution method for obtaining the package:<br />
<br />
Debian, Ubuntu:<br />
apt-get install mdadm<br />
<br />
Gentoo:<br />
emerge mdadm<br />
<br />
RedHat:<br />
yum install mdadm<br />
<br />
==Mdadm modes of operation==<br />
<br />
mdadm is well documented in its manpage - well worth a read.<br />
<br />
man mdadm<br />
<br />
mdadm has 7 major modes of operation. Normal operation just uses the 'Create', 'Assemble' and 'Monitor' commands - the rest come in handy when you're messing with your array; typically fixing it or changing it.<br />
<br />
===1. Create===<br />
Create a new array with per-device superblocks (normal creation).<br />
<br />
===2. Assemble===<br />
Assemble the parts of a previously created array into an active array. Components can be explicitly<br />
given or can be searched for. mdadm checks that the components do form a bona fide array, and can, on<br />
request, fiddle superblock information so as to assemble a faulty array. Typically you do this in the<br />
init scripts after rebooting.<br />
<br />
===3. Follow or Monitor===<br />
Monitor one or more md devices and act on any state changes. This is only meaningful for raid1, 4, 5,<br />
6, 10 or multipath arrays as only these have interesting state. raid0 or linear never have missing,<br />
spare, or failed drives, so there is nothing to monitor. Typically you do this after rebooting too.<br />
<br />
===4. Build===<br />
Build an array that doesn't have per-device superblocks. For these sorts of arrays, mdadm cannot<br />
differentiate between initial creation and subsequent assembly of an array. It also cannot perform any<br />
checks that appropriate devices have been requested. Because of this, the Build mode should only be<br />
used together with a complete understanding of what you are doing.<br />
<br />
===5. Grow===<br />
[[Growing|Grow]], shrink or otherwise reshape an array in some way. Currently supported growth options include changing the active size of component devices in RAID levels 1/4/5/6 and changing the number of active devices in RAID-1.<br />
<br />
===6. Manage===<br />
This is for doing things to specific components of an array such as adding new spares and removing<br />
faulty devices.<br />
<br />
===7. Misc===<br />
This is an 'everything else' mode that supports operations on active arrays, operations on component devices such as erasing old superblocks, and information gathering operations.<br />
<br />
<br />
==Create RAID device==<br />
Below we'll see how to create arrays of various types; the basic approach is:<br />
<br />
mdadm --create /dev/md0 <blah><br />
mdadm --monitor /dev/md0<br />
<br />
If you want access to the latest and upcoming features, such as fully named RAID arrays (so you no longer have to memorize which partition goes where), make sure you use persistent metadata in version 1.0 or higher, as there is currently no way (and none planned) to convert an array to a different metadata version. The current recommendation is to use metadata version 1.2, except when creating a boot partition, in which case use version 1.0 metadata and RAID-1.[http://neil.brown.name/blog/20100519043730-002]<br />
<br />
To use newer metadata versions (current tools default to version 0.9 metadata) add the --metadata option '''after''' the switch stating what you're doing in the first place. This will work:<br />
<br />
mdadm --create /dev/md0 --metadata 1.2 <blah><br />
<br />
This, however, will not work:<br />
<br />
mdadm --metadata 1.2 --create /dev/md0 <blah><br />
<br />
===Linear mode===<br />
<br />
Ok, so you have two or more partitions which are not necessarily the<br />
same size (but of course can be), which you want to append to each<br />
other.<br />
<br />
Spare-disks are not supported here. If a disk dies, the array dies<br />
with it. There's no information to put on a spare disk.<br />
<br />
Using mdadm, a single command like<br />
<br />
mdadm --create --verbose /dev/md0 --level=linear --raid-devices=2 /dev/sdb6 /dev/sdc5<br />
<br />
should create the array. The parameters speak for themselves. The<br />
output might look like this:<br />
<br />
mdadm: chunk size defaults to 64K<br />
mdadm: array /dev/md0 started.<br />
<br />
Have a look in [[mdstat|/proc/mdstat]]. You should see that the array is running.<br />
<br />
Now, you can create a filesystem, just like you would on any other<br />
device, mount it, include it in your /etc/fstab and so on.<br />
<br />
===RAID-0===<br />
<br />
You have two or more devices, of approximately the same size, and you<br />
want to combine their storage capacity and also combine their<br />
performance by accessing them in parallel.<br />
<br />
mdadm --create --verbose /dev/md0 --level=stripe --raid-devices=2 /dev/sdb6 /dev/sdc5<br />
<br />
Like in Linear mode, spare disks are not supported here either. RAID-0<br />
has no redundancy, so when a disk dies, the array goes with it.<br />
<br />
Having run mdadm you have initialised the superblocks and<br />
started the raid device. Have a look in [[mdstat|/proc/mdstat]] to see what's<br />
going on. You should see that your device is now running.<br />
<br />
/dev/md0 is now ready to be formatted, mounted, used and abused.<br />
<br />
===RAID-1===<br />
<br />
You have two devices of approximately the same size, and you want the two<br />
to be mirrors of each other. Perhaps you also have additional devices which<br />
you want to keep as stand-by spare-disks, which will automatically<br />
become part of the mirror if one of the active devices breaks.<br />
<br />
mdadm --create --verbose /dev/md0 --level=mirror --raid-devices=2 /dev/sdb1 /dev/sdc1<br />
<br />
If you have spare disks, you can add them to the end of the device<br />
specification like<br />
<br />
mdadm --create --verbose /dev/md0 --level=mirror --raid-devices=2 /dev/sdb1 /dev/sdc1 --spare-devices=1 /dev/sdd1<br />
<br />
Ok, now we're all set to start initializing the RAID. The mirror must<br />
be constructed, i.e. the contents (however unimportant now, since the<br />
device is still not formatted) of the two devices must be<br />
synchronized.<br />
<br />
Check out the [[mdstat|/proc/mdstat]] file. It should tell you that the /dev/md0<br />
device has been started, that the mirror is being reconstructed, and<br />
an ETA of the completion of the reconstruction.<br />
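To keep an eye on the resync as it runs, the following standard commands are handy (shown here just as a convenience):

 watch -n 5 cat /proc/mdstat     # refresh the status every 5 seconds
 mdadm --detail /dev/md0         # per-array view, including rebuild progress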
<br />
Reconstruction is done using idle I/O bandwidth. So, your system<br />
should still be fairly responsive, although your disk LEDs should be<br />
glowing nicely.<br />
<br />
The reconstruction process is transparent, so you can actually use the<br />
device even though the mirror is currently under reconstruction.<br />
<br />
Try formatting the device, while the reconstruction is running. It<br />
will work. Also you can mount it and use it while reconstruction is<br />
running. Of course, if the wrong disk breaks while the reconstruction<br />
is running, you're out of luck.<br />
<br />
===RAID-4/5/6===<br />
<br />
You have three or more devices (four or more for RAID-6) of roughly the same size, you want to<br />
combine them into a larger device, but you still want to maintain a degree of<br />
redundancy for data safety. Perhaps you also have a number of devices to<br />
use as spare-disks, which will not take part in the array until<br />
another device fails.<br />
<br />
If you use N devices where the smallest has size S, the size of the<br />
entire array will be (N-1)*S for RAID-4/5, or (N-2)*S for RAID-6. This "missing" space is used for parity<br />
(redundancy) information. Thus, if any single disk fails, all data stays<br />
intact; RAID-6 can even survive two failed disks, while for RAID-4/5 a second failure loses the array.<br />
<br />
A chunk size of 32 kB is a good default for many general-purpose<br />
filesystems. As an example, an array of seven 6 GB disks gives a<br />
(n-1)*s = (7-1)*6 GB = 36 GB device, holding an ext2 filesystem with a<br />
4 kB block size. You could go higher with both array chunk-size and<br />
filesystem block-size if your filesystem is either much larger, or<br />
just holds very large files. A recommended large-file chunk-size is 128 kB.<br />
<br />
Ok, enough talking. Let's see if raid-5 works. Run your command:<br />
<br />
mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1 --spare-devices=1 /dev/sde1<br />
<br />
and see what happens. Hopefully your disks start working<br />
like mad, as they begin the reconstruction of your array. Have a look<br />
in [[mdstat|/proc/mdstat]] to see what's going on.<br />
<br />
If the device was successfully created, the reconstruction process has<br />
now begun. Your array is not consistent until this reconstruction<br />
phase has completed. However, the array is fully functional (except<br />
for the handling of device failures of course), and you can format it<br />
and use it even while it is reconstructing.<br />
<br />
The initial reconstruction will always appear as though the array is degraded and is being reconstructed onto a spare, even if only just enough devices were added with zero spares. This is to optimize the initial reconstruction process. This may be confusing or worrying; it is intended for good reason. For more information, please check this [http://marc.info/?l=linux-raid&m=112044009718483&w=2 source, directly from Neil Brown].<br />
<br />
Now, you can create a filesystem. See the section on special [[#Options for mke2fs|options to mke2fs]] before formatting the filesystem. You can now mount it, include it in your /etc/fstab and so on.<br />
<br />
==Create and mount filesystem==<br />
Have a look in [[mdstat|/proc/mdstat]]. You should see that the array is running.<br />
<br />
Now, you can create a filesystem, just like you would on any other device, mount it, include it in your /etc/fstab, and so on.<br />
<br />
Common filesystem creation commands are mke2fs and mkfs.ext3. Please see [[#Options for mke2fs|options for mke2fs]] for an example and details.<br />
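A minimal example, assuming an ext3 filesystem and a /mnt/md0 mount point (both just illustrative choices):

 mkfs.ext3 /dev/md0                                             # create the filesystem
 mkdir -p /mnt/md0                                              # create a mount point
 mount /dev/md0 /mnt/md0                                        # mount it
 echo '/dev/md0  /mnt/md0  ext3  defaults  0 2' >> /etc/fstab   # mount it at boot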
<br />
<br />
==Using the Array==<br />
<br />
At this point you should be able to create a simple array of any flavour (hint: --level is your friend)<br />
<br />
Ok, now when you have your RAID device running, you can always stop it:<br />
<br />
mdadm --stop /dev/md0<br />
<br />
Starting is a little more complex; you may think that:<br />
mdadm --run /dev/md0<br />
would work - but it doesn't.<br />
<br />
Linux raid devices don't really exist on their own; they have to be assembled<br />
each time you want to use them. Assembly is like creation insofar as it pulls<br />
the component devices back together into an array.<br />
<br />
If you earlier ran:<br />
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1<br />
then<br />
mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1<br />
would work.<br />
<br />
However, the easy way to do this if you have a nice simple setup is:<br />
mdadm --assemble --scan <br />
<br />
(You might need to do this step first:<br />
<br />
 mdadm --detail --scan >> /etc/mdadm.conf<br />
<br />
)<br />
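The appended line looks roughly like the following (the UUID shown is the one from the example below; yours will differ, and the exact fields depend on your mdadm version and metadata format):

 ARRAY /dev/md0 metadata=1.2 UUID=a26bf396:31389f83:0df1722d:f404fe4c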
<br />
For complex cases (i.e. when you pull in disks from other machines that you're trying to repair) <br />
this has the potential to start arrays you don't really want started. A safer mechanism is to<br />
use the uuid parameter and run:<br />
mdadm --scan --assemble --uuid=a26bf396:31389f83:0df1722d:f404fe4c<br />
<br />
This will only assemble the array that you want - but it will work no matter<br />
what has happened to the device names. This is particularly cool if, for example,<br />
you add in a new SATA controller card and all of a sudden /dev/sda becomes /dev/sde!<br />
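The UUID of an existing array can be read from the running array or from any component's superblock:

 mdadm --detail /dev/md0 | grep UUID     # from a running array
 mdadm --examine /dev/sdb1 | grep UUID   # from a component device's superblock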
<br />
==The Persistent Superblock==<br />
<br />
Back in "The Good Old Days" (TM), the raidtools would read your<br />
/etc/raidtab file, and then initialize the array. However, this would<br />
require that the filesystem on which /etc/raidtab resided was mounted.<br />
This was unfortunate if you wanted to boot from a RAID.<br />
<br />
Also, the old approach led to complications when mounting filesystems<br />
on RAID devices. They could not be put in the /etc/fstab file as<br />
usual, but would have to be mounted from the init-scripts.<br />
<br />
The persistent superblocks solve these problems. When an array is<br />
created with the persistent-superblock option (the default now),<br />
a special superblock is written to a location (different for <br />
different superblock versions) on all disks<br />
participating in the array. This allows the kernel to read the<br />
configuration of RAID devices directly from the disks involved,<br />
instead of reading from some configuration file that may not be<br />
available at all times.<br />
<br />
It's not a bad idea to maintain a consistent /etc/mdadm.conf file,<br />
since you may need this file for later recovery of the array.<br />
<br />
The persistent superblock is mandatory if you want auto-detection of<br />
your RAID devices upon system boot. This is described in the<br />
[[Autodetect]] section.<br />
<br />
Superblock physical layouts are listed on [[RAID superblock formats]] .<br />
<br />
== External Metadata ==<br />
MDRAID has always used its own metadata format. There are two major formats for the MDRAID native metadata: the 0.90 format and version-1. The old 0.90 format limits arrays to 28 components and 2 terabytes. With the latest mdadm, version 1.2 is the default.<br />
<br />
Starting with Linux kernel v2.6.27 and mdadm v3.0, external metadata formats are supported. These formats have long been supported by DMRAID and, depending on the vendor, allow RAID volumes to be booted from the option ROM.<br />
<br />
The first format is DDF (Disk Data Format), defined by SNIA as the "Industry Standard" RAID metadata format. When a DDF array is constructed, a [[container]] is created, and normal RAID arrays are then created within that container.<br />
<br />
The second format is the Intel(r) Matrix Storage Manager metadata format. This also creates<br />
a [[container]] that is managed similarly to DDF. On some platforms (depending on the vendor), this<br />
format is supported by the option ROM in order to allow booting.<br />
[http://www.intel.com/design/chipsets/matrixstorage_sb.htm]<br />
<br />
<br />
To report the RAID information from the Option ROM:<br />
<br />
mdadm --detail-platform<br />
<br />
Platform : Intel(R) Matrix Storage Manager<br />
Version : 8.9.0.1023<br />
RAID Levels : raid0 raid1 raid10 raid5<br />
Chunk Sizes : 4k 8k 16k 32k 64k 128k<br />
Max Disks : 6<br />
Max Volumes : 2<br />
I/O Controller : /sys/devices/pci0000:00/0000:00:1f.2<br />
Port0 : /dev/sda (3MT0585Z)<br />
Port1 : - non-disk device (ATAPI DVD D DH16D4S) -<br />
Port2 : /dev/sdb (WD-WCANK2850263)<br />
Port3 : /dev/sdc (3MT005ML)<br />
Port4 : /dev/sdd (WD-WCANK2850441)<br />
Port5 : /dev/sde (WD-WCANK2852905)<br />
Port6 : - no device attached –<br />
<br />
To create RAID volumes that use external metadata, we must first create a container:<br />
<br />
 mdadm --create --verbose /dev/md/imsm /dev/sd[b-e] --raid-devices 4 --metadata=imsm<br />
<br />
In this example we created an IMSM based container for 4 RAID devices. Now we can create volumes within the container.<br />
<br />
mdadm --create --verbose /dev/md/vol0 /dev/md/imsm --raid-devices 4 --level 5<br />
<br />
Of course, the --size option can be used to limit how much disk space each volume uses at creation time, so that multiple volumes can be created within one container. One important note is that the various volumes within the container MUST span the same set of disks, e.g. a RAID10 volume and a RAID5 volume built across the same disks.<br />
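A sketch of what that might look like (the sizes and volume names here are purely illustrative; --size takes a per-device value in KiB):

 mdadm --create --verbose /dev/md/vol0 /dev/md/imsm --raid-devices 4 --level 5 --size 52428800   # first volume, ~50 GiB per device
 mdadm --create --verbose /dev/md/vol1 /dev/md/imsm --raid-devices 4 --level 10                  # second volume uses the remaining space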
<br />
=Advanced Options=<br />
==Chunk sizes==<br />
<br />
The chunk-size deserves an explanation. You can never write<br />
completely parallel to a set of disks. If you had two disks and wanted<br />
to write a byte, you would have to write four bits on each disk. <br />
Actually, every second bit would go to disk 0 and the others to disk<br />
1. Hardware just doesn't support that. Instead, we choose some chunk-<br />
size, which we define as the smallest "atomic" amount of data that can<br />
be written to the devices. A write of 16 kB with a chunk size of 4<br />
kB will cause the first and the third 4 kB chunks to be written to<br />
the first disk and the second and fourth chunks to be written to the<br />
second disk, in the RAID-0 case with two disks. Thus, for large<br />
writes, you may see lower overhead by having fairly large chunks,<br />
whereas arrays that are primarily holding small files may benefit more<br />
from a smaller chunk size.<br />
<br />
Chunk sizes must be specified for all RAID levels, including linear<br />
mode. However, the chunk-size does not make any difference for linear<br />
mode.<br />
<br />
For optimal performance, you should experiment with the chunk-size, as well<br />
as with the block-size of the filesystem you put on the array. For other experiments and performance charts, check out our [[Performance]] page; you will find chunk-size graphs galore.<br />
<br />
The argument to mdadm's --chunk option (or, with the old raidtools, the<br />
chunk-size option in /etc/raidtab) specifies the chunk-size in kilobytes. So "4" means "4 kB".<br />
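For example, to create a RAID-5 with an explicit 128 kB chunk size (the device names are of course just placeholders):

 mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 --chunk=128 /dev/sdb1 /dev/sdc1 /dev/sdd1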
<br />
====RAID-0====<br />
<br />
Data is written "almost" in parallel to the disks in the array.<br />
Actually, chunk-size bytes are written to each disk, serially.<br />
<br />
If you specify a 4 kB chunk size, and write 16 kB to an array of three<br />
disks, the RAID system will write 4 kB to disks 0, 1 and 2, in<br />
parallel, then the remaining 4 kB to disk 0.<br />
<br />
A 32 kB chunk-size is a reasonable starting point for most arrays. But<br />
the optimal value depends very much on the number of drives involved,<br />
the content of the file system you put on it, and many other factors.<br />
Experiment with it, to get the best performance.<br />
<br />
<br />
====RAID-0 with ext2====<br />
<br />
The following tip was contributed by michael@freenet-ag.de:<br />
<br />
NOTE: this tip is no longer needed since the ext2 fs supports dedicated options: see "Options for mke2fs" below<br />
<br />
There is more disk activity at the beginning of ext2fs block groups.<br />
On a single disk, that does not matter, but it can hurt RAID0, if all<br />
block groups happen to begin on the same disk.<br />
<br />
Example:<br />
<br />
With a raid using a chunk size of 4k (also called stride-size), and filesystem using a block size of 4k, each block occupies one stride.<br />
With two disks, the #disk * stride-size product (also called stripe-width) is 2*4k=8k.<br />
The default block group size is 32768 blocks, which is a multiple of the stripe-width of 2 blocks, so all block groups start on disk 0,<br />
which can easily become a hot spot, thus reducing overall performance.<br />
Unfortunately, the block group size can only be set in steps of 8 blocks (32k when using 4k blocks), which also happens to be a multiple of the stripe-width,<br />
so you can not avoid the problem by adjusting the blocks per group with the -g option of mkfs(8).<br />
<br />
If you add a disk, the stripe-width (#disk * stride-size product) is 12k,<br />
so the first block group starts on disk 0, the second block group starts on disk 2 and the third on disk 1.<br />
The load caused by disk activity at the block group beginnings spreads over all disks.<br />
<br />
In case you can not add a disk, try a stride size of 32k. The stripe-width (#disk * stride-size product) is then 64k.<br />
Since you can change the block group size in steps of 8 blocks (32k), using 32760 blocks per group solves the problem.<br />
<br />
Additionally, the block group boundaries should fall on stride boundaries. The examples above get this right.<br />
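As a sketch, the blocks-per-group workaround described above would be applied at mkfs time roughly like this (on current e2fsprogs the stride/stripe-width options below are the preferred fix):

 mke2fs -b 4096 -g 32760 /dev/md0    # 4 kB blocks, 32760 blocks per group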
<br />
====RAID-1====<br />
<br />
For writes, the chunk-size doesn't affect the array, since all data<br />
must be written to all disks no matter what. For reads however, the<br />
chunk-size specifies how much data to read serially from the<br />
participating disks. Since all active disks in the array contain the<br />
same information, the RAID layer has complete freedom in choosing from<br />
which disk information is read - this is used by the RAID code to<br />
improve average seek times by picking the disk best suited for any<br />
given read operation.<br />
<br />
====RAID-4====<br />
<br />
When a write is done on a RAID-4 array, the parity information must be<br />
updated on the parity disk as well.<br />
<br />
The chunk-size affects read performance in the same way as in RAID-0,<br />
since reads from RAID-4 are done in the same way.<br />
<br />
<br />
====RAID-5====<br />
<br />
On RAID-5, the chunk size has the same meaning for reads as for<br />
RAID-0. Writing on RAID-5 is a little more complicated: When a chunk<br />
is written on a RAID-5 array, the corresponding parity chunk must be<br />
updated as well. Updating a parity chunk requires either<br />
<br />
* The original chunk, the new chunk, and the old parity block<br />
<br />
* Or, all chunks (except for the parity chunk) in the stripe<br />
<br />
The RAID code will pick the easiest way to update each parity chunk<br />
as the write progresses. Naturally, if your server has lots of<br />
memory and/or if the writes are nice and linear, updating the<br />
parity chunks will only impose the overhead of one extra write<br />
going over the bus (just like RAID-1). The parity calculation<br />
itself is extremely efficient, so while it does of course load the<br />
main CPU of the system, this impact is negligible. If the writes<br />
are small and scattered all over the array, the RAID layer will<br />
almost always need to read in all the untouched chunks from each<br />
stripe that is written to, in order to calculate the parity chunk.<br />
This will impose extra bus-overhead and latency due to extra reads.<br />
<br />
A reasonable chunk-size for RAID-5 is 128 kB. A study showed that with 4 drives (even-number-of-drives might make a difference) that large chunk sizes of 512-2048 kB gave superior results [http://blog.jamponi.net/2008/07/raid56-and-10-benchmarks-on-26255_10.html]. As always, you may want to experiment with this or check out our [[Performance]] page.<br />
<br />
Also see the section on special [[#Options for mke2fs|options to mke2fs]]. This affects<br />
RAID-5 performance.<br />
<br />
<br />
==ext2, ext3, and ext4==<br />
<br />
There are special options available when formatting RAID-4 or -5 devices with mke2fs or mkfs. The -E stride=nn,stripe-width=mm options will allow mke2fs to better place different ext2/ext3 specific data-structures in an intelligent way on the RAID device.<br />
<br />
Note: The commands mkfs or mkfs.ext3 or mkfs.ext2 are all versions of the same command, with the same options; use whichever is supported, and decide whether you are using ext2 or ext3 (non-journaled vs journaled). See the two versions of the same command below; each makes a different filesystem type.<br />
<br />
Here is an example, with its explanation below:<br />
<br />
mke2fs -v -m .1 -b 4096 -E stride=32,stripe-width=64 /dev/md0<br />
or<br />
mkfs.ext3 -v -m .1 -b 4096 -E stride=32,stripe-width=64 /dev/md0<br />
<br />
Options explained:<br />
* The first command makes an ext2 filesystem, the second makes an ext3 filesystem<br />
* -v : verbose<br />
* -m .1 : leave .1% of the disk reserved for root (so it doesn't fill up and cause problems)<br />
* -b 4096 : block size of 4 kB (recommended above for large-file filesystems)<br />
* -E stride=32,stripe-width=64 : see the calculation below<br />
<br />
===Calculation===<br />
* chunk size = 128 kB (set by the mdadm command, see the chunk size advice above)<br />
* block size = 4 kB (recommended for large files, and most of the time)<br />
* stride = chunk / block = 128 kB / 4 kB = 32<br />
* stripe-width = stride * ( (n disks in raid5) - 1 ) = 32 * ( (3) - 1 ) = 32 * 2 = 64<br />
<br />
If the chunk-size is 128 kB, it means, that 128 kB of consecutive data will reside on one disk. If we want to build an ext2 filesystem with 4 kB block-size, we realize that there will be 32 filesystem blocks in one array chunk.<br />
<br />
stripe-width=64 is calculated by multiplying the stride=32 value with the number of data disks in the array. <br />
<br />
A raid5 with n disks has n-1 data disks, one being reserved for parity. (Note: the mke2fs man page incorrectly states n+1; this is a known bug in the man-page docs that is now fixed.) A raid10 (1+0) with n disks is actually a raid 0 of n/2 raid1 subarrays with 2 disks each.<br />
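For instance, repeating the calculation for a hypothetical 6-disk RAID-6 with a 512 kB chunk and 4 kB blocks: stride = 512/4 = 128 and stripe-width = 128 * (6-2) = 512, so the filesystem would be created roughly like this:

 mkfs.ext4 -v -m .1 -b 4096 -E stride=128,stripe-width=512 /dev/md0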
<br />
===Performance===<br />
RAID-{4,5,10} performance is strongly influenced by the stride and stripe-width options. It is uncertain how the stride option affects other RAID levels; if anyone has information on this, please add it here.<br />
<br />
The ext2fs blocksize severely influences the performance of the filesystem. You should always use 4kB block size on any filesystem larger than a few hundred megabytes, unless you store a very large number of very small files on it.<br />
<br />
===Changing after creation===<br />
It is possible to change the parameters with <br />
tune2fs -E stride=n,stripe-width=m /dev/mdx<br />
<br />
== XFS ==<br />
<br />
xfsprogs and the mkfs.xfs utility '''automatically''' select the best stripe size and stripe width for underlying devices that support it, such as Linux software RAID devices. Earlier versions of xfs used a built-in libdisk and the GET_ARRAY_INFO ioctl to gather the information; newer versions make use of enhanced geometry detection in libblkid. When using libblkid, accurate geometry may also be obtained from hardware RAID devices which properly export this information.<br />
<br />
To create XFS filesystems optimized for RAID arrays manually, you'll need two parameters:<br />
<br />
* '''chunk size''': same as used with mdadm<br />
* '''number of "data" disks''': number of disks that store data, not disks used for parity or spares. For example:<br />
** RAID 0 with 2 disks: 2 data disks (n)<br />
** RAID 1 with 2 disks: 1 data disk (n/2)<br />
** RAID 10 with 10 disks: 5 data disks (n/2)<br />
** RAID 5 with 6 disks (no spares): 5 data disks (n-1)<br />
** RAID 6 with 6 disks (no spares): 4 data disks (n-2)<br />
<br />
With these numbers in hand, you then want to use mkfs.xfs's su and sw parameters when creating your filesystem.<br />
<br />
* '''su''': Stripe unit, which is the RAID chunk size, in bytes<br />
* '''sw''': Multiplier of the stripe unit, i.e. number of data disks<br />
<br />
If you've a 4-disk RAID 5 and are using a chunk size of 64 KiB, the command to use is:<br />
<br />
mkfs -t xfs -d su=64k -d sw=3 /dev/md0<br />
<br />
Alternately, you may use the sunit/swidth mkfs options to specify stripe unit and width in 512-byte-block units. For the array above, it could also be specified as:<br />
<br />
mkfs -t xfs -d sunit=128 -d swidth=384 /dev/md0<br />
<br />
The result is exactly the same; however, the su/sw combination is often simpler to remember. Beware that sunit/swidth are inconsistently used throughout XFS' utilities (see xfs_info below).<br />
<br />
To check the parameters in use for an XFS filesystem, use xfs_info.<br />
<br />
xfs_info /dev/md0<br />
<br />
 meta-data=/dev/md0               isize=256    agcount=32, agsize=45785440 blks<br />
          =                       sectsz=4096  attr=2<br />
 data     =                       bsize=4096   blocks=1465133952, imaxpct=5<br />
          =                       sunit=16     swidth=48 blks<br />
 naming   =version 2              bsize=4096   ascii-ci=0<br />
 log      =internal               bsize=4096   blocks=521728, version=2<br />
          =                       sectsz=4096  sunit=1 blks, lazy-count=0<br />
 realtime =none                   extsz=196608 blocks=0, rtextents=0<br />
<br />
Here, rather than displaying 512-byte units as used in mkfs.xfs, sunit and swidth are shown as multiples of the filesystem block size (bsize), another file system tunable. This inconsistency is for legacy reasons, and is not well-documented.<br />
<br />
For the above example, sunit (sunit×bsize = su, 16×4096 = 64 KiB) and swidth (swidth×bsize = sw, 48×4096 = 192 KiB) are optimal and correctly reported.<br />
<br />
While the stripe unit and stripe width cannot be changed after an XFS file system has been created, they can be overridden at mount time with the sunit/swidth options, similar to ones used by mkfs.xfs.<br />
<br />
From Documentation/filesystems/xfs.txt in the kernel tree:<br />
<br />
sunit=value and swidth=value<br />
Used to specify the stripe unit and width for a RAID device or<br />
a stripe volume. "value" must be specified in 512-byte block<br />
units.<br />
If this option is not specified and the filesystem was made on<br />
a stripe volume or the stripe width or unit were specified for<br />
the RAID device at mkfs time, then the mount system call will<br />
restore the value from the superblock. For filesystems that<br />
are made directly on RAID devices, these options can be used<br />
to override the information in the superblock if the underlying<br />
disk layout changes after the filesystem has been created.<br />
The "swidth" option is required if the "sunit" option has been<br />
specified, and must be a multiple of the "sunit" value.<br />
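For the example array above, overriding at mount time would look something like this (the /mnt/data mount point is just a placeholder):

 mount -t xfs -o sunit=128,swidth=384 /dev/md0 /mnt/data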
<br />
Source: [http://says.samat.org/ Samat Says: Tuning XFS for RAID]</div>Rpnabarhttps://raid.wiki.kernel.org/index.php/RAID_setupRAID setup2010-07-31T08:12:59Z<p>Rpnabar: /* Using the Array */</p>
<hr />
<div>=RAID setup=<br />
<br />
==General setup==<br />
<br />
This is what you need for any of the RAID levels:<br />
<br />
* A kernel with the appropriate md support either as modules or built-in. Preferably a kernel from the 2.6 series. Alternatively a stable 2.4 kernel (pre 2.4 kernels are no longer covered in this document).<br />
<br />
* The mdadm tool<br />
<br />
* Patience, Pizza, and your favorite caffeinated beverage.<br />
<br />
The first two items are included as standard in most GNU/Linux distributions<br />
today.<br />
<br />
If your system has RAID support, you should have a file called<br />
[[mdstat|/proc/mdstat]]. Remember it, that file is your friend. If you do not<br />
have that file, maybe your kernel does not have RAID support. <br />
<br />
If you're sure your kernel has raid support you may need to run run modprobe raid[RAID mode] to load raid support into your kernel. <br />
eg to support raid5:<br />
modprobe raid456<br />
<br />
See what the file contains, by doing a<br />
cat /proc/mdstat<br />
It should tell you that you have the right RAID personality (eg. RAID mode) registered, and that<br />
no RAID devices are currently active. See the [[mdstat|/proc/mdstat]] page for more details.<br />
<br />
==Preparing and partitioning your disk devices==<br />
<br />
Arrays can be built on top of entire disks or on partitions.<br />
<br />
This leads to 2 frequent questions:<br />
* Should I use entire device or a partition?<br />
* What partition type?<br />
Which are discussed in [[Partition Types]]<br />
<br />
==Downloading and installing mdadm - the RAID management tool==<br />
<br />
mdadm is now the standard RAID management tool and should be found in any modern distribution.<br />
<br />
You can download the most recent mdadm tarball at<br />
http://www.cse.unsw.edu.au/~neilb/source/mdadm/.<br />
<br />
Use the normal distribution method for obtaining the package:<br />
<br />
Debian, Ubuntu:<br />
apt-get install mdadm<br />
<br />
Gentoo:<br />
emerge mdadm<br />
<br />
RedHat:<br />
yum install mdadm<br />
<br />
==Mdadm modes of operation==<br />
<br />
mdadm is well documented in its manpage - well worth a read.<br />
<br />
man mdadm<br />
<br />
mdadm has 7 major modes of operation. Normal operation just uses the 'Create', 'Assemble' and 'Monitor' commands - the rest come in handy when you're messing with your array; typically fixing it or changing it.<br />
<br />
===1. Create===<br />
Create a new array with per-device superblocks (normal creation).<br />
<br />
===2. Assemble===<br />
Assemble the parts of a previously created array into an active array. Components can be explicitly<br />
given or can be searched for. mdadm checks that the components do form a bona fide array, and can, on<br />
request, fiddle superblock information so as to assemble a faulty array. Typically you do this in the<br />
init scripts after rebooting.<br />
<br />
===3. Follow or Monitor===<br />
Monitor one or more md devices and act on any state changes. This is only meaningful for raid1, 4, 5,<br />
6, 10 or multipath arrays as only these have interesting state. raid0 or linear never have missing,<br />
spare, or failed drives, so there is nothing to monitor. Typically you do this after rebooting too.<br />
<br />
===4. Build===<br />
Build an array that doesn't have per-device superblocks. For these sorts of arrays, mdadm cannot<br />
differentiate between initial creation and subsequent assembly of an array. It also cannot perform any<br />
checks that appropriate devices have been requested. Because of this, the Build mode should only be<br />
used together with a complete understanding of what you are doing.<br />
<br />
===5. Grow===<br />
[[Growing|Grow]], shrink or otherwise reshape an array in some way. Currently supported growth options including changing the active size of component devices in RAID level 1/4/5/6 and changing the number of active devices in RAID1.<br />
<br />
===6. Manage===<br />
This is for doing things to specific components of an array such as adding new spares and removing<br />
faulty devices.<br />
<br />
===7. Misc===<br />
This is an 'everything else' mode that supports operations on active arrays, operations on component devices such as erasing old superblocks, and information gathering operations.<br />
<br />
<br />
==Create RAID device==<br />
Below we'll see how to create arrays of various types; the basic approach is:<br />
<br />
mdadm --create /dev/md0 <blah><br />
mdadm --monitor /dev/md0<br />
<br />
If you want to access all the latest and upcoming features such as fully named RAID arrays so you no longer have to memorize which partition goes where, you'll want to make sure to use persistant metadata's in the version 1.0 or higher format, as there is no way (currently or planned) to convert an array to a different metadata version. Current recommendations are to use metadata version 1.2 except when creating a boot partition, in which case use version 1.0 metadata and RAID-1.[http://neil.brown.name/blog/20100519043730-002]<br />
<br />
To use newer metadata versions (current tools default to version 0.9 metadata) add the --metadata option '''after''' the switch stating what you're doing in the first place. This will work:<br />
<br />
mdadm --create /dev/md0 --metadata 1.2 <blah><br />
<br />
This, however, will not work:<br />
<br />
mdadm --metadata 1.2 --create /dev/md0 <blah><br />
<br />
===Linear mode===<br />
<br />
Ok, so you have two or more partitions which are not necessarily the<br />
same size (but of course can be), which you want to append to each<br />
other.<br />
<br />
Spare-disks are not supported here. If a disk dies, the array dies<br />
with it. There's no information to put on a spare disk.<br />
<br />
Using mdadm, a single command like<br />
<br />
mdadm --create --verbose /dev/md0 --level=linear --raid-devices=2 /dev/sdb6 /dev/sdc5<br />
<br />
should create the array. The parameters talk for themselves. The out-<br />
put might look like this<br />
<br />
mdadm: chunk size defaults to 64K<br />
mdadm: array /dev/md0 started.<br />
<br />
Have a look in [[mdstat|/proc/mdstat]]. You should see that the array is running.<br />
<br />
Now, you can create a filesystem, just like you would on any other<br />
device, mount it, include it in your /etc/fstab and so on.<br />
<br />
===RAID-0===<br />
<br />
You have two or more devices, of approximately the same size, and you<br />
want to combine their storage capacity and also combine their<br />
performance by accessing them in parallel.<br />
<br />
mdadm --create --verbose /dev/md0 --level=stripe --raid-devices=2 /dev/sdb6 /dev/sdc5<br />
<br />
Like in Linear mode, spare disks are not supported here either. RAID-0<br />
has no redundancy, so when a disk dies, the array goes with it.<br />
<br />
Having run mdadm you have initialised the superblocks and<br />
started the raid device. Have a look in [[mdstat|/proc/mdstat]] to see what's<br />
going on. You should see that your device is now running.<br />
<br />
/dev/md0 is now ready to be formatted, mounted, used and abused.<br />
<br />
===RAID-1===<br />
<br />
You have two devices of approximately same size, and you want the two<br />
to be mirrors of each other. Eventually you have more devices, which<br />
you want to keep as stand-by spare-disks, that will automatically<br />
become a part of the mirror if one of the active devices break.<br />
<br />
mdadm --create --verbose /dev/md0 --level=mirror --raid-devices=2 /dev/sdb1 /dev/sdc1<br />
<br />
If you have spare disks, you can add them to the end of the device<br />
specification like<br />
<br />
mdadm --create --verbose /dev/md0 --level=mirror --raid-devices=2 /dev/sdb1 /dev/sdc1 --spare-devices=1 /dev/sdd1<br />
<br />
Ok, now we're all set to start initializing the RAID. The mirror must<br />
be constructed, eg. the contents (however unimportant now, since the<br />
device is still not formatted) of the two devices must be<br />
synchronized.<br />
<br />
Check out the [[mdstat|/proc/mdstat]] file. It should tell you that the /dev/md0<br />
device has been started, that the mirror is being reconstructed, and<br />
an ETA of the completion of the reconstruction.<br />
<br />
Reconstruction is done using idle I/O bandwidth. So, your system<br />
should still be fairly responsive, although your disk LEDs should be<br />
glowing nicely.<br />
<br />
The reconstruction process is transparent, so you can actually use the<br />
device even though the mirror is currently under reconstruction.<br />
<br />
Try formatting the device, while the reconstruction is running. It<br />
will work. Also you can mount it and use it while reconstruction is<br />
running. Of Course, if the wrong disk breaks while the reconstruction<br />
is running, you're out of luck.<br />
<br />
===RAID-4/5/6===<br />
<br />
You have three or more devices (four or more for RAID-6) of roughly the same size, you want to<br />
combine them into a larger device, but still to maintain a degree of<br />
redundancy for data safety. Eventually you have a number of devices to<br />
use as spare-disks, that will not take part in the array before<br />
another device fails.<br />
<br />
If you use N devices where the smallest has size S, the size of the<br />
entire array will be (N-1)*S. This "missing" space is used for parity<br />
(redundancy) information. Thus, if any disk fails, all data stay<br />
intact. But if two disks fail, all data is lost.<br />
<br />
A chunk size of 32 kB is a good default for many general purpose<br />
filesystems of this size. The array on which the above raidtab is<br />
used, is a 7 times 6 GB = 36 GB (remember the (n-1)*s = (7-1)*6 = 36)<br />
device. It holds an ext2 filesystem with a 4 kB block size. You could<br />
go higher with both array chunk-size and filesystem block-size if your<br />
filesystem is either much larger, or just holds very large files. A<br />
recommended large-file chunk-size is 128kb.<br />
<br />
Ok, enough talking. Let's see if raid-5 works. Run your command:<br />
<br />
mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1 --spare-devices=1 /dev/sde1<br />
<br />
and see what happens. Hopefully your disks start working<br />
like mad, as they begin the reconstruction of your array. Have a look<br />
in [[mdstat|/proc/mdstat]] to see what's going on.<br />
<br />
If the device was successfully created, the reconstruction process has<br />
now begun. Your array is not consistent until this reconstruction<br />
phase has completed. However, the array is fully functional (except<br />
for the handling of device failures of course), and you can format it<br />
and use it even while it is reconstructing.<br />
<br />
The initial reconstruction will always appear as though the array is degraded and is being reconstructed onto a spare, even if only just enough devices were added with zero spares. This is to optimize the initial reconstruction process. This may be confusing or worrying; it is intended for good reason. For more information, please check this [http://marc.info/?l=linux-raid&m=112044009718483&w=2 source, directly from Neil Brown].<br />
<br />
Now, you can create a filesystem. See the section on special [[#Options for mke2fs|options to mke2fs]] before formatting the filesystem. You can now mount it, include it in your /etc/fstab and so on.<br />
<br />
==Create and mount filesystem==<br />
Have a look in [[mdstat|/proc/mdstat]]. You should see that the array is running.<br />
<br />
Now, you can create a filesystem, just like you would on any other device, mount it, include it in your /etc/fstab, and so on.<br />
<br />
Common filesystem creation commands are mk2fs and mkfs.ext3. Please see [[#Options for mke2fs|options for mke2fs]] for an example and details.<br />
<br />
<br />
==Using the Array==<br />
<br />
At this point you should be able to create a simple array of any flavour (hint: --level is your friend)<br />
<br />
Ok, now when you have your RAID device running, you can always stop it:<br />
<br />
mdadm --stop /dev/md0<br />
<br />
Starting is a little more complex; you may think that:<br />
mdadm --run /dev/md0<br />
would work - but it doesn't.<br />
<br />
Linux raid devices don't really exist on their own; they have to be assembled<br />
each time you want to use them. Assembly is like creation insofar as it pulls<br />
together devices<br />
<br />
If you earlier ran:<br />
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1<br />
then<br />
mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1<br />
would work.<br />
<br />
However, the easy way to do this if you have a nice simple setup is:<br />
mdadm --assemble --scan <br />
<br />
[<br />
You might need to do this step earlier<br />
mdadm --detail --scan >> /etc/mdadm.conf<br />
]<br />
<br />
For complex cases (ie you pull in disks from other machines that you're trying to repair) <br />
this has the potential to start arrays you don't really want started. A safer mechanism is to<br />
use the uuid parameter and run:<br />
mdadm --scan --assemble --uuid=a26bf396:31389f83:0df1722d:f404fe4c<br />
<br />
This will only assemble the array that you want - but it will work no matter<br />
what has happened to the device names. This is particularly cool if, for example,<br />
you add in a new SATA controller card and all of a sudden /dev/sda becomes /dec/sde!!!<br />
<br />
==The Persistent Superblock==<br />
<br />
Back in "The Good Old Days" (TM), the raidtools would read your<br />
/etc/raidtab file, and then initialize the array. However, this would<br />
require that the filesystem on which /etc/raidtab resided was mounted.<br />
This was unfortunate if you want to boot on a RAID.<br />
<br />
Also, the old approach led to complications when mounting filesystems<br />
on RAID devices. They could not be put in the /etc/fstab file as<br />
usual, but would have to be mounted from the init-scripts.<br />
<br />
The persistent superblocks solve these problems. When an array is<br />
created with the persistent-superblock option (the default now),<br />
a special superblock is written to a location (different for <br />
different superblock versions) on all disks<br />
participating in the array. This allows the kernel to read the<br />
configuration of RAID devices directly from the disks involved,<br />
instead of reading from some configuration file that may not be<br />
available at all times.<br />
<br />
It's not a bad idea to maintain a consistent /etc/mdadm.conf file,<br />
since you may need this file for later recovery of the array.<br />
<br />
The persistent superblock is mandatory if you want auto-detection of<br />
your RAID devices upon system boot. This is described in the<br />
[[Autodetect]] section.<br />
<br />
Superblock physical layouts are listed on [[RAID superblock formats]] .<br />
<br />
== External Metadata ==<br />
MDRAID has always used its own metadata format. There are two different major formats for the MDRAID native metadata, the 0.90 and the version-1. Th old 0.90 format limits the arrays to 28 components and 2 terabytes. With the latest mdadm, version 1.2 is the default.<br />
<br />
Starting with Linux kernel v2.6.27 and mdadm v3.0, external metadata are supported. These formats have been long supported with DMRAID and allow the booting of RAID volumes from OptionROM depending on the vendor.<br />
<br />
The first format is the DDF (Disk Data Format) defined by SNIA as the "Industry Standard" RAID metadata format. When a DDF array is constructed, a [[container]] is created in which normal RAID arrarys can be created within the container.<br />
<br />
The second format is the Intel(r) Matrix Storage Manager metadata format. This also creates<br />
a [[container]] that is managed similar to DDF. And on some platforms (depending on vendor), this<br />
format is supported by option-ROM in order to allow booting.<br />
[http://www.intel.com/design/chipsets/matrixstorage_sb.htm]<br />
<br />
<br />
To report the RAID information from the Option ROM:<br />
<br />
mdadm --detail-platform<br />
<br />
Platform : Intel(R) Matrix Storage Manager<br />
Version : 8.9.0.1023<br />
RAID Levels : raid0 raid1 raid10 raid5<br />
Chunk Sizes : 4k 8k 16k 32k 64k 128k<br />
Max Disks : 6<br />
Max Volumes : 2<br />
I/O Controller : /sys/devices/pci0000:00/0000:00:1f.2<br />
Port0 : /dev/sda (3MT0585Z)<br />
Port1 : - non-disk device (ATAPI DVD D DH16D4S) -<br />
Port2 : /dev/sdb (WD-WCANK2850263)<br />
Port3 : /dev/sdc (3MT005ML)<br />
Port4 : /dev/sdd (WD-WCANK2850441)<br />
Port5 : /dev/sde (WD-WCANK2852905)<br />
Port6 : - no device attached –<br />
<br />
To create RAID volumes that are external metadata, we must first create a container:<br />
<br />
mdadm --create --verbose /dev/md/imsm /dev/sd[b-g] --raid-devices 4 --metadata=imsm<br />
<br />
In this example we created an IMSM based container for 4 RAID devices. Now we can create volumes within the container.<br />
<br />
mdadm --create --verbose /dev/md/vol0 /dev/md/imsm --raid-devices 4 --level 5<br />
<br />
Of course, the --size option can be used to limit the size of the disk space used in the volume during creation in order to create multiple volumes within the container. One important note is that the various volumes within the container MUST span the same disks. i.e. a RAID10 volume and a RAID5 volume spanning the same number of disks.<br />
<br />
=Advanced Options=<br />
==Chunk sizes==<br />
<br />
The chunk-size deserves an explanation. You can never write<br />
completely parallel to a set of disks. If you had two disks and wanted<br />
to write a byte, you would have to write four bits on each disk. <br />
Actually, every second bit would go to disk 0 and the others to disk<br />
1. Hardware just doesn't support that. Instead, we choose some chunk-<br />
size, which we define as the smallest "atomic" mass of data that can<br />
be written to the devices. A write of 16 kB with a chunk size of 4<br />
kB will cause the first and the third 4 kB chunks to be written to<br />
the first disk and the second and fourth chunks to be written to the<br />
second disk, in the RAID-0 case with two disks. Thus, for large<br />
writes, you may see lower overhead by having fairly large chunks,<br />
whereas arrays that are primarily holding small files may benefit more<br />
from a smaller chunk size.<br />
<br />
Chunk sizes must be specified for all RAID levels, including linear<br />
mode. However, the chunk-size does not make any difference for linear<br />
mode.<br />
<br />
For optimal performance, you should experiment with the chunk-size, as well<br />
as with the block-size of the filesystem you put on the array. For others experiments and performance charts, check out our [[Performance]] page. You can get chunk-size graphs galore.<br />
<br />
The argument to the chunk-size option in /etc/raidtab specifies the<br />
chunk-size in kilobytes. So "4" means "4 kB".<br />
<br />
====RAID-0====<br />
<br />
Data is written "almost" in parallel to the disks in the array.<br />
Actually, chunk-size bytes are written to each disk, serially.<br />
<br />
If you specify a 4 kB chunk size, and write 16 kB to an array of three<br />
disks, the RAID system will write 4 kB to disks 0, 1 and 2, in<br />
parallel, then the remaining 4 kB to disk 0.<br />
<br />
A 32 kB chunk-size is a reasonable starting point for most arrays. But<br />
the optimal value depends very much on the number of drives involved,<br />
the content of the file system you put on it, and many other factors.<br />
Experiment with it, to get the best performance.<br />
<br />
<br />
====RAID-0 with ext2====<br />
<br />
The following tip was contributed by michael@freenet-ag.de:<br />
<br />
NOTE: this tip is no longer needed since the ext2 fs supports dedicated options: see "Options for mke2fs" below<br />
<br />
There is more disk activity at the beginning of ext2fs block groups.<br />
On a single disk, that does not matter, but it can hurt RAID0, if all<br />
block groups happen to begin on the same disk.<br />
<br />
Example:<br />
<br />
With a raid using a chunk size of 4k (also called stride-size), and filesystem using a block size of 4k, each block occupies one stride.<br />
With two disks, the #disk * stride-size product (also called stripe-width) is 2*4k=8k.<br />
The default block group size is 32768 blocks, which is a multiple of the stripe-width of 2 blocks, so all block groups start on disk 0,<br />
which can easily become a hot spot, thus reducing overall performance.<br />
Unfortunately, the block group size can only be set in steps of 8 blocks (32k when using 4k blocks), which also happens to be a multiple of the stripe-width,<br />
so you can not avoid the problem by adjusting the blocks per group with the -g option of mkfs(8).<br />
<br />
If you add a disk, the stripe-width (#disk * stride-size product) is 12k,<br />
so the first block group starts on disk 0, the second block group starts on disk 2 and the third on disk 1.<br />
The load caused by disk activity at the block group beginnings spreads over all disks.<br />
<br />
In case you can not add a disk, try a stride size of 32k. The stripe-width (#disk * stride-size product) is then 64k.<br />
Since you can change the block group size in steps of 8 blocks (32k), using 32760 blocks per group solves the problem.<br />
<br />
Additionally, the block group boundaries should fall on stride boundaries. The examples above get this right.<br />
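<br />
As a sketch of the workaround above (assuming two disks, a 32 kB chunk and 4 kB filesystem blocks), the blocks-per-group value can be set explicitly with the -g option of mke2fs:<br />
<br />
mke2fs -b 4096 -g 32760 /dev/md0<br />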
<br />
====RAID-1====<br />
<br />
For writes, the chunk-size doesn't affect the array, since all data<br />
must be written to all disks no matter what. For reads however, the<br />
chunk-size specifies how much data to read serially from the<br />
participating disks. Since all active disks in the array contain the<br />
same information, the RAID layer has complete freedom in choosing from<br />
which disk information is read - this is used by the RAID code to<br />
improve average seek times by picking the disk best suited for any<br />
given read operation.<br />
<br />
====RAID-4====<br />
<br />
When a write is done on a RAID-4 array, the parity information must be<br />
updated on the parity disk as well.<br />
<br />
The chunk-size affects read performance in the same way as in RAID-0,<br />
since reads from RAID-4 are done in the same way.<br />
<br />
<br />
====RAID-5====<br />
<br />
On RAID-5, the chunk size has the same meaning for reads as for<br />
RAID-0. Writing on RAID-5 is a little more complicated: When a chunk<br />
is written on a RAID-5 array, the corresponding parity chunk must be<br />
updated as well. Updating a parity chunk requires either<br />
<br />
* The original chunk, the new chunk, and the old parity chunk<br />
<br />
* Or, all chunks (except for the parity chunk) in the stripe<br />
<br />
The RAID code will pick the easiest way to update each parity chunk<br />
as the write progresses. Naturally, if your server has lots of<br />
memory and/or if the writes are nice and linear, updating the<br />
parity chunks will only impose the overhead of one extra write<br />
going over the bus (just like RAID-1). The parity calculation<br />
itself is extremely efficient, so while it does of course load the<br />
main CPU of the system, this impact is negligible. If the writes<br />
are small and scattered all over the array, the RAID layer will<br />
almost always need to read in all the untouched chunks from each<br />
stripe that is written to, in order to calculate the parity chunk.<br />
This will impose extra bus-overhead and latency due to extra reads.<br />
<br />
A reasonable chunk-size for RAID-5 is 128 kB. A study showed that with 4 drives (even-number-of-drives might make a difference) that large chunk sizes of 512-2048 kB gave superior results [http://blog.jamponi.net/2008/07/raid56-and-10-benchmarks-on-26255_10.html]. As always, you may want to experiment with this or check out our [[Performance]] page.<br />
<br />
Also see the section on special [[#Options for mke2fs|options to mke2fs]]. This affects<br />
RAID-5 performance.<br />
<br />
<br />
==ext2, ext3, and ext4==<br />
<br />
There are special options available when formatting RAID-4 or -5 devices with mke2fs or mkfs. The -E stride=nn,stripe-width=mm options allow mke2fs to place the ext2/ext3-specific data structures intelligently on the RAID device.<br />
<br />
Note: mke2fs, mkfs.ext2 and mkfs.ext3 are all front-ends to the same program and accept the same options; use whichever your distribution provides, and decide whether you want ext2 or ext3 (non-journaled vs. journaled). The two commands below are equivalent apart from the filesystem type they create.<br />
<br />
Here is an example, with its explanation below:<br />
<br />
mke2fs -v -m .1 -b 4096 -E stride=32,stripe-width=64 /dev/md0<br />
or<br />
mkfs.ext3 -v -m .1 -b 4096 -E stride=32,stripe-width=64 /dev/md0<br />
<br />
Options explained:<br />
The first command makes an ext2 filesystem, the second makes an ext3 filesystem<br />
-v verbose<br />
-m .1 leave .1% of the disk reserved for root (so it doesn't fill up and cause problems)<br />
-b 4096 block size of 4 kB (recommended above for large-file systems)<br />
-E stride=32,stripe-width=64 see the calculation below<br />
<br />
===Calculation===<br />
* chunk size = 128kB (set by the mdadm command, see the chunk-size advice above)<br />
* block size = 4kB (recommended for large files, and most of the time)<br />
* stride = chunk / block = 128kB / 4kB = 32 (filesystem blocks)<br />
* stripe-width = stride * ( (n disks in raid5) - 1 ) = 32 * ( (3) - 1 ) = 32 * 2 = 64 (filesystem blocks)<br />
<br />
If the chunk-size is 128 kB, it means, that 128 kB of consecutive data will reside on one disk. If we want to build an ext2 filesystem with 4 kB block-size, we realize that there will be 32 filesystem blocks in one array chunk.<br />
<br />
stripe-width=64 is calculated by multiplying the stride=32 value with the number of data disks in the array. <br />
<br />
A raid5 with n disks has n-1 data disks, one being reserved for parity. (Note: older versions of the mke2fs man page incorrectly stated n+1; that man-page bug has since been fixed.) A raid10 (1+0) with n disks is actually a raid 0 of n/2 raid1 subarrays with 2 disks each.<br />
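<br />
As another worked example, take a hypothetical 4-disk raid10 with a 256 kB chunk and 4 kB blocks: it has 4/2 = 2 data disks, so stride = 256/4 = 64 and stripe-width = 64 * 2 = 128. The resulting command would be:<br />
<br />
mkfs.ext3 -v -b 4096 -E stride=64,stripe-width=128 /dev/md0<br />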
<br />
===Performance===<br />
RAID-{4,5,10} performance is severely influenced by the stride and stripe-width options. It is uncertain how the stride option affects other RAID levels. If anyone has information on this, please add it here.<br />
<br />
The ext2fs blocksize severely influences the performance of the filesystem. You should always use 4kB block size on any filesystem larger than a few hundred megabytes, unless you store a very large number of very small files on it.<br />
<br />
===Changing after creation===<br />
It is possible to change the parameters with <br />
tune2fs -E stride=n,stripe-width=m /dev/mdx<br />
<br />
== XFS ==<br />
<br />
xfsprogs and the mkfs.xfs utility '''automatically''' select the best stripe size and stripe width for underlying devices that support it, such as Linux software RAID devices. Earlier versions of xfs used a built-in libdisk and the GET_ARRAY_INFO ioctl to gather the information; newer versions make use of enhanced geometry detection in libblkid. When using libblkid, accurate geometry may also be obtained from hardware RAID devices which properly export this information.<br />
<br />
To create XFS filesystems optimized for RAID arrays manually, you'll need two parameters:<br />
<br />
* '''chunk size''': same as used with mdadm<br />
* '''number of "data" disks''': number of disks that store data, not disks used for parity or spares. For example:<br />
** RAID 0 with 2 disks: 2 data disks (n)<br />
** RAID 1 with 2 disks: 1 data disk (n/2)<br />
** RAID 10 with 10 disks: 5 data disks (n/2)<br />
** RAID 5 with 6 disks (no spares): 5 data disks (n-1)<br />
** RAID 6 with 6 disks (no spares): 4 data disks (n-2)<br />
<br />
With these numbers in hand, you then want to use mkfs.xfs's su and sw parameters when creating your filesystem.<br />
<br />
* '''su''': Stripe unit, which is the RAID chunk size, in bytes<br />
* '''sw''': Multiplier of the stripe unit, i.e. number of data disks<br />
<br />
If you have a 4-disk RAID-5 and are using a chunk size of 64 KiB, the command to use is:<br />
<br />
mkfs -t xfs -d su=64k -d sw=3 /dev/md0<br />
<br />
Alternately, you may use the sunit/swidth mkfs options to specify stripe unit and width in 512-byte-block units. For the array above, it could also be specified as:<br />
<br />
mkfs -t xfs -d sunit=128 -d swidth=384 /dev/md0<br />
<br />
The result is exactly the same; however, the su/sw combination is often simpler to remember. Beware that sunit/swidth are inconsistently used throughout XFS' utilities (see xfs_info below).<br />
<br />
To check the parameters in use for an XFS filesystem, use xfs_info.<br />
<br />
xfs_info /dev/md0<br />
<br />
meta-data=/dev/md0 isize=256 agcount=32, agsize=45785440 blks<br />
= sectsz=4096 attr=2<br />
data = bsize=4096 blocks=1465133952, imaxpct=5<br />
= sunit=16 swidth=48 blks<br />
naming =version 2 bsize=4096 ascii-ci=0<br />
log =internal bsize=4096 blocks=521728, version=2<br />
= sectsz=4096 sunit=1 blks, lazy-count=0<br />
realtime =none extsz=196608 blocks=0, rtextents=0<br />
<br />
Here, rather than displaying 512-byte units as used in mkfs.xfs, sunit and swidth are shown as multiples of the filesystem block size (bsize), another file system tunable. This inconsistency is for legacy reasons, and is not well-documented.<br />
<br />
For the above example, sunit (sunit × bsize = 16 × 4096 = 64 KiB, i.e. su) and swidth (swidth × bsize = 48 × 4096 = 192 KiB, i.e. su × sw) are optimal and correctly reported.<br />
<br />
While the stripe unit and stripe width cannot be changed after an XFS file system has been created, they can be overridden at mount time with the sunit/swidth options, similar to ones used by mkfs.xfs.<br />
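<br />
For instance, if the 4-disk RAID-5 above grew to 5 disks (4 data disks) while keeping its 64 KiB chunk, the new geometry could be supplied at mount time. This is only a sketch - the mount point is an assumption, and the values are again in 512-byte units:<br />
<br />
mount -t xfs -o sunit=128,swidth=512 /dev/md0 /mnt/data<br />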
<br />
From Documentation/filesystems/xfs.txt in the kernel tree:<br />
<br />
sunit=value and swidth=value<br />
Used to specify the stripe unit and width for a RAID device or<br />
a stripe volume. "value" must be specified in 512-byte block<br />
units.<br />
If this option is not specified and the filesystem was made on<br />
a stripe volume or the stripe width or unit were specified for<br />
the RAID device at mkfs time, then the mount system call will<br />
restore the value from the superblock. For filesystems that<br />
are made directly on RAID devices, these options can be used<br />
to override the information in the superblock if the underlying<br />
disk layout changes after the filesystem has been created.<br />
The "swidth" option is required if the "sunit" option has been<br />
specified, and must be a multiple of the "sunit" value.<br />
<br />
Source: [http://says.samat.org/ Samat Says: Tuning XFS for RAID]</div>Rpnabarhttps://raid.wiki.kernel.org/index.php/Tweaking,_tuning_and_troubleshootingTweaking, tuning and troubleshooting2010-07-31T07:02:31Z<p>Rpnabar: /* Booting on RAID */</p>
<hr />
<div>=Tweaking, tuning and troubleshooting=<br />
<br />
<br />
==Autodetection==<br />
<br />
[[Autodetect|Autodetection is a now-deprecated]] way to allow the RAID devices to be automatically recognized<br />
by the kernel at boot-time, right after the ordinary partition detection is done. If your system still uses<br />
autodetection, here is how it works.<br />
<br />
This requires several things:<br />
<br />
1. You need autodetection support in the kernel (the RAID support must be compiled into the kernel, not built as modules).<br />
<br />
2. You must be using version 0.9 superblocks (non-persistent or 1.x superblocks won't work).<br />
<br />
3. The partition-types of the devices used in the RAID must be set to 0xFD (use fdisk and set the type to "fd")<br />
<br />
<br />
NOTE: Be sure that your RAID is NOT RUNNING before changing the<br />
partition types. Use '''mdadm --stop /dev/md0''' to stop the device.<br />
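<br />
As an illustrative sketch (the device and partition names are assumptions), stopping the array and changing a partition type with fdisk might look like:<br />
<br />
mdadm --stop /dev/md0<br />
fdisk /dev/sda    # use 't' to set each RAID partition's type to 'fd', then 'w' to write<br />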
<br />
If you set up 1, 2 and 3 from above, autodetection should be set up.<br />
Try rebooting. When the system comes up, cat'ing /proc/mdstat should<br />
tell you that your RAID is running.<br />
<br />
During boot, you could see messages similar to these:<br />
<br />
Oct 22 00:51:59 malthe kernel: SCSI device sdg: hdwr sector= 512<br />
bytes. Sectors= 12657717 [6180 MB] [6.2 GB]<br />
Oct 22 00:51:59 malthe kernel: Partition check:<br />
Oct 22 00:51:59 malthe kernel: sda: sda1 sda2 sda3 sda4<br />
Oct 22 00:51:59 malthe kernel: sdb: sdb1 sdb2<br />
Oct 22 00:51:59 malthe kernel: sdc: sdc1 sdc2<br />
Oct 22 00:51:59 malthe kernel: sdd: sdd1 sdd2<br />
Oct 22 00:51:59 malthe kernel: sde: sde1 sde2<br />
Oct 22 00:51:59 malthe kernel: sdf: sdf1 sdf2<br />
Oct 22 00:51:59 malthe kernel: sdg: sdg1 sdg2<br />
Oct 22 00:51:59 malthe kernel: autodetecting RAID arrays<br />
Oct 22 00:51:59 malthe kernel: (read) sdb1's sb offset: 6199872<br />
Oct 22 00:51:59 malthe kernel: bind<sdb1,1><br />
Oct 22 00:51:59 malthe kernel: (read) sdc1's sb offset: 6199872<br />
Oct 22 00:51:59 malthe kernel: bind<sdc1,2><br />
Oct 22 00:51:59 malthe kernel: (read) sdd1's sb offset: 6199872<br />
Oct 22 00:51:59 malthe kernel: bind<sdd1,3><br />
Oct 22 00:51:59 malthe kernel: (read) sde1's sb offset: 6199872<br />
Oct 22 00:51:59 malthe kernel: bind<sde1,4><br />
Oct 22 00:51:59 malthe kernel: (read) sdf1's sb offset: 6205376<br />
Oct 22 00:51:59 malthe kernel: bind<sdf1,5><br />
Oct 22 00:51:59 malthe kernel: (read) sdg1's sb offset: 6205376<br />
Oct 22 00:51:59 malthe kernel: bind<sdg1,6><br />
Oct 22 00:51:59 malthe kernel: autorunning md0<br />
Oct 22 00:51:59 malthe kernel: running: <sdg1><sdf1><sde1><sdd1><sdc1><sdb1><br />
Oct 22 00:51:59 malthe kernel: now!<br />
Oct 22 00:51:59 malthe kernel: md: md0: raid array is not clean --<br />
starting background reconstruction<br />
<br />
<br />
This is output from the autodetection of a RAID-5 array that was not<br />
cleanly shut down (e.g. the machine crashed). Reconstruction is<br />
automatically initiated. Mounting this device is perfectly safe, since<br />
reconstruction is transparent and all data are consistent (it's only<br />
the parity information that is inconsistent - but that isn't needed<br />
until a device fails).<br />
<br />
Autostarted devices are also automatically stopped at shutdown. Don't<br />
worry about init scripts. Just use the /dev/md devices as any other<br />
/dev/sd or /dev/hd devices.<br />
<br />
Yes, it really is that easy - but it is also [[Autodetect|full of problems]] and should be avoided.<br />
<br />
==Booting on RAID==<br />
<br />
There are several ways to set up a system that mounts its root<br />
filesystem on a RAID device. Some distributions allow for RAID setup<br />
in the installation process, and this is by far the easiest way to get<br />
a nicely set up RAID system.<br />
<br />
Newer LILO distributions can handle RAID-1 devices, and thus the<br />
kernel can be loaded at boot-time from a RAID device. If configured<br />
appropriately, LILO will correctly write boot-records on all disks in<br />
the array, to allow booting even if the primary disk fails (default LILO<br />
configurations are generally not set up like this).<br />
<br />
If you are using grub instead of LILO, then just start grub and<br />
configure it to use the second (or third, or fourth...) disk in the<br />
RAID-1 array you want to boot off as its root device and run setup.<br />
And that's all.<br />
<br />
For example, on an array consisting of /dev/hda1 and /dev/hdc1 where<br />
both partitions should be bootable you should just do this:<br />
<br />
grub<br />
grub>device (hd0) /dev/hdc<br />
grub>root (hd0,0)<br />
grub>setup (hd0)<br />
<br />
Some users have experienced problems with this, reporting that<br />
although booting with one drive connected worked, booting with both<br />
drives failed. Nevertheless, running the described procedure with<br />
both disks fixed the problem, allowing the system to boot either from a<br />
single drive or from the RAID-1.<br />
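<br />
For example, if the other half of the mirror is /dev/hda (an assumption for illustration), repeating the procedure for that disk would look like this:<br />
<br />
grub<br />
grub>device (hd0) /dev/hda<br />
grub>root (hd0,0)<br />
grub>setup (hd0)<br />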
<br />
Another way of ensuring that your system can always boot is to create<br />
a boot floppy (if you are still one of those lucky souls whose system has a floppy drive) when all the setup is done. If the disk on which the<br />
/boot filesystem resides dies, you can always boot from the floppy. On<br />
RedHat and RedHat-derived systems, this can be accomplished with the<br />
mkbootdisk command.<br />
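<br />
As a rough sketch (the floppy device and kernel version are assumptions about your system):<br />
<br />
mkbootdisk --device /dev/fd0 `uname -r`<br />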
<br />
==Root filesystem on RAID==<br />
<br />
In order to have a system booting on RAID, the root filesystem (/)<br />
must be mounted on a RAID device. Two methods for achieving this are<br />
supplied below. The methods below assume that you install on a normal<br />
partition, and then - when the installation is complete - move the<br />
contents of your non-RAID root filesystem onto a new RAID device.<br />
Please note that this is no longer needed in general, as most newer<br />
GNU/Linux distributions support installation on RAID devices (and<br />
creation of the RAID devices during the installation process).<br />
However, you may still want to use the methods below, if you are<br />
migrating an existing system to RAID.<br />
<br />
<br />
===Method 1===<br />
<br />
This method assumes you have a spare disk you can install the system<br />
on, which is not part of the RAID you will be configuring.<br />
<br />
* First, install a normal system on your extra disk.<br />
<br />
* Get the kernel you plan on running, get the raid-patches and the tools, and make your system boot with this new RAID-aware kernel. Make sure that RAID-support is in the kernel, and is not loaded as modules.<br />
<br />
* Ok, now you should configure and create the RAID you plan to use for the root filesystem. This is standard procedure, as described elsewhere in this document.<br />
<br />
* Just to make sure everything's fine, try rebooting the system to see if the new RAID comes up on boot. It should.<br />
<br />
* Put a filesystem on the new array (using mke2fs), and mount it under /mnt/newroot<br />
<br />
* Now, copy the contents of your current root-filesystem (the spare disk) to the new root-filesystem (the array). There are lots of ways to do this, one of them is<br />
<br />
cd /<br />
find . -xdev | cpio -pm /mnt/newroot<br />
<br />
another way to copy everything from / to /mnt/newroot could be<br />
<br />
cp -ax / /mnt/newroot<br />
<br />
* You should modify the /mnt/newroot/etc/fstab file to use the correct device (the /dev/md? root device) for the root filesystem.<br />
<br />
* Now, unmount the current /boot filesystem, and mount the boot device on /mnt/newroot/boot instead. This is required for LILO to run successfully in the next step.<br />
<br />
* Update /mnt/newroot/etc/lilo.conf to point to the right devices. The boot device must still be a regular disk (non-RAID device), but the root device should point to your new RAID. When done, run<br />
<br />
lilo -r /mnt/newroot<br />
<br />
It should complete with no errors.<br />
<br />
* Reboot the system, and watch everything come up as expected :)<br />
<br />
If you're doing this with IDE disks, be sure to tell your BIOS that<br />
all disks are "auto-detect" types, so that the BIOS will allow your<br />
machine to boot even when a disk is missing.<br />
<br />
===Method 2===<br />
<br />
This method requires that your kernel and raidtools understand the<br />
failed-disk directive in the /etc/raidtab file - if you are working on<br />
a really old system this may not be the case, and you will need to<br />
upgrade your tools and/or kernel first.<br />
<br />
You can only use this method on RAID levels 1 and above, as the method<br />
uses an array in "degraded mode" which in turn is only possible if the<br />
RAID level has redundancy. The idea is to install a system on a disk<br />
which is purposely marked as failed in the RAID, then copy the system<br />
to the RAID which will be running in degraded mode, and finally making<br />
the RAID use the no-longer needed "install-disk", zapping the old<br />
installation but making the RAID run in non-degraded mode.<br />
<br />
* First, install a normal system on one disk (that will later become part of your RAID). It is important that this disk (or partition) is not the smallest one. If it is, it will not be possible to add it to the RAID later on!<br />
<br />
* Then, get the kernel, the patches, the tools etc. etc. You know the drill. Make your system boot with a new kernel that has the RAID support you need, compiled into the kernel.<br />
<br />
* Now, set up the RAID with your current root-device as the failed-disk in the /etc/raidtab file. Don't put the failed-disk as the first disk in the raidtab, that will give you problems with starting the RAID. Create the RAID, and put a filesystem on it. If using mdadm, you can create a degraded array just by running something like<br />
mdadm -C /dev/md0 --level raid1 --raid-disks 2 missing /dev/hdc1<br />
Note the "missing" parameter.<br />
<br />
* Try rebooting and see if the RAID comes up as it should<br />
<br />
* Copy the system files, and reconfigure the system to use the RAID as root-device, as described in the previous section.<br />
<br />
* When your system successfully boots from the RAID, you can modify the /etc/raidtab file to include the previously failed-disk as a normal raid-disk. Now, raidhotadd the disk to your RAID (see the example after this list).<br />
<br />
* You should now have a system that can boot from a non-degraded RAID.<br />
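<br />
A sketch of that re-add step, assuming the array from the example above (the partition name of the former install disk is an assumption): with the old raidtools you would use raidhotadd, while on an mdadm-managed array the equivalent is mdadm --add.<br />
<br />
raidhotadd /dev/md0 /dev/hda1<br />
# or, equivalently, with mdadm:<br />
mdadm /dev/md0 --add /dev/hda1<br />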
<br />
==Making the system boot on RAID==<br />
<br />
For the kernel to be able to mount the root filesystem, all support<br />
for the device on which the root filesystem resides must be present<br />
in the kernel. Therefore, in order to mount the root filesystem on a<br />
RAID device, the kernel must have RAID support.<br />
<br />
The normal way of ensuring that the kernel can see the RAID device is<br />
to simply compile a kernel with all necessary RAID support compiled<br />
in. Make sure that you compile the RAID support into the kernel, and<br />
not as loadable modules. The kernel cannot load a module (from the<br />
root filesystem) before the root filesystem is mounted.<br />
<br />
However, since RedHat-6.0 ships with a kernel that has new-style RAID<br />
support as modules, I here describe how one can use the standard<br />
RedHat-6.0 kernel and still have the system boot on RAID.<br />
<br />
<br />
===Booting with RAID as module===<br />
<br />
You will have to instruct LILO to use a RAM-disk in order to achieve<br />
this. Use the mkinitrd command to create a ramdisk containing all<br />
kernel modules needed to mount the root partition. This can be done<br />
as:<br />
<br />
mkinitrd --with=<module> <ramdisk name> <kernel><br />
<br />
For example:<br />
<br />
mkinitrd --preload raid5 --with=raid5 raid-ramdisk 2.2.5-22<br />
<br />
This will ensure that the specified RAID module is present at boot-<br />
time, for the kernel to use when mounting the root device.<br />
<br />
===Modular RAID on Debian GNU/Linux after move to RAID===<br />
<br />
Debian users may encounter problems using an initrd to mount their<br />
root filesystem from RAID, if they have migrated a standard non-RAID<br />
Debian install to root on RAID.<br />
<br />
If your system fails to mount the root filesystem on boot (you will<br />
see this in a "kernel panic" message), then the problem may be that<br />
the initrd filesystem does not have the necessary support to mount the<br />
root filesystem from RAID.<br />
<br />
Debian seems to produce its initrd.img files on the assumption that<br />
the root filesystem to be mounted is the current one. This will<br />
usually result in a kernel panic if the root filesystem is moved to<br />
the raid device and you attempt to boot from that device using the<br />
same initrd image. The solution is to use the mkinitrd command but<br />
specifying the proposed new root filesystem. For example, the<br />
following commands should create and set up the new initrd on a Debian<br />
system:<br />
<br />
% mkinitrd -r /dev/md0 -o /boot/initrd.img-2.4.22raid<br />
% mv /initrd.img /initrd.img-nonraid<br />
% ln -s /boot/initrd.img-2.4.22raid /initrd.img<br />
<br />
<br />
==Converting a non-RAID RedHat System to run on Software RAID==<br />
<br />
This section was written and contributed by Mark Price, IBM. The text<br />
has undergone minor changes since his original work.<br />
<br />
Notice: the following information is provided "AS IS" with no<br />
representation or warranty of any kind either express or implied. You<br />
may use it freely at your own risk, and no one else will be liable for<br />
any damages arising out of such usage.<br />
<br />
===Introduction===<br />
<br />
This technote details how to convert a Linux system with non-RAID<br />
devices to run with a Software RAID configuration.<br />
<br />
===Scope===<br />
<br />
This scenario was tested with Redhat 7.1, but should be applicable to<br />
any release which supports Software RAID (md) devices.<br />
<br />
===Pre-conversion example system===<br />
<br />
The test system contains two SCSI disks, sda and sdb, both of which<br />
are the same physical size. As part of the test setup, I configured<br />
both disks to have the same partition layout, using fdisk to ensure<br />
the number of blocks for each partition was identical.<br />
<br />
DEVICE MOUNTPOINT SIZE DEVICE MOUNTPOINT SIZE<br />
/dev/sda1 / 2048MB /dev/sdb1 2048MB<br />
/dev/sda2 /boot 80MB /dev/sdb2 80MB<br />
/dev/sda3 /var/ 100MB /dev/sdb3 100MB<br />
/dev/sda4 SWAP 1024MB /dev/sdb4 SWAP 1024MB<br />
<br />
<br />
In our basic example, we are going to set up a simple RAID-1 Mirror,<br />
which requires only two physical disks.<br />
<br />
===Step-1 - boot rescue cd/floppy===<br />
<br />
The redhat installation CD provides a rescue mode which boots into<br />
linux from the CD and mounts any filesystems it can find on your<br />
disks.<br />
<br />
At the lilo prompt type<br />
<br />
lilo: linux rescue<br />
<br />
With the setup described above, the installer may ask you which disk<br />
your root filesystem is on, either sda or sdb. Select sda.<br />
<br />
The installer will mount your filesystems in the following way.<br />
<br />
DEVICE MOUNTPOINT TEMPORARY MOUNT POINT<br />
/dev/sda1 / /mnt/sysimage<br />
/dev/sda2 /boot /mnt/sysimage/boot<br />
/dev/sda3 /var /mnt/sysimage/var<br />
/dev/sda6 /home /mnt/sysimage/home<br />
<br />
<br />
Note: - Please bear in mind other distributions may mount your<br />
filesystems on different mount points, or may require you to mount<br />
them by hand.<br />
<br />
<br />
===Step-2 - create a /etc/raidtab file===<br />
<br />
Create the file /mnt/sysimage/etc/raidtab (or wherever your real /etc<br />
filesystem has been mounted).<br />
<br />
For our test system, the raidtab file would look like this.<br />
<br />
raiddev /dev/md0<br />
raid-level 1<br />
nr-raid-disks 2<br />
nr-spare-disks 0<br />
chunk-size 4<br />
persistent-superblock 1<br />
device /dev/sda1<br />
raid-disk 0<br />
device /dev/sdb1<br />
raid-disk 1<br />
<br />
raiddev /dev/md1<br />
raid-level 1<br />
nr-raid-disks 2<br />
nr-spare-disks 0<br />
chunk-size 4<br />
persistent-superblock 1<br />
device /dev/sda2<br />
raid-disk 0<br />
device /dev/sdb2<br />
raid-disk 1<br />
<br />
raiddev /dev/md2<br />
raid-level 1<br />
nr-raid-disks 2<br />
nr-spare-disks 0<br />
chunk-size 4<br />
persistent-superblock 1<br />
device /dev/sda3<br />
raid-disk 0<br />
device /dev/sdb3<br />
raid-disk 1<br />
<br />
Note: - It is important that the devices are in the correct order. ie.<br />
that /dev/sda1 is raid-disk 0 and not raid-disk 1. This instructs the<br />
md driver to sync from /dev/sda1, if it were the other way around it<br />
would sync from /dev/sdb1 which would destroy your filesystem.<br />
<br />
Now copy the raidtab file from your real root filesystem to the<br />
current root filesystem.<br />
<br />
(rescue)# cp /mnt/sysimage/etc/raidtab /etc/raidtab<br />
<br />
<br />
===Step-3 - create the md devices===<br />
<br />
There are two ways to do this, copy the device files from<br />
/mnt/sysimage/dev or use mknod to create them. The md device is a<br />
block device with major number 9.<br />
<br />
(rescue)# mknod /dev/md0 b 9 0<br />
(rescue)# mknod /dev/md1 b 9 1<br />
(rescue)# mknod /dev/md2 b 9 2<br />
<br />
<br />
===Step-4 - unmount filesystems===<br />
<br />
In order to start the raid devices, and sync the drives, it is<br />
necessary to unmount all the temporary filesystems.<br />
<br />
(rescue)# umount /mnt/sysimage/var<br />
(rescue)# umount /mnt/sysimage/boot<br />
(rescue)# umount /mnt/sysimage/proc<br />
(rescue)# umount /mnt/sysimage<br />
<br />
<br />
Please note, you may not be able to umount /mnt/sysimage. This problem<br />
can be caused by the rescue system - if you choose to manually mount<br />
your filesystems instead of letting the rescue system do this<br />
automatically, this problem should go away.<br />
<br />
<br />
===Step-5 - start raid devices===<br />
<br />
Because there are filesystems on /dev/sda1, /dev/sda2 and /dev/sda3 it<br />
is necessary to force the start of the raid device.<br />
<br />
(rescue)# mkraid --really-force /dev/md2<br />
<br />
You can check the completion progress by cat'ing the /proc/mdstat<br />
file. It shows you status of the raid device and percentage left to<br />
sync.<br />
<br />
Continue with /boot and /<br />
<br />
(rescue)# mkraid --really-force /dev/md1<br />
(rescue)# mkraid --really-force /dev/md0<br />
<br />
The md driver syncs one device at a time.<br />
<br />
<br />
===Step-6 - remount filesystems===<br />
<br />
Mount the newly synced filesystems back into the /mnt/sysimage mount<br />
points.<br />
<br />
(rescue)# mount /dev/md0 /mnt/sysimage<br />
(rescue)# mount /dev/md1 /mnt/sysimage/boot<br />
(rescue)# mount /dev/md2 /mnt/sysimage/var<br />
<br />
<br />
<br />
===Step-7 - change root===<br />
<br />
You now need to change your current root directory to your real root<br />
file system.<br />
<br />
(rescue)# chroot /mnt/sysimage<br />
<br />
<br />
<br />
===Step-8 - edit config files===<br />
<br />
You need to configure lilo and /etc/fstab appropriately to boot from<br />
and mount the md devices.<br />
<br />
Note: - The boot device MUST be a non-raided device. The root device<br />
is your new md0 device. eg.<br />
<br />
boot=/dev/sda<br />
map=/boot/map<br />
install=/boot/boot.b<br />
prompt<br />
timeout=50<br />
message=/boot/message<br />
linear<br />
default=linux<br />
<br />
image=/boot/vmlinuz<br />
label=linux<br />
read-only<br />
root=/dev/md0<br />
<br />
<br />
Alter /etc/fstab<br />
<br />
/dev/md0 / ext3 defaults 1 1<br />
/dev/md1 /boot ext3 defaults 1 2<br />
/dev/md2 /var ext3 defaults 1 2<br />
/dev/sda4 swap swap defaults 0 0<br />
<br />
<br />
<br />
===Step-9 - run LILO===<br />
<br />
With the /etc/lilo.conf edited to reflect the new root=/dev/md0 and<br />
with /dev/md1 mounted as /boot, we can now run /sbin/lilo -v on the<br />
chrooted filesystem.<br />
<br />
<br />
===Step-10 - change partition types===<br />
<br />
The partition types of all the partitions on ALL drives which are<br />
used by the md driver must be changed to type 0xFD.<br />
<br />
Use fdisk to change the partition type, using option 't'.<br />
<br />
(rescue)# fdisk /dev/sda<br />
(rescue)# fdisk /dev/sdb<br />
<br />
Use the 'w' option after changing all the required partitions to save<br />
the partition table to disk.<br />
<br />
<br />
===Step-11 - resize filesystem===<br />
<br />
When we created the raid device, the physical partition became slightly<br />
smaller because a second superblock is stored at the end of the<br />
partition. If you reboot the system now, the reboot will fail with an<br />
error indicating the superblock is corrupt.<br />
<br />
Resize them prior to the reboot: ensure that all md-based<br />
filesystems are unmounted except root, and remount root read-only.<br />
<br />
(rescue)# mount / -o remount,ro<br />
<br />
You will be required to fsck each of the md devices. This is the<br />
reason for remounting root read-only. The -f flag is required to force<br />
fsck to check a clean filesystem.<br />
<br />
(rescue)# e2fsck -f /dev/md0<br />
<br />
This will generate the same error about inconsistent sizes and<br />
possibly corrupted superblock. Say N to 'Abort?'.<br />
<br />
(rescue)# resize2fs /dev/md0<br />
<br />
Repeat for all /dev/md devices.<br />
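<br />
As a sketch, the fsck-and-resize pass over the three md devices in this example can be scripted in one loop:<br />
<br />
for dev in /dev/md0 /dev/md1 /dev/md2; do<br />
    e2fsck -f $dev<br />
    resize2fs $dev<br />
done<br />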
<br />
<br />
===Step-12 - checklist===<br />
<br />
The next step is to reboot the system, prior to doing this run through<br />
the checklist below and ensure all tasks have been completed.<br />
<br />
* All devices have finished syncing. Check /proc/mdstat<br />
<br />
* /etc/fstab has been edited to reflect the changes to the device names.<br />
<br />
* /etc/lilo.conf has been edited to reflect the root device change.<br />
<br />
* /sbin/lilo has been run to update the boot loader.<br />
<br />
* The kernel has both SCSI and RAID(MD) drivers built into the kernel.<br />
<br />
* The partition types of all partitions on disks that are part of an md device have been changed to 0xfd.<br />
<br />
* The filesystems have been fsck'd and resize2fs'd.<br />
<br />
===Step-13 - reboot===<br />
<br />
You can now safely reboot the system, when the system comes up it will<br />
auto discover the md devices (based on the partition types).<br />
<br />
Your root filesystem will now be mirrored.<br />
<br />
<br />
==Sharing spare disks between different arrays==<br />
<br />
When running mdadm in follow/monitor mode you can make different<br />
arrays share spare disks, which saves storage space without losing the<br />
comfort of fallback disks.<br />
<br />
With shared spares, a single idle disk can act as the spare for a<br />
whole group of arrays, rather than each array needing a spare of its<br />
own.<br />
<br />
With mdadm running as a daemon, you have an agent polling the arrays at<br />
regular intervals. When a disk fails on an array without a spare<br />
disk, mdadm removes an available spare disk from another array and<br />
inserts it into the array with the failed disk. Reconstruction then<br />
begins in the degraded array as usual.<br />
<br />
To declare shared spare disks, give the arrays the same spare-group<br />
value in mdadm.conf and run mdadm in monitor mode as a daemon.<br />
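<br />
As a sketch, the spare-group is set on the ARRAY lines of /etc/mdadm.conf (the UUIDs below are placeholders):<br />
<br />
MAILADDR root@localhost<br />
ARRAY /dev/md0 UUID=00000000:00000000:00000000:00000000 spare-group=shared<br />
ARRAY /dev/md1 UUID=11111111:11111111:11111111:11111111 spare-group=shared<br />
<br />
and the monitor is then started against that configuration:<br />
<br />
mdadm --monitor --scan --daemonise --delay=1800<br />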
<br />
==Pitfalls==<br />
<br />
Never NEVER never re-partition disks that are part of a running RAID.<br />
If you must alter the partition table on a disk which is a part of a<br />
RAID, stop the array first, then repartition.<br />
<br />
It is easy to put too many disks on a bus. A normal Fast-Wide SCSI bus<br />
can sustain 10 MB/s which is less than many disks can do alone today.<br />
Putting six such disks on the bus will of course not give you the<br />
expected performance boost. It is becoming equally easy to saturate<br />
the PCI bus - remember, a normal 32-bit 33 MHz PCI bus has a<br />
theoretical maximum bandwidth of around 133 MB/sec; considering<br />
command overhead etc. you will see a somewhat lower real-world<br />
transfer rate. Some disks today have a throughput in excess of 30<br />
MB/sec, so just four of those disks will actually max out your PCI<br />
bus! When designing high-performance RAID systems, be sure to take the<br />
whole I/O path into consideration - there are boards with more PCI<br />
busses, with 64-bit and 66 MHz busses, and with PCI-X.<br />
<br />
More SCSI controllers will only give you extra performance, if the<br />
SCSI busses are nearly maxed out by the disks on them. You will not<br />
see a performance improvement from using two 2940s with two old SCSI<br />
disks, instead of just running the two disks on one controller.<br />
<br />
If you forget the persistent-superblock option, your array may not<br />
start up willingly after it has been stopped. Just re-create the<br />
array with the option set correctly in the raidtab. Please note that<br />
this will destroy the information on the array!<br />
<br />
If a RAID-5 fails to reconstruct after a disk was removed and re-<br />
inserted, this may be because of the ordering of the devices in the<br />
raidtab. Try moving the first "device ..." and "raid-disk ..." pair to<br />
the bottom of the array description in the raidtab file.</div>Rpnabarhttps://raid.wiki.kernel.org/index.php/Detecting,_querying_and_testingDetecting, querying and testing2010-07-31T06:57:43Z<p>Rpnabar: /* Monitoring RAID arrays */</p>
<hr />
<div>=Detecting, querying and testing=<br />
<br />
This section is about life with a software RAID system: communicating<br />
with the arrays and tinkering with them.<br />
<br />
Note that when it comes to md devices manipulation, you should always<br />
remember that you are working with entire filesystems. So, although<br />
there could be some redundancy to keep your files alive, you must<br />
proceed with caution.<br />
<br />
==Detecting a drive failure==<br />
<br />
Firstly: mdadm has an excellent 'monitor' mode which will send an email when a problem is detected in any array (more about that later).<br />
<br />
Of course the standard log and stat files will record more details about a drive failure.<br />
<br />
Whatever has happened, /var/log/messages will fill screens with<br />
error messages; when a disk crashes, a large number of kernel errors<br />
are reported. Some nasty examples, for the masochists:<br />
<br />
kernel: scsi0 channel 0 : resetting for second half of retries.<br />
kernel: SCSI bus is being reset for host 0 channel 0.<br />
kernel: scsi0: Sending Bus Device Reset CCB #2666 to Target 0<br />
kernel: scsi0: Bus Device Reset CCB #2666 to Target 0 Completed<br />
kernel: scsi : aborting command due to timeout : pid 2649, scsi0, channel 0, id 0, lun 0 Write (6) 18 33 11 24 00<br />
kernel: scsi0: Aborting CCB #2669 to Target 0<br />
kernel: SCSI host 0 channel 0 reset (pid 2644) timed out - trying harder<br />
kernel: SCSI bus is being reset for host 0 channel 0.<br />
kernel: scsi0: CCB #2669 to Target 0 Aborted<br />
kernel: scsi0: Resetting BusLogic BT-958 due to Target 0<br />
kernel: scsi0: *** BusLogic BT-958 Initialized Successfully ***<br />
<br />
Most often, disk failures look like these,<br />
<br />
kernel: sidisk I/O error: dev 08:01, sector 1590410<br />
kernel: SCSI disk error : host 0 channel 0 id 0 lun 0 return code = 28000002<br />
<br />
or these<br />
<br />
kernel: hde: read_intr: error=0x10 { SectorIdNotFound }, CHS=31563/14/35, sector=0<br />
kernel: hde: read_intr: status=0x59 { DriveReady SeekComplete DataRequest Error }<br />
<br />
<br />
And, as expected, the classic [[mdstat|/proc/mdstat]] look will also reveal problems,<br />
<br />
Personalities : [linear] [raid0] [raid1] [translucent]<br />
read_ahead not set<br />
md7 : active raid1 sdc9[0] sdd5[8] 32000 blocks [2/1] [U_]<br />
<br />
<br />
Later in this section we will learn how to monitor RAID with mdadm so<br />
we can receive alert reports about disk failures. Now it's time to<br />
learn more about [[mdstat|/proc/mdstat]] interpretation.<br />
<br />
==Querying the array status==<br />
<br />
You can always take a look at the array status by doing '''cat /proc/mdstat'''<br />
It won't hurt. Take a look at the [[mdstat|/proc/mdstat]] page to learn how to read the file.<br />
<br />
Finally, remember that you can also use mdadm to check<br />
the arrays out.<br />
<br />
mdadm --detail /dev/mdx<br />
<br />
This command will show spare and failed disks loud and clear.<br />
<br />
==Simulating a drive failure==<br />
<br />
If you plan to use RAID to get fault-tolerance, you may also want to<br />
test your setup, to see if it really works. Now, how does one<br />
simulate a disk failure?<br />
<br />
The short story is, that you can't, except perhaps for putting a fire<br />
axe thru the drive you want to "simulate" the fault on. You can never<br />
know what will happen if a drive dies. It may electrically take the<br />
bus it is attached to with it, rendering all drives on that bus<br />
inaccessible. The drive may also just report a read/write fault<br />
to the SCSI/IDE/SATA layer, which, if done properly, in turn makes the RAID layer handle this<br />
situation gracefully. This is fortunately the way things often go.<br />
<br />
Remember, that you must be running RAID-{1,4,5,6,10} for your array to be<br />
able to survive a disk failure. Linear- or RAID-0 will fail<br />
completely when a device is missing.<br />
<br />
===Force-fail by hardware===<br />
<br />
If you want to simulate a drive failure, you can just plug out the<br />
drive. If your HW does not [[Hardware_issues#Hot_Swap|support disk hot-unplugging]], you should do this with the power off (if you are interested in testing whether your data can survive with a disk less than the usual number, there is no point in being a hot-plug cowboy here. Take the system down, unplug the disk, and boot it up again)<br />
<br />
Look in the syslog, and look at [[mdstat|/proc/mdstat]] to see how the RAID is<br />
doing. Did it work? Did you get an email from the mdadm monitor?<br />
<br />
Faulty disks should appear marked with an (F) if you look at<br />
[[mdstat|/proc/mdstat]]. Also, users of mdadm should see the device state as<br />
faulty.<br />
<br />
When you've re-connected the disk again (with the power off, of<br />
course, remember), you can add the "new" device to the RAID again,<br />
with the '''mdadm --add''' command.<br />
<br />
===Force-fail by software===<br />
<br />
You can just simulate a drive failure without unplugging things.<br />
Just running the command<br />
<br />
mdadm --manage --set-faulty /dev/md1 /dev/sdc2<br />
<br />
should be enough to fail the disk /dev/sdc2 of the array /dev/md1.<br />
<br />
<br />
Now things get interesting. First, you should see something<br />
like the first line of this on your system's log. Something like the<br />
second line will appear if you have spare disks configured.<br />
<br />
kernel: raid1: Disk failure on sdc2, disabling device.<br />
kernel: md1: resyncing spare disk sdb7 to replace failed disk<br />
<br />
<br />
Checking [[mdstat|/proc/mdstat]] out will show the degraded array. If there was a<br />
spare disk available, reconstruction should have started.<br />
<br />
Another useful command at this point is:<br />
<br />
mdadm --detail /dev/md1<br />
<br />
Enjoy the view.<br />
<br />
Now you've seen how it goes when a device fails. Let's fix things up.<br />
<br />
First, we will remove the failed disk from the array. Run the command<br />
<br />
mdadm /dev/md1 -r /dev/sdc2<br />
<br />
Note that mdadm cannot pull an active disk out of a running array.<br />
For obvious reasons, only faulty disks can be hot-removed from an<br />
array (even stopping and unmounting the device won't help - if you ever want<br />
to remove a 'good' disk, you have to tell the array to put it into the<br />
'failed' state as above).<br />
<br />
Now we have a /dev/md1 which has just lost a device. This could be a<br />
degraded RAID or perhaps a system in the middle of a reconstruction<br />
process. We wait until recovery ends before setting things back to<br />
normal.<br />
<br />
So the trip ends when we send /dev/sdc2 back home.<br />
<br />
mdadm /dev/md1 -a /dev/sdc2<br />
<br />
<br />
As the prodigal son returns to the array, we'll see it becoming an<br />
active member of /dev/md1 if necessary. If not, it will be marked as<br />
a spare disk. That's management made easy.<br />
<br />
==Simulating data corruption==<br />
<br />
RAID (be it hardware or software) assumes that if a write to a disk<br />
doesn't return an error, then the write was successful. Therefore, if<br />
your disk corrupts data without returning an error, your data will<br />
become corrupted. This is of course very unlikely to happen, but it<br />
is possible, and it would result in a corrupt filesystem.<br />
<br />
RAID cannot, and is not supposed to, guard against data corruption on<br />
the media. Therefore, it doesn't make any sense either, to purposely<br />
corrupt data (using dd for example) on a disk to see how the RAID<br />
system will handle that. It is most likely (unless you corrupt the<br />
RAID superblock) that the RAID layer will never find out about the<br />
corruption, but your filesystem on the RAID device will be corrupted.<br />
<br />
This is the way things are supposed to work. RAID is not a guarantee<br />
for data integrity, it just allows you to keep your data if a disk<br />
dies (that is, with RAID levels above or equal one, of course).<br />
<br />
==Monitoring RAID arrays==<br />
<br />
You can run mdadm as a daemon by using the follow-monitor mode. If<br />
needed, that will make mdadm send email alerts to the system<br />
administrator when arrays encounter errors or fail. Also, follow mode<br />
can be used to trigger contingency commands if a disk fails, like<br />
giving a second chance to a failed disk by removing and reinserting<br />
it, so a non-fatal failure could be automatically solved.<br />
<br />
Let's see a basic example. Running<br />
<br />
mdadm --monitor --daemonise --mail=root@localhost --delay=1800 /dev/md2<br />
<br />
should start an mdadm daemon to monitor /dev/md2. The --daemonise switch tells mdadm to run as a daemon. The --delay parameter means that polling will be done at intervals of 1800 seconds.<br />
Finally, critical events and fatal errors should be e-mailed to the<br />
system manager. That's RAID monitoring made easy.<br />
<br />
Finally, the --program or --alert parameters specify the program to be<br />
run whenever an event is detected.<br />
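<br />
As a sketch (the handler path here is hypothetical), an alert program can be hooked in like this; mdadm runs it with the event name, the array device and, where relevant, the affected component device as arguments:<br />
<br />
mdadm --monitor --scan --daemonise --delay=1800 --program=/usr/local/sbin/handle-md-event<br />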
<br />
Note that, when supplying the -f switch (the short form of --daemonise), the mdadm daemon will never exit once it decides that there<br />
are arrays to monitor, so it should normally be run in the background.<br />
Remember that you are running a daemon, not a shell command.<br />
If mdadm is run to monitor without the -f switch, it will behave as a normal shell command and wait for you to stop it.<br />
<br />
Using mdadm to monitor a RAID array is simple and effective. However,<br />
there are fundamental problems with that kind of monitoring - what<br />
happens, for example, if the mdadm daemon stops? In order to overcome<br />
this problem, one should look towards "real" monitoring solutions.<br />
There are a number of free software, open source, and even commercial<br />
solutions available which can be used for Software RAID monitoring on<br />
Linux. A search on FreshMeat should return a good number of matches.</div>Rpnabar