Back to Detecting, querying and testing Forward to Growing


Tweaking, tuning and troubleshooting

Autodetection

In-kernel autodetection was a way to allow RAID devices to be automatically recognized by the kernel at boot-time, right after the ordinary partition detection is done. Modern kernels no longer autodetect RAID arrays at boot, and in order to boot off a version 1.2 array you must use an initramfs to assemble it.

It is possible to boot off a raid array without an initramfs, but the following conditions must be met:

  1. You must use metadata 0.9 or 1.0, which is stored at the end of the array.
  2. The array must be raid-1 - a mirror.
  3. The kernel will not realise it's an array, so boot the partition read-only, then remount / read-write from the mirror once the array has started.
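A minimal sketch of creating such an array, with illustrative device names; superblock version 1.0 (like 0.9) lives at the end of the member devices, so the filesystem on the array starts at the same offset as on the bare partition:

  mdadm --create /dev/md0 --metadata=1.0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

The kernel then sees an ordinary filesystem at the start of /dev/sda1 and can mount it read-only as root, as described in point 3 above.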

Booting on RAID

LILO and (legacy) Grub 1

Pretty much all modern linux systems use Grub 2. Your install program should set it up correctly, but if you have to set it up manually, make sure that the raid driver is loaded, and make sure that the domdadm option is passed when linux is loaded. An example boot entry is:

menuentry 'Gentoo GNU/Linux, with Linux 4.4.6-gentoo' --class gentoo --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.4.6-gentoo-advanced-ab538350-d249-413b-86ef-4bd5280600b8' {
       load_video
       insmod gzio
       insmod part_gpt
       insmod diskfilter
       insmod mdraid1x
       insmod ext2
       set root='mduuid/69270eaca840f6e70199064bd5863c5d'
       if [ x$feature_platform_search_hint = xy ]; then
         search --no-floppy --fs-uuid --set=root --hint='mduuid/69270eaca840f6e70199064bd5863c5d'  ab538350-d249-413b-86ef-4bd5280600b8
       else
         search --no-floppy --fs-uuid --set=root ab538350-d249-413b-86ef-4bd5280600b8
       fi
       echo    'Loading Linux 4.4.6-gentoo ...'
       linux   /boot/vmlinuz-4.4.6-gentoo root=UUID=ab538350-d249-413b-86ef-4bd5280600b8 ro  domdadm
       echo    'Loading initial ramdisk ...'
       initrd  /boot/initramfs-genkernel-x86_64-4.4.6-gentoo
}


Converting the root filesystem to RAID

The time-honoured way of coping with booting from RAID is to have a small /boot partition at the start of the drive. This, however, means that your boot details are not protected by raid, unless you go to the trouble of manually copying them every time they change, or you mess about with old metadata formats.
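If you do take the manual-copy route, a minimal sketch, assuming the second drive's boot partition is mounted at /mnt/boot2 (the path is illustrative):

  rsync -a --delete /boot/ /mnt/boot2/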

And if you're doing a new install, most modern distros will set up raid for you. The ones that won't, and expect you to do your own disk setup, will come with raid support enabled so you can create a raid device before installing.

Method 2016

This method assumes you are adding a new drive, and will set up a degraded array before converting it to a full working array. It's easier if you're adding two drives and can set up a fully working array. Note that, by default, a system will not boot from an array that has become degraded. [TODO: document how to make it boot. Hopefully it will boot from an array that has been set up in degraded mode]

  • First, make sure your kernel has raid compiled in, and that mdadm is installed. If you're not using grub2, upgrade now.
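A few quick checks, assuming a distro that ships its kernel config in /boot (adjust paths to taste):

  mdadm --version                          # mdadm installed?
  cat /proc/mdstat                         # md driver present?
  grep MD_RAID1 /boot/config-$(uname -r)   # raid1 built in (=y) or a module (=m)?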

  • Add the new disk. If it's the same size as the original, and you plan to mirror everything, then make sure you can afford to lose a little disk space. Install grub2 on that drive, too.
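Installing the boot loader on the new drive might look like the following (device name illustrative; the command is grub2-install on some distros):

  grub-install /dev/sdb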

  • Plan and create your new partitioning scheme. It doesn't have to be the same as on the old disk, but if the new disk is larger and you use the extra space, you will not be able to raid everything.
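If you do want to reuse the old layout, one way to copy the partition table is shown below (device names illustrative - check the dump before writing it to the new disk):

  sfdisk -d /dev/sda > parts.dump   # dump the old disk's partition table
  sfdisk /dev/sdb < parts.dump      # write it to the new disk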

  • Create your arrays using your new disk. Use a command similar to the following - note the use of a named array - "root" - and the word "missing", which tells the create command to create a mirror with just one active device.

mdadm --create /dev/md/root --level raid1 --raid-disks 2 missing /dev/sdb1
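You can confirm that the degraded mirror has come up before continuing:

  cat /proc/mdstat
  mdadm --detail /dev/md/root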
  • Create file systems on your new arrays with a command like the following:
mkfs.ext4 /dev/md/root
  • Mount your new file system and copy the contents of your root file system to the new filesystem
mount /dev/md/root /mnt/newroot
cp -ax / /mnt/newroot

Note the options -ax: copy everything, including permissions and links, but do not cross any mount points.
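An alternative copy, assuming rsync is installed (archive mode, preserve hard links/ACLs/xattrs, and stay on one filesystem):

  rsync -aHAXx / /mnt/newroot/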

  • Run grub2-mkconfig and check that everything is okay: that it has detected /mnt/newroot as a boot partition, and that when booting from it, it loads the raid driver and passes the domdadm option to linux. Add this configuration to grub (making sure it's in both /boot/grub and /mnt/newroot/boot/grub), and reboot to boot on the raid drive.
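The exact command name varies by distro (grub-mkconfig or grub2-mkconfig); a typical run, followed by copying the result to the new root, might be:

  grub-mkconfig -o /boot/grub/grub.cfg
  cp /boot/grub/grub.cfg /mnt/newroot/boot/grub/grub.cfg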

  • Make sure that grub is reading its configuration at boot time from your new drive [TODO: How?]
  • Copy the data from all the old partitions to the new ones the same way as with root, and update fstab or whatever. Reboot the system to make sure you have a working system all on the new drive.
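When updating fstab, referring to the arrays by their /dev/md/ names or by filesystem UUID (as reported by blkid) avoids surprises if device numbers change; an illustrative root line:

  blkid /dev/md/root
  # in /mnt/newroot/etc/fstab (UUID is illustrative)
  UUID=ab538350-d249-413b-86ef-4bd5280600b8  /  ext4  defaults  0 1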
  • If the new partitions are the same size as the old ones, you can now add them in and let the mirrors rebuild.
mdadm /dev/md/root --add /dev/sda1
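The resync onto the newly added partition can be watched with:

  watch -n 5 cat /proc/mdstat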

It would be wise to do just root first, and then reboot to make sure we have not messed up grub. If we didn't successfully switch grub to the new drive, the old drive's contents no longer exist and grub won't be able to find its configuration.

Method 1 (2011)

This method assumes you have a spare disk you can install the system on, which is not part of the RAID you will be configuring.

  • First, install a normal system on your extra disk.
  • Get the kernel you plan on running, get the raid-patches and the tools, and make your system boot with this new RAID-aware kernel. Make sure that RAID-support is in the kernel, and is not loaded as modules.
  • Ok, now you should configure and create the RAID you plan to use for the root filesystem. This is standard procedure, as described elsewhere in this document.
  • Just to make sure everything's fine, try rebooting the system to see if the new RAID comes up on boot. It should.
  • Put a filesystem on the new array (using mke2fs), and mount it under /mnt/newroot
  • Now, copy the contents of your current root-filesystem (the spare disk) to the new root-filesystem (the array). There are lots of ways to do this, one of them is
     cd /
     find . -xdev | cpio -pm /mnt/newroot

another way to copy everything from / to /mnt/newroot could be

   cp -ax / /mnt/newroot
  • You should modify the /mnt/newroot/etc/fstab file to use the correct device (the /dev/md? root device) for the root filesystem.
  • Now, unmount the current /boot filesystem, and mount the boot device on /mnt/newroot/boot instead. This is required for LILO to run successfully in the next step.
  • Update /mnt/newroot/etc/lilo.conf to point to the right devices. The boot device must still be a regular disk (non-RAID device), but the root device should point to your new RAID. When done, run
     lilo -r /mnt/newroot

It should complete with no errors.
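The relevant lilo.conf lines would look something like this sketch (device names illustrative):

  boot=/dev/hda        # the boot loader still goes on a plain, non-RAID device
  image=/boot/vmlinuz
      label=linux
      read-only
      root=/dev/md0    # the root filesystem now lives on the array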

  • Reboot the system, and watch everything come up as expected  :)

If you're doing this with IDE disks, be sure to tell your BIOS that all disks are "auto-detect" types, so that the BIOS will allow your machine to boot even when a disk is missing.

Method 2 (2011)

This method requires that your kernel and raidtools understand the failed-disk directive in the /etc/raidtab file - if you are working on a really old system this may not be the case, and you will need to upgrade your tools and/or kernel first.

You can only use this method on RAID levels 1 and above, as the method uses an array in "degraded mode" which in turn is only possible if the RAID level has redundancy. The idea is to install a system on a disk which is purposely marked as failed in the RAID, then copy the system to the RAID which will be running in degraded mode, and finally making the RAID use the no-longer needed "install-disk", zapping the old installation but making the RAID run in non-degraded mode.

  • First, install a normal system on one disk (that will later become part of your RAID). It is important that this disk (or partition) is not the smallest one. If it is, it will not be possible to add it to the RAID later on!
  • Then, get the kernel, the patches, the tools etc. etc. You know the drill. Make your system boot with a new kernel that has the RAID support you need, compiled into the kernel.
  • Now, set up the RAID with your current root-device as the failed-disk in the /etc/raidtab file. Don't put the failed-disk as the first disk in the raidtab, that will give you problems with starting the RAID. Create the RAID, and put a filesystem on it. If using mdadm, you can create a degraded array just by running something like
mdadm -C /dev/md0 --level raid1 --raid-disks 2 missing /dev/hdc1

note the missing parameter.

  • Try rebooting and see if the RAID comes up as it should
  • Copy the system files, and reconfigure the system to use the RAID as root-device, as described in the previous section.
  • When your system successfully boots from the RAID, you can modify the /etc/mdadm.conf file to include the previously failed-disk as a normal raid-disk. Now use mdadm /dev/md0 --add /dev/hd?? to add the disk to your RAID.
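One way to record the completed array in the config file is the following sketch (on some distros the file lives at /etc/mdadm/mdadm.conf):

  mdadm --detail --scan >> /etc/mdadm.conf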
  • You should now have a system that can boot from a non-degraded RAID.

Making the system boot on RAID (2011)

For the kernel to be able to mount the root filesystem, all support for the device on which the root filesystem resides must be present in the kernel. Therefore, in order to mount the root filesystem on a RAID device, the kernel must have RAID support.

The normal way of ensuring that the kernel can see the RAID device is to simply compile a kernel with all necessary RAID support compiled in. Make sure that you compile the RAID support into the kernel, and not as loadable modules. The kernel cannot load a module (from the root filesystem) before the root filesystem is mounted.
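For a mirrored root that boils down to options like these in the kernel config (symbol names as in reasonably modern kernels; older trees differ slightly):

  CONFIG_BLK_DEV_MD=y   # md driver built in
  CONFIG_MD_RAID1=y     # raid1 personality built in, not =m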

However, since RedHat-6.0 ships with a kernel that has new-style RAID support as modules, I here describe how one can use the standard RedHat-6.0 kernel and still have the system boot on RAID.


Booting with RAID as module

You will have to instruct LILO to use a RAM-disk in order to achieve this. Use the mkinitrd command to create a ramdisk containing all kernel modules needed to mount the root partition. This can be done as:

  mkinitrd --with=<module> <ramdisk name> <kernel>

For example:

  mkinitrd --preload raid5 --with=raid5 raid-ramdisk 2.2.5-22

This will ensure that the specified RAID module is present at boot-time, for the kernel to use when mounting the root device.

Modular RAID on Debian GNU/Linux after move to RAID

Debian users may encounter problems using an initrd to mount their root filesystem from RAID, if they have migrated a standard non-RAID Debian install to root on RAID.

If your system fails to mount the root filesystem on boot (you will see this in a "kernel panic" message), then the problem may be that the initrd filesystem does not have the necessary support to mount the root filesystem from RAID.

Debian seems to produce its initrd.img files on the assumption that the root filesystem to be mounted is the current one. This will usually result in a kernel panic if the root filesystem is moved to the raid device and you attempt to boot from that device using the same initrd image. The solution is to use the mkinitrd command but specifying the proposed new root filesystem. For example, the following commands should create and set up the new initrd on a Debian system:

 % mkinitrd -r /dev/md0 -o /boot/initrd.img-2.4.22raid
 % mv /initrd.img /initrd.img-nonraid
 % ln -s /boot/initrd.img-2.4.22raid /initrd.img


Converting a non-RAID RedHat System to run on Software RAID (2011)

This section was written and contributed by Mark Price, IBM. The text has undergone minor changes since his original work.

Notice: the following information is provided "AS IS" with no representation or warranty of any kind either express or implied. You may use it freely at your own risk, and no one else will be liable for any damages arising out of such usage.

See: Converting a non-RAID RedHat System to run on Software RAID

Sharing spare disks between different arrays

When running mdadm in the follow/monitor mode you can make different arrays share spare disks. That saves you storage space without losing the comfort of fallback disks.

In the world of software RAID this is a notable feature: to keep spare coverage for a whole group of arrays, you only have to provide one single idle disk.

With mdadm running as a daemon, you have an agent polling the arrays at regular intervals. When a disk fails on an array without a spare disk, mdadm removes an available spare disk from another array in the same spare-group and inserts it into the array with the failed disk. Reconstruction then begins in the degraded array as usual.

To declare shared spare disks, just give the relevant arrays the same spare-group parameter in mdadm.conf and run mdadm in monitor mode, as sketched below.
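As a sketch, the mdadm.conf entries and the monitor invocation might look like this (the UUIDs are placeholders and the spare-group name is arbitrary):

  # /etc/mdadm.conf
  ARRAY /dev/md0 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx spare-group=shared
  ARRAY /dev/md1 UUID=yyyyyyyy:yyyyyyyy:yyyyyyyy:yyyyyyyy spare-group=shared

  # run the monitor as a daemon so a spare can migrate between the two arrays
  mdadm --monitor --scan --daemonise --delay=1800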

Pitfalls (2011)

Never NEVER never re-partition disks that are part of a running RAID. If you must alter the partition table on a disk which is a part of a RAID, stop the array first, then repartition.

It is easy to put too many disks on a bus. A normal Fast-Wide SCSI bus can sustain 10 MB/s, which is less than many disks can do alone today. Putting six such disks on the bus will of course not give you the expected performance boost. It is becoming equally easy to saturate the PCI bus - remember, a normal 32-bit 33 MHz PCI bus has a theoretical maximum bandwidth of around 133 MB/sec; considering command overhead etc. you will see a somewhat lower real-world transfer rate. Some disks today have a throughput in excess of 30 MB/sec, so just four of those disks will actually max out your PCI bus! When designing high-performance RAID systems, be sure to take the whole I/O path into consideration - there are boards with more PCI busses, with 64-bit and 66 MHz busses, and with PCI-X.

More SCSI controllers will only give you extra performance if the SCSI busses are nearly maxed out by the disks on them. You will not see a performance improvement from using two 2940s with two old SCSI disks, instead of just running the two disks on one controller.

If you forget the persistent-superblock option, your array may not start up willingly after it has been stopped. Just re-create the array with the option set correctly in the raidtab. Please note that this will destroy the information on the array!

If a RAID-5 fails to reconstruct after a disk was removed and re-inserted, this may be because of the ordering of the devices in the raidtab. Try moving the first "device ..." and "raid-disk ..." pair to the bottom of the array description in the raidtab file.

Back to Detecting, querying and testing Forward to Growing