Convert RedHat to Raid

From Linux Raid Wiki

Introduction

This technote details how to convert a Linux system with non-RAID devices to run with a Software RAID configuration.

Scope

This scenario was tested with Red Hat 7.1, but should be applicable to any release which supports Software RAID (md) devices.

Pre-conversion example system

The test system contains two SCSI disks, sda and sdb, both of which are the same physical size. As part of the test setup, I configured both disks to have the same partition layout, using fdisk to ensure the number of blocks in each partition was identical.

 DEVICE      MOUNTPOINT  SIZE        DEVICE      MOUNTPOINT  SIZE
 /dev/sda1   /           2048MB      /dev/sdb1               2048MB
 /dev/sda2   /boot       80MB        /dev/sdb2               80MB
 /dev/sda3   /var        100MB       /dev/sdb3               100MB
 /dev/sda4   SWAP        1024MB      /dev/sdb4   SWAP        1024MB
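One way to confirm that the two disks really do have identical layouts is to compare sfdisk partition dumps. A minimal sketch, using sample dump lines rather than live disks (on the real system you would capture them with sfdisk -d /dev/sda and sfdisk -d /dev/sdb):

```shell
# Sketch: comparing partition layouts. The dump lines below are sample
# data for illustration; on a live system you would capture real dumps
# with "sfdisk -d /dev/sda > /tmp/sda.out" (and likewise for sdb).
cat > /tmp/sda.out <<'EOF'
/dev/sda1 : start=       63, size= 4194304, Id=83
/dev/sda2 : start= 4194367, size=  163840, Id=83
/dev/sda3 : start= 4358207, size=  204800, Id=83
EOF
cat > /tmp/sdb.out <<'EOF'
/dev/sdb1 : start=       63, size= 4194304, Id=83
/dev/sdb2 : start= 4194367, size=  163840, Id=83
/dev/sdb3 : start= 4358207, size=  204800, Id=83
EOF
# The size fields (field 2, comma-separated) must match pairwise.
cut -d, -f2 /tmp/sda.out > /tmp/sda.sizes
cut -d, -f2 /tmp/sdb.out > /tmp/sdb.sizes
if diff -q /tmp/sda.sizes /tmp/sdb.sizes > /dev/null; then
    echo "partition sizes match"
else
    echo "partition sizes differ"
fi
```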


In our basic example, we are going to set up a simple RAID-1 Mirror, which requires only two physical disks.

Step-1 - boot rescue cd/floppy

The Red Hat installation CD provides a rescue mode which boots into Linux from the CD and mounts any filesystems it can find on your disks.

At the lilo prompt, type:

     lilo: linux rescue

With the setup described above, the installer may ask you which disk your root filesystem is on, either sda or sdb. Select sda.

The installer will mount your filesystems in the following way.

 DEVICE      MOUNTPOINT  TEMPORARY MOUNT POINT
 /dev/sda1   /           /mnt/sysimage
 /dev/sda2   /boot       /mnt/sysimage/boot
 /dev/sda3   /var        /mnt/sysimage/var


Note: - Please bear in mind that other distributions may mount your filesystems on different mount points, or may require you to mount them by hand.


Step-2 - create a /etc/raidtab file

Create the file /mnt/sysimage/etc/raidtab (or wherever your real /etc filesystem has been mounted).

For our test system, the raidtab file would look like this.

 raiddev /dev/md0
     raid-level              1
     nr-raid-disks           2
     nr-spare-disks          0
     chunk-size              4
     persistent-superblock   1
     device                  /dev/sda1
     raid-disk               0
     device                  /dev/sdb1
     raid-disk               1
 raiddev /dev/md1
     raid-level              1
     nr-raid-disks           2
     nr-spare-disks          0
     chunk-size              4
     persistent-superblock   1
     device                  /dev/sda2
     raid-disk               0
     device                  /dev/sdb2
     raid-disk               1
 raiddev /dev/md2
     raid-level              1
     nr-raid-disks           2
     nr-spare-disks          0
     chunk-size              4
     persistent-superblock   1
     device                  /dev/sda3
     raid-disk               0
     device                  /dev/sdb3
     raid-disk               1

Note: - It is important that the devices are in the correct order, i.e. that /dev/sda1 is raid-disk 0 and not raid-disk 1. This instructs the md driver to sync from /dev/sda1; if it were the other way around, it would sync from /dev/sdb1 and destroy your filesystem.
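For reference, the raidtab format also supports hot spares via the spare-disk directive. A hypothetical entry (using partitions and a third disk, /dev/sdc, which are not part of this example system) would look like this:

 raiddev /dev/md3
     raid-level              1
     nr-raid-disks           2
     nr-spare-disks          1
     chunk-size              4
     persistent-superblock   1
     device                  /dev/sda5
     raid-disk               0
     device                  /dev/sdb5
     raid-disk               1
     device                  /dev/sdc1
     spare-disk              0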

Now copy the raidtab file from your real root filesystem to the current root filesystem.

 (rescue)# cp /mnt/sysimage/etc/raidtab /etc/raidtab


Step-3 - create the md devices

There are two ways to do this: copy the device files from /mnt/sysimage/dev, or use mknod to create them. The md device is a block device with major number 9.

 (rescue)# mknod /dev/md0 b 9 0
 (rescue)# mknod /dev/md1 b 9 1
 (rescue)# mknod /dev/md2 b 9 2
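Since the minor number simply equals the md device number (and the major is always 9), the mknod commands follow a fixed pattern. This sketch only prints the commands so they can be reviewed before running:

```shell
# Generate (print, not execute) the mknod command for each md device.
# md devices are block devices with major number 9; the minor number
# equals the device number.
for minor in 0 1 2; do
    echo "mknod /dev/md${minor} b 9 ${minor}"
done
```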


Step-4 - unmount filesystems

In order to start the raid devices, and sync the drives, it is necessary to unmount all the temporary filesystems.

 (rescue)# umount /mnt/sysimage/var
 (rescue)# umount /mnt/sysimage/boot
 (rescue)# umount /mnt/sysimage/proc
 (rescue)# umount /mnt/sysimage


Please note, you may not be able to umount /mnt/sysimage. This problem can be caused by the rescue system - if you choose to mount your filesystems manually instead of letting the rescue system do this automatically, this problem should go away.


Step-5 - start raid devices

Because there are filesystems on /dev/sda1, /dev/sda2 and /dev/sda3 it is necessary to force the start of the raid device.

 (rescue)# mkraid --really-force /dev/md2

You can check the completion progress by cat'ing the /proc/mdstat file. It shows the status of the raid device and the percentage of the sync remaining.
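The exact /proc/mdstat layout varies between kernel versions, but during a resync it contains a progress line you can pick out with grep. A sketch using sample contents (the figures are illustrative, not from a real run):

```shell
# Sample /proc/mdstat contents during a resync (illustrative; the
# format varies between kernel versions).
cat > /tmp/mdstat.sample <<'EOF'
Personalities : [raid1]
read_ahead 1024 sectors
md2 : active raid1 sdb3[1] sda3[0]
      102336 blocks [2/2] [UU]
      [==>.................]  resync = 14.2% (14592/102336) finish=0.5min
unused devices: <none>
EOF
# On the real system you would grep /proc/mdstat directly.
grep -o 'resync = [0-9.]*%' /tmp/mdstat.sample
```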

Continue with /boot and /

 (rescue)# mkraid --really-force /dev/md1
 (rescue)# mkraid --really-force /dev/md0

The md driver syncs one device at a time.


Step-6 - remount filesystems

Mount the newly synced filesystems back into the /mnt/sysimage mount points.

 (rescue)# mount /dev/md0 /mnt/sysimage
 (rescue)# mount /dev/md1 /mnt/sysimage/boot
 (rescue)# mount /dev/md2 /mnt/sysimage/var


Step-7 - change root

You now need to change your current root directory to your real root file system.

 (rescue)# chroot /mnt/sysimage


Step-8 - edit config files

You need to configure lilo and /etc/fstab appropriately to boot from and mount the md devices.

Note: - The boot device MUST be a non-raided device. The root device is your new md0 device. eg.

 boot=/dev/sda
 map=/boot/map
 install=/boot/boot.b
 prompt
 timeout=50
 message=/boot/message
 linear
 default=linux
 image=/boot/vmlinuz
     label=linux
     read-only
     root=/dev/md0


Alter /etc/fstab

 /dev/md0               /                       ext3    defaults        1 1
 /dev/md1               /boot                   ext3    defaults        1 2
 /dev/md2               /var                    ext3    defaults        1 2
 /dev/sda4              swap                    swap    defaults        0 0


Step-9 - run LILO

With the /etc/lilo.conf edited to reflect the new root=/dev/md0 and with /dev/md1 mounted as /boot, we can now run /sbin/lilo -v on the chrooted filesystem.


Step-10 - change partition types

The partition types of all the partitions, on ALL drives used by the md driver, must be changed to type 0xFD.

Use fdisk to change the partition type, using option 't'.

 (rescue)# fdisk /dev/sda
 (rescue)# fdisk /dev/sdb

Use the 'w' option after changing all the required partitions to save the partition table to disk.
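If you prefer a non-interactive approach, older sfdisk versions can change a partition type directly with --change-id (newer versions use --part-type instead). As a precaution, this sketch only prints the commands for the example layout so they can be checked before use:

```shell
# Print (not execute) a type change to 0xFD for each RAID partition
# in the example layout; sda4/sdb4 are swap and are left untouched.
for disk in /dev/sda /dev/sdb; do
    for part in 1 2 3; do
        echo "sfdisk --change-id ${disk} ${part} fd"
    done
done
```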


Step-11 - resize filesystem

When we created the raid devices, the usable space on each partition became slightly smaller, because the persistent RAID superblock is stored at the end of the partition. If you reboot the system now, the boot will fail with an error indicating the superblock is corrupt.

To resize them prior to the reboot, ensure that all md-based filesystems except root are unmounted, and remount root read-only.

 (rescue)# mount / -o remount,ro

You will need to fsck each of the md devices; this is the reason for remounting root read-only. The -f flag is required to force fsck to check a clean filesystem.

 (rescue)# e2fsck -f /dev/md0

This will generate the same error about inconsistent sizes and a possibly corrupted superblock. Say N to 'Abort?'.

 (rescue)# resize2fs /dev/md0

Repeat for all /dev/md devices.
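The check-and-resize pair can be looped over the md devices. As a precaution, this sketch only prints the commands; run them one at a time on the real system:

```shell
# Print the check-and-resize commands for each md device in order.
for md in /dev/md0 /dev/md1 /dev/md2; do
    echo "e2fsck -f ${md}"
    echo "resize2fs ${md}"
done
```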


Step-12 - checklist

The next step is to reboot the system, prior to doing this run through the checklist below and ensure all tasks have been completed.

  • All devices have finished syncing. Check /proc/mdstat
  • /etc/fstab has been edited to reflect the changes to the device names.
  • /etc/lilo.conf has been edited to reflect the root device change.
  • /sbin/lilo has been run to update the boot loader.
  • The kernel has both SCSI and RAID(MD) drivers built into the kernel.
  • The partition types of all partitions on disks that are part of an md device have been changed to 0xfd.
  • The filesystems have been fsck'd and resize2fs'd.

Step-13 - reboot

You can now safely reboot the system. When it comes up, the kernel will auto-detect the md devices (based on the 0xFD partition types).

Your root filesystem will now be mirrored.
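After the reboot you can confirm that both halves of each mirror are active: every md entry in /proc/mdstat should show [UU]. A sketch using sample contents (on the real system, read /proc/mdstat directly):

```shell
# Sample post-reboot /proc/mdstat entry (illustrative data).
cat > /tmp/mdstat.after <<'EOF'
md0 : active raid1 sdb1[1] sda1[0]
      2097088 blocks [2/2] [UU]
EOF
# [UU] means both mirror halves are up; [_U] or [U_] means degraded.
if grep -q '\[UU\]' /tmp/mdstat.after; then
    echo "mirror healthy"
else
    echo "mirror degraded"
fi
```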
