Setting up a (new) system

From Linux Raid Wiki
Back to RAID and filesystems Forward to Converting an existing system

This section describes setting up a RAID system using a distro like Gentoo or Slackware that expects you to do a lot of the work yourself.


Planning the layout

Linux sees most storage as block devices, and it is quite happy to layer block devices on top of each other. The devices we need to consider are disk drives, RAID arrays, logical volumes, and partitions; Network Block Devices can also be used. It is very important to be clear about which layers go where in the stack - Linux couldn't care less, but it can get very confusing for admins, and confused admins mean computer disasters.

Partitioning your system

Every system needs a / partition. Most systems have one or more swap partitions. Many systems have a separate /home partition. You may want to have other partitions. And do you want to use LVM to manage your user partitions?

If you have a separate /home partition, do you want this on a separate array?

Do you want Linux to manage the swap space directly, or do you want it on RAID? There is no point in using md to create a raid-0 or linear array just for swap - the kernel has long been able to stripe or concatenate multiple swap partitions by itself.

Once you have decided on your layout, partition all your boot drives identically. If you are using a separate array for /home, partition (or not) those drives identically too.

Configuring Swap

Most distros allocate 2GB for swap. The original Unix swap algorithm required twice RAM for efficient operation, and it was still in use in the early 2.4 kernels. Wol knows the code was rewritten back then, but is unaware whether the fundamental algorithm was altered; his rule of thumb is therefore, on each disk, to create a swap partition equal to twice the motherboard's maximum RAM (disks are cheap). (Note, however, that oodles of swap can be used for a very effective DoS attack if a fork or malloc bomb chews up RAM, triggering massive swap use.)

By default, if these are listed as normal in fstab, Linux will use them one after the other, behaving like a linear array.

If you want raid-0 style (striped) swap, just add the "priority" option to each swap line in fstab - swap areas with the same priority are striped across:

 /dev/sda2       swap           swap    defaults,pri=1   0 0
 /dev/sdb2       swap           swap    defaults,pri=1   0 0

The problem with this is that, should a drive fail, any process that has been swapped out to the failed drive will probably crash. If you've used the priority option, that probably means that all swapped-out processes will crash.

If you want to protect against this, you need to merge your swap partitions into an array with redundancy, e.g. raid-1 or raid-5 (or create a swap file on a RAID partition).
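As a sketch (reusing the /dev/sda2 and /dev/sdb2 partitions from the fstab example above - adjust device names to suit), a raid-1 swap could be set up like this:

```shell
# Merge the two swap partitions into a mirrored array
mdadm --create /dev/md/swap --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

# Initialise and enable it as swap
mkswap /dev/md/swap
swapon /dev/md/swap
```

The two swap lines in fstab are then replaced by a single line for /dev/md/swap, without the pri= option.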

Setting up the boot disks

Grub needs the (normally empty) 2MB at the start of an MBR disk, or a small partition of its own on a GPT disk, to install itself. UEFI needs its own partition too. Either way, that means you must partition your boot drives.

Use fdisk or gdisk (or the partition manager of your choice) to prep your boot disk. If you are using GPT, you will need a 1MB partition, type EF00 for UEFI, or EF02 for grub. If you are not putting swap on RAID, add your swap partition. Put the remaining space into a partition to create your RAID from. If you are not using LVM you can create multiple partitions for multiple arrays if you wish.
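For example, the layout above might be scripted with sgdisk (from the gdisk package); the sizes, partition names and device here are illustrative only:

```shell
# Start with a clean GPT (destroys any existing table on /dev/sda!)
sgdisk --zap-all /dev/sda

# 1MB BIOS boot partition for grub (use type EF00 instead for UEFI)
sgdisk -n 1:0:+1M -t 1:EF02 -c 1:"bios boot" /dev/sda

# Optional swap partition, if swap is not going on RAID
sgdisk -n 2:0:+4G -t 2:8200 -c 2:"swap" /dev/sda

# The rest of the disk for the array (FD00 = Linux RAID)
sgdisk -n 3:0:0 -t 3:FD00 -c 3:"linux raid" /dev/sda
```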

Make sure all your disks are prepped identically. gdisk has a mechanism for copying a GPT from one disk to another, but make sure you randomise the partition GUIDs on the copy.
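With sgdisk that copy-and-reset can be done in two commands (note the slightly surprising argument order: the source disk comes last):

```shell
# Replicate /dev/sda's partition table onto /dev/sdb...
sgdisk --replicate=/dev/sdb /dev/sda

# ...then give /dev/sdb fresh random disk and partition GUIDs
sgdisk --randomize-guids /dev/sdb
```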

Now create your arrays. Note that your boot disk is the only sensible occasion where you might not want to use a v1.2 superblock (the default). If you want to boot without using an initramfs, the boot mechanism must be able to recognise a filesystem at the start of the partition, which means you need a v1.0 mirror (the v1.0 superblock lives at the end of the device). Under any other circumstances you should stick with 1.2.

mdadm --create /dev/md/root --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3

If you are using LVM, read the LVM howto and partition your array. I didn't, so

mkfs.ext4 /dev/md/root

Now you can install your distro. Make sure you install grub on all the disks. You don't want to lose the first disk and suddenly discover your system won't reboot!

Setting up grub

There are a lot of good resources out there on setting up grub 2 (this page ignores lilo and grub legacy because, well, they are legacy). However, be careful - for example the gentoo page for grub2/raid has currently been cobbled together and needs revising by someone who knows the difference between raid and lvm. (Disclaimer - Wol wrote most of the raid stuff and someone else has merged it with the lvm stuff.)

Grub is only required if you are using BIOS boot. If you are using UEFI, the firmware itself can load the kernel (which then loads the initramfs) and boot Linux.

If you are using drives of 2TB or less, you can use the MBR. This will leave 2MB empty at the start of the disk for grub. If you are using drives over 2TB (or plan to be using UEFI in future) you should be using GPT. If you are using GPT you must create the 1MB type EF02 partition. This can then be changed to a type EF00 partition later for use with UEFI.

Edit the file /etc/default/grub and add "domdadm" to the following line

GRUB_CMDLINE_LINUX="domdadm"

Now run grub-mkconfig -o /boot/grub/grub.cfg, and check that the boot section it generates contains the line

insmod mdraid1x

and that the line that loads linux looks something like

linux   /boot/vmlinuz-4.4.6-gentoo root=UUID=ab538350-d249-413b-86ef-4bd5280600b8 ro  domdadm

If you do a

ls -al /dev/disk/by-uuid

then the uuid should point to an md array.
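One quick way to check, using the root UUID from the grub.cfg snippet above:

```shell
# Should print an md device node, not a bare disk partition
readlink -f /dev/disk/by-uuid/ab538350-d249-413b-86ef-4bd5280600b8
```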

Finally install grub to both your disks

grub-install /dev/sda
grub-install /dev/sdb

Note that you MUST use an initrd to boot with a version 1.2 array as the kernel can no longer assemble the array itself and needs to call out to userspace. This page does not discuss booting from older version arrays.
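How the initrd gets built is distro-specific, but the usual pattern is to record the arrays in mdadm.conf and then regenerate the initramfs. A sketch (the config path and rebuild tool vary by distro):

```shell
# Append the current arrays to mdadm's config
# (Debian and friends use /etc/mdadm/mdadm.conf instead)
mdadm --detail --scan >> /etc/mdadm.conf

# Rebuild the initramfs with whichever tool your distro provides, e.g.
dracut --force                  # Fedora, openSUSE, ...
# update-initramfs -u           # Debian, Ubuntu
# genkernel --install initramfs # Gentoo
```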

Setting up UEFI

[TODO: Write this section]

Setting up a separate home

It should be a matter of preference whether to pass bare drives or partitions to the RAID if you're not booting off it. However, there seem to be more and more "idiot-proof" tools out there that assume a disk without an MBR or GPT is unused and can be taken over without warning. And there are too many reports of system upgrades (typically e.g. a motherboard replacement) that also result in GPTs mysteriously appearing and overwriting RAID superblocks. Back in the day with MBRs the choice of partition type codes (e.g. 82 for swap, 83 for Linux) was severely restricted, but now that GPT gives you so many more options, plus partition labels, it seems daft not to use the facility.

mdadm --create /dev/md/home --level=5 --raid-devices=3 /dev/sdc /dev/sdd /dev/sde
mkfs.ext4 /dev/md/home
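Then mount it and add it to fstab so it comes back after a reboot (the mount options here are just the usual defaults):

```shell
mkdir -p /home
mount /dev/md/home /home

# Make it permanent
echo '/dev/md/home  /home  ext4  defaults  0 2' >> /etc/fstab
```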