System2020


This is a story about building a new Gentoo home desktop/server. I have a moderately beefy PC at home that I use as a fileserver and workstation, and that the family (including me) access from laptops.


2018 - The hardware

Our existing system has an Athlon X3 processor, with two 3TB Seagate Barracudas, and 4 memory slots maxed out with 4 x 4GB DIMMs. I still had the system it replaced, an old Athlon Thunderbird in a nice case, so I bought a new motherboard, a Ryzen processor, and 2 x 8GB of RAM. I also replaced the power supply and bought two new 4TB Seagate IronWolves. And the system failed to boot!

Fast forward to Spring 2020, and I took the computer to a shop, where they told me the motherboard was a dud, and replaced it. They returned the original motherboard, and I was most miffed when I discovered it was still under warranty: I RMA'd it, and Gigabyte confirmed that it was fine - the CPU was simply newer than the motherboard and required a BIOS update, which is what I told the shop I suspected! So now I've got a working, spare mobo I didn't need.

2020 - Setting up

Initial setup

Gentoo needs a host system to set it up, so I installed openSUSE on sda. The motherboard boot screen asked whether I wanted to set up EFI, which I didn't, so SUSE promptly installed BIOS grub. I then used gdisk to set up sdb. Okay, my setup may not be everyone's choice, but here goes ...

sdb1 512MB EFI/Grub
sdb2 64GB swap (my new mobo takes 4 x 16GB DIMMs)
sdb3 1TB root
sdb4 3TB home

Okay, that adds up to a bit more than my 4TB drive, but we'll call that rounding error ...
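For reference, here is roughly what that layout looks like done non-interactively with sgdisk (which ships with gdisk). I did it interactively in gdisk, so treat this as a sketch; the partition type codes are my assumption.

sgdisk --zap-all /dev/sdb
sgdisk -n 1:0:+512M -t 1:EF00 /dev/sdb   # EFI / grub
sgdisk -n 2:0:+64G -t 2:8200 /dev/sdb    # swap
sgdisk -n 3:0:+1T -t 3:8300 /dev/sdb     # root (dm-integrity + raid on top)
sgdisk -n 4:0:0 -t 4:8300 /dev/sdb       # home - the rest of the disk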

dm-integrity

This was a trial-and-error game ... the man page told you everything you needed to know but not how to use it. Because it's not integrated into mdadm, it needs special treatment at boot time. I formatted and set up my partitions with

integritysetup format /dev/sdb3
integritysetup format /dev/sdb4
integritysetup open /dev/sdb3 dm-sdb3
integritysetup open /dev/sdb4 dm-sdb4

Note that you cannot use the option --no-wipe. A shame, because it takes absolutely ages to format the partition - even longer if your system decides to suspend because you're not doing anything on screen! This should be fixable, but I guess mdadm reads from the disk before writing, and reading from an uninitialised dm-integrity device is a no-no.
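Once the devices are open it's worth a quick sanity check before layering anything on top; something like this should do (all standard integritysetup/util-linux commands):

integritysetup status dm-sdb3        # shows the active mapping and tag size
integritysetup dump /dev/sdb3        # prints the on-disk integrity superblock
lsblk /dev/mapper/dm-sdb3            # usable size is slightly smaller than sdb3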

mdadm

Now we create the arrays

mdadm --create /dev/md/root --level=1 --raid-devices=2 /dev/mapper/dm-sdb3 missing
mdadm --create /dev/md/home --level=1 --raid-devices=2 /dev/mapper/dm-sdb4 missing

When you run mdadm, use the device-mapper names you defined, as they appear in /dev/mapper - here /dev/mapper/dm-sdb3 and /dev/mapper/dm-sdb4 - not the underlying partitions.

Note that these disappear on reboot, so you will need to edit the boot sequence to run integritysetup. Once they re-appear on boot, mdadm on my system found them with no further action on my part.
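If you want the array names recorded explicitly rather than relying on auto-detection, something like this should work (on my Gentoo setup the config file is /etc/mdadm.conf; other distros put it in /etc/mdadm/mdadm.conf):

mdadm --detail --scan >> /etc/mdadm.conf    # record the arrays and their names
cat /proc/mdstat                            # both arrays should be up, each with one "missing" slot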

pvcreate

pvcreate /dev/md/root /dev/md/home

This created my two LVM physical volumes, one on each array.

vgcreate

vgcreate vg-root /dev/md/root
vgcreate vg-home /dev/md/home

Now I have my two volume groups. It's my choice to have two separate groups, as I will have just one home and want to use LVM to take snapshots. The root group is to allow me to play with distros etc.

lvcreate

lvcreate --name lv-gentoo --size 128G vg-root
lvcreate --name lv-data --size 2.55T vg-home

I now have my two logical volumes for my root and data partitions. Now for the Gentoo install - but first, work out how systemd works, because it's a pain having to run "integritysetup open" every time I boot the system ...
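Before moving on, a quick check of the whole stack, plus a sketch of putting filesystems on the new volumes - ext4 here is purely an example, use whatever you prefer:

pvs && vgs && lvs                   # the PVs, VGs and LVs should all show up
mkfs.ext4 /dev/vg-root/lv-gentoo    # example only - pick your own filesystem
mkfs.ext4 /dev/vg-home/lv-data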

systemd

My first experience of systemd (Gentoo runs OpenRC by default). The first thing is to create a shell script, which I put in /usr/local/bin

#!/bin/bash
# open device mapper targets
integritysetup open /dev/sdb3 dm-sdb3
integritysetup open /dev/sdb4 dm-sdb4

and a systemd unit file I put in the same place

[Unit]
Description=Open dm-integrity mappings before RAID assembly
DefaultDependencies=no
Before=mdmonitor.service

[Service]
# oneshot means anything ordered after us waits for the script to finish
Type=oneshot
ExecStart=/usr/local/bin/integritysetup.sh

[Install]
WantedBy=default.target

Then we need to set up the service to run. Initially I made the mistake of symlinking my unit file into /etc/systemd/system before I started it; note that when I later deleted the service, systemctl deleted the link from /etc/systemd/system as well. I then copied the file into /etc/systemd/system, but I suspect systemctl might do that for you.

Note also that DefaultDependencies=no is required, otherwise systemd makes the service depend on a working filesystem. Not good if the service is meant to help bring up your working filesystem.

systemctl enable integritysetup.service
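Before trusting it with a reboot, it's worth running the unit by hand and checking the result (it will complain if the devices are already open from earlier):

systemctl start integritysetup.service      # run the script once, now
systemctl status integritysetup.service     # should show it exited cleanly
journalctl -u integritysetup.service        # any integritysetup errors end up here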

At which point, when I rebooted the system, all my disks came back without needing manual intervention. Of course, that means a future project will be enabling mdadm to detect a dm-integrity/raid disk and enable it by itself.

And this is where I discovered that you should always refer to drives and partitions by a UUID. While transferring data from my old system to my new one, I added a third drive which promptly appeared as sdb, and broke all my scripts using partition names! Do as I say, not as I do :-)
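In practice that means finding the partitions' stable identifiers and using those in the script; something like the following, where the PARTUUIDs shown are placeholders rather than my real ones:

blkid -s PARTUUID /dev/sdb3 /dev/sdb4       # find the GPT partition UUIDs

# then in /usr/local/bin/integritysetup.sh, refer to those instead of sdb names:
integritysetup open /dev/disk/by-partuuid/11111111-2222-3333-4444-555555555555 dm-sdb3
integritysetup open /dev/disk/by-partuuid/66666666-7777-8888-9999-aaaaaaaaaaaa dm-sdb4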

2021 - Setting up

Starting (partially) Again

I couldn't get the init system to load dm-integrity, so I ended up wiping the root partition and redoing it without integrity.

I discovered that the only place I could install the service file and have systemctl acknowledge its existence was /etc/systemd/system. This is apparently where you are supposed to put them, so I've linked everything there from /usr/local/bin.

November 2021

The last piece of the puzzle dropped into place. "startplasma-wayland" was only working randomly, and I just couldn't work out why. Finally, on the Plasma home page, I found some info that said "rewriting plasma to work purely with Wayland and remove the X dependencies will be a lot of work. Starting plasma from a tty is not supported or recommended. Expect problems if you try it." I'd thought I'd get it all working properly before I enabled the GUI on boot, but it turns out I had to enable the GUI on boot to get it to work properly!

So at the time of writing I have backed up my /home partition onto one of my old Barracudas, and I've added the second Barracuda into the array to make a 3-drive raid-5. (Slaps self on wrist - you shouldn't raid a Barracuda!) The array is currently rebuilding from the original 2-drive mirror.
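For the record, the conversion from a 2-drive mirror to a 3-drive raid-5 goes roughly like this - sdX1 stands in for the partition on the second Barracuda, and I'm assuming the home array here:

mdadm --grow /dev/md/home --level=5           # the 2-drive raid-1 becomes a 2-drive raid-5
mdadm --add /dev/md/home /dev/sdX1            # add the new disk as a spare
mdadm --grow /dev/md/home --raid-devices=3    # reshape across all three drives
cat /proc/mdstat                              # watch the reshape progress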

December 2021

The final step (before documenting everything properly here :-)

I bought some Crucial RAM chips - 2 x 16GB - and upgraded the system. DDR4 RAM is tricky to install, especially when it's all under the power supply cables and you have to half-dismantle the system to get at it ... which is when I discovered that the shop that mis-diagnosed my old motherboard as faulty had also messed up the new system. Reading the manual, it looks like you fill slots 1 & 3, followed by 2 & 4. Of course, the shop had put the old chips in slots 1 & 2!

WOW! The system boots SOOO much faster. Before the upgrade, LVM took ages to scan the VGs, so much so that I had to add a timeout to systemd to stop it bombing out before the logical volumes were ready to mount. Now that the RAM is in the correct slots, it boots so fast I miss the "initialising VGs" message.

So now I have a fully working system, with a full-blown raid setup. The only thing is, it can be slightly slow to get going because of all the logic layers between hard drive and running application, though I suspect fixing the RAM may have fixed that problem somewhat.
