mycroes

There's always time to play

Sunday, February 8, 2009

Migrating from single disk to 3-disk RAID 5 (on Gentoo)

When following this guide, a disk crash before the final step might still result in data loss, so if your data *is* important, back it up!

As my 1TB hard drive was filling up I was considering the ways I could expand storage. I hate USB drives, for the simple fact that they're slow. I don't like external storage at all either, unless it's network storage. However, network storage would be slow for me too unless I at least spent some money on a decent gigabit switch to replace my 10/100 switch.

The option I liked most turned out to be adding another internal drive. I was already using LVM, so extending the volume group to span another disk wouldn't be hard, but that had its serious downsides too. I'm not someone who does regular backups; I actually just don't do them at all unless something is really broken or I really feel I need to. The last time either of those happened was more than a year ago, I think, so a lot has changed since then. There's now a lot of stuff I just don't need to back up, for the simple fact that it's someone else's service. I only use IMAP mail accounts, and that's about the most important thing I could lose...

Because I don't do regular backups, I hope my hard disks will stay alive. With just a little bit of data on a new hard disk I can survive a near-instant crash. However, if I extended my 1TB LVM volume group with another drive, the risk of losing all data would immediately double. That's something I'm not willing to risk, because there are ways in which I simply don't have to.

The solution I came up with was to buy 2 extra drives and create a RAID 5 array on top of them. A RAID 5 array requires at least 3 block devices (best put on 3 different disks) and will allow recovery of all data if at most one drive fails. It's possible to start with 3 drives and later expand to more while keeping all data on the disks. Another advantage is that data is spread across the disks, so reads and writes are spread across them as well, making both faster (the processing overhead for RAID 5 is very low).
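For the record, expanding the array later boils down to adding the new disk and letting mdadm reshape; a rough sketch, where /dev/sdd1 is a hypothetical fourth disk and the pvresize at the end grows the LVM physical volume that will eventually sit on top of the array:

# mdadm /dev/md0 --add /dev/sdd1
# mdadm --grow /dev/md0 --raid-devices=4
# pvresize /dev/md0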

Once I had made up my mind I had to figure out how to transition from my single disk to the RAID array without using yet another disk to temporarily store my current data. A while ago I found some pages pointing out that it's possible to create a 'degraded' RAID array. On a degraded array a drive failure might mean losing data, but as long as I still have everything on the old disk that won't be an issue. Once all the data is on the degraded array, the old drive can be added to it to restore the array's integrity.

So to summarize all of the above this is what I had to do:
- Add the drives to the system
- Create a degraded RAID 5 array on the 2 new drives
- Copy all data to the degraded RAID array
- Clean the old disk and add it to the array

Of course, in reality I also had to compile a kernel with Linux software RAID support, because it wasn't there yet.
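If you're not sure whether your kernel already has it, a quick check against the kernel config (assuming /usr/src/linux points at the sources of your running kernel) is something like this; CONFIG_BLK_DEV_MD and CONFIG_MD_RAID456 are the software RAID options, CONFIG_BLK_DEV_DM is the device mapper support that LVM needs:

# grep -E 'BLK_DEV_MD|MD_RAID456|BLK_DEV_DM' /usr/src/linux/.config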

And here's what I typed with some additional explanation:
# paludis -i mdadm

Install mdadm, the Linux software RAID utility.

# fdisk /dev/sda
# fdisk /dev/sdc

Partition the disks; in my case that just meant creating a full-size partition of type fd (Linux raid autodetect). Yes, my new disks are sda and sdc, I might have switched cables somewhere...
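For reference, the interactive fdisk session on an empty disk is roughly this sequence of keystrokes (a sketch, the prompts differ a bit between fdisk versions):

n   - new partition
p   - primary
1   - partition number 1
    - accept the default first cylinder
    - accept the default last cylinder (use the whole disk)
t   - change the partition type
fd  - Linux raid autodetect
w   - write the partition table and quit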

# mdadm --create -f /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdc1 missing

Create the degraded RAID 5 array with a missing drive.
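To double-check that the array came up the way you expect, mdadm can print the details; with the 'missing' device it should report the array as degraded, with one slot empty:

# mdadm --detail /dev/md0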

# mdadm --examine --scan >> /etc/mdadm/mdadm.conf

Create a usable config file. I don't know when I'll need it, but hey, it's there when I do...

# pvcreate /dev/md0

Initialize a physical volume for LVM.

# vgcreate vg /dev/md0

Create the volume group 'vg' on the RAID array. Keep in mind that the name needs to be different from the volume group name you were already using, if you were using one...
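As an aside: if you'd like to end up with your old volume group name once the old disk has been wiped, LVM can rename a volume group afterwards (keep in mind that /etc/fstab and your kernel command line reference the name too); 'myvg' is just a hypothetical name here:

# vgrename vg myvg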

# lvcreate -L30G -nroot vg
# lvcreate -L1200G -nhome vg

Create logical volumes for the root and home partitions (that's all I use).

# mkfs.ext4 -L root /dev/vg/root
# mkfs.ext4 -L home /dev/vg/home

Create the filesystems, and why not use ext4 right away? It's especially nice with biggish files.

At this point it's time to copy over all the files. Everyone has their own opinion about how this should be done; I just cp'd them over from the live install, leaving out /dev, /proc and /sys and creating those directories afterwards, then copied /dev/null and /dev/console (copying all of /dev isn't really feasible, and those two are all you need in Gentoo), and I was done copying.
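For what it's worth, a rough sketch of that copy, assuming the new root is mounted on a hypothetical /mnt/newroot (cp's -x flag keeps it from descending into /proc, /sys and other mounted filesystems):

# mount /dev/vg/root /mnt/newroot
# cp -ax /. /mnt/newroot/
# mkdir -p /mnt/newroot/dev /mnt/newroot/proc /mnt/newroot/sys
# mknod -m 600 /mnt/newroot/dev/console c 5 1
# mknod -m 666 /mnt/newroot/dev/null c 1 3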

Now there's some additional stuff to worry about. I'm using Gentoo, so I also have to worry about being able to boot from my RAID array. Because I wanted to use the full disks for the RAID array, I have to boot from another device, like a USB flash drive. So I started by copying my kernel with RAID and LVM support to my USB drive, created the entry in my syslinux config and created an initrd with

# genkernel --lvm --mdadm initrd

This created a nice initrd, with the helpful message that to use the LVM stuff I also had to pass 'dolvm' on my kernel command line. Then it was time to see if it all worked, and... fail... My RAID array was not detected, so all I could use was my old disk. After a while I found that for mdadm support you also need to pass 'domdadm' on the kernel command line, and that solved all of my booting issues. Now let's hope that gets documented somewhere too.
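For reference, a syslinux entry for this setup could look roughly like this; the kernel and initrd file names are placeholders for whatever genkernel dropped in place, and real_root points at the new root logical volume:

LABEL gentoo-raid
    KERNEL kernel-genkernel-x86_64
    APPEND initrd=initramfs-genkernel-x86_64 real_root=/dev/vg/root dolvm domdadm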

I was now able to boot using my new LVM logical volumes on my new RAID 5 array. I actually started copying my home volume as soon as I could boot, because that would take quite some time. All in all there was just one thing left to do, and that was adding the old drive to the RAID array (after changing its partition layout, of course):

# mdadm /dev/md0 --add /dev/sdb1

Add sdb1 to the RAID array.

The RAID array will immediately start resyncing. This will take quite a while, because the RAID array itself has no knowledge of the data on it, so the entire disks are synced.

While this is all happening (and after it's done) you can run
$ cat /proc/mdstat
to see the status of the RAID array.

Now you should have a working RAID 5 array. Just make sure you don't delete your data, because a RAID array won't protect you from being stupid.

For a more detailed guide on setting up a RAID array with a missing drive, see Linux Software RAID 101 -- Part 3: creating an array with a missing drive, the guide I used to create my RAID array (you'll notice all the mdadm commands are copied directly from that page).

1 comment:

Unknown said...

Thanks, took me forever to figure out why my mdadm + lvm wasn't being detected correctly on boot >.>