Growing - Linux Raid Wiki (revision of 2008-07-05 by Ivanbev)<p>Ivanbev: Added LVM section; Separated filesystem-growing from RAID array growing; reformatting of commands for clarity.</p>
<hr />
<div>==Adding partitions==<br />
<br />
When new disks are added, existing RAID arrays can be grown to use them. After the new disk has been partitioned, a RAID level 1/4/5/6 array can be grown, for example, with the following commands (assuming that before growing the array contains three drives):<br />
<br />
mdadm --add /dev/md1 /dev/sdb3<br />
mdadm --grow --raid-devices=4 /dev/md1<br />
<br />
The process can take ten hours or more. There is a critical section at the start of the reshape, during which the array cannot recover from an interruption on its own. To allow recovery after an unexpected power failure, the additional option <code>--backup-file=</code> can be specified.<br />
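<br />
For example (a sketch; the backup file path is illustrative, and the file must live on a filesystem that is not on the array being reshaped):<br />
<br />
mdadm --grow --raid-devices=4 --backup-file=/root/md1-grow.backup /dev/md1<br />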
<br />
<br />
==Expanding existing partitions==<br />
<br />
It is possible to migrate a whole array to larger drives (e.g. 250 GB to 1 TB) by replacing the drives one at a time. At the end the number of devices will be the same and the data will remain intact, but you will have more space available to you.<br />
<br />
<br />
===Extending an existing RAID array===<br />
<br />
In order to increase the usable size of the array, you must increase the size of all disks in that array. Depending on the size of your disks, this may take days to complete. It is also important to note that while the array undergoes the resync process, it is vulnerable to irrecoverable failure if another drive were to fail. It would (of course) be a wise idea to completely back up your data before continuing.<br />
<br />
First, choose a drive, mark it as failed, and remove it from the array (<code>-f</code> fails the device, <code>-r</code> removes it):<br />
<br />
mdadm -f /dev/md0 /dev/sdd1<br />
mdadm -r /dev/md0 /dev/sdd1<br />
<br />
Next, partition the new drive to the full size you will eventually use on all new disks. For example, if you are going from 100 GB drives to 250 GB drives, partition the new 250 GB drive to use all 250 GB, not 100 GB. Also, remember to set the partition type to '''fd''', Linux raid autodetect.<br />
<br />
fdisk /dev/sde<br />
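<br />
Within fdisk, a typical session looks like this (keystrokes shown with explanatory comments; exact prompts vary between fdisk versions):<br />
<br />
n   # create a new partition spanning the space you want to use<br />
t   # change the partition type<br />
fd  # type ''fd'', Linux raid autodetect<br />
w   # write the partition table and exit<br />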
<br />
Now add the new disk to the array:<br />
<br />
mdadm --add /dev/md0 /dev/sde1<br />
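<br />
You can watch the resync progress, and optionally block until it completes, with the standard mdadm/procfs commands:<br />
<br />
cat /proc/mdstat<br />
mdadm --wait /dev/md0<br />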
<br />
Allow the resync to fully complete before continuing. You will now have to repeat the above steps for ''each'' disk in your array. Once all of the drives have been replaced with larger ones, grow the array to use all of the available space by issuing:<br />
<br />
mdadm --grow /dev/md0 --size=max<br />
<br />
The array now uses all of the space available on its new, larger member disks.<br />
<br />
<br />
===Extending the filesystem===<br />
<br />
Now that you have expanded the underlying array, you must resize your filesystem to take advantage of it. For an ext2/ext3 filesystem:<br />
<br />
resize2fs /dev/md0<br />
<br />
For a reiserfs filesystem:<br />
<br />
resize_reiserfs /dev/md0<br />
<br />
Please see your filesystem's documentation for other filesystems.<br />
<br />
<br />
===LVM: Growing the PV===<br />
<br />
LVM (Logical Volume Manager) abstracts a logical volume (on which a filesystem sits) from the physical disk. If you use LVM, you are probably familiar with growing LVs (logical volumes); here, however, what we grow is the PV (physical volume) that sits on the ''md'' device (the RAID array).<br />
<br />
For further LVM documentation, please see the [http://tldp.org/HOWTO/LVM-HOWTO/ Linux LVM HOWTO].<br />
<br />
Growing the physical volume is trivial:<br />
<br />
pvresize /dev/md0<br />
<br />
A before-and-after example is:<br />
<br />
root@barcelona:~# pvdisplay<br />
--- Physical volume ---<br />
PV Name /dev/md0<br />
VG Name server1_vg<br />
PV Size 931.01 GB / not usable 558.43 GB<br />
Allocatable yes<br />
PE Size (KByte) 4096<br />
Total PE 95379<br />
Free PE 42849<br />
Allocated PE 52530<br />
PV UUID BV0mGK-FRtQ-KTLv-aW3I-TllW-Pkiz-3yVPd1<br />
<br />
root@barcelona:~# pvresize /dev/md0<br />
Physical volume "/dev/md0" changed<br />
1 physical volume(s) resized / 0 physical volume(s) not resized<br />
<br />
root@barcelona:~# pvdisplay<br />
--- Physical volume ---<br />
PV Name /dev/md0<br />
VG Name server1_vg<br />
PV Size 931.01 GB / not usable 1.19 MB<br />
Allocatable yes<br />
PE Size (KByte) 4096<br />
Total PE 238337<br />
Free PE 185807<br />
Allocated PE 52530<br />
PV UUID BV0mGK-FRtQ-KTLv-aW3I-TllW-Pkiz-3yVPd1<br />
<br />
The above shows the PV before and after md0 was grown from ~400 GB to ~930 GB (a 400 GB disk replaced by a 1 TB disk). Note the ''PV Size'' lines before and after, in particular the ''not usable'' figures.<br />
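<br />
The ''pvdisplay'' figures can be sanity-checked by hand: the usable PV size is ''Total PE'' multiplied by the PE size (4096 KiB, i.e. 4 MiB, here). A small shell sketch using the numbers from the output above (the variable names are ours):<br />
<br />
```shell
# Usable PV size = Total PE * PE size (figures from the pvdisplay output above)
pe_size_kib=4096
total_pe_before=95379
total_pe_after=238337

# Convert extents to MiB: extents * KiB-per-extent / 1024
echo "before: $(( total_pe_before * pe_size_kib / 1024 )) MiB"   # 381516 MiB
echo "after:  $(( total_pe_after  * pe_size_kib / 1024 )) MiB"   # 953348 MiB
```
<br />
The "before" figure (381516 MiB, roughly 372.6 GB) matches the 931.01 GB device minus the ''not usable'' 558.43 GB; the "after" figure (953348 MiB) is roughly the full 931 GB.<br />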
<br />
Once the PV has been grown (and hence the size of the VG, volume group, has increased), you can increase the size of an LV (logical volume) and finally the filesystem, e.g.:<br />
<br />
lvextend -L +50G /dev/server1_vg/home_lv<br />
resize2fs /dev/server1_vg/home_lv<br />
<br />
The above grows the ''home_lv'' logical volume in the ''server1_vg'' volume group by 50 GB. It then grows the ext2/ext3 filesystem on that LV to the full size of the LV, as per ''Extending the filesystem'' above.</div>Talk:Growing - Linux Raid Wiki (comment of 2008-07-05 by Ivanbev)<p>Ivanbev: Suggest stating that "expanding existing partitions" assumes single partition per disk</p>
<hr />
<div>Under title ''expanding existing partitions'', consider adding something like the following:<br />
<br />
----<br />
The below assumes a single partition per disk. If you have multiple partitions in different raid arrays (eg sdd1 in md0, sdd2 in md1 etc) then ensure you fail/remove and re-add all relevant partitions.<br />
----<br />
<br />
This is just so people don't forget to check whether they have multiple partitions on the disk they are about to remove!<br />
<br />
--[[User:Ivanbev|Ivanbev]] 13:13, 5 July 2008 (PDT)</div>