Talk:Growing


Under the title "Expanding existing partitions", consider adding something like the following:


The below assumes a single partition per disk. If you have multiple partitions in different RAID arrays (e.g. sdd1 in md0, sdd2 in md1, etc.), then make sure you fail/remove and re-add all the relevant partitions.


This is just so people don't forget to check whether they have multiple partitions on the disk they are about to remove!
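
For example, a minimal sketch of the multi-partition case, using the sdd1/sdd2 layout above (all device names are placeholders):

  # fail and remove every partition of the outgoing disk, one array at a time
  mdadm /dev/md0 --fail /dev/sdd1 --remove /dev/sdd1
  mdadm /dev/md1 --fail /dev/sdd2 --remove /dev/sdd2
  # ... swap the disk and recreate the partitions ...
  # then re-add each partition to its own array
  mdadm /dev/md0 --add /dev/sdd1
  mdadm /dev/md1 --add /dev/sdd2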

--Ivanbev 13:13, 5 July 2008 (PDT)


Expanding the size of an array while a write-intent bitmap is active can easily be fatal to the array. Trust me, I have the sleepless night to prove it. Neither mdadm nor the kernel code presently tests for this combination of conditions when you attempt --size=max; adding such a check and returning an error would be nice.
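
The safe ordering is presumably something like this (a sketch only; /dev/mdX stands in for the real array):

  # remove the write-intent bitmap before growing
  mdadm --grow /dev/mdX --bitmap=none
  # with no bitmap active, extend the array
  mdadm --grow /dev/mdX --size=max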

It is also helpful to show explicitly the wait for array recovery to complete before creating a write-intent bitmap, in case someone cuts and pastes the examples into a script.
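
For example (again a sketch, not the article's exact text):

  # wait for any resync/recovery on the array to finish
  mdadm --wait /dev/mdX
  # only then re-create the write-intent bitmap
  mdadm --grow /dev/mdX --bitmap=internal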

It might be nice to use /dev/mdX consistently in the examples, instead of the occasional /dev/md0 and /dev/md1.

--CraigMiloRogers 12:58, 27 June 2009 (PDT)

Almost all Linux MD articles and references say to set the partition type to Linux RAID autodetect, yet you and a few others refer to this process as deprecated, without explanation or alternative. People finding this article don't need mysticism -- we need solutions, which are proving hard to come by in an arena that is still anachronistic and immature. Management of disk volumes should be an intrinsic service of the OS, with straightforward management tools. Coming from a Solaris environment where ZFS is a dream to use, I'm nothing short of floored at how primitive this stuff appears to be on all Linux distributions.

- Anthony11 7/27/11

There is information about autodetect in RAID_Boot - link added to the page. There is plenty of information about LVM on the internet. I look forward to seeing your contributions to improving this situation.

-- DavidGreaves 19:22, 27 July 2011 (UTC)


When "Extending an existing RAID array", the proposed solution has the inconvenient that if an error occurs on an other disc during the rebuild of the larger one, we can looze all data. Ie, we lost the security of the RAID during most of the operation (which is, however, disque intensive...)

If the RAID array can be stopped (because the FS on it can be unmounted), would it be possible to:

  1. unmount all FS on the RAID (or on the LVM, if the RAID is an LVM PV)
  2. deactivate LVM (if used)
  3. stop the RAID
  4. copy the whole (old) small partition onto the (new) larger one (with dd)
  5. change the RAID setup so that the new partition replaces the old one
  6. restart the RAID array
  7. restart the LVM (if used)
  8. remount the partitions

If this worked, there would never be a moment when a single disk crash could lose all the data. However, I'm not sure that the RAID would restart after changing the setup. For example, if the RAID software expects some private data at the end of the partition, that data won't be present at the end of the new partition (copied with dd, it will sit in the middle of the new, larger partition). Is there a way to do what I describe here correctly?
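
A sketch of what I mean, assuming a two-disk /dev/md0 used as an LVM PV in a volume group "myvg", with sdb1 the old small partition and sdc1 its larger replacement (all names are examples only):

  umount /mnt/data                               # 1. unmount all FS on the RAID
  vgchange -an myvg                              # 2. deactivate the LVM
  mdadm --stop /dev/md0                          # 3. stop the RAID
  dd if=/dev/sdb1 of=/dev/sdc1 bs=1M             # 4. copy the old partition onto the larger one
  mdadm --assemble /dev/md0 /dev/sda1 /dev/sdc1  # 5+6. restart the array with the new partition
  vgchange -ay myvg                              # 7. restart the LVM
  mount /mnt/data                                # 8. remount

If I understand the metadata formats correctly, step 5 can only work when the superblock sits near the start of the partition (1.1 or 1.2 metadata); with 0.90 or 1.0 metadata it lives at the end of the device, which is exactly the problem I describe above. And even when assembly succeeds, the array would still have to be grown (mdadm --grow --size=max) to use the extra space.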

-- Vdanjean 21:19, 27 August 2012 (UTC)
