DDF Fake RAID

From Linux Raid Wiki

Latest revision as of 21:39, 12 September 2013


Enabling Linux MD on DDF Fake RAID systems

The terms "Fake RAID" or "BIOS RAID" denote systems with no RAID controller, but with a simple BIOS that is able to do basic RAID operations on an array of disks. For the actual runtime RAID implementation, this RAID solution relies on an OS driver. On Windows this will usually be a driver from the RAID vendor. On Linux, the MD RAID stack can be used as a fully featured, well-tested and stable RAID stack - if mdadm understands the meta data used by the vendor. Linux has another tool as well, dmraid, which uses the kernel device mapper to access the data on the RAID arrays, but doesn't qualify as a fully featured RAID solution to the same extent as MD/mdadm. The advantage of dmraid over mdadm is that it understands many more meta data formats. As of 2013, most distributions use dmraid to access fake RAID in any format except the Intel IMSM (also "ISW", "Matrix") format.

One meta data format commonly found on modern RAID systems (real as well as "fake" RAID) is the SNIA DDF format. Both mdadm and dmraid have supported it for some time. This page is about tweaking Linux distributions to use mdadm rather than dmraid for DDF.

General Requirements

In order to run a DDF disk array with MD, your distribution needs

  1. a recent version of mdadm. As of 2013/08, I recommend a pre-release of mdadm 3.3. DDF support in mdadm has seen a lot of fixes and improvements in 3.3;
  2. udev rules that support invoking mdadm (rather than dmraid) when disks with DDF meta data are detected;
  3. support for mdadm on DDF in initial RAM disk, i.e. in tools like mkinitrd or dracut;
  4. support for mdadm on DDF in the OS installer.

The last requirement is optional, but without it you have to modify the system after installation to use mdadm instead of dmraid, and that is an expert task.
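As an illustration of requirement 2, a rule along the following lines hands disks carrying DDF meta data to mdadm for incremental assembly. This is a sketch modeled on the upstream md udev rules; the actual rules are shipped with the mdadm packages described below.

```
# Sketch only -- the real rules come with the mdadm package.
# Hand newly appearing DDF member disks to mdadm instead of dmraid:
SUBSYSTEM=="block", ACTION=="add", ENV{ID_FS_TYPE}=="ddf_raid_member", \
    RUN+="/sbin/mdadm -I $env{DEVNAME}"
```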

Installation support for various distributions

This is work in progress. I will describe the installation procedure for some distributions, hoping that users of other distributions will follow up.

RHEL 6 / CentOS 6

For installation of RHEL 6 or CentOS 6, you will need to create a driver disk and an anaconda updates disk. The procedure below has been tested with CentOS 6.4. It should work on older RHEL/CentOS 6 releases as well.

DISCLAIMER OF WARRANTY AND LIABILITY: YOU USE THIS AT YOUR OWN RISK. THERE IS NO GUARANTEE OF ANY KIND THAT FOLLOWING THIS PROCEDURE WILL WORK. THE AUTHOR(S) OF THIS RECIPE SHALL NOT BE HELD LIABLE FOR ANY DAMAGE, INCLUDING DATA LOSS OR HARDWARE DAMAGE, THAT MIGHT BE CAUSED BY APPLYING THIS PROCEDURE TO YOUR SYSTEM(S).

Shortcut for the impatient

Get the driver disk and updates images matching your system from Dropbox. Please read the instructions below in order to understand which image you need!

Creating the driver disk image

The driver disk provides the updated packages (not actually "drivers") to be installed.

Download the latest RPMs for mdadm and dracut from the openSUSE Build Service to some directory on your system. The following procedure creates a driver disk for both 32bit and 64bit; it assumes that all RPMs have been downloaded to /tmp/rpms. It can be run on any Linux system, but make sure that createrepo on your system is compatible with RHEL 6 (when in doubt, run the following commands on a RHEL/CentOS 6 system).

RPMS=/tmp/rpms
mkdir -p dud/rpms/{i386,x86_64}
echo "Driver Update Disk version 3" >dud/rhdd3
cp $RPMS/*i686.rpm $RPMS/*.noarch.rpm dud/rpms/i386
cp $RPMS/*x86_64.rpm $RPMS/*.noarch.rpm dud/rpms/x86_64
(cd dud/rpms/i386; createrepo .)     # See below for CentOS6! 
(cd dud/rpms/x86_64; createrepo .)   # See below for CentOS6!
mkisofs -J -r -V OEMDRV -o dud.iso dud

The resulting image can be burnt to a CD, copied to a USB stick, or placed e.g. on an HTTP server. See the Red Hat documentation for details.

Important CentOS 6 note

If you plan to install from the CentOS 6 DVDs (even if you use only DVD1) you need to generate the repositories on the driver disk differently:

(cd dud/rpms/i386; createrepo -u 'media://ddf.mdadm.i386#1' .)
(cd dud/rpms/x86_64; createrepo -u 'media://ddf.mdadm.x86_64#1' .)

The rest of the procedure is the same.

Failing to do this may result in an unbootable system after installation!

This note does not apply to any RHEL 6 installation scenario, nor to installations made with the CentOS 6 "minimal" or with "netinstall" images, provided the network installation source is a mirror of the CentOS "os/" repository. But if the network installation server just contains unpacked CentOS 6 DVDs, you must use the -u option to createrepo, as shown above.

(For the curious: this is necessary because when packages are split over several media, anaconda will install packages from media "#1" first and other sources later. We serve the dracut package from our driver disk, which comes after "#1". When the kernel (on media "#1") is installed, dracut won't be installed yet, and the initrd generation fails, resulting in an unbootable system. If this happens to you despite this warning, just boot a rescue system using the same dd and updates options as for the original installation, chroot into your new installation, and reinstall the kernel package.)

Creating the anaconda updates image

The anaconda updates disk contains files to be modified during the installation itself.

File:Updates.tar.gz contains a skeleton directory with anaconda updates (python code and udev rules) which are necessary to change anaconda's logic such that mdadm rather than dmraid will be used to access the DDF disks. Unpack the contents of File:Updates.tar.gz and copy the mdadm and mdmon binaries for your architecture into the updates/ directory. Here is the procedure for x86_64:

tar xfvz Updates.tar.gz  # this will create the updates/ folder
# extract the mdadm and mdmon binaries for x86_64 from the RPM and copy them to updates/
mkdir tmp
(cd tmp; rpm2cpio $RPMS/mdadm*x86_64.rpm | cpio -idm)
cp -a tmp/sbin/{mdadm,mdmon} updates/

There are several methods to provide the contents of the updates directory during installation:

# This creates an updates.img file to put on a network server
(cd updates; find . | cpio -c -o) | gzip -c >updates.img
# This creates an updates ISO image to be used in a CD-ROM drive
mkisofs -J -r -V UPDATES -o updates.iso updates

The CD-ROM drive method will only work if your system has a second drive or if the installation itself is done from a network server (not from CD/DVD), because otherwise the installation medium will be locked in the DVD drive when you try to access the updates. You can also use a floppy or USB disk; see the Fedora Wiki for details.

Starting the installation

You need to activate the driver disk and the updates disk using the dd and updates boot options. Consult the Red Hat documentation for details. After the driver disk has been read, you may see an "Error" dialog with the message "No new drivers were found on this driver disk. ...". This is not an error: the disk doesn't actually contain drivers, only ordinary packages, which will be installed anyway. Choose "Continue" and tell the installer in the following screen that you don't have additional driver disks.
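For illustration, with both images served over HTTP, the entry at the boot prompt might look like this (server address and file names are made up; adapt them to your setup):

```
linux dd=http://192.168.0.1/dud.iso updates=http://192.168.0.1/updates.img
```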

The installation should proceed normally. Your DDF disks should be offered as an MD device by anaconda.

Important: If the RAID array has just been created (e.g. in the BIOS setup), MD will start a rebuild during package installation. Let the rebuild finish before rebooting, otherwise MD will restart the rebuild process after reboot. This is because mdadm currently has no safe way to save the "progress" of a rebuild in the meta data.

Checking if all worked correctly

After reboot, your system should come up cleanly with the DDF disks driven by MD. If no rebuild was running before rebooting after package installation, the array should be optimal and consistent after reboot, and no further rebuild/recovery should be started. If you encounter problems, please complain on the linux-raid mailing list.

Technical background

The code changes for both dracut and anaconda are fairly trivial. See File:Anaconda.patch.txt for the anaconda patch. The dracut patch can be found in the source RPM on the openSUSE build service (see above).

SLES 11

It is difficult to change a running SLES11 system from using dmraid to mdadm for DDF fake RAIDs after installation (it can only be done using the rescue system). The following section describes how to install SLES11 SP3 from scratch using a Driver Update Disk (DUD, aka update medium).

DISCLAIMER OF WARRANTY AND LIABILITY: YOU USE THIS AT YOUR OWN RISK. THERE IS NO GUARANTEE OF ANY KIND THAT FOLLOWING THIS PROCEDURE WILL WORK. THE AUTHOR(S) OF THIS RECIPE SHALL NOT BE HELD LIABLE FOR ANY DAMAGE, INCLUDING DATA LOSS OR HARDWARE DAMAGE, THAT MIGHT BE CAUSED BY APPLYING THIS PROCEDURE TO YOUR SYSTEM(S).

Shortcut for the impatient

Download the DUD image from Dropbox (https://www.dropbox.com/sh/hnjxqtq0njdkksx/MVeBFTa7bu).

Creating the driver update disk

You will need to create a DUD (see the SUSE Update-Media-HOWTO at ftp://ftp.suse.de/pub/people/hvogel/Update-Media-HOWTO/Update-Media-HOWTO.html). The DUD contains the mdadm-3.3 package, and modifications to the md tools and the installation program YaST2 for the installation environment. The following example shows how to create the DUD for x86_64 from the RPMs from the Open Build Service (http://download.opensuse.org/repositories/home:/mwilck:/mdadm-DDF:/SLE-11/SLE_11_SP3/). These are assumed to be stored under the directory $RPMS.

# Create the basic DUD structure
BASE=dud/linux/suse/x86_64-sles11
mkdir -p $BASE/{install,inst-sys}

cat >$BASE/dud.config <<EOF
UpdateName: mdadm-DDF
UpdateID: mdadm-DDF-$(date +%Y%m%d)
UpdatePriority: 100
EOF

cat >$BASE/install/update.post <<\EOF
#! /bin/sh
exec >/var/log/dud.log 2>&1
set -x
rpm -Uv mdadm*.rpm
EOF
chmod a+x $BASE/install/update.post

# Copy the RPMs
cp $RPMS/mdadm*x86_64.rpm $BASE/install

# Create the inst-sys portion
pushd $BASE/inst-sys/
rpm2cpio $RPMS/yast2-storage-lib*x86_64.rpm | cpio -idmvu
rpm2cpio $RPMS/mdadm*x86_64.rpm | cpio -idmvu './sbin/*' './lib/udev/rules.d/*'
popd

# Create the Update Medium in CD-ROM format
mkisofs -J -r -V MDADM -o sles11-dud.iso dud
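The "rpm2cpio ... | cpio -idmvu './sbin/*' ..." steps above extract only the archive members matching the given patterns into inst-sys. The idiom can be seen in isolation with a toy archive (all directory and file names below are made up for the demonstration):

```shell
# Build a toy archive with two subtrees, then extract only ./sbin/*
mkdir -p toy/src/sbin toy/src/etc toy/out
touch toy/src/sbin/tool toy/src/etc/conf
(cd toy/src; find . | cpio -o) > toy/a.cpio
(cd toy/out; cpio -idm './sbin/*' < ../a.cpio)
ls toy/out          # only the sbin subtree was extracted
```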

The 32bit DUD is created in the same way. It is possible to combine 32bit and 64bit in a single DUD.
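For a combined medium, the second architecture simply gets its own subtree next to the first. A sketch of the layout (the 32bit directory name i386-sles11 is an assumption, following the <arch>-sles11 naming scheme used above):

```shell
# Sketch: a combined DUD carries one subtree per architecture; the
# i386-sles11 name is an assumption based on the naming scheme above
for ARCH in x86_64 i386; do
    mkdir -p dud/linux/suse/$ARCH-sles11/{install,inst-sys}
done
ls dud/linux/suse
```

Each subtree then gets its own dud.config, install/ RPMs, and inst-sys/ contents as shown above.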

Installing SLES 11 SP3

  • Use the F6 key at the boot screen to activate the DUD. Start the installation.
  • The setup program will complain that the DUD isn't signed; click "OK" to accept it anyway.
  • During "System Analysis", YaST2 (the installation program) will display a prompt "Following MD compatible RAID devices were detected" and some cryptic text. Choose "Yes" here to activate MD (choosing "No" will activate dmraid).
  • Proceed as usual. You should see your BIOS RAID array as MD device (e.g. /dev/md126)
  • Under the menu item Expert/Boot Loader Settings/Boot Loader Installation/Boot Loader Installation Details, check that YaST has guessed the disk order correctly (it did in my tests). If you boot from your BIOS fake RAID, the md device (e.g. /dev/md126) should be the first disk (/dev/md127 is usually just the MD "container" and isn't bootable at all).

This procedure has been tested successfully with SLES11 SP3.

Notes

The mdadm package on the Open Build Service contains two minor patches to mdadm-3.3 that are needed for YaST2 to work with mdadm on LSI fake RAID systems, but currently (2013-09-09) not yet accepted upstream. The changes to "yast2-storage-lib" to make this work are simple and easy to understand, but not beautiful. Use this DUD only when your system has a DDF fake RAID that you want to run with MD.

OpenSUSE

The procedure for OpenSUSE is very similar to that for SLES 11. The following section describes how to install OpenSUSE 12.3 from scratch using a Driver Update Disk (DUD, aka update medium).

DISCLAIMER OF WARRANTY AND LIABILITY: YOU USE THIS AT YOUR OWN RISK. THERE IS NO GUARANTEE OF ANY KIND THAT FOLLOWING THIS PROCEDURE WILL WORK. THE AUTHOR(S) OF THIS RECIPE SHALL NOT BE HELD LIABLE FOR ANY DAMAGE, INCLUDING DATA LOSS OR HARDWARE DAMAGE, THAT MIGHT BE CAUSED BY APPLYING THIS PROCEDURE TO YOUR SYSTEM(S).

Creating the driver update disk

You will need to create a DUD. The DUD contains the mdadm-3.3 package, and modifications to the md tools and the installation program YaST2 for the installation environment. The following example shows how to create the DUD for x86_64 using the RPMs from the Open Build Service (http://download.opensuse.org/repositories/home:/mwilck:/mdadm-DDF:/SLE-11/openSUSE/); you need the libstorage-123 and mdadm packages from that repository. The packages are assumed to be stored under the directory $RPMS.

The steps up to "Copy the RPMs" are identical to the procedure for SLES11 above, except that the BASE path is different.

# Create the basic DUD structure
BASE=dud/linux/suse/x86_64-12.3
mkdir -p $BASE/{install,inst-sys}

# ... further steps as above for SLES 11, until ...

# Create the inst-sys portion
pushd $BASE/inst-sys/
rpm2cpio $RPMS/libstorage4*x86_64.rpm | cpio -idmvu ./usr/lib*/*
rpm2cpio $RPMS/libstorage-ruby*x86_64.rpm | cpio -idmvu 
rpm2cpio $RPMS/libstorage-python*x86_64.rpm | cpio -idmvu
rpm2cpio $RPMS/mdadm*x86_64.rpm | cpio -idmvu './sbin/*' './lib/udev/rules.d/*'
popd

# Create the Update Medium in CD-ROM format
mkisofs -J -r -V MDADM -o opensuse-dud.iso dud

Installing OpenSUSE 12.3

The procedure is the same as for SLES11 SP3 above.

Notes

mdadm needs to honour the --offroot option to work with OpenSUSE 12.3 and 13.1 beta.
