Hardware issues


This section will mention some of the hardware concerns involved when running software RAID.

If you are going after high performance, you should make sure that the bus(ses) to the drives are fast enough. You should not have 14 UW-SCSI drives on one UW bus if each drive can deliver 20 MB/s and the bus can only sustain 160 MB/s. Also, you should only have one device per IDE channel. Running disks as master/slave is horrible for performance; IDE is really bad at accessing more than one drive per channel. Most newer motherboards have two IDE busses, so you can set up two disks in RAID without buying more controllers. Extra IDE controllers are rather cheap these days, so setting up 6-8 disk systems with IDE is easy and affordable.
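To get a rough idea of what a single drive actually delivers before doing the bus arithmetic (14 drives at 20 MB/s each would want about 280 MB/s, well beyond a 160 MB/s bus), you can time raw reads with hdparm. A minimal example; /dev/hda is just a placeholder for whichever drive you want to test:

    # -T times cached reads (bus/memory), -t times buffered reads from the disk itself
    hdparm -tT /dev/hda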

Also, most newer motherboards have one or more controllers for SATA disks. SATA connects only one disk per channel, so the problems mentioned above with multiple I/O to two disks on one channel are not relevant for SATA disks.

See also the section on bottlenecks.

IDE Configuration

It is indeed possible to run RAID over IDE disks. And excellent performance can be achieved too. In fact, today's prices on IDE drives and controllers make IDE something to consider when setting up new RAID systems.

  • Physical stability: IDE drives have traditionally been of lower mechanical quality than SCSI drives. Even today, the warranty on IDE drives is typically one year, whereas it is often three to five years on SCSI drives. Although it is not fair to say that IDE drives are by definition poorly made, one should be aware that IDE drives of some brands may fail more often than similar SCSI drives. However, other brands use the exact same mechanical setup for both SCSI and IDE drives. It all boils down to: all disks fail, sooner or later, and one should be prepared for that.
  • Data integrity: Earlier, IDE had no way of assuring that the data sent onto the IDE bus would be the same as the data actually written to the disk. This was due to a total lack of parity, checksums, etc. With the Ultra-DMA standard, IDE drives now do a checksum on the data they receive, and thus it becomes highly unlikely that data get corrupted. The PCI bus, however, does not have parity or checksumming, and that bus is used by both IDE and SCSI systems.
  • Performance: I am not going to write thoroughly about IDE performance here. The really short story is:
    • IDE drives are fast, although they are not (as of this writing) found in 10,000 or 15,000 RPM versions as their SCSI counterparts are
    • IDE has more CPU overhead than SCSI (but who cares?)
    • Only use one IDE drive per IDE bus; slave disks spoil performance
  • Fault survival: The IDE driver usually survives a failing IDE device. The RAID layer will mark the disk as failed, and if you are running RAID levels 1 or above, the machine should work just fine until you can take it down for maintenance.

It is very important, that you only use one IDE disk per IDE bus. Not only would two disks ruin the performance, but the failure of a disk often guarantees the failure of the bus, and therefore the failure of all disks on that bus. In a fault-tolerant RAID setup (RAID levels 1,4,5), the failure of one disk can be handled, but the failure of two disks (the two disks on the bus that fails due to the failure of the one disk) will render the array unusable. Also, when the master drive on a bus fails, the slave or the IDE controller may get awfully confused. One bus, one drive, that's the rule.
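With the old IDE driver you can usually see which drives share a channel by looking under /proc/ide: each ideN directory is one channel, and a second drive (hdb, hdd, ...) under the same ideN is a slave sharing that channel with a master. A quick check, assuming your kernel exposes /proc/ide:

    # two drives listed under the same ideN directory share that IDE channel
    ls /proc/ide/ide0 /proc/ide/ide1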

There are cheap PCI IDE controllers out there. You often get two or four busses for around $80. Considering the much lower price of IDE disks versus SCSI disks, an IDE disk array can often be a really nice solution if one can live with the relatively low number (around 8 probably) of disks one can attach to a typical system.

IDE has major cabling problems when it comes to large arrays. Even if you had enough PCI slots, it's unlikely that you could fit much more than 8 disks in a system and still get it running without data corruption caused by too long IDE cables.

Furthermore, some of the newer IDE drives come with a restriction that they are only to be used a given number of hours per day. These drives are meant for desktop usage, and it can lead to severe problems if these are used in a 24/7 server RAID environment.

Hot Swap

Although hot swapping of drives is supported to some extent, it is still not something one can do easily.

Hot-swapping IDE drives

Don't! IDE doesn't handle hot swapping at all. Sure, it may work for you, if your IDE driver is compiled as a module (only possible in the 2.2 series of the kernel), and you re-load it after you've replaced the drive. But you may just as well end up with a fried IDE controller, and you'll be looking at a lot more down-time than just the time it would have taken to replace the drive on a downed system.

The main problem, except for the electrical issues that can destroy your hardware, is that the IDE bus must be re-scanned after disks are swapped. While newer Linux kernels do support re-scan of an IDE bus (with the help of the hdparm utility), re-detecting partitions is still something that is lacking. If the new disk is 100% identical to the old one (wrt. geometry etc.), it may work, but really, you are walking the bleeding edge here.
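If you do try it anyway, one thing that may help after the bus has been re-scanned is explicitly asking the kernel to re-read the partition table on the swapped drive. A small sketch, assuming the replacement shows up as /dev/hdc (an example name only):

    # ask the kernel to re-read the partition table of the new drive
    blockdev --rereadpt /dev/hdc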

Hot-swapping SCSI drives

Normal SCSI hardware is not hot-swappable either. It may however work. If your SCSI driver supports re-scanning the bus, and removing and appending devices, you may be able to hot-swap devices. However, on a normal SCSI bus you probably shouldn't unplug devices while your system is still powered up. But then again, it may just work (and you may end up with fried hardware).

The SCSI layer should survive if a disk dies, but not all SCSI drivers handle this yet. If your SCSI driver dies when a disk goes down, your system will go with it, and hot-plug isn't really interesting then.

Hot-swapping with SATA

SATA supports hot swapping of drives, but the Linux kernel is not quite there yet.

See http://linux-ata.org/driver-status.html for more information on SATA hotplug status.
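If you are unsure which SATA driver your kernel is using (and therefore which entry on that page applies to you), the boot messages are a reasonable place to look. A small sketch; the grep pattern is only an example:

    # boot-time messages from the ATA/SATA layer, e.g. libata and the controller driver
    dmesg | grep -i ata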

Hot-swapping with SCA

With SCA, it is possible to hot-plug devices. Unfortunately, this is not as simple as it should be, but it is both possible and safe.

Replace the RAID device, disk device, and host/channel/id/lun numbers with the appropriate values in the example below:

  • Dump the partition table from the drive, if it is still readable:
    sfdisk -d /dev/sdb > partitions.sdb

  • Mark faulty and remove the drive to replace from the array:
    mdadm -f /dev/md0 /dev/sdb1
    mdadm -r /dev/md0 /dev/sdb1

  • Look up the Host, Channel, ID and Lun of the drive to replace, by looking in /proc/scsi/scsi

  • Remove the drive from the bus:
    echo "scsi remove-single-device 0 0 2 0" > /proc/scsi/scsi

  • Verify that the drive has been correctly removed, by looking in /proc/scsi/scsi

  • Unplug the drive from your SCA bay, and insert a new drive
  • Add the new drive to the bus:
    echo "scsi add-single-device 0 0 2 0" > /proc/scsi/scsi

(this should spin up the drive as well)

  • Re-partition the drive using the previously dumped partition table:

    sfdisk /dev/sdb < partitions.sdb

  • Add the drive to your array:
    mdadm -a /dev/md0 /dev/sdb1

The arguments to the "scsi remove-single-device" and "scsi add-single-device" commands are: Host, Channel, Id and Lun. These numbers are found in the "/proc/scsi/scsi" file.
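For illustration, an entry in that file looks roughly like the following (the vendor, model and revision below are made up; your values will differ). For this entry, the numbers to pass to the commands above would be 0 0 2 0:

    Host: scsi0 Channel: 00 Id: 02 Lun: 00
      Vendor: IBM      Model: EXAMPLE-SCA-DISK Rev: 0001
      Type:   Direct-Access                    ANSI SCSI revision: 02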

The above steps have been tried and tested on a system with IBM SCA disks and an Adaptec SCSI controller. If you encounter problems or find easier ways to do this, please discuss this on the linux-raid mailing list.
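Once the new partition has been added back to the array, it is worth watching the reconstruction before considering the swap finished. Two commands that should work on any md setup (/dev/md0 is the example device used above):

    # show rebuild progress for all arrays
    cat /proc/mdstat
    # show detailed state of the array, including the newly added disk
    mdadm --detail /dev/md0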
