Hardware issues

From Linux Raid Wiki
Revision as of 11:14, 4 April 2011



Hardware issues

This section will mention some of the hardware concerns involved when running software RAID.

If you are going after high performance, you should make sure that the busses to the drives are fast enough. You should not have 14 UW-SCSI drives on one UW bus if each drive can deliver 20 MB/s but the bus can only sustain 160 MB/s. Also, you should only have one device per IDE (PATA) channel, since master and slave cannot transfer concurrently. This is not relevant for SATA, which has no master/slave arrangement. Most newer motherboards have two or more PATA/SATA busses, so you can set up RAID without buying more controllers. Extra controllers are rather cheap these days, so setting up 6-8 disk systems is easy and affordable.
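As a back-of-the-envelope check, multiply the per-drive throughput by the number of drives and compare the result against the bus limit; this sketch just re-uses the example numbers from the paragraph above:

```shell
# Rough bus saturation check (example numbers from the text above).
drives=14
per_drive_mb=20   # sustained MB/s per drive
bus_mb=160        # UW-SCSI bus limit in MB/s
total=$((drives * per_drive_mb))
echo "aggregate demand: ${total} MB/s, bus limit: ${bus_mb} MB/s"
if [ "$total" -gt "$bus_mb" ]; then
    echo "the bus is the bottleneck"
fi
```

Here the 14 drives could demand 280 MB/s in aggregate, well over what the 160 MB/s bus can sustain.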

See also the section on bottlenecks.

IDE Configuration

It is indeed possible to run RAID over IDE disks, and excellent performance can be achieved too. In fact, today's prices of IDE drives and controllers make IDE worth considering when setting up new RAID systems.

  • Physical stability: in the past, IDE drives were marketed as "desktop" drives, and were of lower quality than SCSI ("server" or "enterprise") drives.

This has largely changed now, as reflected in the fact that SATA and SCSI drives often carry the same warranty period. Differences now focus on features such as power-saving, MTBF ratings, vibration resistance, or firmware. It all boils down to this: all disks fail, sooner or later, and one should be prepared for that.

  • Data integrity: In old systems, PIO-mode IDE had no way of assuring that the data sent onto the IDE bus would be received by the disk correctly. With the Ultra-DMA standard, IDE drives perform a CRC check on the data they receive, which makes corruption unlikely. The PCI bus used by the controller, however, often does not have parity or checksumming, though some systems now support these features.
  • Performance: I am not going to write thoroughly about IDE performance here. The really short story is:
    • PATA/SATA drives are fast in bandwidth because of high recording density, but since they are commonly only 7200 RPM, ATA disks normally have higher latency than 10k or 15k RPM SCSI disks.
    • Old PIO-mode IDE had significant CPU overhead, but UDMA (including all SATA) is comparable to SCSI.
    • Only use one IDE drive per IDE bus; slave disks spoil performance.
  • Fault survival: The IDE driver usually survives a failing IDE device. The RAID layer will mark the disk as failed, and if you are running RAID level 1 or above, the machine should work just fine until you can take it down for maintenance.
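If you suspect bus- or cable-level corruption, modern drives keep a count of interface CRC errors in their SMART data; a quick sketch using smartmontools (the device name /dev/sda is just a placeholder):

```shell
# Show the UDMA/SATA CRC error counter from the drive's SMART attributes.
# Requires smartmontools to be installed; /dev/sda is a placeholder name.
smartctl -A /dev/sda | grep -i crc
```

A steadily climbing counter here usually points at a bad cable or connector rather than at the disk itself.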

It is very important that you only use one IDE disk per IDE bus. Not only would two disks ruin the performance, but the failure of a disk often guarantees the failure of the bus, and therefore the failure of all disks on that bus. In a fault-tolerant RAID setup (RAID levels 1, 4, 5), the failure of one disk can be handled, but the failure of two disks (the two disks on the bus that fails due to the failure of one disk) will render the array unusable. Also, when the master drive on a bus fails, the slave or the IDE controller may get awfully confused. One bus, one drive; that's the rule.
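When a disk does fail, the failed state can be confirmed from userspace; a minimal sketch, assuming the array is /dev/md0:

```shell
# Failed array members are flagged with (F) in /proc/mdstat.
cat /proc/mdstat
# More detail for a single array; /dev/md0 is a placeholder name.
mdadm --detail /dev/md0
```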

There are cheap PCI IDE controllers out there. You often get two or four busses for around $80. Considering the much lower price of IDE disks versus SCSI disks, an IDE disk array can often be a really nice solution if one can live with the relatively low number (around 8 probably) of disks one can attach to a typical system.

IDE has major cabling problems when it comes to large arrays. Even if you had enough PCI slots, it's unlikely that you could fit much more than 8 disks in a system and still get it running without data corruption caused by too long IDE cables.

Furthermore, some of the newer IDE drives come with a restriction that they are only to be used a given number of hours per day. These drives are meant for desktop usage, and it can lead to severe problems if these are used in a 24/7 server RAID environment.

SATA is beginning to support a new feature called "port multipliers", which effectively multiplexes several SATA disks onto the same host SATA port; this can ease cabling concerns. It is also fairly common to see multi-port SATA controllers that put four ports onto the connector originally used by InfiniBand; this makes it possible to build 24-port SATA controllers, for instance.

Hot Swap

Note: for description of Linux RAID hotplug support, see the Hotplug page.

Hot-swapping IDE drives

Don't! IDE doesn't handle hot swapping at all. Sure, it may work for you if your IDE driver is compiled as a module (only possible in the 2.2 series of the kernel) and you re-load it after you've replaced the drive. But you may just as well end up with a fried IDE controller, and you'll be looking at a lot more downtime than just the time it would have taken to replace the drive on a downed system.

The main problem, apart from the electrical issues that can destroy your hardware, is that the IDE bus must be re-scanned after disks are swapped. While newer Linux kernels do support re-scanning an IDE bus (with the help of the hdparm utility), re-detecting partitions is still lacking. If the new disk is 100% identical to the old one (with respect to geometry etc.), it may work, but really, you are walking the bleeding edge here.

Hot-swapping SCSI drives

Normal SCSI hardware is not hot-swappable either. It may, however, work: if your SCSI driver supports re-scanning the bus and removing and appending devices, you may be able to hot-swap devices. However, on a normal SCSI bus you probably shouldn't unplug devices while your system is still powered up. Then again, it may just work (and you may end up with fried hardware).
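On reasonably recent kernels, a SCSI host can be told to rescan through sysfs; a sketch, assuming the controller is host0 (list /sys/class/scsi_host/ to find the right number on your system):

```shell
# Rescan all channels, targets and LUNs on SCSI host 0.
# "- - -" acts as a wildcard for channel, target and LUN.
echo "- - -" > /sys/class/scsi_host/host0/scan
```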

The SCSI layer should survive if a disk dies, but not all SCSI drivers handle this yet. If your SCSI driver dies when a disk goes down, your system will go with it, and hot-plug isn't really interesting then.

Hot-swapping with SATA/SAS

SATA/SAS hotplug support is required by the SATA/SAS specifications, so the SATA/SAS platform is the one where hotplug should be least problematic. Still, you can fall into non-compliance pitfalls, so read on before you start experimenting!

Hotplug support in mainboard/disk controller chipsets

Newer mainboard/disk controller chipsets and their drivers usually support hotplug.

If the chipset is AHCI-compliant, it will probably be able to use the ahci kernel module, which provides hotplug and power-management support. The ahci module has been present in the Linux kernel since 2.6.19.

Still, not all chipsets support hotplug. Also, some chipsets that could in theory support hotplug (but are not AHCI-compliant) lack the necessary support in the Linux kernel. For more information on the status of SATA drivers, see http://ata.wiki.kernel.org/index.php/SATA_hardware_features.
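To see which driver actually bound to your controller, the usual tools are lspci and lsmod; a sketch (the grep patterns are only a guess at how your controller names itself, so adjust as needed):

```shell
# List storage controllers and the kernel driver in use for each.
lspci -nnk | grep -iA3 'sata\|ahci'
# Confirm that the ahci module is loaded.
lsmod | grep -w ahci
```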

Hotplug support in SATA/SAS disks

All current SATA and SAS drives that have the 15-pin SATA power connector are hotplug-ready. There might be some very old SATA disks with a 4-pin Molex power connector that lack the 15-pin SATA power connector. Such drives should never be hotplugged directly (without a hotswap bay); otherwise you risk damaging them.

Hotplug support by SATA/SAS cables

To protect the disk circuitry during hotplug, the 15-pin SATA/SAS power connector on the cable side must have two pins (pins 4 and 12) longer than the others.


  • On the cable/backplane connector ("receptacle") side, pins 4 and 12 are longer and are called "staggered pins". These pins bring GND to the disk before the other pins make contact, ensuring that no sensitive circuitry is connected before there is a reliable system ground.
  • On the device side, pins 3, 7 and 13 are the staggered pins. These pins bring the 3.3 V, 5 V and 12 V power to the precharge power electronics in the disk before the other power pins are attached.

Important warning: The normal 15-pin SATA power cable receptacle, found on ordinary power supplies or in computer cases, does not have pins 4 and 12 staggered! In fact, it is quite hard to find a hotplug-compatible SATA power receptacle. At first sight the difference is subtle, so look at pictures of the various SATA receptacle types before you start playing hotplug games with your drive!

!!!! Please remember that without the staggered GND pins on the SATA power cable receptacle, you risk damaging your disk when hot-plugging or hot-unplugging !!!!

A hotplug-compatible SATA power receptacle must be present in all SAS/SATA hotswap cages.

In case you don't have a hotswap cage but do have a 15-pin hotplug-compatible SATA power receptacle, this should be the correct sequence for plugging and unplugging the disk[1]:

For hotplug:

  1. connect the 15-pin power receptacle to the disk
  2. connect the 7-pin data cable

For hot-unplug:

  1. unplug the data cable from the disk
  2. unplug the power cable
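Before the physical hot-unplug it is also wise to tell the kernel to let go of the device; a sketch, assuming the disk is /dev/sdb and nothing on it is mounted or in use:

```shell
# Flush outstanding writes, then detach the device from the kernel.
# /dev/sdb is a placeholder; triple-check the device name first!
sync
echo 1 > /sys/block/sdb/device/delete
```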

Hot-swapping with SCA

With SCA, it is possible to hot-plug devices. Unfortunately, this is not as simple as it should be, but it is both possible and safe.

Replace the RAID device, disk device, and host/channel/id/lun numbers with the appropriate values in the example below:

  • Dump the partition table from the drive, if it is still readable:
    sfdisk -d /dev/sdb > partitions.sdb

  • Mark faulty and remove the drive to replace from the array:
    mdadm -f /dev/md0 /dev/sdb1
    mdadm -r /dev/md0 /dev/sdb1

  • Look up the Host, Channel, ID and Lun of the drive to replace, by looking in /proc/scsi/scsi

  • Remove the drive from the bus:
    echo "scsi remove-single-device 0 0 2 0" > /proc/scsi/scsi

  • Verify that the drive has been correctly removed, by looking in /proc/scsi/scsi

  • Unplug the drive from your SCA bay, and insert a new drive
  • Add the new drive to the bus:
    echo "scsi add-single-device 0 0 2 0" > /proc/scsi/scsi

(this should spin up the drive as well)

  • Re-partition the drive using the previously dumped partition table:

    sfdisk /dev/sdb < partitions.sdb

  • Add the drive to your array:
    mdadm -a /dev/md0 /dev/sdb1

The arguments to the "scsi remove-single-device" and "scsi add-single-device" commands are: Host, Channel, Id and Lun. These numbers are found in the "/proc/scsi/scsi" file.
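The steps above can be collected into a small sketch script; the device names and the Host/Channel/Id/Lun values below are placeholders that must be adjusted for your system before use:

```shell
#!/bin/sh
# Hypothetical SCA hot-swap helper; every value here is a placeholder.
MD=/dev/md0      # the RAID array
DISK=/dev/sdb    # the failed disk
PART=${DISK}1    # the RAID member partition
HCIL="0 0 2 0"   # Host Channel Id Lun, taken from /proc/scsi/scsi

sfdisk -d $DISK > partitions.sdb          # save the partition table, if readable
mdadm -f $MD $PART && mdadm -r $MD $PART  # fail and remove from the array
echo "scsi remove-single-device $HCIL" > /proc/scsi/scsi
echo "Swap the drive in the SCA bay, then press Enter"
read dummy
echo "scsi add-single-device $HCIL" > /proc/scsi/scsi
sfdisk $DISK < partitions.sdb             # restore the partition table
mdadm -a $MD $PART                        # re-add to the array
```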

The above steps have been tried and tested on a system with IBM SCA disks and an Adaptec SCSI controller. If you encounter problems or find easier ways to do this, please discuss this on the linux-raid mailing list.
