
(Old/Obsolete) RAID Product and Performance Reviews for Linux

Caution: Stale Data This page contains old (circa 1998) product reviews and performance benchmarks that originally appeared on the RAID for Linux web page. As of August 2002, this page is no longer actively maintained, and is archived here for historical purposes.

Current Linux Software RAID Status

(Current as of 1998; the information below may be rancid.)

Software RAID 0, 1, 4, 5
The Linux 2.4.x kernels include software RAID by default. The same level of RAID support can be obtained in the Linux 2.2 kernels by applying the appropriate RAID patches. To use Linux software RAID, you also need to install version 0.90 (or newer) of 'raidtools', a small collection of command-line tools for starting, stopping, and reconstructing RAID arrays. 'Hot' reconstruction and maintenance are now supported.
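As an illustration, here is a minimal sketch of a raidtools-style /etc/raidtab for a three-disk RAID-5 array. The partition names and chunk size are assumptions; the Software-RAID HOWTO covers the details.

```shell
# Minimal /etc/raidtab sketch for raidtools 0.90.  The partitions
# (/dev/sd[abc]1) and the 32k chunk size are hypothetical examples;
# substitute your own.
cat > raidtab.example <<'EOF'
raiddev /dev/md0
    raid-level            5
    nr-raid-disks         3
    nr-spare-disks        0
    persistent-superblock 1
    chunk-size            32
    device                /dev/sda1
    raid-disk             0
    device                /dev/sdb1
    raid-disk             1
    device                /dev/sdc1
    raid-disk             2
EOF
# With the file installed as /etc/raidtab, one would then run (as root):
#   mkraid /dev/md0        # write RAID superblocks, start reconstruction
#   cat /proc/mdstat       # watch the initial resync progress
#   mke2fs /dev/md0        # make a file system on the array
#   raidstop /dev/md0      # later, to stop the array cleanly
```

With a persistent superblock (and the partitions typed 0xfd), the kernel can autodetect and start the array at boot.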

Software RAID 0, 1, 4, 5 in Older Linux Kernels
The md (Multi-Device) kernel module, included as a standard part of the v2.0.x kernels, provides RAID-0 (disk striping) and multi-disk linear-append spanning support in the Linux kernel. The RAID-1, -4, and -5 kernel modules are a standard part of the latest 2.1.x kernels; patches are available for the 2.0.x kernels and the earlier 2.1.x kernels. The code itself appears stable, although some newer features, such as hot reconstruction, should still be considered alpha- or beta-quality code.

The RAID patches can be applied to kernels v2.0.29 and higher, and to 2.1.26 through 2.1.62 (2.1.x versions newer than this come with the RAID code built in). Please avoid kernel 2.0.30; it has serious memory-management, TCP/IP-masquerading, and ISDN problems (earlier and later versions should be OK). Mirroring-over-striping is supported, as are other combinations: RAID-1, -4, and -5 can be put on top of other RAID-1, -4, or -5 devices, or over the linear or striped personalities, but linear and striping over RAID-1, -4, or -5 are not supported.

Please note that many of the 2.1.x series development kernels have problems; for these, the latest RAID patches from the ALPHA directory need to be applied.

Hot-Plug Support
Linux supports "hot-plug" SCSI in the sense that SCSI devices can be removed and added without rebooting the machine. The procedures for this are documented in the SCSI Programming HOWTO. From the command-line, the commands are
echo "scsi remove-single-device host channel ID LUN" > /proc/scsi/scsi
echo "scsi add-single-device host channel ID LUN" > /proc/scsi/scsi
Don't confuse this ability with the hot-plug support offered by vendors of outboard raid boxes.
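As a concrete illustration, the swap sequence for a single failed drive might look like the following. The four bus numbers are hypothetical; substitute the values for your own host adapter, channel, SCSI ID, and LUN.

```shell
# Build the exact strings written into /proc/scsi/scsi, for a hypothetical
# disk at host 0, channel 0, SCSI ID 3, LUN 0.
scsi_cmd() {    # $1 = remove|add, then: host, channel, ID, LUN
    echo "scsi $1-single-device $2 $3 $4 $5"
}
# As root, these strings would be redirected into /proc/scsi/scsi:
scsi_cmd remove 0 0 3 0     # detach the failed drive
# ...physically swap the drive here...
scsi_cmd add    0 0 3 0     # attach the replacement
```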

Disk Management
There currently does not seem to be any coherent, unified scheme for disk and RAID management in Linux. There is a mish-mash of unrelated diagnostic tools.

Hard-drive errors (which typically occur when a disk is failing or has failed) are logged via the syslog daemon. You need to configure a log-checking utility (such as 'logcheck') to extract the important messages from the chaff of other system activity. I don't know of any package that will keep statistics or report the frequency of intermittent failures.
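A minimal sketch of such a filter follows; the log path and error patterns are assumptions, since kernel wording varies from version to version, so tune the pattern list for your own system.

```shell
# Pull likely disk-error lines out of a syslog-style file.  The patterns
# here are illustrative guesses, not an exhaustive list.
disk_errors() {
    grep -E -i 'I/O error|scsi.*error|status=0x|irq timeout' "$1"
}

# Hypothetical sample log to demonstrate the filter (real logs would be
# /var/log/messages or wherever syslog.conf sends kernel messages):
cat > messages.sample <<'EOF'
Nov 10 03:14:15 host kernel: hda: irq timeout: status=0x58
Nov 10 03:14:16 host sendmail[42]: mail queue empty
Nov 10 03:14:17 host kernel: scsi0: medium error, dev sda, sector 12345
EOF
disk_errors messages.sample    # prints the two kernel disk-error lines
```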

You can view the status of RAID by performing a cat /proc/mdstat. You can fiddle with IDE parameters (at the risk of making your system unusable) with hdparm, and run IDE diagnostics with ide-smart. Benchmarks include bonnie, among others. Volume management can be done with the LVM tools. But there is no complete, unified, graphical suite that I know of.
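As one small example of what such a unified tool could automate, a degraded-array check against /proc/mdstat can be scripted: in a status string such as [UU_], an underscore marks a failed or missing member disk. The mdstat excerpts below are simplified, hypothetical samples.

```shell
# Report DEGRADED if any md status string like [UU_] contains a '_'
# (failed/missing member); otherwise report OK.  $1 is the file to scan
# (normally /proc/mdstat).
check_md() {
    if grep -q '\[U*_[U_]*\]' "$1"; then echo DEGRADED; else echo OK; fi
}

# Simplified, hypothetical mdstat excerpts to demonstrate:
printf 'md0 : active raid5 sdc1[2] sdb1[1] sda1[0]\n      [UUU]\n' > ok.sample
printf 'md0 : active raid5 sdb1[1] sda1[0]\n      [UU_]\n' > bad.sample
check_md ok.sample     # prints OK
check_md bad.sample    # prints DEGRADED
```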

Product Reviews

The following product reviews were submitted by readers of this web page. Note that little effort has been made to verify their accuracy or to filter out malicious submissions.

Manufacturer: DPT
Model Number: PM3334UW/2 (two-channel "SmartRAID IV")
Number of disks, total size, raid-config: Two RAID-5 groups, one on each SCSI channel, each array consisting of nine non-hot-swappable 9 GB disk drives. The ninth drive on each RAID group is designated as a "hot spare". One channel also includes a Quantum DLT 7000 backup tape device.
On-controller memory: 64 MB, as four non-parity, non-ECC, non-EDO 60 ns 16 MB single-sided 72-pin SIMMs
Months in use: 10 months in heavy use.
OS kernel version, vendor and vendor version: 2.0.32, RedHat Linux 5.0
Use (news spool, file server, web server?): File server (directories for developers)
Support (1=bad, 5=excellent or n/a didn't try support): 3
Performance (1=very dissatisfied 5=highly satisfied or n/a): 4
Reliability (1=terrible 5=great or n/a no opinion): 4
Installation (1=hard, 5=very easy) (includes s/w install issues): 3
Overall satisfaction (1 to 5): 4
Comments: Regarding DPT support: Try DPT's troubleshooting web pages first. DPT's support staff does respond to e-mail, typically within one to two working days, and they do sometimes come up with interesting suggestions for work-arounds to try. But in my admittedly limited experience with DPT support staff as an end-user, unless you're truly stuck you're more likely to find a work-around to your problems before they do.

Regarding DPT documentation: The SmartRAID IV User's Manual is better than some documentation I've seen, but like most documentation it's nearly useless if you encounter a problem. The documentation does not appear to be completely accurate as regards hot spare allocation. And unsurprisingly, the printed documentation does not cover Linux.

Regarding DPT PM3334UW/2 installation: The following combinations of SCSI adapters and motherboards did not work for us:

  • DPT PM3334UW/2 + Adaptec 2940UW on an Intel Advanced/Endeavor Pentium motherboard.
  • DPT PM3334UW/2 + Mylex BT-950R on an Intel Advanced/Endeavor Pentium motherboard.
  • DPT PM3334UW/2 + NCR 53c815 on an ASUS P5A-B Pentium motherboard.
The following combinations of adapters and motherboards did work for us:
  • DPT PM3334UW/2 + Mylex BT-950R on an ASUS TX97-E Pentium motherboard.
  • DPT PM3334UW/2 + Adaptec 2940UW on an ASUS TX97-XE Pentium motherboard.
  • DPT PM3334UW/2 + Mylex BT-950R on an ASUS P2B Pentium-II motherboard.

Symptoms of a non-working combination include the Windows-based DPT Storage Manager application reporting "Unable to connect to DPT Engine" or "There are no DPT HBAs on the local machine".

Regarding the DPT Storage Manager application: The Windows-based DPT Storage Manager application version 1.P6 must have all "options" installed, or it cannot run. Some variant of this application is required in order to build RAID groups.

The DPT Storage Manager application is dangerous--if you click on the wrong thing the application may immediately wipe out a RAID group, without confirmation and without hope of recovery. If you are adding a RAID group, you are advised to disconnect physically any other RAID groups on which you do not plan to operate, until you have finished working with the Storage Manager application. There is no Linux version of the Storage Manager application (or any other DPT run-time goodies) available at present.

Regarding Michael Neuffer's version 2.70a eata_dma.o driver for Linux: The eata_dma driver does appear to work, with the following minor problems:

  1. "Proc" information from, e.g., "cat /proc/scsi/eata_dma/1" is mostly incorrect; among other things, this makes it unlikely that one could detect an alarm condition from it.
  2. The driver does not properly support more than two RAID groups. If you have three RAID groups, you will not be able to use the last RAID group--a device name of the form /dev/sdXN--that the driver reports at boot time.
  3. The -c (check) option in mke2fs falsely reports problems; omit that option when making a file system on a RAID group.
Another eata device driver exists, eata.o (as opposed to eata_dma.o), written by Dario Ballabio, but at the time of this writing I have not tried the eata.o driver. In the Red Hat 5.0 distribution, the eata.o driver is present in /usr/src/linux/drivers/scsi/.

Miscellaneous issues: if a hot spare is available, experiments appear to show that it is not possible to detect when the hot spare has been deployed automatically as the result of a drive failure. If a hot spare is not available, then an audible alarm sounds (earsplittingly) when a drive fails.

Author: (yourname and email or anonymous) Jerry Sweet
Date: November 10, 1998

Manufacturer: DPT
Model Number: 3334UW
Number of disks, total size, raid-config: 3 x 9 GB => 17 GB (RAID 5)
On-controller memory: 64 MB, parity, non-ECC
Months in use: 3 months, 2 weeks in heavy use
OS kernel version, vendor and vendor version: 2.0.30, RedHat Linux 4.2
Use (news spool, file server, web server?): File server (home directories)
Support (1=bad, 5=excellent or n/a didn't try support): n/a
Performance (1=very dissatisfied 5=highly satisfied or n/a): 4
Reliability (1=terrible 5=great or n/a no opinion): 4
Installation (1=hard, 5=very easy) (includes s/w install issues): 3
Overall satisfaction (1 to 5): 4
Comments: Works nicely, and installation was easy enough in DOS; they even include a Linux icon now. What I would really benefit from is dynamic partitioning a la AIX, but that is a file-system matter as well.

If the kernel crashes on mkfs.ext2 right after boot, try generating some traffic on the disk (dd if=/dev/sdb of=/dev/null bs=512 count=100) before making the file system. (Thanks Mike!) (ed note: this is a well known Linux 2.0.30 bug; try using 2.0.29 instead).

Author: (yourname and email or anonymous) Oskari Jääskeläinen
Date: October 1997


The following figures were submitted by interested readers. No effort has been made to verify their accuracy, or the test methodology. These figures might be incorrect or misleading. Use at your own risk!
The following has been submitted by DiLog, the manufacturer:
Linux 2.1.58 with an Adaptec 2940UW card, two IBM DCAS drives, and the DiLog 2XFR:

   -------Sequential Output-------- ---Sequential Input--
   -Per Char- --Block--- -Rewrite-- -Per Char- --Block---
   K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU
    8392 99.0 13533 61.2  5961 48.9  8124 96.4 15433 54.3

Same conditions, one drive only:

   -------Sequential Output-------- ---Sequential Input--
   -Per Char- --Block--- -Rewrite-- -Per Char- --Block---
   K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU
    6242 72.2  7248 32.4  3491 25.1  7356 84.2  7864 25.2

John Morris has submitted the following:

The following are comparisons of hardware and software RAID performance. The test machine is a dual-P2, 300MHz, with 512MB RAM, a BusLogic Ultra-wide SCSI controller, a DPT 3334UW SmartRAID IV controller w/64MB cache, and a bunch of Seagate Barracuda 4G wide-SCSI disks.

These are very impressive figures, highlighting the strength of software RAID!

              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU

(DPT hardware RAID5, 3 disks)
DPT3x4G  1000  1914 20.0  1985  2.8  1704  6.5  5559 86.7 12857 15.6 97.1  1.8

(Linux soft RAID5, 3 disks)
SOF3x4G  1000  7312 76.2 10908 15.5  5757 20.2  5434 86.4 14728 19.9 69.3  1.5

(DPT hardware RAID5, 6 disks)
DPT6x4G  1000  2246 23.4  2371  3.4  1890  7.1  5610 87.3  9381 10.9 112.1  1.9

(Linux soft RAID5, 6 disks)
SOF6x4G  1000  7530 76.8 16991 32.0  7861 39.9  5763 90.7 23246 49.6 145.4  3.7

(I didn't test DPT RAID5 w/8 disks because the disks kept failing,
even though it was the exact same SCSI chain as the soft RAID5, which
returned no errors; please interpolate!)

(Linux soft RAID5, 8 disks)
SOF8x4G  1000  7642 77.2 17649 33.0  8207 41.5  5755 90.6 22958 48.3 160.5  3.7

(Linux soft RAID0, 8 disks)
SOF8x4G  1000  8506 86.1 27122 54.2 11086 58.9  6077 95.9 27436 62.9 185.3  4.9

Tomas Pospisek maintains additional benchmarks on his Benchmarks page.

Ram Samudrala reports the following:

Here's the output of the Bonnie program on a DPT 2144UW with 16 MB of cache and three 9 GB disks in a RAID-5 setup. The machine is a dual-processor Pentium Pro running Linux 2.0.32. For comparison, the Bonnie results for the IDE drive on that machine are also given, as are some hardware-RAID figures for a Mylex controller on a DEC OSF/1 machine (KSPAC) with a RAID of twelve 9 GB disks. (Note that the test size is rather small; at 100 MB, it tests memory performance as well as disk.)

    -------Sequential Output-------- ---Sequential Input--  --Random--          
    -Per Char- --Block--- -Rewrite-- -Per Char- --Block---  --Seeks---          
 MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU   /sec  %CPU Machine 
100  3277 32.0  6325 23.5  2627 18.3  4818 44.8 59697 88.0  575.9  16.3 IDE
100  9210 96.8  1613  5.9   717  5.8  3797 36.1 90931 96.8 4648.2 159.2 DPT RAID
100  5384 42.3  5780 18.7  5287 42.1 12438 87.2 62193 83.9 4983.0  65.5 Mylex RAID


Last updated August 2002 by Linas Vepstas

Copyright (c) 1996-1999, 2001-2002 Linas Vepstas.

Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.1; with no Invariant Sections, with no Front-Cover Texts, and with no Back-Cover Texts. A copy of the license is included on the web page titled "GNU Free Documentation License".

The phrase 'Enterprise Linux' is a trademark of Linas Vepstas.
All trademarks on this page are property of their respective owners.
Return to the Enterprise Linux(TM) Page
Go Back to Linas' Home Page