22.3.2. Creating a Multipath Device With mdadm
In addition to creating RAID arrays, mdadm can also be used to take advantage of hardware supporting more than one I/O path to individual SCSI LUNs (disk drives). The goal of multipath storage is continued data availability in the event of hardware failure or saturation of an individual path. Because this configuration contains multiple paths (each acting as an independent virtual controller) accessing a common SCSI LUN (disk drive), the Linux kernel detects each shared drive once "through" each path. In other words, the SCSI LUN (disk drive) known as /dev/sda may also be accessible as /dev/sdb, /dev/sdc, and so on, depending on the specific configuration.
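One way to confirm that several device nodes are really different paths to the same LUN is to compare the unit serial numbers reported by each path. As a minimal sketch, assuming a reasonably recent sg3_utils package is installed, the sg_inq utility can read the unit serial number VPD page (0x80) from each device node; matching serial numbers indicate a shared LUN:

# sg_inq --vpd --page=0x80 /dev/sda
# sg_inq --vpd --page=0x80 /dev/sdb

If both commands report the same serial number, /dev/sda and /dev/sdb are two paths to the same drive.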
To provide a single device that can remain accessible if an I/O path fails or becomes saturated, mdadm includes an additional parameter to its level option. This parameter, multipath, directs the md layer in the Linux kernel to re-route I/O requests from one path to another in the event of an I/O path failure.
To create a multipath device, edit the /etc/mdadm.conf file to define values for the DEVICE and ARRAY lines that reflect your hardware configuration.
Note
Unlike the previous RAID example (where each device specified in /etc/mdadm.conf must represent a different physical disk drive), each device in this file refers to the same shared disk drive.
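As an illustration, a minimal /etc/mdadm.conf for the four-path configuration used in this section might look like the following; the device names and UUID are taken from the example output below and must be replaced with values matching your hardware:

DEVICE /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
ARRAY /dev/md0 level=multipath uuid=4b564608:fa01c716:550bd8ff:735d92dc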
The command used to create a multipath device is similar to that used to create a RAID device; the difference is the replacement of a RAID level parameter with the multipath parameter:
mdadm -C /dev/md0 --level=multipath --raid-devices=4 /dev/sda1 /dev/sdb1 \
/dev/sdc1 /dev/sdd1
Continue creating array? yes
mdadm: array /dev/md0 started.
Due to the length of the mdadm command line, it has been broken onto two lines; the trailing backslash lets the shell treat it as a single command.
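Once the array has started, its state can also be checked through /proc/mdstat. The output below is illustrative only; the exact layout varies with the kernel version:

# cat /proc/mdstat
Personalities : [multipath]
md0 : active multipath sdd1[0] sdb1[1] sdc1[2] sda1[3]
      3905408 blocks [1/1] [U]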
In this example, the hardware consists of one SCSI LUN presented as four separate SCSI devices, each accessing the same storage by a different pathway. Once the multipath device /dev/md0 is created, all I/O operations referencing /dev/md0 are directed to /dev/sda1, /dev/sdb1, /dev/sdc1, or /dev/sdd1 (depending on which path is currently active and operational).
The configuration of /dev/md0 can be examined more closely using the command mdadm --detail /dev/md0 to verify that it is, in fact, a multipath device:
/dev/md0:
        Version : 00.90.00
  Creation Time : Tue Mar  2 10:56:37 2004
     Raid Level : multipath
     Array Size : 3905408 (3.72 GiB 4.00 GB)
   Raid Devices : 1
  Total Devices : 4
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Tue Mar  2 10:56:37 2004
          State : dirty, no-errors
 Active Devices : 1
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 3

    Number   Major   Minor   RaidDevice State
       0       8       49        0      active sync   /dev/sdd1
       1       8       17        1      spare   /dev/sdb1
       2       8       33        2      spare   /dev/sdc1
       3       8        1        3      spare   /dev/sda1
           UUID : 4b564608:fa01c716:550bd8ff:735d92dc
         Events : 0.1
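Because the Persistence line above shows a persistent superblock, the multipath device does not need to be recreated after a reboot. With the DEVICE and ARRAY lines in /etc/mdadm.conf in place, it can be reassembled by name; the message shown here is illustrative:

# mdadm --assemble /dev/md0
mdadm: /dev/md0 has been started with 1 drive and 3 spares.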
Another feature of mdadm is the ability to force a device (be it a member of a RAID array or a path in a multipath configuration) to be removed from an operating configuration. In the following example, /dev/sda1 is flagged as faulty, is then removed, and is finally added back into the configuration. For a multipath configuration, these actions would not affect any I/O activity taking place at the time:
# mdadm /dev/md0 -f /dev/sda1
mdadm: set /dev/sda1 faulty in /dev/md0
# mdadm /dev/md0 -r /dev/sda1
mdadm: hot removed /dev/sda1
# mdadm /dev/md0 -a /dev/sda1
mdadm: hot added /dev/sda1
#
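Path failures like the one simulated above can also be reported automatically. mdadm includes a monitor mode that watches md devices and raises events when a device fails or is removed; as a sketch, the following watches /dev/md0 at 60-second intervals and mails event reports to root (the interval and address are arbitrary choices):

# mdadm --monitor --mail=root --delay=60 /dev/md0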