23.15.3. Create Software RAID
Note — Software RAID is unnecessary on System z
On System z, the storage subsystem uses RAID transparently. There is no need to set up a software RAID.
Redundant arrays of independent disks (RAIDs) are constructed from multiple storage devices that are arranged to provide increased performance and — in some configurations — greater fault tolerance. Refer to the Red Hat Enterprise Linux Deployment Guide for a description of different kinds of RAIDs.
To make a RAID device, you must first create software RAID partitions. Once you have created two or more software RAID partitions, select RAID to join them into a RAID device.
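For a scripted (kickstart) installation, the same two-step flow can be expressed directly. The following is a minimal sketch rather than a recipe from this guide: the disk names (sda, sdb), partition sizes, mount point, and device name are assumptions to adapt to your system.

    # Step 1: create one software RAID member partition on each disk
    part raid.01 --size=8192 --ondisk=sda
    part raid.02 --size=8192 --ondisk=sdb
    # Step 2: join the two members into a single RAID 1 device
    raid /home --fstype=ext4 --level=RAID1 --device=md0 raid.01 raid.02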
RAID Partition
Choose this option to configure a partition for software RAID. This option is the only choice available if your disk contains no software RAID partitions. This is the same dialog that appears when you add a standard partition — refer to Section 23.15.2, “Adding Partitions” for a description of the available options. Note, however, that File System Type must be set to software RAID.
Figure 23.31. Create a software RAID partition
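In kickstart terms, a partition whose File System Type is software RAID corresponds to a part line whose mount point field is a raid.<id> label rather than a directory. A minimal sketch, with the disk name and size assumed:

    # raid.01 in place of a mount point marks this partition as a
    # software RAID member for later assembly
    part raid.01 --size=1024 --grow --ondisk=sda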
RAID Device
Choose this option to construct a RAID device from two or more existing software RAID partitions. This option is available if two or more software RAID partitions have been configured.
Figure 23.32. Create a RAID device
Select the file system type as for a standard partition.
Anaconda automatically suggests a name for the RAID device, but you can manually select a name from md0 through md15.
Click the checkboxes beside individual storage devices to include or remove them from this RAID.
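These three choices (file system type, device name, and member partitions) map onto a single kickstart raid line. The following is a sketch under assumed names; each raid.NN member is assumed to be declared by its own part line, and --spares=1 reserves one member as a hot spare:

    # ext4 file system on device md0, assembled from three members,
    # one of which is held back as a hot spare
    raid /home --fstype=ext4 --level=RAID1 --device=md0 --spares=1 raid.01 raid.02 raid.03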
The RAID Level corresponds to a particular type of RAID. Choose from the following options; a kickstart sketch covering these levels follows the list:
RAID 0 — distributes data across multiple storage devices. Level 0 RAIDs offer increased performance over standard partitions, and can be used to pool the storage of multiple devices into one large virtual device. Note that Level 0 RAIDs offer no redundancy and that the failure of one device in the array destroys the entire array. RAID 0 requires at least two RAID partitions.
RAID 1 — mirrors the data on one storage device onto one or more other storage devices. Additional devices in the array provide increasing levels of redundancy. RAID 1 requires at least two RAID partitions.
RAID 4 — distributes data across multiple storage devices, but uses one device in the array to store parity information that safeguards the array in case any device within the array fails. Because all parity information is stored on a single device, access to that device creates a performance bottleneck for the array. RAID 4 requires at least three RAID partitions.
RAID 5 — distributes data and parity information across multiple storage devices. Level 5 RAIDs therefore offer the performance advantages of distributing data across multiple devices, but do not share the performance bottleneck of level 4 RAIDs because the parity information is also distributed through the array. RAID 5 requires at least three RAID partitions.
RAID 6 — level 6 RAIDs are similar to level 5 RAIDs, but instead of storing only one set of parity data, they store two sets. RAID 6 requires at least four RAID partitions.
RAID 10 — level 10 RAIDs are nested RAIDs or hybrid RAIDs. Level 10 RAIDs are constructed by distributing data over mirrored sets of storage devices. For example, a level 10 RAID constructed from four RAID partitions consists of two pairs of partitions in which one partition mirrors the other. Data is then distributed across both pairs of storage devices, as in a level 0 RAID. RAID 10 requires at least four RAID partitions.
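As a hedged illustration, each of these levels can be requested the same way in kickstart. The mount points, device names, and member labels below are assumptions, with member counts chosen to match each level's minimum; every raid.NN member must have a matching part line:

    # RAID 0: striping, no redundancy (two members minimum)
    raid /data --fstype=ext4 --level=RAID0 --device=md1 raid.11 raid.12
    # RAID 1: mirroring (two members minimum)
    raid /home --fstype=ext4 --level=RAID1 --device=md2 raid.21 raid.22
    # RAID 4: striping with dedicated parity (three members minimum)
    raid /srv --fstype=ext4 --level=RAID4 --device=md3 raid.31 raid.32 raid.33
    # RAID 5: striping with distributed parity (three members minimum)
    raid /usr/local --fstype=ext4 --level=RAID5 --device=md4 raid.41 raid.42 raid.43
    # RAID 6: striping with two sets of parity (four members minimum)
    raid /var --fstype=ext4 --level=RAID6 --device=md5 raid.51 raid.52 raid.53 raid.54
    # RAID 10: striped mirrors (four members minimum)
    raid /opt --fstype=ext4 --level=RAID10 --device=md6 raid.61 raid.62 raid.63 raid.64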
RAID Clone
Choose this option to set up a RAID mirror of an existing disk. This option is available if two or more disks are attached to the system.