RAID supports various configurations, including levels 0, 1, 4, 5, and
linear. These RAID types are defined as follows:
Level 0 — RAID level 0, often called
"striping," is a performance-oriented striped data mapping
technique. The data being written to the array is broken down into
strips and written across the member disks of the array, allowing
high I/O performance at low inherent cost, but it provides no
redundancy. The storage capacity of a level 0 array is
equal to the total capacity of the member disks in a Hardware RAID
or the total capacity of member partitions in a Software RAID.
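The striping technique can be sketched in a few lines of Python.
This is an illustration only, not the kernel's implementation; the
4-byte strip size and three-member array are arbitrary assumptions
(real arrays use far larger strips, such as 64 KiB):

    # Illustrative RAID 0 sketch: split data into fixed-size strips
    # and distribute them round-robin across the member disks.
    CHUNK = 4      # strip size in bytes (assumed; real strips are larger)
    DISKS = 3      # number of member disks (assumed)

    def stripe(data, chunk=CHUNK, disks=DISKS):
        strips = [data[i:i + chunk] for i in range(0, len(data), chunk)]
        layout = [[] for _ in range(disks)]
        for n, s in enumerate(strips):
            layout[n % disks].append(s)   # strip n goes to disk n mod disks
        return layout

    for disk, strips in enumerate(stripe(b"abcdefghijklmnopqrstuvwx")):
        print(f"disk {disk}: {strips}")

Because consecutive strips land on different disks, a large request
can be serviced by all members at once, which is the source of the
performance benefit.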
Level 1 — RAID level 1, or "mirroring,"
has been used longer than any other form of RAID. Level 1 provides
redundancy by writing identical data to each member disk of the
array, leaving a "mirrored" copy on each disk. Mirroring remains
popular due to its simplicity and high level of data availability.
Level 1 operates with two or more disks that may use parallel
access for high data-transfer rates when reading, but that more
commonly operate independently to provide high I/O transaction
rates. Level 1 provides very good data reliability and improves
performance for read-intensive applications, but at a relatively
high cost. [1]
The storage capacity of the level 1 array is equal to the
capacity of one of the mirrored hard disks in a Hardware RAID or
one of the mirrored partitions in a Software RAID.
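The mirroring idea can be sketched as follows in Python. The
in-memory dictionaries standing in for member disks and the helper
names are illustrative assumptions, not actual driver code:

    # Illustrative RAID 1 sketch: every write goes to all members;
    # a read can be satisfied by any surviving member.
    disks = [dict(), dict()]          # two mirrored "disks" (block -> data)

    def write(block, data):
        for disk in disks:            # identical data lands on every member
            disk[block] = data

    def read(block, failed=()):
        for n, disk in enumerate(disks):
            if n not in failed:       # any intact mirror can serve the read
                return disk[block]
        raise IOError("all mirrors have failed")

    write(0, b"root filesystem data")
    print(read(0, failed={0}))        # still readable after disk 0 fails

Reads can also be spread across the mirrors, which is why
read-intensive workloads benefit.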
Level 4 — Level 4 uses parity [2] concentrated on a single disk
drive to protect data. It is better suited to transaction I/O than
to large file transfers. Because the dedicated parity disk
represents an
inherent bottleneck, level 4 is seldom used without accompanying
technologies such as write-back caching. Although RAID level 4 is
an option in some RAID partitioning schemes, it is not allowed in
Red Hat Enterprise Linux RAID installations. [3]
The storage capacity of Hardware RAID level 4 is equal to the
total capacity of the member disks, minus the capacity of one
member disk. The storage capacity of Software RAID level 4 is
equal to the total capacity of the member partitions, minus the
size of one partition, assuming they are of equal size.
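The dedicated-parity idea can be sketched as follows in Python.
The byte-wise XOR parity and the four-byte strips are illustrative
assumptions; the point is that parity is computed across the data
strips of a stripe and always stored on the same disk:

    # Illustrative RAID 4 sketch: parity is the byte-wise XOR of the
    # data strips and always lives on one dedicated parity disk.
    def xor(strips):
        out = bytes(len(strips[0]))
        for s in strips:
            out = bytes(a ^ b for a, b in zip(out, s))
        return out

    data_disks = [b"AAAA", b"BBBB", b"CCCC"]   # one stripe, three data disks
    parity_disk = xor(data_disks)              # the dedicated parity disk
    print(parity_disk)                         # b'@@@@'

    # Every write, no matter which data disk it touches, must also
    # update parity_disk -- hence the bottleneck described above.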
Level 5 — This is the most common type
of RAID. By distributing parity across some or all of an array's
member disk drives, RAID level 5 eliminates the write bottleneck
inherent in level 4. The only performance bottleneck is the parity
calculation process, which is usually insignificant with modern
CPUs and Software RAID. As with level 4, the result is
asymmetrical performance, with reads substantially outperforming
writes. Level 5 is often used with write-back caching to reduce
the asymmetry. The storage capacity of Hardware RAID level 5 is
equal to the total capacity of the member disks, minus the
capacity of one member disk. The storage capacity of Software RAID
level 5 is equal to the total capacity of the member partitions,
minus the size of one partition, assuming they are of equal size.
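The effect of distributing parity can be sketched in Python. The
rotation pattern below (parity shifting by one disk per stripe) is
an assumption chosen for illustration and need not match any
particular driver's layout:

    # Illustrative RAID 5 layout sketch: parity (P) rotates across
    # the members, so no single disk absorbs every parity write.
    DISKS = 4

    for stripe in range(6):
        parity = (DISKS - 1 - stripe) % DISKS   # assumed rotation rule
        row = ["P" if d == parity else "D" for d in range(DISKS)]
        print(f"stripe {stripe}: {' '.join(row)}")

Since every member holds parity for only a fraction of the
stripes, parity updates are spread evenly across the array.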
Linear RAID — Linear RAID is a simple
grouping of drives to create a larger virtual drive. In linear
RAID, the chunks are allocated sequentially from one member drive,
going to the next drive only when the first is completely filled.
This grouping provides no performance benefit, as it is unlikely
that any I/O operations will be split between member drives.
Linear RAID also offers no redundancy and, in fact, decreases
reliability: if any one member drive fails, the entire array
cannot be used. The storage capacity is the total capacity of all
member disks.
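The linear mapping can be sketched as follows in Python; the
member capacities are arbitrary assumptions. Note that any given
offset falls entirely on one member drive, which is why I/O is
rarely split between members:

    # Illustrative linear RAID sketch: member capacities are simply
    # concatenated, and an offset maps onto the first member that is
    # not yet exhausted.
    members = [40, 40, 80]    # capacities in GB (assumed)

    def locate(offset):
        for disk, size in enumerate(members):
            if offset < size:
                return disk, offset    # (member drive, offset within it)
            offset -= size
        raise ValueError("offset beyond array capacity")

    print(locate(10))    # (0, 10): still on the first drive
    print(locate(95))    # (2, 15): the first two drives are full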
[1] RAID level 1 comes at a high cost because you write the same
information to all of the disks in the array, which wastes
drive space. For example, if you have RAID level 1 set up so
that your root (/) partition exists on
two 40G drives, you have 80G total but are only able to access
40G of that 80G. The other 40G acts like a mirror of the first
40G.
[2] Parity information is calculated based on the contents of the
rest of the member disks in the array. This information can
then be used to reconstruct data when one disk in the array
fails. The reconstructed data can then be used to satisfy I/O
requests to the failed disk before it is replaced and to
repopulate the failed disk after it has been replaced.
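The reconstruction step can be sketched in Python, continuing the
XOR-parity illustration used above (four-byte strips and a
three-disk stripe are assumptions):

    # Illustrative reconstruction sketch: XORing the surviving strips
    # with the parity strip regenerates the failed disk's contents.
    def xor(strips):
        out = bytes(len(strips[0]))
        for s in strips:
            out = bytes(a ^ b for a, b in zip(out, s))
        return out

    stripe = [b"AAAA", b"BBBB", b"CCCC"]
    parity = xor(stripe)                 # stored when the stripe was written
    survivors = [stripe[0], stripe[2]]   # disk 1 has failed
    rebuilt = xor(survivors + [parity])  # XOR of the rest recovers it
    assert rebuilt == b"BBBB"
    print(rebuilt)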