2.3 Soft RAID Configuration
The purpose of RAID (redundant array of independent disks) is to combine
several hard disk partitions into one large virtual hard disk to
optimize performance, data security, or both. Most RAID controllers use
the SCSI protocol, because it can address a larger number of hard disks
more effectively than the IDE protocol and is more suitable for parallel
processing of commands. There are some RAID controllers that support IDE
or SATA hard disks. Soft RAID provides the advantages of RAID systems
without the additional cost of hardware RAID controllers. However, it
costs some CPU time and has memory requirements that make it unsuitable
for truly high-performance computers.

openSUSE® offers the option of combining several hard disks into one
soft RAID system with the help of YaST. RAID is an umbrella term for
several strategies of combining hard disks into a RAID system, each with
different goals, advantages, and characteristics. These variations are
commonly known as RAID levels.
Common RAID levels are:
- RAID 0
  This level improves the performance of your data access by spreading
  out blocks of each file across multiple disk drives. Strictly
  speaking, this is not really a RAID, because it provides no
  redundancy, but the name RAID 0 for this type of system has become
  the norm. With RAID 0, two or more hard disks are pooled together.
  The performance is very good, but the whole array is destroyed and
  your data lost if even one hard disk fails.
- RAID 1
  This level provides adequate security for your data, because the data
  is copied to another hard disk 1:1. This is known as hard disk
  mirroring. If a disk is destroyed, a copy of its contents is
  available on its mirror. All disks except one could be damaged
  without endangering your data. However, if damage goes undetected,
  corrupted data may also be mirrored to the intact disk, corrupting
  both copies. The writing performance suffers a little compared to
  single disk access (10 to 20 % slower) because the data must be
  copied, but read access is significantly faster than from any single
  physical hard disk, because the duplicated data can be read from both
  disks in parallel. Generally, Level 1 provides nearly twice the read
  transaction rate of a single disk and almost the same write
  transaction rate.
- RAID 2 and RAID 3
  These are not typical RAID implementations. Level 2 stripes data at
  the bit level rather than the block level. Level 3 provides
  byte-level striping with a dedicated parity disk and cannot service
  simultaneous multiple requests. Both levels are only rarely used.
- RAID 4
  Level 4 provides block-level striping, just like Level 0, combined
  with a dedicated parity disk. In the case of a data disk failure, the
  parity data is used to rebuild the contents onto a replacement disk.
  However, the dedicated parity disk may create a bottleneck for write
  access. Nevertheless, Level 4 is sometimes used.
- RAID 5
  RAID 5 is an optimized compromise between Level 0 and Level 1 in
  terms of performance and redundancy. The usable hard disk space
  equals the number of disks used minus one. As with RAID 0, the data
  is distributed over the hard disks. Parity blocks, distributed across
  the member disks, provide the redundancy. They are computed by
  combining the corresponding data blocks with XOR, so the contents of
  any one failed disk can be reconstructed from the remaining data and
  parity blocks (see the short demonstration after this list). With
  RAID 5, no more than one hard disk can fail at the same time. If one
  hard disk fails, it must be replaced as soon as possible to avoid the
  risk of losing data.
- Other RAID Levels
  Several other RAID levels have been developed (RAIDn, RAID 10,
  RAID 0+1, RAID 30, RAID 50, etc.), some of them being proprietary
  implementations created by hardware vendors. These levels are not
  very widespread, so they are not explained here.
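To make the parity relationship concrete, here is a minimal shell
sketch of the XOR arithmetic behind RAID 5. The byte values are made up
for illustration and do not correspond to any real array:

  # d1 and d2 are data bytes; p is the parity byte on a third disk.
  d1=$(( 0x4A )); d2=$(( 0x3C ))
  p=$(( d1 ^ d2 ))                                # parity = d1 XOR d2
  printf 'parity:       0x%02X\n' "$p"            # prints 0x76
  # If the disk holding d1 fails, XOR of the survivors restores it:
  printf 'recovered d1: 0x%02X\n' $(( p ^ d2 ))   # prints 0x4A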
2.3.1 Soft RAID Configuration with YaST
The YaST soft RAID configuration can be reached from the YaST Expert
Partitioner, described in Section 2.1, Using the YaST Partitioner. This
partitioning tool enables you to edit and delete existing partitions and
create new ones to be used with soft RAID. There, create RAID partitions
as follows:
- Select a hard disk from Hard Disks.
- Change to the Partitions tab.
- Click Add and enter the desired size of the RAID partition on this
  disk.
- Use Do not Format the Partition and change the File System ID to
  0xFD Linux RAID. Do not mount this partition.
- Repeat this procedure until you have defined all the desired physical
  volumes on the available disks.
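The same preparation can also be scripted. The following sketch uses
parted on two example disks (/dev/sdb and /dev/sdc) with an assumed
20 GiB partition size; on GPT disks, the raid flag takes the place of
the MBR partition type 0xFD:

  # WARNING: destructive; adjust device names and sizes to your system.
  parted -s /dev/sdb mklabel gpt mkpart raid1 1MiB 20GiB set 1 raid on
  parted -s /dev/sdc mklabel gpt mkpart raid2 1MiB 20GiB set 1 raid on
  # Do not create a file system on or mount these partitions.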
For RAID 0 and RAID 1, at least two partitions are needed; for
RAID 1, usually exactly two and no more. If RAID 5 is used, at least
three partitions are required. It is recommended to use only partitions
of the same size. The RAID partitions should be located on different
hard disks to decrease the risk of losing data if one is defective
(RAID 1 and 5) and to optimize the performance of RAID 0. After
creating all the partitions to use with RAID, click RAID > Add RAID to
start the RAID configuration.
In the next dialog, choose between RAID levels 0, 1, and 5. Then, select
all partitions with either the Linux RAID or Linux
native type that should be used by the RAID system. No swap or
DOS partitions are shown.
To add a previously unassigned partition to the selected RAID volume,
first click the partition, then Add. Assign all partitions reserved for
RAID; otherwise, the space on those partitions remains unused. After
assigning all partitions, click Next to select the available RAID
options.
In the last step, set the file system to use as well as encryption and
the mount point for the RAID volume. After completing the configuration
with Finish, the /dev/md0 device and others are indicated with RAID in
the expert partitioner.
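The whole YaST flow has a command-line equivalent in mdadm. A minimal
sketch, assuming the two example partitions prepared above, a RAID 1
array, and an ext4 file system mounted at /mnt/raid (all of these names
are illustrative):

  # Assemble a RAID 1 array from the two prepared partitions.
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
  mkfs.ext4 /dev/md0            # create a file system on the array
  mkdir -p /mnt/raid
  mount /dev/md0 /mnt/raid      # mount the RAID volume
  # Record the array so it is assembled automatically at boot:
  mdadm --detail --scan >> /etc/mdadm.conf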
2.3.2 Troubleshooting
Check the file /proc/mdstat to find out whether a
RAID partition has been damaged. In the event of a system failure, shut
down your Linux system and replace the defective hard disk with a new one
partitioned the same way. Then restart your system and enter the command
mdadm /dev/mdX --add /dev/sdX. Replace 'X' with your
particular device identifiers. This integrates the hard disk
automatically into the RAID system and fully reconstructs it.
Note that although you can access all data during the rebuild, you may
encounter some performance issues until the RAID has been fully rebuilt.
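During the rebuild, you can follow the progress with the standard md
tools; /dev/md0 is used here as an example device name:

  watch -n1 cat /proc/mdstat    # live view of the resync progress
  mdadm --detail /dev/md0       # array state, failed devices, rebuild %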