Depending on your past system administration experience, managing
storage under Red Hat Enterprise Linux is either mostly familiar or completely foreign.
This section discusses aspects of storage administration specific to
Red Hat Enterprise Linux.
As with all Linux-like operating systems, Red Hat Enterprise Linux uses device files
to access all hardware (including disk drives). However, the naming
conventions for attached storage devices vary somewhat among Linux
and Linux-like implementations. Here is how these device files are
named under Red Hat Enterprise Linux.
Note
Device names under Red Hat Enterprise Linux are determined at boot-time.
Therefore, changes made to a system's hardware configuration can
result in device names changing when the system reboots. Because of
this, problems can result if any device name references in system
configuration files are not updated appropriately.
Under Red Hat Enterprise Linux, the device files for disk drives appear in the
/dev/ directory. The format for each file name
depends on several aspects of the actual hardware and how it has
been configured. The important points are as follows:
The first two letters of the device file name denote the device
type: hd for ATA-attached drives, and sd for SCSI-attached
drives.
Following the two-letter device type are one or two letters
denoting the specific unit. The unit designator starts with "a"
for the first unit, "b" for the second, and so on. Therefore, the
first hard drive on your system may appear as
hda or sda.
Tip
SCSI's ability to address large numbers of devices
necessitated the addition of a second unit character to support
systems with more than 26 SCSI devices attached. Therefore, the
first 26 SCSI hard drives on a system would be named
sda through sdz, the
next 26 would be named sdaa through
sdaz, and so on.
The final part of the device file name is a number
representing a specific partition on the device, starting with
"1." The number may be one or two digits in length, depending on
the number of partitions written to the specific device. Once the
format for device file names is known, it is easy to understand
what each refers to. Here are some examples:
/dev/hda1 — The first partition
on the first ATA drive
/dev/sdb12 — The twelfth
partition on the second SCSI drive
/dev/sdad4 — The fourth
partition on the thirtieth SCSI drive
There are instances where it is necessary to access the entire
device and not just a specific partition. This is normally done
when the device is not partitioned or does not support standard
partitions (such as a CD-ROM drive). In these cases, the
partition number is omitted:
/dev/hdc — The entire third ATA
device
/dev/sdb — The entire second
SCSI device
However, most disk drives use partitions (more information on
partitioning under Red Hat Enterprise Linux can be found in Section 5.9.6.1 Adding Storage).
Because adding or removing mass storage devices can result in
changes to the device file names for existing devices, there is a
risk of storage not being available when the system reboots. Here
is an example of the sequence of events leading to this
problem:
The system administrator adds a new SCSI controller so that
two new SCSI drives can be added to the system (the existing
SCSI bus is completely full)
The original SCSI drives (including the first drive on the
bus: /dev/sda) are not changed in any
way
The system is rebooted
The SCSI drive formerly known as
/dev/sda now has a new name, because the
first SCSI drive on the new controller is now
/dev/sda
In theory, this sounds like a terrible problem. In practice,
however, it rarely is, for a number of reasons. First, hardware
reconfigurations of this type happen rarely. Second, it is likely
that the system administrator has scheduled downtime to make the
necessary changes; downtimes require careful planning to ensure the
work being done does not take longer than the allotted time. This
planning has the side benefit of bringing to light any issues
related to device name changes.
However, some organizations and system configurations are more
likely to run into this issue. Organizations that require frequent
reconfigurations of storage to meet their needs often use hardware
capable of reconfiguration without requiring downtime. Such
hotpluggable hardware makes it easy to add or
remove storage. But under these circumstances device naming issues
can become a problem. Fortunately, Red Hat Enterprise Linux contains features that
make device name changes less of a problem.
Some file systems (which are discussed further in Section 5.9.2 File System Basics) have the ability to store a
label — a character string that can
be used to uniquely identify the data the file system contains.
Labels can then be used when mounting the file system,
eliminating the need to use the device name.
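For example, on an ext2 or ext3 file system, a label can be
assigned with the e2label utility and then used in place of the
device name when mounting. A brief sketch (the label and mount
point here are hypothetical):
e2label /dev/sda12 projdata
mount LABEL=projdata /mnt/projdata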
File system labels work well; however, file system labels must
be unique system-wide. If there is ever more than one file system
with the same label, you may not be able to access the file system
you intended to. Also note that system configurations which do
not use file systems (some databases, for example) cannot take
advantage of file system labels.
The devlabel software attempts to address
the device naming issue in a different manner than file system
labels. The devlabel software is run by Red Hat Enterprise Linux
whenever the system reboots (and whenever hotpluggable devices are
inserted or removed).
When devlabel runs, it reads its
configuration file (/etc/sysconfig/devlabel)
to obtain the list of devices for which it is responsible. For
each device on the list, there is a symbolic link (chosen by the
system administrator) and the device's UUID (Universal Unique
IDentifier).
The devlabel command makes sure the
symbolic link always refers to the originally-specified device
— even if that device's name has changed. In this way, a
system administrator can configure a system to refer to
/dev/projdisk instead of
/dev/sda12, for example.
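As a sketch, such a mapping might be created with a command along
the following lines (the exact syntax depends on the
devlabel version; the device and symbolic link names here are
hypothetical):
devlabel add -d /dev/sda12 -s /dev/projdisk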
Because the UUID is obtained directly from the device,
devlabel must only search the system for the
matching UUID and update the symbolic link appropriately.
For more information on devlabel, refer to
the Red Hat Enterprise Linux System Administration Guide.
Red Hat Enterprise Linux includes support for many popular file systems, making it
possible to easily access the file systems of other operating
systems.
This is particularly useful in dual-boot scenarios and when
migrating files from one operating system to another.
The supported file systems include (but are not limited
to):
EXT2
EXT3
NFS
ISO 9660
MSDOS
VFAT
The following sections explore the more popular file systems in
greater detail.
Until recently, the ext2 file system had been the standard file
system for Linux. As such, it has received extensive testing and is
considered one of the more robust file systems in use today.
However, there is no perfect file system, and ext2 is no
exception. One problem that is commonly reported is that an ext2
file system must undergo a lengthy file system integrity check if
the system was not cleanly shut down. While this requirement is not
unique to ext2, the popularity of ext2, combined with the advent of
larger disk drives, meant that file system integrity checks were
taking longer and longer. Something had to be done.
The next section describes the approach taken to resolve this
issue under Red Hat Enterprise Linux.
The ext3 file system builds upon ext2 by adding journaling
capabilities to the already-proven ext2 codebase. As a journaling
file system, ext3 always keeps the file system in a consistent
state, eliminating the need for lengthy file system integrity
checks.
This is accomplished by writing all file system changes to an
on-disk journal, which is then flushed on a regular basis. After an
unexpected system event (such as a power outage or system crash),
the only operation that needs to take place prior to making the
file system available is to process the contents of the journal; in
most cases this takes approximately one second.
Because ext3's on-disk data format is based on ext2, it is
possible to access an ext3 file system on any system capable of
reading and writing an ext2 file system (without the benefit of
journaling, however). This can be a sizable benefit in
organizations where some systems are using ext3 and some are still
using ext2.
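Because of this shared format, an existing ext2 file system can
also be converted to ext3 in place by adding a journal with the
tune2fs utility. A brief sketch, assuming a hypothetical
/dev/hdb2:
tune2fs -j /dev/hdb2
The file system's /etc/fstab entry would then be updated to
specify a file system type of ext3.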
In 1987, the International Organization for Standardization
(known as ISO) released standard 9660. ISO 9660 defines how files
are represented on CD-ROMs. Red Hat Enterprise Linux system administrators will likely
see ISO 9660-formatted data in two places:
CD-ROMs
Files (usually referred to as ISO
images) containing complete ISO 9660 file systems,
meant to be written to CD-R or CD-RW media
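As an aside, such an image file can usually be inspected without
writing it to media by mounting it through the loopback facility.
A brief sketch, with hypothetical paths:
mount -o ro,loop /tmp/disc.iso /mnt/iso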
The basic ISO 9660 standard is rather limited in functionality,
especially when compared with more modern file systems. File names
may be a maximum of eight characters long and an extension of no
more than three characters is permitted. However, various
extensions to the standard have become popular over the years, among
them:
Rock Ridge — Uses some fields undefined in ISO 9660 to
provide support for features such as long mixed-case file names,
symbolic links, and nested directories (in other words,
directories that can themselves contain other
directories)
Joliet — An extension of the ISO 9660 standard, developed
by Microsoft to allow CD-ROMs to contain long file names, using
the Unicode character set
Red Hat Enterprise Linux is able to correctly interpret ISO 9660 file systems using
both the Rock Ridge and Joliet extensions.
Red Hat Enterprise Linux also supports file systems from other operating systems.
As the name for the msdos file system implies, the original
operating system supporting this file system was Microsoft's
MS-DOS®. As in MS-DOS, a
Red Hat Enterprise Linux system accessing an msdos file system is limited to 8.3 file
names. Likewise, other file attributes such as permissions and
ownership cannot be changed. However, from a file interchange
standpoint, the msdos file system is more than sufficient to get the
job done.
The vfat file system was first used by Microsoft's Windows® 95 operating system. An
improvement over the msdos file system, file names on a vfat file
system may be longer than msdos's 8.3. However, permissions and
ownership still cannot be changed.
To access any file system, it is first necessary to
mount it. By mounting a file system, you
direct Red Hat Enterprise Linux to make a specific partition (on a specific device)
available to the system. Likewise, when access to a particular file
system is no longer desired, it is necessary to
unmount it.
To mount any file system, two pieces of information must be
specified:
A means of uniquely identifying the desired disk drive and
partition, such as device file name, file system label, or
devlabel-managed symbolic link
A directory under which the mounted file system is to be made
available (otherwise known as a mount
point)
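Put together, a typical mount command combines these two pieces of
information (the device and mount point here are hypothetical):
mount /dev/sdb1 /mnt/database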
The following section discusses mount points in more
detail.
Unless you are used to Linux (or Linux-like) operating systems,
the concept of a mount point will at first seem strange. However,
it is one of the most powerful and flexible methods of managing file
systems developed. With many other operating systems, a full file
specification includes the file name, some means of identifying the
specific directory in which the file resides, and a means of
identifying the physical device on which the file can be
found.
With Red Hat Enterprise Linux, a slightly different approach is used. As with
other operating systems, a full file specification includes the
file's name and the directory in which it resides. However, there
is no explicit device specifier.
The reason for this apparent shortcoming is the mount point. On
other operating systems, there is one directory hierarchy for each
partition. However, on Linux-like systems, there is only
one directory hierarchy system-wide and this
single hierarchy can span multiple partitions. The key is the mount
point. When a file system is mounted, that file system is made
available as a set of subdirectories under the specified mount
point.
This apparent shortcoming is actually a strength. It means that
seamless expansion of a Linux file system is possible, with every
directory capable of acting as a mount point for additional disk
space.
As an example, assume a Red Hat Enterprise Linux system contained a directory
foo in its root directory; the full path to the
directory would be /foo/. Next, assume that
this system has a partition that is to be mounted, and that the
partition's mount point is to be /foo/. If
that partition had a file by the name of
bar.txt in its top-level directory, after the
partition was mounted you could access the file with the following
full file specification:
/foo/bar.txt
In other words, once this partition has been mounted, any file
that is read or written anywhere under the
/foo/ directory will be read from or written to
that partition.
A commonly-used mount point on many Red Hat Enterprise Linux systems is
/home/ — that is because the login
directories for all user accounts are normally located under
/home/. If /home/ is used
as a mount point, all users' files are written to a dedicated
partition and will not fill up the operating system's file
system.
Tip
Since a mount point is just an ordinary directory, it is
possible to write files into a directory that is later used as a
mount point. If this happens, what happens to the files that were
in the directory originally?
For as long as a partition is mounted on the directory, the
files are not accessible (the mounted file system appears in place
of the directory's contents). However, the files will not be
harmed and can be accessed after the partition is
unmounted.
The /proc/mounts file is part of the proc
virtual file system. As with the other files under
/proc/, the mounts
"file" does not exist on any disk drive in your Red Hat Enterprise Linux
system.
In fact, it is not even a file; instead it is a representation
of system status made available (by the Linux kernel) in file
form.
Using the command cat /proc/mounts, we can
view the status of all mounted file systems:
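The output would look something like the following (illustrative
only; the exact entries depend on the system's configuration):
rootfs / rootfs rw 0 0
/dev/root / ext3 rw 0 0
/proc /proc proc rw 0 0
usbdevfs /proc/bus/usb usbdevfs rw 0 0
/dev/sda1 /boot ext3 rw 0 0
none /dev/pts devpts rw 0 0
/dev/sda4 /home ext3 rw 0 0
none /dev/shm tmpfs rw 0 0
none /proc/sys/fs/binfmt_misc binfmt_misc rw 0 0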
As we can see from the above example, the format of
/proc/mounts is very similar to that of
/etc/mtab. There are a number of file
systems mounted that have nothing to do with disk drives. Among
these are the /proc/ file system itself
(along with two other file systems mounted under
/proc/), pseudo-ttys, and shared
memory.
While the format is admittedly not very user-friendly, looking
at /proc/mounts is the best way to be 100%
sure of seeing what is mounted on your Red Hat Enterprise Linux system, as the kernel
is providing this information. Other methods can, under rare
circumstances, be inaccurate.
However, most of the time you will likely use a command with
more easily-read (and useful) output. The next section describes
that command.
While using /etc/mtab or
/proc/mounts lets you know what file systems
are currently mounted, it does little beyond that. Most of the
time you are more interested in one particular aspect of the file
systems that are currently mounted — the amount of free
space on them.
For this, we can use the df command. Here
is some sample output from df:
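The output would look something like this (illustrative only;
devices and sizes are hypothetical):
Filesystem           1k-blocks      Used Available Use% Mounted on
/dev/sda1               103069     33755     64052  35% /boot
/dev/sda2              1056800    256868    746244  26% /
/dev/sda4              8177556   2471788   5290328  32% /home
none                    522104         0    522104   0% /dev/shm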
Several differences from /etc/mtab and
/proc/mounts are immediately obvious:
An easy-to-read heading is displayed
With the exception of the shared memory file system, only
disk-based file systems are shown
Total size, used space, free space, and percentage in use
figures are displayed
That last point is probably the most important because every
system administrator eventually has to deal with a system that has
run out of free disk space. With df it is very
easy to see where the problem lies.
As the name implies, the Network File System (more commonly
known as NFS) is a file system that may be accessed via a network
connection. With other file systems, the storage device must be
directly attached to the local system. However, with NFS this is
not a requirement, making possible a variety of different
configurations, from centralized file system servers to entirely
diskless computer systems.
However, unlike the other file systems, NFS does not dictate a
specific on-disk format. Instead, it relies on the server operating
system's native file system support to control the actual I/O to
local disk drive(s). NFS then makes the file system available to
any operating system running a compatible NFS client.
While primarily a Linux and UNIX technology, it is worth noting
that NFS client implementations exist for other operating systems,
making NFS a viable technique to share files with a variety of
different platforms.
The file systems an NFS server makes available to clients are
controlled by the configuration file
/etc/exports. For more information, see the
exports(5) man page and the
Red Hat Enterprise Linux System Administration Guide.
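As a point of reference, a minimal /etc/exports entry might look
like this (the directory and client host name are hypothetical):
/home client.example.com(rw,sync)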
SMB stands for Server Message Block and
is the name for the communications protocol used by various
operating systems produced by Microsoft over the years. SMB makes
it possible to share storage across a network. Present-day
implementations often use TCP/IP as the underlying transport;
previously, NetBEUI was the transport.
Red Hat Enterprise Linux supports SMB via the Samba server program. The
Red Hat Enterprise Linux System Administration Guide includes information on configuring
Samba.
When a Red Hat Enterprise Linux system is newly-installed, all the disk partitions
defined and/or created during the installation are configured to be
automatically mounted whenever the system boots. However, what
happens when additional disk drives are added to a system after the
installation is done? The answer is "nothing" because the system was
not configured to mount them automatically. However, this is easily
changed.
The answer lies in the /etc/fstab file. This
file is used to control what file systems are mounted when the system
boots, as well as to supply default values for other file systems that
may be mounted manually from time to time. Here is a sample
/etc/fstab file:
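(Illustrative only; the labels, devices, and mount points vary from
system to system.)
LABEL=/         /            ext3         defaults               1 1
LABEL=/boot     /boot        ext3         defaults               1 2
none            /dev/pts     devpts       gid=5,mode=620         0 0
LABEL=/home     /home        ext3         defaults               1 2
none            /proc        proc         defaults               0 0
none            /dev/shm     tmpfs        defaults               0 0
/dev/sda2       swap         swap         defaults               0 0
/dev/cdrom      /mnt/cdrom   udf,iso9660  noauto,owner,kudzu,ro  0 0
/dev/fd0        /mnt/floppy  auto         noauto,owner,kudzu     0 0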
Each line represents one file system and contains the following
fields:
File system specifier — For disk-based file systems,
either a device file name
(/dev/sda1), a file system label
specification (LABEL=/), or a
devlabel-managed symbolic link
(/dev/homedisk)
Mount point — Except for swap partitions, this field
specifies the mount point to be used when the file system is
mounted (/boot)
File system type — The type of file system present on
the specified device (note that auto may be
specified to select automatic detection of the file system to be
mounted, which is handy for removable media units such as diskette
drives)
Mount options — A comma-separated list of options that
can be used to control mount's behavior
(noauto,owner,kudzu)
Dump frequency — If the dump backup
utility is used, the number in this field controls
dump's handling of the specified file
system
File system check order — Controls the order in which
the file system checker fsck checks the
integrity of the file systems
While most of the steps required to add or remove storage depend
more on the system hardware than the system software, there are
aspects of the procedure that are specific to your operating
environment. This section explores the steps necessary to add and
remove storage that are specific to Red Hat Enterprise Linux.
The process of adding storage to a Red Hat Enterprise Linux system is relatively
straightforward. Here are the steps that are specific to
Red Hat Enterprise Linux:
Partitioning
Formatting the partition(s)
Updating /etc/fstab
The following sections explore each step in more detail.
Once the disk drive has been installed, it is time to create
one or more partitions to make the space available to
Red Hat Enterprise Linux.
There is more than one way of doing this:
Using the command-line fdisk utility
program
Using parted, another command-line
utility program
Although the tools may be different, the basic steps are the
same. In the following example, the commands necessary to perform
these steps using fdisk are included:
Select the new disk drive (the drive's name can be
identified by following the device naming conventions outlined
in Section 5.9.1 Device Naming Conventions). Using
fdisk, this is done by including the device
name when you start fdisk:
fdisk /dev/hda
View the disk drive's partition table, to ensure that the
disk drive to be partitioned is, in fact, the correct
one. In our example, fdisk displays the
partition table by using the p
command:
Command (m for help): p

Disk /dev/hda: 255 heads, 63 sectors, 1244 cylinders
Units = cylinders of 16065 * 512 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/hda1   *         1        17    136521   83  Linux
/dev/hda2            18        83    530145   82  Linux swap
/dev/hda3            84       475   3148740   83  Linux
/dev/hda4           476      1244   6176992+  83  Linux
Delete any unwanted partitions that may already be present
on the new disk drive. This is done using the
d command in
fdisk:
Command (m for help): d
Partition number (1-4): 1
The process would be repeated for all unneeded partitions
present on the disk drive.
Create the new partition(s), being sure to specify the
desired size and file system type. Using
fdisk, this is a two-step process —
first, creating the partition (using the n
command):
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-767): 1
Last cylinder or +size or +sizeM or +sizeK: +512M
Second, by setting the file system type (using the
t command):
Command (m for help): t
Partition number (1-4): 1
Hex code (type L to list codes): 82
Partition type 82 represents a Linux swap partition.
Save your changes and exit the partitioning program. This
is done in fdisk by using the
w command:
Command (m for help): w
Warning
When partitioning a new disk drive, it is
vital that you are sure the disk drive you
are about to partition is the correct one. Otherwise, you may
inadvertently partition a disk drive that is already in use,
resulting in lost data.
Also make sure you have decided on the best partition size.
Always give this matter serious thought, because changing it
later is much more difficult than taking a bit of time now to
think things through.
Formatting partitions under Red Hat Enterprise Linux is done using the
mkfs utility program. However,
mkfs does not actually do the work of writing
the file-system-specific information onto a disk drive; instead it
passes control to one of several other programs that actually
create the file system.
This is the time to look at the
mkfs.<fstype>
man page for the file system you have selected. For example, look
at the mkfs.ext3 man page to see the options
available to you when creating a new ext3 file system. In
general, the
mkfs.<fstype>
programs provide reasonable defaults for most configurations;
however here are some of the options that system administrators
most commonly change:
Setting a volume label for later use in
/etc/fstab
On very large hard disks, setting a lower percentage of
space reserved for the super-user
Setting a non-standard block size and/or bytes per inode
for configurations that must support either very large or very
small files
Checking for bad blocks before formatting
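As an illustration, a single command might combine several of the
options above (the device name, label, and values here are
hypothetical):
mkfs.ext3 -L /home02 -m 1 -c /dev/sdb1
This would create an ext3 file system labeled /home02, reserve 1%
of its blocks for the super-user, and check for bad blocks before
formatting.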
Once file systems have been created on all the appropriate
partitions, the disk drive is properly configured for use.
Next, it is always best to double-check your work by manually
mounting the partition(s) and making sure everything is in order.
Once everything checks out, it is time to configure your Red Hat Enterprise Linux
system to automatically mount the new file system(s) whenever it
boots.
As outlined in Section 5.9.5 Mounting File Systems Automatically with /etc/fstab, you
must add the necessary line(s) to /etc/fstab
to ensure that the new file system(s) are mounted whenever the
system reboots. Once you have updated
/etc/fstab, test your work by issuing an
"incomplete" mount, specifying only the device
or mount point. Something similar to one of the following
commands is sufficient:
mount /home
mount /dev/hda3
(Replacing /home or
/dev/hda3 with the mount point or device for
your specific situation.)
If the appropriate /etc/fstab entry is
correct, mount obtains the missing information
from it and completes the mount operation.
At this point you can be relatively confident that
/etc/fstab is configured properly to
automatically mount the new storage every time the system boots
(although if you can afford a quick reboot, it would not hurt to
do so — just to be sure).
The process of removing storage from a Red Hat Enterprise Linux system is
relatively straightforward. Here are the steps that are specific to
Red Hat Enterprise Linux:
Remove the disk drive's partitions from
/etc/fstab
Unmount the disk drive's active partitions
Erase the contents of the disk drive
The following sections cover these topics in more detail.
Using the text editor of your choice, remove the line(s)
corresponding to the disk drive's partition(s) from the
/etc/fstab file. You can identify the proper
lines by one of the following methods:
Matching the partition's mount point against the
directories in the second column of
/etc/fstab
Matching the device's file name against the file names in
the first column of /etc/fstab
Tip
Be sure to look for any lines in
/etc/fstab that identify swap partitions on
the disk drive to be removed; they can be easily
overlooked.
Next, all access to the disk drive must be terminated. For
partitions with active file systems on them, this is done using
the umount command. If a swap partition exists
on the disk drive, it must either be deactivated with the
swapoff command, or the system should be
rebooted.
Unmounting partitions with the umount
command requires you to specify either the device file name, or
the partition's mount point:
umount /dev/hda2
umount /home
A partition can only be unmounted if it is not currently in
use. If the partition cannot be unmounted while at the normal
runlevel, boot into rescue mode and remove the partition's
/etc/fstab entry.
When using swapoff to disable swapping to a
partition, you must specify the device file name representing the
swap partition:
swapoff /dev/hda4
If swapping to a swap partition cannot be disabled using
swapoff, boot into rescue mode and remove the
partition's /etc/fstab entry.
Erasing the contents of a disk drive under Red Hat Enterprise Linux is a
straightforward procedure.
After unmounting all of the disk drive's partitions, issue the
following command (while logged in as root):
badblocks -ws <device-name>
Where
<device-name>
represents the file name of the disk drive you wish to erase,
excluding the partition number. For example,
/dev/hdb for the second ATA hard
drive.
The following output is displayed while
badblocks runs:
Writing pattern 0xaaaaaaaa: done
Reading and comparing: done
Writing pattern 0x55555555: done
Reading and comparing: done
Writing pattern 0xffffffff: done
Reading and comparing: done
Writing pattern 0x00000000: done
Reading and comparing: done
Keep in mind that badblocks is actually
writing four different data patterns to every block on the disk
drive. For large disk drives, this process can take a long time
— quite often several hours.
Important
Many companies (and government agencies) have specific
methods of erasing data from disk drives and other data storage
media. You should always be sure you
understand and abide by these requirements; in many cases there
are legal ramifications if you fail to do so. The example above
should in no way be considered the ultimate method of wiping a
disk drive.
However, it is much more effective than using the
rm command. That is because when you delete
a file using rm it only marks the file as
deleted — it does not erase the
contents of the file.
Red Hat Enterprise Linux is capable of keeping track of disk space usage on a
per-user and per-group basis through the use of disk quotas. The
following section provides an overview of the features present in disk
quotas under Red Hat Enterprise Linux.
Disk quotas under Red Hat Enterprise Linux can be used on a per-file-system
basis. In other words, disk quotas can be enabled or disabled for
each file system individually.
This provides a great deal of flexibility to the system
administrator. For example, if the /home/
directory was on its own file system, disk quotas could be enabled
there, enforcing equitable disk usage by all users. However, the
root file system could be left without disk quotas, eliminating
the complexity of maintaining disk quotas on a file system where
only the operating system itself resides.
Disk quotas can perform space accounting on a per-user basis.
This means that each user's space usage is tracked individually.
It also means that any limitations on usage (which are discussed
in later sections) are also done on a per-user basis.
Having the flexibility of tracking and enforcing disk usage
for each user individually makes it possible for a system
administrator to assign different limits to individual users,
according to their responsibilities and storage needs.
Disk quotas can also perform disk usage tracking on a
per-group basis. This is ideal for those organizations that use
groups as a means of combining different users into a single
project-wide resource.
By setting up group-wide disk quotas, the system administrator
can more closely manage storage utilization by giving individual
users only the disk quota they require for their personal use,
while setting larger disk quotas that would be more appropriate
for multi-user projects. This can be a great advantage to those
organizations that use a "chargeback" mechanism to assign data
center costs to those departments and teams that use data center
resources.
Disk quotas track disk block usage. Because all the data
stored on a file system is stored in blocks, disk quotas are able
to directly correlate the files created and deleted on a file
system with the amount of storage those files take up.
In addition to tracking disk block usage, disk quotas also can
track inode usage. Under Red Hat Enterprise Linux, inodes are used to store various
parts of the file system, but most importantly, inodes hold
information for each file. Therefore, by tracking (and
controlling) inode usage, it is possible to control the creation
of new files.
A hard limit is the absolute maximum number of disk blocks (or
inodes) that can be temporarily used by a user (or group). Any
attempt to use a single block or inode above the hard limit
fails.
A soft limit is the maximum number of disk blocks (or inodes)
that can be permanently used by a user (or group).
The soft limit is set below the hard limit. This allows users
to temporarily exceed their soft limit, permitting them to finish
whatever they were doing, and giving them some time in which to go
through their files and trim back their usage to below their soft
limit.
As stated earlier, any disk usage above the soft limit is
temporary. It is the grace period that determines the length of
time that a user (or group) can extend their usage beyond their
soft limit and toward their hard limit.
If a user continues to use more than the soft limit and the
grace period expires, no additional disk usage will be permitted
until the user (or group) has reduced their usage to a point below
the soft limit.
The grace period can be expressed in seconds, minutes, hours,
days, weeks, or months, giving the system administrator a great
deal of freedom in determining how much time to give users to get
their disk usages below their soft limits.
The following sections provide a brief overview of the steps
necessary to enable disk quotas under Red Hat Enterprise Linux. For a more in-depth
treatment of this subject, see the chapter on disk quotas in the
Red Hat Enterprise Linux System Administration Guide.
To use disk quotas, you must first enable them. This process
involves several steps:
Modifying /etc/fstab
Remounting the file system(s)
Running quotacheck
Assigning quotas
The /etc/fstab file controls the mounting
of file systems under Red Hat Enterprise Linux. Because disk quotas are implemented
on a per-file-system basis, there are two options —
usrquota and grpquota —
that must be added to that file to enable disk quotas.
The usrquota option enables user-based disk
quotas, while the grpquota option enables
group-based quotas. One or both of these options may be enabled by
placing them in the options field for the desired file
system.
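For example, an /etc/fstab line enabling both types of quotas
might look like this (the device and mount point are hypothetical):
/dev/md3   /home   ext3   defaults,usrquota,grpquota   1 2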
The affected file system(s) must then be unmounted and remounted
for the disk quota-related options to take effect.
Next, the quotacheck command is used to
create the disk quota files and to collect the current usage
information from already existing files. The disk quota files
(named aquota.user and
aquota.group for user- and group-based quotas)
contain the necessary quota-related information and reside in the
file system's root directory.
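A sketch of this initial invocation, run as root against a
hypothetical /home file system (-c creates the quota files; -u and
-g select user and group quotas):
quotacheck -cug /home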
To assign disk quotas, the edquota command is
used.
This utility program uses a text editor to display the quota
information for the user or group specified as part of the
edquota command. Here is an example:
Disk quotas for user matt (uid 500):
  Filesystem    blocks    soft    hard   inodes   soft   hard
  /dev/md3     6618000       0       0    17397      0      0
This shows that user matt is currently using over 6GB of disk
space, and over 17,000 inodes. No quota (soft or hard) has yet been
set for either disk blocks or inodes, meaning that there is no limit
to the disk space and inodes that this user can currently
use.
Using the text editor displaying the disk quota information, the
system administrator can then modify the soft and hard limits as
desired:
Disk quotas for user matt (uid 500):
  Filesystem    blocks     soft     hard   inodes   soft   hard
  /dev/md3     6618000  6900000  7000000    17397      0      0
In this example, user matt has been given a soft limit of 6.9GB
and a hard limit of 7GB. No soft or hard limit on inodes has been
set for this user.
Tip
The edquota program can also be used to
set the per-file-system grace period by using the
-t option.
There is little actual management required to support disk quotas
under Red Hat Enterprise Linux. Essentially, all that is required is:
Generating disk usage reports at regular intervals (and
following up with users that seem to be having trouble
effectively managing their allocated disk space)
Making sure that the disk quotas remain accurate
Creating a disk usage report entails running the
repquota utility program. Using the command
repquota /home produces this output:
*** Report for user quotas on device /dev/md3
Block grace time: 7days; Inode grace time: 7days
                        Block limits                File limits
User            used    soft    hard  grace    used  soft  hard  grace
----------------------------------------------------------------------
root      --   32836       0       0              4     0     0
matt      --  6618000 6900000 7000000          17397     0     0
More information about repquota can be found
in the Red Hat Enterprise Linux System Administration Guide, in the chapter on disk
quotas.
Whenever a file system is not unmounted cleanly (due to a system
crash, for example), it is necessary to run
quotacheck. However, many system administrators
recommend running quotacheck on a regular basis,
even if the system has not crashed.
The process is similar to the initial use of
quotacheck when enabling disk quotas.
Here is an example quotacheck command:
quotacheck -avug
The easiest way to run quotacheck on a
regular basis is to use cron. Most system
administrators run quotacheck once a week, though
there may be valid reasons to pick a longer or shorter interval,
depending on your specific conditions.
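For example, a crontab entry along these lines would perform the
check early every Sunday morning (the time and path are
hypothetical):
0 3 * * 0 /sbin/quotacheck -avug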
In addition to supporting hardware RAID solutions, Red Hat Enterprise Linux supports
software RAID. There are two ways that software RAID arrays can be
created:
During the normal Red Hat Enterprise Linux installation process, RAID arrays can be
created. This is done during the disk partitioning phase of the
installation.
To begin, you must manually partition your disk drives using
Disk Druid. You must first create a new
partition of the type "software RAID." Next, select the disk drives
that you want to be part of the RAID array in the
Allowable Drives field. Continue by selecting
the desired size and whether you want the partition to be a primary
partition.
Once you have created all the partitions required for the RAID
array(s) that you want to create, you must then use the
RAID button to actually create the arrays.
You are then presented with a dialog box where you can select the
array's mount point, file system type, RAID device name, RAID level,
and the "software RAID" partitions on which this array is to be
based.
Once the desired arrays have been created, the installation
process continues as usual.
Tip
For more information on creating software RAID arrays during
the Red Hat Enterprise Linux installation process, refer to the
Red Hat Enterprise Linux System Administration Guide.
Creating a RAID array after Red Hat Enterprise Linux has been installed is a bit
more complex. As with the addition of any type of disk storage, the
necessary hardware must first be installed and properly
configured.
Partitioning is a bit different for RAID than it is for single
disk drives. Instead of selecting a partition type of "Linux" (type
83) or "Linux swap" (type 82), all partitions that are to be part of
a RAID array must be set to "Linux raid auto" (type fd).
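Using fdisk, setting this partition type follows the same pattern
shown earlier (the partition number here is hypothetical):
Command (m for help): t
Partition number (1-4): 1
Hex code (type L to list codes): fd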
Next, it is necessary to create the
/etc/raidtab file. This file is responsible
for the proper configuration of all RAID arrays on your system. The
file format (which is documented in the
raidtab(5) man page) is relatively
straightforward. Here is an example
/etc/raidtab entry for a RAID 1 array:
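An entry along these lines would describe a two-disk RAID 1 array
(illustrative only; the device names are hypothetical):
raiddev             /dev/md0
    raid-level                  1
    nr-raid-disks               2
    chunk-size                  64k
    persistent-superblock       1
    nr-spare-disks              0
    device          /dev/hda2
    raid-disk       0
    device          /dev/hdc2
    raid-disk       1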
Some of the more notable sections in this entry are:
raiddev — Shows the
device file name for the RAID array[2]
raid-level — Defines
the RAID level to be used by this RAID array
nr-raid-disks — Indicates
how many physical disk partitions are to be part of this array
nr-spare-disks — Software
RAID under Red Hat Enterprise Linux allows the definition of one or more spare disk
partitions; these partitions can automatically take the place of
a malfunctioning disk
device,
raid-disk — Together,
they define the physical disk partitions that make up the RAID
array
Next, it is necessary to actually create the RAID array. This
is done with the mkraid program. Using our
example /etc/raidtab file, we would create the
/dev/md0 RAID array with the following
command:
mkraid /dev/md0
The RAID array /dev/md0 is now ready to be
formatted and mounted. The process at this point is no different
than for formatting and mounting a single disk drive.
There is little that needs to be done to keep a RAID array
operating. As long as no hardware problems crop up, the array should
function just as if it were a single physical disk drive. However,
just as a system administrator should periodically check the status of
all disk drives on the system, the RAID arrays' status should be
checked as well.
The file /proc/mdstat is the easiest way to
check on the status of all RAID arrays on a particular system. Here
is a sample mdstat (view with the command
cat /proc/mdstat):
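The output would look something like the following (illustrative
only; this hypothetical system has three RAID 1 arrays):
Personalities : [raid1]
read_ahead 1024 sectors
md1 : active raid1 hda3[0] hdc3[1]
      522048 blocks [2/2] [UU]
md0 : active raid1 hda2[0] hdc2[1]
      4192896 blocks [2/2] [UU]
md2 : active raid1 hda1[0] hdc1[1]
      128384 blocks [2/2] [UU]
unused devices: <none>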
Should /proc/mdstat show that a problem
exists with one of the RAID arrays, the
raidhotadd utility program should be used to
rebuild the array. Here are the steps that would need to be
performed:
Determine which disk drive contains the failed partition
Correct the problem that caused the failure (most likely by
replacing the drive)
Partition the new drive so that the partitions on it are
identical to those on the other drive(s) in
the array
Issue the following command:
raidhotadd <raid-device> <disk-partition>
Monitor /proc/mdstat to watch the
rebuild take place
Tip
Here is a command that can be used to watch the rebuild as it
takes place:
watch -n1 cat /proc/mdstat
This command displays the contents of
/proc/mdstat, updating it every second.
Red Hat Enterprise Linux includes support for LVM. LVM may be configured while
Red Hat Enterprise Linux is installed, or it may be configured after the installation is
complete. LVM under Red Hat Enterprise Linux supports physical storage grouping,
logical volume resizing, and the migration of data off a specific
physical volume.
For more information on LVM, refer to the
Red Hat Enterprise Linux System Administration Guide.
[2] Note that since the RAID array is composed of partitioned disk
space, the device file name of a RAID array does not reflect any
partition-level information.