The beginning of the boot process varies depending on the
hardware platform being used. However, once the kernel is found and
loaded by the boot loader, the default boot process is identical
across all architectures. This chapter focuses primarily on the x86
architecture.
When an x86 computer is booted, the processor looks at the end
of system memory for the Basic Input/Output
System or BIOS program and runs it.
The BIOS not only controls the first step of the boot process, but
also provides the lowest-level interface to peripheral devices. For
this reason it is written into read-only, permanent memory and is
always available for use.
Other platforms use different programs to perform low-level
tasks roughly equivalent to those of the BIOS on an x86 system. For
instance, Itanium-based computers use the Extensible Firmware Interface (EFI) Shell.
Once loaded, the BIOS tests the system, looks for and checks
peripherals, and then locates a valid device with which to boot the
system. Usually, it checks any diskette drives and CD-ROM drives
present for bootable media, then, failing that, looks to the
system's hard drives. In most cases, the order of the drives
searched while booting is controlled by a setting in the BIOS;
often the first hard drive checked is the master IDE device on the primary IDE bus. The
BIOS then loads into memory whatever program is residing in the
first sector of this device, called the Master
Boot Record or MBR. The MBR is only
512 bytes in size and contains machine code instructions for
booting the machine, called a boot loader, along with the partition
table. Once the BIOS finds and loads the boot loader program into
memory, it yields control of the boot process to it.
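Because the MBR occupies only the first 512-byte sector of the boot device, it can be backed up with standard tools. The following is a minimal sketch; the device name /dev/hda is purely an example and may differ on your system.
# Copy the 512-byte MBR (boot loader code plus partition table) to a file.
dd if=/dev/hda of=/root/mbr-backup.bin bs=512 count=1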
This section looks at the default boot loader for the x86
platform, GRUB. Depending on the system's architecture, the boot
process may differ slightly. Refer to
Section 1.2.2.1 Boot Loaders for Other Architectures for
a brief overview of non-x86 boot loaders. For more information
about configuring and using GRUB, see Chapter 2 The GRUB Boot Loader.
A boot loader for the x86 platform is broken into at least two
stages. The first stage is a small machine code binary on the MBR.
Its sole job is to locate the second stage boot loader and load the
first part of it into memory.
GRUB has the advantage of being able to read ext2 and ext3
partitions and load its configuration
file — /boot/grub/grub.conf —
at boot time. Refer to Section
2.7 GRUB Menu Configuration File for information on how
to edit this file.
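The following is an illustrative grub.conf stanza; the partition (hd0,0), the root device label, and the kernel version are examples only and vary from system to system.
default=0
timeout=10
title Red Hat Enterprise Linux (<kernel-version>)
        root (hd0,0)
        kernel /vmlinuz-<kernel-version> ro root=LABEL=/
        initrd /initrd-<kernel-version>.img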
Tip:
If upgrading the kernel using the Red Hat
Update Agent, the boot loader configuration file is updated
automatically. More information on Red Hat Network can be found
online at the following URL: https://rhn.redhat.com/.
Once the second stage boot loader is in memory, it presents the
user with a graphical screen showing the different operating
systems or kernels it has been configured to boot. On this screen a
user can use the arrow keys to choose which operating system or
kernel they wish to boot and press [Enter]. If no key is pressed, the boot loader
loads the default selection after a configurable period of time has
passed.
Note:
If Symmetric Multi-Processor (SMP) kernel support is installed,
more than one option is presented the first time the system is
booted. In this situation GRUB displays Red Hat Enterprise Linux (<kernel-version>-smp), which is
the SMP kernel, and Red Hat Enterprise
Linux (<kernel-version>), which is for
single processors.
If any problems occur using the SMP kernel, try selecting the
non-SMP kernel upon rebooting.
Once the second stage boot loader has determined which kernel to
boot, it locates the corresponding kernel binary in the /boot/ directory. The kernel binary is named using
the following format — /boot/vmlinuz-<kernel-version> (where
<kernel-version> corresponds to the
kernel version specified in the boot loader's settings).
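For example, the installed kernel binaries and the currently running kernel version can be checked as follows (the output depends entirely on which kernels are installed):
ls /boot/vmlinuz-*   # one binary per installed kernel version
uname -r             # version string of the currently running kernel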
For instructions on using the boot loader to supply command line
arguments to the kernel, refer to Chapter 2
The GRUB Boot Loader. For information on changing the
runlevel at the boot loader prompt, refer to Section 2.8 Changing Runlevels at Boot
Time.
The boot loader then places one or more appropriate initramfs images into memory. Next, the kernel
decompresses these images from memory to /sysroot/, a RAM-based virtual file system, via
cpio. The initramfs is used by the kernel to load drivers and
modules necessary to boot the system. This is particularly
important if SCSI hard drives are present or if the system uses the
ext3 file system.
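Because the initramfs is a compressed cpio archive, its contents can be listed after the system has booted. This is a sketch that assumes the image is installed as /boot/initrd-<kernel-version>.img; the exact file name may differ.
zcat /boot/initrd-<kernel-version>.img | cpio -itv | less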
Once the kernel and the initramfs
image(s) are loaded into memory, the boot loader hands control of
the boot process to the kernel.
For a more detailed overview of the GRUB boot loader, refer to
Chapter 2 The GRUB Boot
Loader.
Once the kernel loads and hands off the boot process to the
init command, the same sequence of events
occurs on every architecture. So the main difference between each
architecture's boot process is in the application used to find and
load the kernel.
For example, the Itanium architecture uses the ELILO boot
loader, the IBM eServer pSeries architecture uses YABOOT, and the
IBM eServer zSeries and IBM S/390 systems use the z/IPL boot
loader.
Consult the Red Hat Enterprise Linux
Installation Guide specific to these platforms for information
on configuring their boot loaders.
When the kernel is loaded, it immediately initializes and
configures the computer's memory, and then configures the various
hardware attached to the system, including all processors, I/O
subsystems, and storage devices. It then looks for the compressed
initramfs image(s) in a predetermined
location in memory, decompresses it directly to /sysroot/, and loads all necessary drivers. Next,
it initializes virtual devices related to the file system, such as
LVM or software RAID, before completing the initramfs processes and freeing up all the memory
the disk image once occupied.
The kernel then creates a root device, mounts the root partition
read-only, and frees any unused memory.
At this point, the kernel is loaded into memory and operational.
However, since there are no user applications that allow meaningful
input to the system, not much can be done with it.
To set up the user environment, the kernel executes the
/sbin/init program.
The /sbin/init program (also called
init) coordinates the rest of the boot
process and configures the environment for the user.
When the init command starts, it
becomes the parent or grandparent of all of the processes that
start up automatically on the system. First, it runs the /etc/rc.d/rc.sysinit script, which sets the
environment path, starts swap, checks the file systems, and
executes all other steps required for system initialization. For
example, most systems use a clock, so rc.sysinit reads the /etc/sysconfig/clock configuration file to
initialize the hardware clock. As another example, if there are
special serial port processes that must be initialized, rc.sysinit executes the /etc/rc.serial file.
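As an illustration, /etc/sysconfig/clock typically contains entries along these lines (the time zone shown is only an example):
ZONE="America/New_York"
UTC=true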
The init command then processes the
/etc/inittab file, which describes how
the system should be set up in each SysV init
runlevel. A runlevel is a state, or mode, defined by the services listed in the SysV
/etc/rc.d/rc<x>.d/ directory, where <x> is the number of the runlevel. For
more information on SysV init runlevels, refer to Section 1.4 SysV Init
Runlevels.
Next, the init command sets the source
function library, /etc/rc.d/init.d/functions, for the system. This
file defines how to start or kill a program and how to determine
the PID of a program.
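For instance, the init scripts shipped in /etc/rc.d/init.d/ source this library to obtain helper functions such as daemon, killproc, and status. The following is a minimal, illustrative sketch of such a script; the "exampled" service is hypothetical.
#!/bin/bash
# Illustrative init script skeleton for a hypothetical "exampled" daemon.
. /etc/rc.d/init.d/functions

case "$1" in
  start)
    daemon /usr/sbin/exampled     # start the daemon, printing [ OK ] or [FAILED]
    ;;
  stop)
    killproc exampled             # stop the daemon by name
    ;;
  status)
    status exampled               # report whether the daemon is running
    ;;
  *)
    echo "Usage: $0 {start|stop|status}"
    exit 1
    ;;
esac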
The init program starts all of the
background processes by looking in the appropriate rc directory for the runlevel specified as the
default in /etc/inittab. The rc directories are numbered to correspond to the
runlevel they represent. For instance, /etc/rc.d/rc5.d/ is the directory for runlevel
5.
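The default runlevel itself is set by the initdefault entry in /etc/inittab. For example, the following line selects runlevel 5:
id:5:initdefault: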
When booting to runlevel 5, the init
program looks in the /etc/rc.d/rc5.d/
directory to determine which processes to start and stop.
Below is an example listing of the /etc/rc.d/rc5.d/ directory:
K05innd -> ../init.d/innd
K05saslauthd -> ../init.d/saslauthd
K10dc_server -> ../init.d/dc_server
K10psacct -> ../init.d/psacct
K10radiusd -> ../init.d/radiusd
K12dc_client -> ../init.d/dc_client
K12FreeWnn -> ../init.d/FreeWnn
K12mailman -> ../init.d/mailman
K12mysqld -> ../init.d/mysqld
K15httpd -> ../init.d/httpd
K20netdump-server -> ../init.d/netdump-server
K20rstatd -> ../init.d/rstatd
K20rusersd -> ../init.d/rusersd
K20rwhod -> ../init.d/rwhod
K24irda -> ../init.d/irda
K25squid -> ../init.d/squid
K28amd -> ../init.d/amd
K30spamassassin -> ../init.d/spamassassin
K34dhcrelay -> ../init.d/dhcrelay
K34yppasswdd -> ../init.d/yppasswdd
K35dhcpd -> ../init.d/dhcpd
K35smb -> ../init.d/smb
K35vncserver -> ../init.d/vncserver
K36lisa -> ../init.d/lisa
K45arpwatch -> ../init.d/arpwatch
K45named -> ../init.d/named
K46radvd -> ../init.d/radvd
K50netdump -> ../init.d/netdump
K50snmpd -> ../init.d/snmpd
K50snmptrapd -> ../init.d/snmptrapd
K50tux -> ../init.d/tux
K50vsftpd -> ../init.d/vsftpd
K54dovecot -> ../init.d/dovecot
K61ldap -> ../init.d/ldap
K65kadmin -> ../init.d/kadmin
K65kprop -> ../init.d/kprop
K65krb524 -> ../init.d/krb524
K65krb5kdc -> ../init.d/krb5kdc
K70aep1000 -> ../init.d/aep1000
K70bcm5820 -> ../init.d/bcm5820
K74ypserv -> ../init.d/ypserv
K74ypxfrd -> ../init.d/ypxfrd
K85mdmpd -> ../init.d/mdmpd
K89netplugd -> ../init.d/netplugd
K99microcode_ctl -> ../init.d/microcode_ctl
S04readahead_early -> ../init.d/readahead_early
S05kudzu -> ../init.d/kudzu
S06cpuspeed -> ../init.d/cpuspeed
S08ip6tables -> ../init.d/ip6tables
S08iptables -> ../init.d/iptables
S09isdn -> ../init.d/isdn
S10network -> ../init.d/network
S12syslog -> ../init.d/syslog
S13irqbalance -> ../init.d/irqbalance
S13portmap -> ../init.d/portmap
S15mdmonitor -> ../init.d/mdmonitor
S15zebra -> ../init.d/zebra
S16bgpd -> ../init.d/bgpd
S16ospf6d -> ../init.d/ospf6d
S16ospfd -> ../init.d/ospfd
S16ripd -> ../init.d/ripd
S16ripngd -> ../init.d/ripngd
S20random -> ../init.d/random
S24pcmcia -> ../init.d/pcmcia
S25netfs -> ../init.d/netfs
S26apmd -> ../init.d/apmd
S27ypbind -> ../init.d/ypbind
S28autofs -> ../init.d/autofs
S40smartd -> ../init.d/smartd
S44acpid -> ../init.d/acpid
S54hpoj -> ../init.d/hpoj
S55cups -> ../init.d/cups
S55sshd -> ../init.d/sshd
S56rawdevices -> ../init.d/rawdevices
S56xinetd -> ../init.d/xinetd
S58ntpd -> ../init.d/ntpd
S75postgresql -> ../init.d/postgresql
S80sendmail -> ../init.d/sendmail
S85gpm -> ../init.d/gpm
S87iiim -> ../init.d/iiim
S90canna -> ../init.d/canna
S90crond -> ../init.d/crond
S90xfs -> ../init.d/xfs
S95atd -> ../init.d/atd
S96readahead -> ../init.d/readahead
S97messagebus -> ../init.d/messagebus
S97rhnsd -> ../init.d/rhnsd
S99local -> ../rc.local
As illustrated in this listing, none of the scripts that
actually start and stop the services are located in the /etc/rc.d/rc5.d/ directory. Rather, all of the
files in /etc/rc.d/rc5.d/ are symbolic links pointing to scripts located in the
/etc/rc.d/init.d/ directory. Symbolic
links are used in each of the rc
directories so that the runlevels can be reconfigured by creating,
modifying, and deleting the symbolic links without affecting the
actual scripts they reference.
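Rather than editing these symbolic links by hand, the chkconfig utility can create and remove them for a given service and runlevel. For example (the httpd service is used purely for illustration):
chkconfig --list httpd        # show the runlevels in which httpd is enabled
chkconfig --level 5 httpd on  # enable httpd in runlevel 5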
The name of each symbolic link begins with either a K or an S. The K links point to processes that are killed in
that runlevel, while the S links point to processes that are started.
The init command first stops all of the
processes referenced by the K symbolic links in the
directory by issuing the /etc/rc.d/init.d/<command> stop command, where
<command> is the process to be
killed. It then starts all of the processes referenced by the S symbolic links by issuing /etc/rc.d/init.d/<command> start.
Tip:
After the system is finished booting, it is possible to log in
as root and execute these same scripts to start and stop services.
For instance, the command /etc/rc.d/init.d/httpd stop stops the Apache HTTP
Server.
Each of the symbolic links is numbered to dictate start order.
The order in which the services are started or stopped can be
altered by changing this number. The lower the number, the earlier
it is started. Symbolic links with the same number are started
alphabetically.
After the init command has progressed
through the appropriate rc directory for
the runlevel, init (as directed by /etc/inittab)
forks an /sbin/mingetty process for each
virtual console (login prompt) allocated to the runlevel. Runlevels
2 through 5 have all six virtual consoles, while runlevel 1 (single
user mode) has one, and runlevels 0 and 6 have none. The /sbin/mingetty process opens communication pathways
to tty devices, sets their
modes, prints the login prompt, accepts the user's username and
password, and initiates the login process.
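The virtual consoles are defined by mingetty entries in /etc/inittab similar to the following, with one entry per console through tty6:
1:2345:respawn:/sbin/mingetty tty1
2:2345:respawn:/sbin/mingetty tty2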
In runlevel 5, /etc/inittab directs init to run a
script called /etc/X11/prefdm. The
prefdm script executes the preferred X
display manager — gdm,
kdm, or xdm,
depending on the contents of the /etc/sysconfig/desktop file.
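For example, an /etc/sysconfig/desktop file containing a line like the following causes prefdm to start gdm; the exact variables honored can vary between releases, so treat this as illustrative only.
DISPLAYMANAGER="GNOME"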
Once finished, the system operates in runlevel 5 and displays a
login screen.