NOTE: CentOS Enterprise Linux is built from the Red Hat Enterprise Linux source code. Other than logo and name changes, CentOS Enterprise Linux is compatible with the equivalent Red Hat version. This document applies equally to both Red Hat and CentOS Enterprise Linux.
Use the following section to identify the hardware components
required for the cluster configuration.
Hardware: Cluster nodes
Quantity: 16 (maximum supported)
Description: Each node must provide enough PCI slots, network slots, and storage adapters for the cluster hardware configuration. Because attached storage devices must have the same device special file on each node, it is recommended that the nodes have symmetric I/O subsystems. It is also recommended that the processor speed and amount of system memory be adequate for the processes run on the cluster nodes. Refer to Section 2.3.1 Installing the Basic Cluster Hardware for more information.
Required: Yes

Table 2-4. Cluster Node Hardware
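
Because attached storage devices must have the same device special file on each node, it can be useful to compare each node's view of its block devices before configuring cluster services. The following is a minimal sketch, assuming working ssh access between the nodes; the hostnames node1 and node2 are placeholders for your actual cluster nodes.

    #!/bin/sh
    # Capture each node's block device list; identical output suggests
    # symmetric I/O subsystems. node1/node2 are placeholder hostnames.
    for node in node1 node2; do
        ssh "$node" cat /proc/partitions > "/tmp/partitions.$node"
    done
    # diff exits nonzero if the two nodes see different devices.
    if diff /tmp/partitions.node1 /tmp/partitions.node2; then
        echo "Device views match."
    else
        echo "WARNING: nodes see different devices." >&2
    fi
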
Table 2-5 lists several different types of fence devices. A single cluster requires only one type of fence device.
Type: Network-attached power switches
Description: Remote (LAN, Internet) fencing using RJ45 Ethernet connections and remote terminal access to the device.
Models: APC MasterSwitch 92xx/96xx; WTI NPS-115/NPS-230, IPS-15, IPS-800/IPS-800-CE, and TPS-2

Type: Fabric switches
Description: Fence control interface integrated in several models of fabric switches used for Storage Area Networks (SANs). Used as a way to fence a failed node from accessing shared data.
Models: Brocade Silkworm 2x00, McData Sphereon, Vixel 9200

Type: Integrated power management interfaces
Description: Remote power management features in various brands of server systems; can be used as a fencing agent in cluster systems.
Models: HP Integrated Lights-Out (iLO), IBM BladeCenter with firmware dated 7-22-04 or later

Table 2-5. Fence Devices
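
Before relying on a network-attached power switch for fencing, verify that each cluster node can reach the switch's management interface over the network. The following is a minimal sketch; the address 10.0.0.50 is a placeholder for your switch's management IP, and the telnet check assumes the switch's remote terminal listens on the standard port 23.

    #!/bin/sh
    # Placeholder management address for the power switch.
    SWITCH=10.0.0.50
    # Confirm basic IP reachability from this node.
    ping -c 3 "$SWITCH"
    # Confirm the remote terminal interface answers (press Ctrl-] and
    # type "quit" to leave the telnet session).
    telnet "$SWITCH" 23
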
Table 2-6 through Table 2-9 show a variety of hardware components for an administrator to choose from. An individual cluster does not require all of the components listed in these tables.
Hardware: Network interface
Quantity: One for each network connection
Description: Each network connection requires a network interface installed in a node.
Required: Yes

Hardware: Network switch or hub
Quantity: One
Description: A network switch or hub allows connection of multiple nodes to a network.
Required: Yes

Hardware: Network cable
Quantity: One for each network interface
Description: A conventional network cable, such as a cable with an RJ45 connector, connects each network interface to a network switch or a network hub.
Required: Yes

Table 2-6. Network Hardware Table
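
Once each interface is cabled to the switch or hub, a quick connectivity check from every node confirms that the network hardware is working. A minimal sketch, assuming the placeholder hostnames node1 through node3 resolve to the cluster interfaces:

    #!/bin/sh
    # Ping each cluster node once; a failure points at a cable,
    # interface, or switch problem. Hostnames are placeholders.
    for node in node1 node2 node3; do
        if ping -c 1 "$node" > /dev/null 2>&1; then
            echo "$node: reachable"
        else
            echo "$node: NOT reachable"
        fi
    done
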
Hardware: Host bus adapter
Quantity: One per node
Description: To connect to shared disk storage, install either a parallel SCSI or a Fibre Channel host bus adapter in a PCI slot in each cluster node. For parallel SCSI, use a low voltage differential (LVD) host bus adapter. Adapters have either HD68 or VHDCI connectors.
Required: Yes

Hardware: External disk storage enclosure
Quantity: At least one
Description: Use Fibre Channel or single-initiator parallel SCSI to connect the cluster nodes to a single or dual-controller RAID array. To use single-initiator buses, a RAID controller must have multiple host ports and provide simultaneous access to all the logical units on the host ports. To use a dual-controller RAID array, a logical unit must fail over from one controller to the other in a way that is transparent to the OS. SCSI RAID arrays that provide simultaneous access to all logical units on the host ports are recommended. To ensure symmetry of device IDs and LUNs, many RAID arrays with dual redundant controllers must be configured in an active/passive mode. Refer to Appendix A Supplementary Hardware Information for more information.
Required: Yes

Hardware: SCSI cable
Quantity: One per node
Description: SCSI cables with 68 pins connect each host bus adapter to a storage enclosure port. Cables have either HD68 or VHDCI connectors. Cables vary based on adapter type.
Required: Only for parallel SCSI configurations

Hardware: SCSI terminator
Quantity: As required by hardware configuration
Description: For a RAID storage enclosure that uses "out" ports (such as FlashDisk RAID Disk Array) and is connected to single-initiator SCSI buses, connect terminators to the "out" ports to terminate the buses.
Required: Only for parallel SCSI configurations, and only as necessary for termination

Hardware: Fibre Channel hub or switch
Quantity: One or two
Description: A Fibre Channel hub or switch may be required.
Required: Only for some Fibre Channel configurations

Hardware: Fibre Channel cable
Quantity: As required by hardware configuration
Description: A Fibre Channel cable connects a host bus adapter to a storage enclosure port, a Fibre Channel hub, or a Fibre Channel switch. If a hub or switch is used, additional cables are needed to connect the hub or switch to the storage adapter ports.
Required: Only for Fibre Channel configurations

Table 2-7. Shared Disk Storage Hardware Table
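
After the host bus adapters are installed and cabled to the storage enclosure, you can confirm from each node that the kernel detects the shared logical units. A minimal sketch using standard kernel interfaces; run it on every node and compare the output:

    #!/bin/sh
    # SCSI devices the kernel has detected, including LUNs presented
    # by the shared RAID array over parallel SCSI or Fibre Channel.
    cat /proc/scsi/scsi
    # Block devices and partitions the kernel has registered.
    cat /proc/partitions
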
Hardware: UPS system
Quantity: One or more
Description: Uninterruptible power supply (UPS) systems protect against downtime if a power outage occurs. UPS systems are highly recommended for cluster operation. Connect the power cables for the shared storage enclosure and both power switches to redundant UPS systems. Note that a UPS system must be able to provide voltage for an adequate period of time, and should be connected to its own power circuit.
Required: Strongly recommended for availability

Table 2-8. UPS System Hardware Table
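
If the UPS supports monitoring, the nodes can track battery state and estimated runtime. As one hedged example, APC units monitored with the apcupsd package report status through the apcaccess client; other UPS vendors ship their own monitoring tools.

    #!/bin/sh
    # Query the local apcupsd daemon (APC UPSes only; other brands
    # provide different tools). Fields of interest include BCHARGE
    # (battery charge, percent) and TIMELEFT (estimated runtime).
    apcaccess status
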
Hardware: Terminal server
Quantity: One
Description: A terminal server enables you to manage many nodes remotely.
Required: No

Hardware: KVM
Quantity: One
Description: A KVM enables multiple nodes to share one keyboard, monitor, and mouse. Cables for connecting nodes to the switch depend on the type of KVM.
Required: No

Table 2-9. Console Switch Hardware Table