6.2. Deleting or Adding a Node
6.2.1. Deleting a Node from a Cluster
Deleting a node from a cluster consists of shutting down the cluster software on the node to be deleted and updating the cluster configuration to reflect the change.
If deleting a node from the cluster causes a transition from greater than two nodes to two nodes, you must restart the cluster software at each node after updating the cluster configuration file.
To delete a node from a cluster, perform the following steps:
- At any node, use the clusvcadm utility to relocate, migrate, or stop each HA service running on the node that is to be deleted from the cluster; a sketch follows.
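A minimal console sketch of relocating one HA service off the node being removed; the service name example_apache is taken from the clustat output at the end of this section, and the target node and output shown are illustrative assumptions:
[root@example-01 ~]# clusvcadm -r example_apache -m node-01.example.com
Trying to relocate service:example_apache...Success
service:example_apache is now running on node-01.example.com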
- At the node to be deleted from the cluster, stop the cluster software. For example:
[root@example-01 ~]# service rgmanager stop
Stopping Cluster Service Manager: [ OK ]
[root@example-01 ~]# service gfs2 stop
Unmounting GFS2 filesystem (/mnt/gfsA): [ OK ]
Unmounting GFS2 filesystem (/mnt/gfsB): [ OK ]
[root@example-01 ~]# service clvmd stop
Signaling clvmd to exit [ OK ]
clvmd terminated [ OK ]
[root@example-01 ~]# service cman stop
Stopping cluster:
Leaving fence domain... [ OK ]
Stopping gfs_controld... [ OK ]
Stopping dlm_controld... [ OK ]
Stopping fenced... [ OK ]
Stopping cman... [ OK ]
Waiting for corosync to shutdown: [ OK ]
Unloading kernel modules... [ OK ]
Unmounting configfs... [ OK ]
[root@example-01 ~]#
- At any node in the cluster, edit the /etc/cluster/cluster.conf file to remove the clusternode section of the node that is to be deleted. For example, in Example 6.1, “Three-node Cluster Configuration”, if node-03.example.com is to be removed, delete the clusternode section for that node. If removing a node (or nodes) leaves the cluster with only two nodes, you can add the following line to the configuration file to allow a single node to maintain quorum (for example, if one node fails):
<cman two_node="1" expected_votes="1"/>
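As an illustration, a pared-down sketch of how the remaining configuration might look once node-03.example.com has been removed and the two-node settings are in place; the cluster name, node IDs, and the omitted fence and resource details are assumptions, not literal content of Example 6.1:
<cluster name="mycluster" config_version="3">
   <cman two_node="1" expected_votes="1"/>
   <clusternodes>
      <clusternode name="node-01.example.com" nodeid="1"/>
      <clusternode name="node-02.example.com" nodeid="2"/>
   </clusternodes>
   <fencedevices>
   </fencedevices>
   <rm>
   </rm>
</cluster>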
- Update the config_version attribute by incrementing its value (for example, changing config_version="2" to config_version="3").
- Save /etc/cluster/cluster.conf.
- (Optional) Validate the updated file against the cluster schema (cluster.rng) by running the ccs_config_validate command. For example:
[root@example-01 ~]# ccs_config_validate
Configuration validates
- Run the cman_tool version -r command to propagate the configuration to the rest of the cluster nodes.
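As a sketch, the propagation step is a single command run on the node where the file was edited; note that on some releases propagating the configuration this way also requires the ricci daemon to be running on each cluster node:
[root@example-01 ~]# cman_tool version -r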
- Verify that the updated configuration file has been propagated.
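One way to check, as a hedged sketch: run cman_tool version on each node and confirm that the reported configuration version matches the value set in the file (the version numbers shown here are illustrative):
[root@example-01 ~]# cman_tool version
6.2.0 config 3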
- If the node count of the cluster has transitioned from greater than two nodes to two nodes, you must restart the cluster software as follows:
- At each node, stop the cluster software. For example:
[root@example-01 ~]# service rgmanager stop
Stopping Cluster Service Manager: [ OK ]
[root@example-01 ~]# service gfs2 stop
Unmounting GFS2 filesystem (/mnt/gfsA): [ OK ]
Unmounting GFS2 filesystem (/mnt/gfsB): [ OK ]
[root@example-01 ~]# service clvmd stop
Signaling clvmd to exit [ OK ]
clvmd terminated [ OK ]
[root@example-01 ~]# service cman stop
Stopping cluster:
Leaving fence domain... [ OK ]
Stopping gfs_controld... [ OK ]
Stopping dlm_controld... [ OK ]
Stopping fenced... [ OK ]
Stopping cman... [ OK ]
Waiting for corosync to shutdown: [ OK ]
Unloading kernel modules... [ OK ]
Unmounting configfs... [ OK ]
[root@example-01 ~]#
- At each node, start the cluster software. For example:
[root@example-01 ~]# service cman start
Starting cluster:
Checking Network Manager... [ OK ]
Global setup... [ OK ]
Loading kernel modules... [ OK ]
Mounting configfs... [ OK ]
Starting cman... [ OK ]
Waiting for quorum... [ OK ]
Starting fenced... [ OK ]
Starting dlm_controld... [ OK ]
Starting gfs_controld... [ OK ]
Unfencing self... [ OK ]
Joining fence domain... [ OK ]
[root@example-01 ~]# service clvmd start
Starting clvmd: [ OK ]
Activating VG(s): 2 logical volume(s) in volume group "vg_example" now active
[ OK ]
[root@example-01 ~]# service gfs2 start
Mounting GFS2 filesystem (/mnt/gfsA): [ OK ]
Mounting GFS2 filesystem (/mnt/gfsB): [ OK ]
[root@example-01 ~]# service rgmanager start
Starting Cluster Service Manager: [ OK ]
[root@example-01 ~]#
- At any cluster node, run the cman_tool nodes command to verify that the nodes are functioning as members in the cluster (signified by "M" in the status column, "Sts"). For example:
[root@example-01 ~]# cman_tool nodes
Node Sts Inc Joined Name
1 M 548 2010-09-28 10:52:21 node-01.example.com
2 M 548 2010-09-28 10:52:21 node-02.example.com
- At any node, using the clustat utility, verify that the HA services are running as expected. In addition, clustat displays the status of the cluster nodes. For example:
[root@example-01 ~]# clustat
Cluster Status for mycluster @ Wed Nov 17 05:40:00 2010
Member Status: Quorate
Member Name ID Status
------ ---- ---- ------
node-02.example.com 2 Online, rgmanager
node-01.example.com 1 Online, Local, rgmanager
Service Name Owner (Last) State
------- ---- ----- ------ -----
service:example_apache node-01.example.com started
service:example_apache2 (none) disabled