- There are two different methods by which the backup may be
started.

- Method A:
This method requires that the cluster backup process was enabled on
the master server before starting the replication process. This can
be done by including the line

ndb-connectstring=management_host[:port]

in a [MYSQL_CLUSTER] section in the my.cnf file, where
management_host is the IP address or hostname of the NDB management
server for the master cluster, and port is the management server's
port number (an illustrative my.cnf excerpt is shown at the end of
this method). Note that the port number needs to be specified only
if the default port (1186) is not being used. (See Section 16.3.3,
“Multi-Computer Configuration”, for more information about ports and
port allocation in MySQL Cluster.)

In this case, the backup can be started by executing this statement
on the replication master:

shellM> ndb_mgm -e "START BACKUP"
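
As an illustration only, assuming that the management server for the
master cluster is the host rep-master used in the scenario referenced
under Method B, and that it listens on the default port, the lines
added to my.cnf for this method might look like the following (the
port could be omitted here, since 1186 is the default):

[MYSQL_CLUSTER]
ndb-connectstring=rep-master:1186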

- Method B:
If the my.cnf file does not specify where to find the management
host, you can start the backup process by passing this information
to the NDB management client as part of the START BACKUP command,
like this:

shellM> ndb_mgm management_host:port -e "START BACKUP"

where management_host and port are the hostname and port number of
the management server. In our scenario as outlined earlier (see
Section 16.7.5, “Preparing the Cluster for Replication”), this would
be executed as follows:

shellM> ndb_mgm rep-master:1186 -e "START BACKUP"

In either case, it is highly advisable to allow any pending
transactions to be completed before beginning the backup,
and then not to permit any new transactions to begin during
the backup process.

- Copy the cluster backup files to the slave that is being brought
online. Each system running an ndbd process for the master cluster
will have cluster backup files located on it, and all of these files
must be copied to the slave to ensure a successful restore. The
backup files can be copied into any directory on the computer where
the slave management host resides, so long as the MySQL and NDB
binaries have read permissions in that directory. In this case, we
will assume that these files have been copied into the directory
/var/BACKUPS/BACKUP-1, as illustrated in the example following this
step.
It is not necessary that the slave cluster have the same
number of ndbd processes (data nodes) as
the master; however, it is highly recommended this number be
the same. It is necessary that the
slave be started with the
--skip-slave-start
option, to prevent
premature startup of the replication process.
Create on the slave cluster each database that is present on the
master cluster and is to be replicated to the slave. Important: A
CREATE SCHEMA statement corresponding to each database to be
replicated must be executed on each SQL node (mysqld process) in the
slave cluster.
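
As a concrete sketch of this step only, suppose the master cluster's
data nodes are reachable as ndb1 through ndb4, that each stores its
backup files under /var/lib/mysql-cluster/BACKUP (the actual location
depends on each node's BackupDataDir setting), and that a single
database named db1 is to be replicated; all of these names and paths
are assumptions made for illustration. The files could then be
collected and the database created as follows:

shellS> mkdir -p /var/BACKUPS/BACKUP-1
shellS> scp 'ndb1:/var/lib/mysql-cluster/BACKUP/BACKUP-1/*' /var/BACKUPS/BACKUP-1/
shellS> scp 'ndb2:/var/lib/mysql-cluster/BACKUP/BACKUP-1/*' /var/BACKUPS/BACKUP-1/
shellS> scp 'ndb3:/var/lib/mysql-cluster/BACKUP/BACKUP-1/*' /var/BACKUPS/BACKUP-1/
shellS> scp 'ndb4:/var/lib/mysql-cluster/BACKUP/BACKUP-1/*' /var/BACKUPS/BACKUP-1/

mysqlS*> CREATE SCHEMA db1;

Remember that the CREATE SCHEMA statement must be repeated on every
SQL node in the slave cluster, and that the --skip-slave-start option
may be given either on the server command line or as skip-slave-start
in the [mysqld] section of the slave's my.cnf file.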

- Reset the slave cluster using this statement in the MySQL Monitor:

mysqlS> RESET SLAVE;

It is important to make sure that the slave's apply_status table does
not contain any records prior to running the restore process. You can
accomplish this by running this SQL statement on the slave:

mysqlS> DELETE FROM cluster_replication.apply_status;

- You can now start the cluster restoration process on the
replication slave using the ndb_restore command for each backup file
in turn. For the first of these, it is necessary to include the -m
option to restore the cluster metadata:

shellS> ndb_restore -c slave_host:port -n node-id \
        -b backup-id -m -r dir

Here, dir is the path to the directory where the backup files have
been placed on the replication slave. For the ndb_restore commands
corresponding to the remaining backup files, the -m option should not
be used.

For restoring from a master cluster with four data nodes (as shown in
the figure in Section 16.7, “MySQL Cluster Replication”) where the
backup files have been copied to the directory /var/BACKUPS/BACKUP-1,
the proper sequence of commands to be executed on the slave might
look like this:

shellS> ndb_restore -c rep-slave:1186 -n 2 -b 1 -m \
        -r /var/BACKUPS/BACKUP-1
shellS> ndb_restore -c rep-slave:1186 -n 3 -b 1 \
        -r /var/BACKUPS/BACKUP-1
shellS> ndb_restore -c rep-slave:1186 -n 4 -b 1 \
        -r /var/BACKUPS/BACKUP-1
shellS> ndb_restore -c rep-slave:1186 -n 5 -b 1 \
        -r /var/BACKUPS/BACKUP-1

This sequence of commands causes the most recent epoch records to be
written to the slave's apply_status table.
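
If you want to confirm that the restore has populated the epoch
information as expected, you can inspect this table directly on the
slave; this check is merely a suggestion and is not itself part of
the procedure:

mysqlS> SELECT * FROM cluster_replication.apply_status;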

- Next, it is necessary to make all SQL nodes in the slave cluster
aware of the new tables. (This is due to the fact that the NDB
Cluster storage engine does not currently support autodiscovery of
schema changes. See Section 16.7.9.2, “Initiating Discovery of Schema
Changes”.) You can accomplish this using these commands:

mysqlS*> USE db_name;
mysqlS*> SHOW TABLES;

Here, db_name is the name of the database which was backed up and
restored. Where multiple databases have been backed up and then
restored, it is necessary to issue the USE and SHOW statements for
each database in turn. Note also that these commands must be issued
on each SQL node in the slave cluster.

- Now you need to obtain the most recent epoch from the apply_status
table on the slave (as discussed in Section 16.7.8, “Implementing
Failover with MySQL Cluster”):

mysqlS> SELECT @latest:=MAX(epoch)
     -> FROM cluster_replication.apply_status;

- Using @latest as the epoch value obtained in the previous step, you
can obtain the correct starting position @pos in the correct binary
log file @file from the master's cluster_replication.binlog_index
table using the query shown here:

mysqlM> SELECT
     ->     @file:=SUBSTRING_INDEX(File, '/', -1),
     ->     @pos:=Position
     -> FROM cluster_replication.binlog_index
     -> WHERE epoch > @latest
     -> ORDER BY epoch ASC LIMIT 1;
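
Because @file and @pos are user variables that exist only in the
master's mysql session, you will need their literal values when
configuring the slave in the next step; a simple way to obtain them
is to display the variables after running the query above:

mysqlM> SELECT @file, @pos;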

- Using the values obtained in the previous step, you can now issue
the appropriate CHANGE MASTER TO statement in the slave's mysql
client:

mysqlS> CHANGE MASTER TO
     ->     MASTER_LOG_FILE='@file',
     ->     MASTER_LOG_POS=@pos;
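
Note that @file and @pos in this statement stand for the literal
values obtained on the master in the previous step (the slave has no
access to the master's user variables): the file name must be
supplied as a quoted string and the position as a number. With purely
hypothetical values, the statement might read:

mysqlS> CHANGE MASTER TO
     ->     MASTER_LOG_FILE='rep-master-bin.000001',
     ->     MASTER_LOG_POS=27284;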

- Now that the slave “knows” from what point in which binlog file to
start reading data from the master, you can cause the slave to begin
replicating with this standard MySQL statement:

mysqlS> START SLAVE;
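
To verify that replication is in fact running, you may wish to
examine the slave's status; this is a suggestion rather than a
required step. The Slave_IO_Running and Slave_SQL_Running columns in
the output should both show Yes:

mysqlS> SHOW SLAVE STATUS\G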