THIS PAGE HAS BEEN MIGRATED TO CONFLUENCE: https://uwaterloo.atlassian.net/wiki/x/iIEYUAo
CS MySQL Database High-Availability Operations Manual
This document is used to administer the CS highly-available clustered MySQL database for general computing.
MySQL High Availability Preamble
This document assumes:
- You have credentials to access mysql-10[246].cs.uwaterloo.ca and InfoBlox.
- The high-availability architecture is one master with two slaves using asynchronous replication.
- The three hosts in the cluster are mysql-102.cs.uwaterloo.ca, mysql-104.cs.uwaterloo.ca, and mysql-106.cs.uwaterloo.ca.
- mysql.cs.uwaterloo.ca is the hostname of the service that clients connect to and has the same IP address as the master. (By design, clients cannot connect to the slaves.)
Find which servers are up and identify master and slaves
- From anywhere, host mysql.cs.uwaterloo.ca will return an IP address which should correspond to one of mysql-102.cs, mysql-104.cs, or mysql-106.cs.
- This is what the world thinks is the master. If things are operating normally, you can stop here.
- If you are diagnosing problems, proceed to determine the status of the master and slaves.
- linux.cscf#
for i in mysql-102.cs mysql-104.cs mysql-106.cs ; do echo ; echo $i ; ssh $i "mysqladmin status processlist" ; done
- Normally, all 3 hosts have a mysqld running with significant uptime.
- The master processlist should include two slave threads, one for each slave server.
- Each slave processlist should include two replication threads, one reading events from the master and one executing them.
- You need to know which of mysql-10[246].cs.uwaterloo.ca is the current master (if any), and which master the slaves are currently (or were most recently) replicating from.
- If there is no current master, the most recent master must be unambiguously powered down before continuing. Proceeding without verified removal of the recent master risks data corruption!
- master, slaves - mysql> show slave status\G and look for the Master_Host value. On each slave, Master_Host is the recognized master. On the master, show slave status should return an empty set. (A combined check loop is sketched after this list.)
- If the slaves report different Master_Host values, stop troubleshooting and report the outage to responsible CSCF staff. Continuing risks data loss.
- If the master, as reported consistently by all slaves, is not reachable, follow the steps in Restoring Broken Old Master After Failover below: # lxc-info --name mysql-102.cs.uwaterloo.ca and verify State: STOPPED. If not stopped, force a container stop with lxc-stop -n mysql-102.cs.uwaterloo.ca and recheck the state. Do not proceed until the former master is verified stopped.
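The per-host checks above can be combined into a single loop from linux.cscf. This is only a convenience sketch; it assumes root ssh access to the three hosts and that root can run mysql without an explicit password, as in the commands above:
for i in mysql-102.cs mysql-104.cs mysql-106.cs ; do
  echo "== $i =="
  ssh $i "mysql -e 'show slave status\G'" | grep -E 'Master_Host|Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master'
  ssh $i "mysql -e 'show variables like \"read_only\"'"
done
Exactly one host should print nothing for the slave-status fields and show read_only OFF: that is the master. The other two should name that host as Master_Host and show read_only ON.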
Normal operation
Nagios should accurately report that the cluster is operating normally. In case you would like additional verification, the following are manual diagnostics to be certain of normal operation.
Each check below is labelled with the systems ({master, slaves}) it applies to:
- master, slaves - # systemctl status mysql (or service mysql status on older servers) shows the mysql service is running.
- master, slaves - # ps -lfp `cat /run/mysqld/mysqld.pid` shows mysqld is running.
- master, slaves - # lsof -i | grep LISTEN | grep mysql shows mysqld listening on the correct interface(s).
- master, slaves - # mysql -e 'show variables like "read_only"' shows read_only is OFF on the master and ON on the slaves.
- master, slaves - # mysqladmin status shows reasonable values. Almost all queries on a slave correspond to write operations on the master.
- master, slaves - # mysqladmin processlist | sed 's/^| //;s/ | / /g;s/[-+| ]*$//' on the master shows a slave thread for each slave (plus lots of client threads). On each slave it shows an I/O thread and an execution thread.
- master - # mysql -e 'show master status' shows the binlog file and position. Other fields should be empty.
- slaves - # mysql -e 'show slave status\G' shows approximately 40 lines of output. Especially useful are:
Slave_IO_State: Waiting for master to send event
Master_Host: ...
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Last_Error:
Seconds_Behind_Master: 0
- master, slaves - # ls -ld `grep -hr '^[[:blank:]]*log_bin[[:blank:]]*=' /etc/mysql | sed 's/[^=]*=//'`* lists the binary log files. binlog.index is updated when the log is rolled, typically at least once per day. The most recent file is appended with every database write; in our current environment that is typically every second.
- master, slaves - # ls -l /var/log/mysql/ lists the error log files. They usually contain just periodic status info from cron and messages about backups. When mysqld is restarted, they contain a standard sequence of shutdown and startup messages. There should be no errors and almost no warnings. Use # tail /var/log/mysql/error.log to check the most recent status. Use # zgrep -Ehv 'Status:|Vitals:|TopTableRows:|Warning.*REPLACE INTO `percona`.`checksums`' /var/log/mysql/error.log* to ignore status messages and pt-table-checksum warnings.
- master - # pt-table-checksum --defaults-file=/etc/mysql/pt_checksum.cnf --replicate=percona.checksums --ignore-databases mysql --no-check-binlog-format checks that the master and slaves have the same data. In our current environment this takes 2 hours to run. (If it reports differences, see the follow-up query sketched after this list.)
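If pt-table-checksum reports differences, the usual follow-up (this is the standard pt-table-checksum workflow, sketched here for convenience) is to query the percona.checksums table on each slave for chunks whose checksums disagree with the master:
# mysql -e 'SELECT db, tbl, SUM(this_cnt) AS total_rows, COUNT(*) AS chunks FROM percona.checksums WHERE master_cnt <> this_cnt OR master_crc <> this_crc OR ISNULL(master_crc) <> ISNULL(this_crc) GROUP BY db, tbl'
Any rows returned name the tables that differ; pt-table-sync (see the last section of this document) can then repair them.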
Problem Diagnosis
There are various failure modes of the MySQL cluster; this section will help identify which recovery method to use. The scenarios below are listed in decreasing order of emergency.
Nagios reports Master is down
The automated Nagios check may report that the master is down. This is likely an emergency; all operations that rely on our mysql database will fail until this is fixed.
See MySQLHAMasterFailure for recovery steps.
Applications are failing to connect to mysql.cs
Production applications could fail with messages such as "Cannot connect to database mysql.cs.uwaterloo.ca". This is likely an emergency; unless there are network issues with those applications, you can assume all operations that rely on our mysql database will fail until this is fixed.
See MySQLHAMasterFailure for recovery steps.
Nagios reports Master is out of sync with both slaves
The automated Nagios check may report that the master is out of sync with both slaves. This is an intermediate-level emergency; recovery can wait until the next business day.
You may treat this situation as if both of the slaves are down. See MySQLHASlaveFailure for recovery steps.
Nagios reports Master is out of sync with a slave
The automated Nagios check may report that the master is out of sync with one slave. This is not a time-critical emergency, and recovery can wait until the next business day.
You may treat this situation as if the slave is down. See MySQLHASlaveFailure for recovery steps.
Nagios reports Slave is down
The automated Nagios check may report that a slave is down. This is not a time-critical emergency, and recovery can wait until the next business day.
See MySQLHASlaveFailure for recovery steps.
Failure of the Master Server
If you have performed problem diagnosis and believe the Master Server has failed, these instructions will assist with emergency recovery.
Can you quickly determine why the Master Server has failed?
If you know why the master server has failed, or a few minutes of diagnosis makes it obvious, consider whether the downtime of fixing the server will be less than approximately 10 minutes. That is roughly the amount of time it will take to promote one of the slaves to become a master. This assessment of course requires taking a few minutes to check; following are some checks to make.
- SSH into the master. (The per-host checks below are collected into a sketch after this list.)
- Check that / and /data are mounted and not full.
- Check whether the mysql daemon is running.
- Check the firewall rules in /etc/shorewall/rules. On the master, the line reading MySQL(REJECT) net $FW MUST be commented out.
- Check /var/log/mysql.err and /var/log/mysql.log for the cause of the failure/shutdown.
- If either filesystem is full and cannot be written to, log into the container host and use logical volume and filesystem tools to extend the partitions and filesystems.
- If SSHing to the master failed:
- Check that the host is receiving power, using the LOM.
- Check whether the host is up, using the LOM.
- Check whether the host's network port is up, using ONA.
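If the master is SSH-reachable, the checks above can be run in one quick pass; this is just a sketch (paths as given in this document; adjust as needed):
df -h / /data                                      # both filesystems mounted and not full?
service mysql status                               # or: systemctl status mysql
ps -ef | grep '[m]ysqld'                           # is the mysql daemon running?
grep 'MySQL(REJECT)' /etc/shorewall/rules          # on the master this line MUST be commented out
tail -n 50 /var/log/mysql.err /var/log/mysql.log   # cause of failure/shutdown?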
Recover Master Server without performing failover
If it looks like recovery will be speedy, you can proceed with the repairs, leaving the database down while you do so. Rebooting the server will require manually starting mysql, because by default we leave mysql disabled on boot to prevent multiple masters from starting at the same time. service mysql start should work; a quick confirmation sketch follows.
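After starting mysql, a quick confirmation along the lines of the Normal operation checks above is worthwhile; a minimal sketch:
service mysql start
tail -n 20 /var/log/mysql/error.log                # clean startup messages, no errors?
mysql -e 'show variables like "read_only"'         # should be OFF on the master
mysql -e 'show master status'                      # binlog file and position present?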
Failover: a Slave becomes Master, and Old Master Becomes a Slave
Caution: follow this procedure carefully; there are subtle reasons for doing these operations in this order.
- Decide that failover is appropriate (see above).
- Check that the slaves are equally up to date, e.g. on both, perform mysql> show slave status and confirm that Relay_Log_Pos: is the same value. (If not, choose the one with the higher value to become the new master.)
- Find the IP address of the new master.
# for i in mysql-102 mysql-104 mysql-106 ; do host "$i".cs.uwaterloo.ca ; done
mysql-102.cs.uwaterloo.ca has address 172.19.152.27
mysql-104.cs.uwaterloo.ca has address 172.19.154.27
mysql-106.cs.uwaterloo.ca has address 172.19.155.27
- Redirect clients to use new master via DNS
- Use infoblox to change IP address and PTR record for mysql.cs to the new master.
- This can be performed either in the InfoBlox GUI or via the Infoblox Command Line Interface.
- Via the InfoBlox GUI: search for "mysql.cs" with the search tool in the upper right corner of the interface. Open the record, find the IP address, and update it to the new master's IP address. Then find the PTR record and update it to the new master's IP address as well.
- Take down former master and ensure it cannot come back by accident. This must happen before new master becomes writeable.
- If possible: on former master, stop salt and prevent it from restarting by accident. (Bug: The -disabled part doesn't actually disable it.)
# service salt-minion stop
# mv /etc/init.d/salt-minion{,-disabled}
- If possible: on the former master, change the firewall to block all client traffic:
- Uncomment MySQL(REJECT) net $FW on the final line of /etc/shorewall/rules, then restart the firewall by running shorewall restart
- Of the following 4 options, do only the first one possible:
- On the former master, # service mysql stop and mv /etc/init.d/mysql{,-disabled} to prevent mysql from restarting as master automatically
- lxc container stop, and break the container by altering its config file: # lxc-info --name mysql-102.cs.uwaterloo.ca and verify State: STOPPED. If not stopped, force a container stop with lxc-stop -n mysql-102.cs.uwaterloo.ca and recheck the state. Do not proceed until the former master is verified stopped.
- Take down the IAAS lxc host and prevent it from starting
- Stop the world from reaching host - ona port shutoff (with comment): Search for former master port at ONA by MAC/IP address:
mysql-102 = 00:16:3e:20:7b:cf/10.15.152.27, mysql-104 = 00:16:3e:86:2b:a9/10.15.154.27, mysql-106 = 00:16:3e:3c:8d:61/10.15.155.27
- Once the above is done, # service salt-minion stop on the former slaves to avoid changes being pushed from the salt master during these steps.
- Firewall rule change: comment out MySQL(REJECT) net $FW on the new master, then restart the firewall by running shorewall restart. At this point, clients that have transitioned to 104 will find it works, but is read-only.
- mysql on the other slave: change master to the new master (needs the file position: run show master status on the new master and record the log file name and log position).
- Replace the MASTER_PASSWORD with the password recorded on each host in /etc/mysql/{$host}_slave.cnf (the passwords are the same for all three hosts).
(Bug: What is the second CHANGE MASTER line for?)
mysql-106# mysql -e 'stop slave'
mysql-106# mysql -e 'CHANGE MASTER TO MASTER_HOST="mysql-104.cs.uwaterloo.ca",MASTER_USER="mysql-106_slave", MASTER_PASSWORD="{$slave_password}", MASTER_LOG_FILE="binlog.000002", MASTER_LOG_POS=332'
mysql-106# mysql -e 'CHANGE MASTER TO MASTER_HOST="mysql-104.cs.uwaterloo.ca",MASTER_USER="mysql-106_slave", MASTER_LOG_FILE="binlog.000002", MASTER_LOG_POS=332'
mysql-106# mysql -e 'start slave'
- mysql on the new master: stop slave; reset slave all; then edit /etc/mysql/conf.d/server-id.cnf to comment out read_only and service mysql restart. At this point, 104 will have active client writes.
- Typically, the most important client host is the web server, currently cs.uwaterloo.ca. If it still has the old IP address cached, clear the cache.
- On the web server:
nscd --invalidate hosts
- Test normal operation, especially the pt-table-checksum example, on the new master. (A post-failover verification sketch follows this list.)
- On both the new master and slave (but NOT the old master):
service salt-minion start
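A minimal post-failover verification sketch, using the example hosts above (104 promoted to master, 106 still a slave), run from linux.cscf:
host mysql.cs.uwaterloo.ca                                            # resolves to the new master's IP?
ssh mysql-104.cs "mysql -e 'show variables like \"read_only\"'"       # OFF on the new master?
ssh mysql-104.cs "mysql -e 'show master status'"                      # binlog file and position present?
ssh mysql-106.cs "mysql -e 'show slave status\G'" | grep -E 'Master_Host|Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master'
Master_Host on 106 should name mysql-104, both replication threads should be Yes, and Seconds_Behind_Master should be 0 or falling.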
The normal procedure is that the new master stays master; the former master is repaired as required and returned to service as a slave of the new master. This completes the emergency process. The process of MySQL HA Restoring Broken Master can be carried out by trained personnel later.
Slave Failure
If a slave has failed, we perform a restore-from-backups from the master.
In this example, we use mysql-104 as the slave to recover and mysql-102 as the master.
The backup-machine referred to below is linux.cscf.
${MYSQL_DATA_DIR} is /data/mysql.
0.0 On the slave to recover, stop mysql.
root@mysql-104:~# service mysql stop
0.1 Verify MySQL is actually stopped. The following should return nothing.
root@mysql-104:~# ps -ef | grep '[m]ysql'
1. Clean up default databases/tables created by MySQL package installation.
root@mysql-104:~# rm -rf ${MYSQL_DATA_DIR}; mkdir -p ${MYSQL_DATA_DIR}
2. Create local space for storing backups in /tmp.
root@mysql-104:~# mkdir -p /tmp/mysql-restore
root@mysql-104:~# chmod 700 /tmp/mysql-restore
3. Verify that last backup of the node configuration data is suitable.
Simple sanity checks include checking the filesize, creation time etc.
root@backup-machine:/backup/mysql/mysql-104.cs# ls -l | grep 'etcmysql.tar' | tail -n 1
4. Copy remote slave configuration backup into local directory.
root@backup-machine:/backup/mysql/mysql-104.cs# rsync -a $(ls | grep 'etcmysql.tar' | tail -n 1) root@mysql-104:/tmp/mysql-restore/
5. Verify that last backup of the CURRENT MASTER NODE is the backup you
want. Simple sanity checks include checking the filesize, creation time
etc.
root@backup-machine:/backup/mysql/mysql-master# ls -ld *-innobackupex | tail -n 1
6. Copy remote master backup into local directory.
root@backup-machine:/backup/mysql/mysql-master# rsync -a $(ls | grep 'innobackupex' | tail -n 1) root@mysql-104:/tmp/mysql-restore/
7. Create temporary holding directory for configuration data.
root@mysql-104:~# mkdir /tmp/mysql-restore/config
root@mysql-104:~# chmod 700 /tmp/mysql-restore/config
8. Unpack slave configuration data.
root@mysql-104:~# tar -xf /tmp/mysql-restore/*-etcmysql.tar -C /tmp/mysql-restore/config/
9. Remove any existing MySQL configuration on the system.
root@mysql-104:~# rm -rf /etc/mysql/{.*,*}
10. Restore slave configuration data.
root@mysql-104:~# mv /tmp/mysql-restore/config/etc/mysql/{.*,*} /etc/mysql/
11. Prepare percona innobackup. This is "destructive" and will modify the
backups you've copied over.
root@mysql-104:~# innobackupex --use-memory=2G --apply-log /tmp/mysql-restore/*-innobackupex
Should look more or less as follows:
root@mysql-104:/tmp/mysql-restore/config/etc/mysql# innobackupex --use-memory=2G --apply-log /tmp/mysql-restore/*-innobackupex
160315 19:09:17 innobackupex: Starting the apply-log operation
IMPORTANT: Please check that the apply-log run completes successfully.
At the end of a successful apply-log run innobackupex
prints "completed OK!".
innobackupex version 2.3.3 based on MySQL server 5.6.24 Linux (x86_64) (revision id: 525ca7d)
xtrabackup: cd to /tmp/mysql-restore/mysql-102-1458034021-innobackupex
xtrabackup: This target seems to be not prepared yet.
xtrabackup: xtrabackup_logfile detected: size=2097152, start_lsn=(320319246)
xtrabackup: using the following InnoDB configuration for recovery:
xtrabackup: innodb_data_home_dir = ./
xtrabackup: innodb_data_file_path = ibdata1:10M:autoextend
xtrabackup: innodb_log_group_home_dir = ./
xtrabackup: innodb_log_files_in_group = 1
xtrabackup: innodb_log_file_size = 2097152
xtrabackup: using the following InnoDB configuration for recovery:
xtrabackup: innodb_data_home_dir = ./
xtrabackup: innodb_data_file_path = ibdata1:10M:autoextend
xtrabackup: innodb_log_group_home_dir = ./
xtrabackup: innodb_log_files_in_group = 1
xtrabackup: innodb_log_file_size = 2097152
xtrabackup: Starting InnoDB instance for recovery.
xtrabackup: Using 2147483648 bytes for buffer pool (set by --use-memory parameter)
InnoDB: Using atomics to ref count buffer pool pages
InnoDB: The InnoDB memory heap is disabled
InnoDB: Mutexes and rw_locks use GCC atomic builtins
InnoDB: Memory barrier is not used
InnoDB: Compressed tables use zlib 1.2.8
InnoDB: Using CPU crc32 instructions
InnoDB: Initializing buffer pool, size = 2.0G
InnoDB: Completed initialization of buffer pool
InnoDB: Highest supported file format is Barracuda.
InnoDB: The log sequence numbers 319143451 and 319143451 in ibdata files do not match the log sequence number 320319246 in the ib_logfiles!
InnoDB: Database was not shutdown normally!
InnoDB: Starting crash recovery.
InnoDB: Reading tablespace information from the .ibd files...
InnoDB: Restoring possible half-written data pages
InnoDB: from the doublewrite buffer...
InnoDB: 128 rollback segment(s) are active.
InnoDB: Waiting for purge to start
InnoDB: 5.6.24 started; log sequence number 320319246
xtrabackup: Last MySQL binlog file position 838665, file name /var/log/mysql/binlog.000018
xtrabackup: starting shutdown with innodb_fast_shutdown = 1
InnoDB: FTS optimize thread exiting.
InnoDB: Starting shutdown...
InnoDB: Shutdown completed; log sequence number 320320828
xtrabackup: using the following InnoDB configuration for recovery:
xtrabackup: innodb_data_home_dir = ./
xtrabackup: innodb_data_file_path = ibdata1:10M:autoextend
xtrabackup: innodb_log_group_home_dir = ./
xtrabackup: innodb_log_files_in_group = 2
xtrabackup: innodb_log_file_size = 50331648
InnoDB: Using atomics to ref count buffer pool pages
InnoDB: The InnoDB memory heap is disabled
InnoDB: Mutexes and rw_locks use GCC atomic builtins
InnoDB: Memory barrier is not used
InnoDB: Compressed tables use zlib 1.2.8
InnoDB: Using CPU crc32 instructions
InnoDB: Initializing buffer pool, size = 2.0G
InnoDB: Completed initialization of buffer pool
InnoDB: Setting log file ./ib_logfile101 size to 48 MB
InnoDB: Setting log file ./ib_logfile1 size to 48 MB
InnoDB: Renaming log file ./ib_logfile101 to ./ib_logfile0
InnoDB: New log files created, LSN=320320828
InnoDB: Highest supported file format is Barracuda.
InnoDB: 128 rollback segment(s) are active.
InnoDB: Waiting for purge to start
InnoDB: 5.6.24 started; log sequence number 320321036
xtrabackup: starting shutdown with innodb_fast_shutdown = 1
InnoDB: FTS optimize thread exiting.
InnoDB: Starting shutdown...
InnoDB: Shutdown completed; log sequence number 320326703
160315 19:09:21 completed OK!
root@mysql-104:/tmp/mysql-restore/config/etc/mysql#
12. Restore the prepared backup.
root@mysql-104:~# innobackupex --copy-back /tmp/mysql-restore/*-innobackupex
Should start and end as follows:
root@mysql-104:/etc/mysql# innobackupex --copy-back /tmp/mysql-restore/*-innobackupex
160315 19:11:16 innobackupex: Starting the copy-back operation
IMPORTANT: Please check that the copy-back run completes successfully.
At the end of a successful copy-back run innobackupex
prints "completed OK!".
innobackupex version 2.3.3 based on MySQL server 5.6.24 Linux (x86_64) (revision id: 525ca7d)
160315 19:11:16 [01] Copying ib_logfile0 to /data/mysql/ib_logfile0
160315 19:11:16 [01] ...done
160315 19:11:16 [01] Copying ib_logfile1 to /data/mysql/ib_logfile1
160315 19:11:16 [01] ...done
160315 19:11:16 [01] Copying ibdata1 to /data/mysql/ibdata1
160315 19:11:17 [01] ...done
160315 19:11:17 [01] Copying ./blah/db.opt to /data/mysql/blah/db.opt
160315 19:11:17 [01] ...done
160315 19:11:17 [01] Copying ./blah/tags.MYD to /data/mysql/blah/tags.MYD
160315 19:11:17 [01] ...done
160315 19:11:17 [01] Copying ./blah/tags.MYI to /data/mysql/blah/tags.MYI
160315 19:11:17 [01] ...done
160315 19:11:17 [01] Copying ./blah/tags.frm to /data/mysql/blah/tags.frm
[...]
160315 19:11:17 [01] Copying ./sbtest/db.opt to /data/mysql/sbtest/db.opt
160315 19:11:17 [01] ...done
160315 19:11:17 [01] Copying ./sbtest/sbtest1.frm to /data/mysql/sbtest/sbtest1.frm
160315 19:11:17 [01] ...done
160315 19:11:17 [01] Copying ./xtrabackup_info to /data/mysql/xtrabackup_info
160315 19:11:17 [01] ...done
160315 19:11:17 [01] Copying ./xtrabackup_binlog_pos_innodb to /data/mysql/xtrabackup_binlog_pos_innodb
160315 19:11:17 [01] ...done
160315 19:11:17 completed OK!
root@mysql-104:/etc/mysql#
13. Fix database file ownership and permissions.
root@mysql-104:~# chown -R mysql:mysql ${MYSQL_DATA_DIR}
root@mysql-104:~# chmod 700 ${MYSQL_DATA_DIR}
14. Remove spurious innodb logfiles.
root@mysql-104:~# rm ${MYSQL_DATA_DIR}/ib_logfile*
15. Restart MySQL service
root@mysql-104:~# service mysql restart
16. Retrieve master log position to start slaving from.
root@mysql-104:~# cat /tmp/mysql-restore/*-innobackupex/xtrabackup_binlog_info
17. Issue CHANGE MASTER command.
root@mysql-104:~# mysql -e 'CHANGE MASTER TO MASTER_HOST="${CURRENT_MASTER_FQDN}", MASTER_USER="mysql-104_slave", MASTER_PASSWORD="the_password", MASTER_LOG_FILE="${LOG_FILE_NAME}", MASTER_LOG_POS=${LOG_FILE_POS}'
18. Begin slaving.
root@mysql-104:~# mysql -e 'START SLAVE;'
19. Verify slaving.
root@mysql-104:~# mysql -e 'SHOW SLAVE STATUS\G'
...should return Slave_IO_Running: Yes, Slave_SQL_Running: Yes, and an empty Last_Error:. For example:
mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: mysql-102.cs.uwaterloo.ca
Master_User: mysql-104_slave
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: binlog.000024
Read_Master_Log_Pos: 49711
Relay_Log_File: mysqld-relay-bin.000009
Relay_Log_Pos: 49854
Relay_Master_Log_File: binlog.000024
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 49711
Relay_Log_Space: 50154
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 102
1 row in set (0.00 sec)
If Seconds_Behind_Master: is not 0, wait a few seconds and re-run the command until it reaches 0, to ensure that the slave catches up with the master properly. (A convenience loop for this is sketched below.)
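If you prefer not to re-run the command by hand, a simple loop (purely a convenience sketch) does the same thing; interrupt it with Ctrl-C once both threads report Yes and Seconds_Behind_Master reaches 0:
while true ; do
  mysql -e 'SHOW SLAVE STATUS\G' | grep -E 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master|Last_Error'
  echo
  sleep 5
done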
Restoring Broken Old Master After Failover
Caution: this process is meant to cover most situations; the output of commands should be reviewed to make sure it makes sense. We haven't documented all of the edge cases of failover and recovery, so if there are missing parts, they should be filled in below.
- Decide that we're ready to fix the old master (this does NOT need to happen during or right after the emergency of failing over the database; it will wait until there's time).
- If the old master lxc container (eg, 102) is up and you can ssh in from linux.cscf, skip ahead to the "Mysql Config" section below. Otherwise:
- To avoid some trial-and-error checking, find out how the old master was brought down; the restore will depend in part on that work. Check the ST for the failover, and possibly check with the staff person who failed over the machine. If there are no hints, follow these steps in increasing order of complexity.
- you need to know the IAAS lxc host, which you can find by checking the Infrastructure containers list: https://cs.uwaterloo.ca/cscf/internal/infrastructure/inventory/virtual-host-index/
- If that list isn't conclusive, search for your LXC container in inventory and hopefully either the inventory record, or its eDocs page, will list its last known location.
- if the IAAS lxc host is up and you can ssh in from linux.cscf, do so and restart the lxc container, in a similar fashion to:
ubuntu1404-202:~# ssh dc-3558-411
Welcome to Ubuntu 14.04.4 LTS (GNU/Linux 3.19.0-56-generic x86_64)
root@dc-3558-411:~# lxc-info --name mysql-102.cs.uwaterloo.ca
Name: mysql-102.cs.uwaterloo.ca
State: STOPPED
root@dc-3558-411:~# lxc-start -d -n mysql-102.cs.uwaterloo.ca
- if the IAAS lxc host is not up, you can restart it by finding it in inventory and determining its lom-cs{barcode}.cs.uwaterloo.ca hostname.
See https://cs.uwaterloo.ca/cscf/internal/infrastructure/inventory/IAAS-hardware-index/ for current data (last seen 2016-04-04).
Note that the lom-cs* interfaces can only be reached from systems on VLAN 15 (e.g. CSCF workstations).
- visit the URL for the lom-cs hostname. Credentials for lom-cs* interfaces can be found in the CSCF safe under "LOM".
- once logged in, confirm that the machine is powered off and select the "Power ON / OFF" option to turn it on. A minute later, the LOM page should auto-update from a thumbnail of a booting machine to a thumbnail of a command prompt. The host should now be accessible via ssh; proceed with the "IAAS lxc host is up" item above.
- if, after restarting the machine, it is not ssh accessible, the ona port might have been shut off.
- search for the IAAS lxc host's regular IP address (i.e., the IP for dc-3558-411, not the IP for the LOM) and find its port. On the port's page, look for its status (enabled or disabled); if disabled, a comment explaining why should be present. If it's something you are comfortable re-enabling, erase the comment and re-enable; hopefully ssh to the IAAS lxc host will then succeed and you may proceed with the "IAAS lxc host is up" item above.
Mysql Config:
- ssh to the old master host.
- Check that the firewall is configured to reject incoming connections from MySQL clients. Ensure MySQL(REJECT) net $FW is NOT commented out at the bottom of /etc/shorewall/rules.
- Restart the firewall by running shorewall restart
- edit /etc/mysql/conf.d/server-id.cnf to uncomment read_only
- "unbreak" /usr/sbin/mysqld if it was renamed to 'mysqld-x' - possibly =mv /usr/sbin/mysqld{-x,}
- service mysql start should work and result in "mysql start/running, process nnnnnn", not "start: Job failed to start".
- verify mysql is running cleanly: check /var/log/mysql/error.log for problems, and run mysql from the command line. If it runs, onward to configuring as a slave.
Configuring as a slave:
- currently this host is operating as a read-only master with no slaves.
- show slave status; should return an empty set.
- show master status; should return a binlog file and position. Note these results.
- The master binlog on this host will have stopped just before that position, hopefully cleanly. You can review the previous binlog file by running mysqlbinlog /var/log/mysql/binlog.{nnnnnn} | less
- to configure as a slave, we need to find the binlog transaction on the new master which corresponds to the last transaction on this machine. If there is a clean match, the new master was caught up to this old master. If there isn't, you might need to be clever.
- Follow the above instructions for re-slaving 106 from 104: "mysql on 106: change master to 104 (needs file position: run show master status on 104)". A sketch of the re-slaving commands follows this list.
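For reference, a sketch of the re-slaving commands on the old master (here 102, re-slaving to 104), following the same pattern as the failover section above. The binlog file, position, and password are placeholders: take the file and position from show master status on the new master (matched against the binlog review above), and the password from /etc/mysql/mysql-102_slave.cnf; the replication user name is assumed to follow the per-host {$host}_slave pattern used elsewhere in this document.
mysql-102# mysql -e 'stop slave'
mysql-102# mysql -e 'CHANGE MASTER TO MASTER_HOST="mysql-104.cs.uwaterloo.ca", MASTER_USER="mysql-102_slave", MASTER_PASSWORD="{$slave_password}", MASTER_LOG_FILE="{$binlog_file}", MASTER_LOG_POS={$binlog_pos}'
mysql-102# mysql -e 'start slave'
mysql-102# mysql -e 'show slave status\G'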
Move Some (but not all) Databases from Old MySQL server to New HA Cluster
This section assumes:
- mysql 5.5
- the high-availability architecture is one master with two slaves using asynchronous replication
- 102 denotes mysql-102.cs, the first slave of 104
- 104 denotes mysql-104.cs, the master
- 106 denotes mysql-106.cs, the second slave of 104
- 172 denotes mysql-172.cs, the old server
- mysql.cs is a CNAME or extra A record for mysql-172.cs
- mysqltest.cs is a CNAME or extra A record for mysql-104.cs
First, copy the selected databases from the old server to the new (and prepare for replication). This is not time sensitive.
- On the old mysql server:
- In general, you can use mysqldump to dump the databases you want to move. For the March 2016 move of mysql.cs, use mysqlbackupgrants to dump all the grants and mysqldumpexcept to dump all the databases except a few we need to ignore. Run the dump twice, separated by enough time to get an unambiguous binlog position.
- PATH="/usr/local/lib/mysql/bin:$PATH"
- mysqlbackupgrants ; mysqldumpexcept ; echo ; sleep 5 ; mysql --defaults-file=/etc/mysql/debian.cnf -e 'show master status' ; echo ; sleep 5 ; mysqldumpexcept ; mysqlbackupgrants
- Compare the dumps.
- diff `ls -tr /var/backups/mysql/*-show-grants.sql | tail -2`
- diff -rs `ls -dtr /var/backups/mysql/*-mysqldumpexcept | tail -2`
- Repeat the above dump and compare until there is no difference in the dumps. For the current load on mysql.cs, this almost always succeeds the first time.
- Note the binlog file name and position, for use in the CHANGE MASTER command later.
- If it doesn't already exist, grant replication slave for the new master.
- if there is already a replication slave, note the password recorded on each host in /etc/mysql/{$host}_slave.cnf (the passwords are the same for all three hosts).
- On asgard:
- Wait for the hourly rsync cron job to copy your dumps. Or rsync it now.
- rsync -a mysql.cs:/var/backups/mysql/ /mysql/backups/mysql.cs
- Then, copy one of the identical dumps to the new master.
- thegrants=`ls -tr /mysql/backups/mysql.cs | grep .-show-grants.sql | tail -1`
- echo thegrants=$thegrants ; ls -l /mysql/backups/mysql.cs/$thegrants # found the correct grants?
- thedump=`ls -tr /mysql/backups/mysql.cs | grep .-mysqldumpexcept | tail -1`
- echo thedump=$thedump ; ls -ld /mysql/backups/mysql.cs/$thedump # found the correct dump?
- tmpdir=`ssh mysqltest.cs mktemp -d`
- echo tmpdir=$tmpdir # got a good tmpdir?
- rsync -a /mysql/backups/mysql.cs/$thegrants /mysql/backups/mysql.cs/$thedump mysqltest.cs:$tmpdir
- On the new master:
- Init.
- thegrants=value-from-above
- thedump=value-from-above
- tmpdir=value-from-above
- cd $tmpdir
- Load the grants needed by the selected database(s). E.g.:
- (Details to be added later...)
- Load the selected database(s); a quick sanity check of the load is sketched after this list. E.g.:
- ls -l $thedump/equipment.sql # looks OK?
- mysql <$thedump/equipment.sql
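A quick sanity check of the load, sketched with the equipment example used above:
mysql -e 'show databases'                           # the moved database(s) appear?
mysql -e 'show tables in equipment'                 # table list looks right?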
Second, start replication of the copied database(s) from the old server to the new.
- On the new master:
- Start replication from the old mysql server.
- CHANGE MASTER TO MASTER_HOST='mysql-172.cs.uwaterloo.ca', MASTER_USER='mysql-104_slave', MASTER_PASSWORD='slavepassword-from-above', MASTER_LOG_FILE='binlog-File-from-above', MASTER_LOG_POS=Position-from-above;
- Create a temporary config file.
mysql-104# cat /etc/mysql/conf.d/temp-slave.cnf
# Configuration used only while we are moving the indicated database to this host.
[mysqld]
replicate-do-db = 'equipment'
- Restart mysqld
service mysql restart
- Confirm the slave has caught up
mysql -e 'show slave status\G'
- On the old mysql server (172), set up grants (if they don't already exist) for pt-table-checksum.
- GRANT PROCESS, REPLICATION SLAVE, SELECT, SUPER ON *.* TO 'pt_checksum'@'localhost' IDENTIFIED BY PASSWORD '*33...9E';
- GRANT ALL PRIVILEGES ON `percona`.* TO 'pt_checksum'@'localhost';
- GRANT PROCESS, REPLICATION SLAVE, SELECT, SUPER ON *.* TO 'pt_checksum'@'mysql-%.cs.uwaterloo.ca' IDENTIFIED BY PASSWORD '*33...9E';
- GRANT ALL PRIVILEGES ON `percona`.* TO 'pt_checksum'@'mysql-%.cs.uwaterloo.ca';
- On the old server, check that old and new servers have identical copies of the database being moved.
- 172#
pt-table-checksum --defaults-file=/etc/mysql/pt_checksum.cnf --replicate=percona.checksums --databases equipment --no-check-binlog-format --no-check-replication-filters
- On the new server, check slave is still OK, because with this temporary setup, some types of pt-table-checksum errors can break replication.
- 104#
mysql -e 'show slave status\G'
At this point, the old and new servers have identical copies of the database, clients are using the old server, and any changes on the old server are replicated to the new.
- Optionally, check the stability of the temporary slave setup by leaving it running overnight.
The following preparation should be done shortly before client visible downtime starts.
- If telling the clients to use a different server is done via DNS changes:
- Find the current TTL for the server hostname clients use. It should already be low (5 minutes is typical), not the default of 3 hours.
- Decrease the TTL to 1 minute. This is a major part of client visible downtime.
- Wait for the previous TTL to expire. (A dig check of the current TTL is sketched after this list.)
- Take one more backup in case of disaster in the switch-over to follow.
- 172#
/usr/local/lib/mysql/bin/mysqlbackupgrants ; /usr/local/lib/mysql/bin/mysqldumpexcept
- Immediately before moving the clients from the old server to the new, check that they are still synchronized.
- 104#
mysql -e 'show slave status\G'
Did replication break overnight?
- 172#
pt-table-checksum --defaults-file=/etc/mysql/pt_checksum.cnf --replicate=percona.checksums --databases equipment --no-check-binlog-format --no-check-replication-filters
Did the two copies of the database diverge overnight?
- 104#
mysql -e 'show slave status\G'
Did pt-table-checksum break replication now?
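For the "wait for the previous TTL to expire" step above, dig shows the TTL as the second field of the answer (the remaining cached TTL if answered from a caching resolver); a sketch:
dig +noall +answer mysql.cs.uwaterloo.ca
Re-run it until the old record has aged out and the answer shows the new 1-minute TTL (60 seconds or less).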
Caution: Client visible downtime starts here. For most of the downtime, clients of all the mysql databases on the old and new servers can read, but not write the database. This section should be done quickly, but carefully. Any mistakes may cause permanent data loss and extended down time.
You will need at least 2 windows. Each of the old and new servers will need its own mysql session.
- Make the old server read-only! (except for threads with SUPER or REPLICATION SLAVE privilege). This prevents clients from losing data by using the old server after replication stops.
- 172# mysql>
SET GLOBAL read_only = ON;
- Make the new server read-only! (except for threads with SUPER or REPLICATION SLAVE privilege). This prevents clients from losing data by using the new server before it is ready.
- 104# mysql>
SET GLOBAL read_only = ON;
- Show binlog position on old server.
- 172# mysql>
SHOW MASTER STATUS;
- Check that the new server is caught up (at least check: binlog file and position match above; 0 seconds behind master).
- 104# mysql>
show slave status\G
- Check that old and new servers have identical copies of the database being moved.
- 172#
pt-table-checksum --defaults-file=/etc/mysql/pt_checksum.cnf --replicate=percona.checksums --databases equipment --no-check-binlog-format --no-check-replication-filters
- Tell clients to stop using the old server and start using the new. Depending on the database(s) being moved, use the correct one of the following methods:
- Change the server hostname in all appropriate client config.
- Or, use infoblox to delete and recreate the appropriate CNAME pointing to the new server.
- Or, use infoblox to delete and recreate the appropriate virtual host (extra A record) with the IP address of the new server. Fix the PTR if it is missing or incorrect.
- Infoblox operations on A records may be performed via the Infoblox Command Line Interface. Read the instructions first; then as root on www152.cs you can su cs-inv and run: ~/infoblox-scripts/ibclispeedy -e 'conf zone "" modify host {hostname.cs.uwaterloo.ca} {new IP address}'
- At this point (if using DNS changes), some clients are using the new server and some are still using the old until their TTL expires. Both servers are still read-only. (A quick check that clients are moving is sketched after this list.)
- IMPORTANT: Stop replication on the new server! This prevents the database from being dropped on the new server in the next step.
- 104# mysql>
STOP SLAVE; RESET SLAVE ALL;
- Verify that replication has stopped on the new server! The following output should be empty.
- 104# mysql>
SHOW SLAVE STATUS\G
- On the old server, drop the database being moved! This prevents clients from losing data (or seeing incorrect old data) by accidentally using the old server. After this point, any client still trying to use the moved database on the old server will fail.
- 172# mysql>
DROP DATABASE equipment;
- On the old server, return to normal operation with the remaining databases.
- 172# mysql>
SET GLOBAL read_only = OFF;
- On the new server, return to normal operation with addition of the moved database(s).
- 104# mysql>
SET GLOBAL read_only = OFF;
- Delete the temporary config file on the new server and restart mysqld.
- 104#
rm /etc/mysql/conf.d/temp-slave.cnf
- 104#
service mysql restart
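Before treating the cut-over as done, it can help to confirm that clients are actually arriving on the new server and draining off the old one; a sketch, using the equipment example and hosts above:
host mysql.cs.uwaterloo.ca                          # (if using DNS) now resolves to the new server?
104# mysqladmin processlist | grep equipment        # client connections for the moved database now appear here
172# mysqladmin processlist | grep equipment        # and should no longer appear here once client TTLs expire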
At this point, client visible downtime is over and all operations should be back to normal (assuming the 1 minute TTL has expired for all clients). Some essential clean-up work remains.
- Perform tests to ensure old and new server are working normally.
- Are all the databases present that should be?
- Are all applications working correctly?
- Does the data look correct, e.g. most recent transactions in a history table.
- If telling the clients to use a different server is done via DNS changes:
- Change TTL back to normal for the service. 5 minutes is typical.
- If you are using the extra A record method, check that the PTR is correct.
At this point, slaves 102 and 106 probably will have replication holes in the moved database(s).
- First, you must run pt-table-checksum on the master to find and record any differences between master and slaves.
- 104#
pt-table-checksum --defaults-file=/etc/mysql/pt_checksum.cnf --replicate=percona.checksums --no-check-binlog-format --quiet
- Then, if there are diffs, run pt-table-sync to fix them.
- 104#
pt-table-sync --defaults-file=/root/.my.cnf --verbose --execute --replicate=percona.checksums h=localhost
Appendix
Sub-pages
The following sub-pages are expanded/included in this document.
Reference
The following pages are useful for reference.