Saturday, June 20, 2015

Failover with the MySQL Utilities: Part 2 – mysqlfailover

In the previous post of this series we saw how you could use mysqlrpladmin to perform manual failover/switchover when GTID replication is enabled in MySQL 5.6. Now we will review mysqlfailover (version 1.4.3), another tool from the MySQL Utilities that can be used for automatic failover.

Summary

  • mysqlfailover can perform automatic failover if MySQL 5.6’s GTID-replication is enabled.
  • All slaves must use --master-info-repository=TABLE.
  • The monitoring node is a single point of failure: don’t forget to monitor it!
  • Detection of errant transactions works well, but you have to use the --pedantic option to make sure failover will never happen if there is an errant transaction.
  • There are a few limitations, such as the inability to fail over only once and then exit, or the excessive CPU utilization, but they are probably not showstoppers for most setups.

Setup

We will use the same setup as last time: one master and two slaves, all using GTID replication. We can see the topology using mysqlfailover with the health command:
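(The hosts, ports and root credentials below are placeholders for this sketch, not the actual values of the setup.)

  $ mysqlfailover --master=root:pass@localhost:13001 \
                  --discover-slaves-login=root:pass \
                  health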
Note that --master-info-repository=TABLE needs to be configured on all slaves, otherwise the tool will exit with an error message.
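A minimal sketch of the corresponding configuration fragment on each slave (the file location depends on your installation):

  # In each slave's my.cnf:
  [mysqld]
  master-info-repository = TABLE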

Failover

You can use two commands to trigger automatic failover:
  • auto: the tool tries to find a candidate in the list of servers specified with --candidates, and if no good server is found in this list, it will look at the other slaves to see if one can be a good candidate. This is the default command.
  • elect: same as auto, but if no good candidate is found in the list of candidates, the other slaves will not be checked and the tool will exit with an error.
Let’s start the tool with auto:
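(Credentials and ports below are placeholders; since auto is the default command, it could even be omitted.)

  $ mysqlfailover --master=root:pass@localhost:13001 \
                  --discover-slaves-login=root:pass \
                  auto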
The monitoring console is visible and is refreshed every --interval seconds (default: 15). Its output is similar to what you get when using the health command.
Then let’s kill -9 the master to see what happens once it is detected as down.
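For example, assuming the master’s PID file lives at the path below (a placeholder, adapt it to your instance):

  $ kill -9 $(cat /path/to/master-data/mysqld.pid)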
Looks good! The tool is then ready to fail over to another slave if the new master becomes unavailable.
You can also run custom scripts at several points of execution with the --exec-before, --exec-after, --exec-fail-check and --exec-post-failover options, as sketched below.
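As an illustration only, a post-failover notification hook could be wired like this; the script path, its contents and the mail command are all assumptions, and the MySQL Utilities documentation should be checked for the exact arguments passed to each hook:

  $ mysqlfailover --master=root:pass@localhost:13001 \
                  --discover-slaves-login=root:pass \
                  --exec-post-failover=/usr/local/bin/notify_failover.sh \
                  auto

  # /usr/local/bin/notify_failover.sh (sketch):
  #!/bin/sh
  # Send a simple notification once failover has completed.
  echo "mysqlfailover promoted a new master at $(date)" \
      | mail -s "MySQL failover happened" dba@example.com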
However, it would be great to have a --failover-and-exit option to avoid flapping: the tool would detect the master failure, promote one of the slaves, reconfigure replication and then exit (this is what MHA does, for instance).

Tool registration

When the tool is started, it registers itself on the master by writing a few things in a dedicated table:
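(In the Utilities versions I have used, this registration lands in the mysql.failover_console table; the table name is worth double-checking against your version. A quick query shows which console is attached to the master.)

  mysql> SELECT * FROM mysql.failover_console;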
This is nice as it prevents you from starting several instances of mysqlfailover to monitor the same master: if we try, the second instance detects the existing registration and refuses to start.
With the fail command, mysqlfailover will monitor replication health and exit in the case of a master failure, without actually performing failover.

Running in the background

In all previous examples, mysqlfailover was running in the foreground. This is very good for demos, but in a production environment you will likely prefer running it in the background. This can be done with the --daemon option:
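(A sketch with the same placeholder credentials as before; note that, at least in the Utilities versions I have seen, a --log file must be provided when running as a daemon.)

  $ mysqlfailover --master=root:pass@localhost:13001 \
                  --discover-slaves-login=root:pass \
                  --log=/var/log/mysqlfailover.log \
                  --daemon=start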
and it can be stopped with:
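(Again a sketch; if you started the daemon with a custom --pidfile, the same value should be passed when stopping.)

  $ mysqlfailover --daemon=stop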

Errant transactions

If we create an errant transaction on one of the slaves, it will be detected.
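For example, a write executed directly on a slave (and therefore recorded in that slave’s binary log only) is enough to create one; the database name is purely for illustration:

  # Executed directly on one of the slaves, not on the master:
  mysql> CREATE DATABASE errant_test;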
However this does not prevent failover from occurring! You have to use --pedantic:
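(A sketch of the corresponding invocation, with the same placeholder credentials as before.)

  $ mysqlfailover --master=root:pass@localhost:13001 \
                  --discover-slaves-login=root:pass \
                  --pedantic \
                  auto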

Limitations

  • As with mysqlrpladmin, the slave election process is not very sophisticated and it cannot be tuned.
  • The server on which mysqlfailover is running is a single point of failure.
  • Excessive CPU utilization: once it is running, mysqlfailover hogs one core. This is quite surprising.

Conclusion

mysqlfailover is a good tool to automate failover in clusters using GTID replication. It is flexible and looks reliable. Its main drawback is that there is no easy way to make it highly available itself: if mysqlfailover crashes, you will have to restart it manually.