Failover with the MySQL Utilities – Part 1: mysqlrpladmin
MySQL Utilities are a set of tools provided by Oracle to perform many kinds of administrative tasks. When GTID replication is enabled, two tools can be used for slave promotion: mysqlrpladmin and mysqlfailover. We will review mysqlrpladmin (version 1.4.3) in this post.
Summary
- mysqlrpladmin can perform manual failover/switchover when GTID replication is enabled.
- You need to have your servers configured with --master-info-repository = TABLE or to add the --rpl-user option for the tool to work properly.
- The check for errant transactions is failing in the current GA version (1.4.3), so be extra careful when using it, or watch bug #73110 to see when a fix is committed.
- There are some limitations, for instance the inability to pre-configure the list of slaves in a configuration file, or the inability to check that the tool will work well without actually doing a failover or switchover.
Failover vs switchover
mysqlrpladmin can help you promote a slave to be the new master when the master goes down, and then automate the replication reconfiguration that follows this promotion. There are two separate scenarios: unplanned promotion (failover) and planned promotion (switchover). This is more than a question of vocabulary: the distinction has implications for the way you have to run the tool.
Setup for this test
To test the tool, our setup will be a master with 2 slaves, all using GTID replication.
mysqlrpladmin can show us the current replication topology with the health command:
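For instance, the health check for this setup could be run as follows (the root account, localhost and ports 13001-13003 are assumptions for this sandbox, not values taken from the original output):

```
# Show the replication topology and the health of each server.
# Credentials, host and ports are placeholders for this test setup.
mysqlrpladmin --master=root:pwd@localhost:13001 \
              --discover-slaves-login=root:pwd \
              health
```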
As you can see, we have to specify how to connect to the master (no surprise) but instead of listing all the slaves, we can let the tool discover them.
Simple failover scenario
What will the tool do when performing failover? Essentially we will give it the list of slaves and the list of candidates and it will:
- Run a few sanity checks
- Elect a candidate to be the new master
- Make the candidate as up-to-date as possible by making it a slave of all the other slaves
- Configure replication on all the other slaves to make them replicate from the new master
After killing -9 the master, let’s try failover:
This time, the master is down so the tool has no way to automatically discover the slaves. Thus we have to specify them with the --slaves option.
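A failover attempt could then look like this (accounts and ports are again illustrative placeholders):

```
# The master is dead, so the slaves are listed explicitly, along with the
# candidates that may be promoted.
mysqlrpladmin --slaves=root:pwd@localhost:13002,root:pwd@localhost:13003 \
              --candidates=root:pwd@localhost:13002 \
              failover
```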
However, we get an error:
The error message is clear, but it would have been nice to have such details when running the health command (maybe as a warning instead of an error). That would allow you to check beforehand that the tool can run smoothly, rather than discovering in the middle of an emergency that you have to dig through the documentation to find which option is missing.
Let’s choose to specify the replication user:
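With the replication user passed explicitly, a command along these lines goes through (the rpl:rpl credentials are a placeholder):

```
# Same failover, with the replication user given so the tool can
# reconfigure replication on the remaining slaves.
mysqlrpladmin --slaves=root:pwd@localhost:13002,root:pwd@localhost:13003 \
              --candidates=root:pwd@localhost:13002 \
              --rpl-user=rpl:rpl \
              failover
```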
Simple switchover scenario
Let’s now restart the old master and configure it as a slave of the new master (by the way, this can be done with mysqlreplicate, another tool from the MySQL Utilities). If we want to promote the old master, we can run:
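A switchover for this scenario could look like the following (connection settings are placeholders for this sandbox; the server promoted by the failover is assumed to be listening on port 13002 and the old master on port 13001):

```
# The current master is alive, so the slaves can be discovered.
# --new-master is the server to promote, --demote-master turns the
# current master into a slave of the new one.
mysqlrpladmin --master=root:pwd@localhost:13002 \
              --new-master=root:pwd@localhost:13001 \
              --discover-slaves-login=root:pwd \
              --demote-master \
              --rpl-user=rpl:rpl \
              switchover
```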
Notice that the master is available in this case, so we can use the --discover-slaves-login option. Also notice that we can tune the verbosity of the tool with --quiet or --verbose, or even log the output to a file with --log.
We also used --demote-master to make the old master a slave of the new master. Without this option, the old master will be isolated from the other nodes.
Extension points
In general, performing switchover/failover at the database level is only part of the job: the other components of the application usually need to be told that something has changed for the application to keep working correctly.
This is where the extension points are handy: you can execute a script before switchover/failover with --exec-before and after switchover/failover with --exec-after.
For instance with these simple scripts:
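Something along these lines would do, writing the marker files that are checked below (the script names and contents are illustrative):

```
#!/bin/bash
# before.sh (hypothetical name): leave a trace before the operation starts.
echo "operation starting: $(date)" > /tmp/before
```

```
#!/bin/bash
# after.sh (hypothetical name): leave a trace once the operation is done.
echo "operation finished: $(date)" > /tmp/after
```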
We can execute:
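For instance (same placeholder connection settings as before):

```
# Run the switchover and call our scripts around it.
mysqlrpladmin --master=root:pwd@localhost:13001 \
              --new-master=root:pwd@localhost:13002 \
              --discover-slaves-login=root:pwd \
              --demote-master \
              --rpl-user=rpl:rpl \
              --exec-before=/path/to/before.sh \
              --exec-after=/path/to/after.sh \
              switchover
```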
Looking at /tmp/before and /tmp/after, we can check that our scripts have been executed. If an external script does not seem to work, using --verbose can be useful to diagnose the issue.
What about errant transactions?
We already mentioned that errant transactions can create lots of issues when a new master is promoted in a cluster running GTIDs. So the question is: how does mysqlrpladmin behave when there is an errant transaction?
Let’s create an errant transaction:
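The simplest way is to execute a write directly on one of the slaves, bypassing replication; the statement and the port below are arbitrary choices for this example:

```
# Run a statement directly on the slave listening on port 13003: it gets a
# GTID that does not exist on the master, i.e. an errant transaction.
mysql -h 127.0.0.1 -P 13003 -u root -e "CREATE DATABASE errant_test"
```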
and let’s try to promote localhost:13003 as the new master:
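For instance with a switchover such as (placeholder credentials again):

```
# Promote localhost:13003. With a working check, the tool should refuse
# to proceed because of the errant transaction, but it goes ahead.
mysqlrpladmin --master=root:pwd@localhost:13002 \
              --new-master=root:pwd@localhost:13003 \
              --discover-slaves-login=root:pwd \
              --demote-master \
              --rpl-user=rpl:rpl \
              switchover
```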
Oops! The tool does not check for errant transactions, although the documentation suggests it should, and a quick look at the code confirms the check is supposed to be there: it simply does not work. This is a major issue, as you cannot run failover/switchover reliably with GTID replication if errant transactions are not correctly detected. The problem has been reported as bug #73110.
Some limitations
Apart from the missing errant transaction check, I also noticed a few limitations:
- You cannot use a configuration file listing all the slaves. This becomes tedious once you have a large number of slaves. In such a case, you should write a wrapper script around mysqlrpladmin to generate the right command for you (a sketch is shown after this list).
- The slave election process is either automatic or it relies on the order of the servers given in the --candidates option. This is not very sophisticated.
- It would be useful to have a --dry-run mode which would validate that everything is configured correctly without actually failing/switching over. This is something MHA does, for instance.
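As a rough illustration of the first point, a wrapper could build the --slaves list from a plain text file; the file name, its format (one user:password@host:port per line) and the credentials are assumptions:

```
#!/bin/bash
# Hypothetical wrapper around mysqlrpladmin.
# slaves.txt contains one user:password@host:port entry per line.
SLAVES=$(paste -sd, slaves.txt)      # join the lines with commas
CANDIDATE=$(head -n1 slaves.txt)     # prefer the first slave of the list

mysqlrpladmin --slaves="$SLAVES" \
              --candidates="$CANDIDATE" \
              --rpl-user=rpl:rpl \
              failover
```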
Conclusion
mysqlrpladmin is a very good tool to help you perform manual failover/switchover in a cluster using GTID replication. The main caveat at this point is the failing check for errant transactions, which requires a lot of care before executing the tool.