Commit 21004460 authored by Iustin Pop

Fix bootstrap.MasterFailover race with watcher



This fixes a recently diagnosed race condition between master failover
and the watcher.

Currently, the master failover first stops the master daemon, checks
that the IP is no longer reachable, and only then distributes the
updated configuration. Between the stop and the distribution, the
watcher can start the master daemon on the old node again, since ssconf
still points to it as the master (and all nodes vote for it
accordingly).
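The "all nodes vote so" remark can be sketched as follows; the function and data shapes here are purely illustrative, not Ganeti's real voting API. Each node reports the master named in its local ssconf, and a candidate needs a strict majority of those votes to start as master. While the stale ssconf is still distributed everywhere, the old master wins the vote, so the watcher can legitimately restart it:

```python
# Hypothetical sketch of ssconf-based master voting (not Ganeti code).
from collections import Counter

def elected_master(ssconf_votes):
    """Return the node with a strict majority of votes, or None.

    Each entry in ssconf_votes is the master name one node's local
    ssconf currently holds.
    """
    (winner, count), = Counter(ssconf_votes).most_common(1)
    return winner if count * 2 > len(ssconf_votes) else None

# Before the fixed config is distributed, every node still names node1:
print(elected_master(["node1", "node1", "node1"]))  # node1
# After distribution, the vote flips to the new master:
print(elected_master(["node2", "node2", "node1"]))  # node2
```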

In even stranger cases, the master daemon starts and, before it manages
to open the configuration file, the file is updated under it, which
means the master will respond to QueryClusterInfo with another node as
the real master.

This patch reorders the actions during master failover:

- first, we redistribute a fixed config; this means the old master will
  refuse to update its own config file and ssconf, and most jobs that
  change state will fail to finish
- we then immediately kill the old master daemon; after this step, the
  watcher will be unable to start it, since the daemon will refuse
  startup
- only then do we check for IP reachability, etc.
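The race and its fix can be shown with a toy simulation (all names here are illustrative, not the real watcher or bootstrap API): under the old ordering, a watcher run that fires between "stop daemon" and "distribute config" revives masterd on the old node; under the new ordering the config is fixed first, so the restart never happens.

```python
# Toy model of one node's ssconf view and master daemon state.
class Node:
    def __init__(self, name):
        self.name = name
        self.ssconf_master = name   # who this node's ssconf names as master
        self.masterd_running = True

    def watcher_run(self):
        # The watcher restarts masterd only if ssconf says we are master.
        if self.ssconf_master == self.name and not self.masterd_running:
            self.masterd_running = True

def failover(old, new_master_name, order):
    """Run the failover steps in 'order', with a watcher firing after
    each step; return whether masterd is still running on the old node."""
    for step in order:
        if step == "stop":
            old.masterd_running = False
        elif step == "distribute":
            old.ssconf_master = new_master_name
        old.watcher_run()
    return old.masterd_running

old_order = ["stop", "distribute"]   # racy: watcher revives masterd
new_order = ["distribute", "stop"]   # fixed: ssconf already points away

print(failover(Node("node1"), "node2", old_order))  # True  (daemon revived)
print(failover(Node("node1"), "node2", new_order))  # False (stays stopped)
```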

I've tested the new version against concurrent launches of the watcher;
while my tests are not very exhaustive, two things can happen: the
watcher sees the daemons as dead and tries to restart them, which also
fails; or it simply gets an error while reading from the master daemon.
Both of these outcomes should be OK.

Signed-off-by: Iustin Pop <iustin@google.com>
Reviewed-by: Michael Hanselmann <hansmi@google.com>
parent bd407597
@@ -602,12 +602,36 @@ def MasterFailover(no_voting=False):
   logging.info("Setting master to %s, old master: %s", new_master, old_master)
 
+  try:
+    # instantiate a real config writer, as we now know we have the
+    # configuration data
+    cfg = config.ConfigWriter()
+
+    cluster_info = cfg.GetClusterInfo()
+    cluster_info.master_node = new_master
+    # this will also regenerate the ssconf files, since we updated the
+    # cluster info
+    cfg.Update(cluster_info, logging.error)
+  except errors.ConfigurationError, err:
+    logging.error("Error while trying to set the new master: %s",
+                  str(err))
+    return 1
+
+  # if cfg.Update worked, then it means the old master daemon won't be
+  # able now to write its own config file (we rely on locking in both
+  # backend.UploadFile() and ConfigWriter._Write()); hence the next
+  # step is to kill the old master
+
   logging.info("Stopping the master daemon on node %s", old_master)
+
   result = rpc.RpcRunner.call_node_stop_master(old_master, True)
   msg = result.fail_msg
   if msg:
     logging.error("Could not disable the master role on the old master"
                   " %s, please disable manually: %s", old_master, msg)
 
   logging.info("Checking master IP non-reachability...")
+
   master_ip = sstore.GetMasterIP()
   total_timeout = 30
   # Here we have a phase where no master should be running
@@ -622,15 +646,7 @@ def MasterFailover(no_voting=False):
                   " continuing but activating the master on the current"
                   " node will probably fail", total_timeout)
 
-  # instantiate a real config writer, as we now know we have the
-  # configuration data
-  cfg = config.ConfigWriter()
-  cluster_info = cfg.GetClusterInfo()
-  cluster_info.master_node = new_master
-  # this will also regenerate the ssconf files, since we updated the
-  # cluster info
-  cfg.Update(cluster_info, logging.error)
 
   logging.info("Starting the master daemons on the new master")
   result = rpc.RpcRunner.call_node_start_master(new_master, True, no_voting)
   msg = result.fail_msg
@@ -639,6 +655,7 @@ def MasterFailover(no_voting=False):
                   " %s, please check: %s", new_master, msg)
     rcode = 1
 
+  logging.info("Master failed over from %s to %s", old_master, new_master)
   return rcode
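The comment in the first hunk relies on locking in backend.UploadFile() and ConfigWriter._Write(). As a generic illustration only (this is not Ganeti's actual implementation), an exclusive file lock makes a second, concurrent writer fail outright instead of silently interleaving writes with the holder:

```python
# Generic demonstration of exclusive file locking with fcntl.flock;
# any resemblance to Ganeti's config handling is purely schematic.
import fcntl
import tempfile

with tempfile.NamedTemporaryFile() as cfg_file:
    fd1 = open(cfg_file.name, "r+")
    fcntl.flock(fd1, fcntl.LOCK_EX)          # first writer holds the lock

    fd2 = open(cfg_file.name, "r+")
    try:
        # A non-blocking attempt on a second file description fails
        # immediately while the lock is held.
        fcntl.flock(fd2, fcntl.LOCK_EX | fcntl.LOCK_NB)
        locked = True
    except OSError:
        locked = False                       # second writer is refused

    print(locked)  # False: the concurrent writer cannot take the lock
    fd1.close()
    fd2.close()
```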