Whoops, looks like you missed the list.  This is good information.  As
I remember it, my slave kept losing its internet connection due to some
config problems on the U network, and it did not take long before it
could not recover.  However, this was 3+ years ago...

---------- Forwarded message ----------
From: Chad Walstrom <chewie at wookimus.net>
Date: Aug 10, 2007 4:13 PM
Subject: Re: [tclug-list] Duplicate MySQL server in two separate locations
To: Brock Noland <brockn at gmail.com>


It is a pain to restart an application after writes are so far out of
sync that it cannot catch up.  However, MySQL should be able to operate
well under most downtime conditions.  The exception is if you try to do
a fault-event takeover.  Both the slave and the master are capable of
writing to the database, but the slave's writes just never make it
upstream.  If you have a master/slave relationship, then the only way
to really have a slave-takeover-for-master scenario is to break the
slave conditions until you can manually set them up again.
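
"Breaking the slave conditions" basically means stopping the replication
threads and throwing away the slave's recorded coordinates, which in
stock MySQL is a STOP SLAVE followed by RESET SLAVE.  Here is a minimal,
untested sketch of that step in Python with MySQLdb; the host name,
user, and password are placeholders, not anything we actually run:

    import MySQLdb

    def promote_slave(host="localhost", user="repl_admin", passwd="secret"):
        """Stop replication and discard this box's knowledge of its master."""
        conn = MySQLdb.connect(host=host, user=user, passwd=passwd)
        cur = conn.cursor()
        cur.execute("STOP SLAVE")    # stop the IO and SQL threads
        cur.execute("RESET SLAVE")   # forget the recorded master coordinates
        cur.close()
        conn.close()

    if __name__ == "__main__":
        promote_slave()

After that the box happily takes writes as a standalone master, and the
old master knows nothing about them, which is exactly why you have to be
careful about letting the old master come back on its own.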

For example, we're using ucarp to multicast/coordinate an IP address
takeover scheme.  If for some reason the slave takes over as master, I
force it to drop all knowledge it had of being a slave, so that if the
old master comes back up, the new master won't try to re-sync from the
point it left off.  If the old master notices it has been downgraded to
a slave, I have it shut down and refuse to come back up until I've had a
chance to re-sync it with the current master.
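
The promotion side is just the STOP SLAVE / RESET SLAVE step above wired
to ucarp's upscript hook.  The "shut down and refuse to come back" side
hangs off the downscript hook; a rough, untested sketch in Python, with
the marker-file path and init command made up for illustration:

    import subprocess
    import sys

    REFUSE_FLAG = "/var/lib/mysql/NEEDS_RESYNC"   # placeholder marker file

    def vip_down(interface, vip):
        """Run when ucarp decides this node has lost the shared address."""
        # Leave a marker so an init wrapper can refuse to start mysqld
        # until an admin has re-synced this host against the current
        # master, then stop the running server.
        open(REFUSE_FLAG, "w").close()
        subprocess.call(["/etc/init.d/mysql", "stop"])

    if __name__ == "__main__":
        # In typical setups ucarp hands the hook the interface name and
        # the shared address as arguments.
        vip_down(sys.argv[1], sys.argv[2])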

Having the old master sync with the new one is possible to script, but
you need a backup snapshot and information about the binlog (file and
position).  I will need to write such an application soon, and plan to
do it in something like perl or python.  The reason is that you must
keep your mysql session open while the read locks are placed on the
tables; if the session ends, the locks are cleared, and doing that in
portable shell is a challenge.  The locks are only necessary while you
take your backup (or create your LVM snapshot) and retrieve the "show
master status" data.  Once the backup is made, you're in the clear.
One thing I did notice is that you can run a FLUSH LOGS style command
to generate a new binlog with a predictable file position.
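
For what it's worth, the skeleton of that script is short.  A hedged
sketch in Python with MySQLdb; the credentials and LVM volume names are
placeholders, and the important part is that the same connection stays
open from the FLUSH TABLES WITH READ LOCK until the snapshot exists:

    import subprocess
    import MySQLdb

    def snapshot_with_coordinates():
        """Take a consistent snapshot plus the binlog coordinates for a slave."""
        conn = MySQLdb.connect(host="localhost", user="repl_admin",
                               passwd="secret")
        cur = conn.cursor()
        try:
            # The read lock only holds while this session stays alive.
            cur.execute("FLUSH TABLES WITH READ LOCK")
            # (A FLUSH LOGS here would instead rotate to a fresh binlog
            # with a predictable starting position.)
            cur.execute("SHOW MASTER STATUS")
            binlog_file, binlog_pos = cur.fetchone()[:2]
            # Take the backup while the lock is held; an LVM snapshot
            # keeps the locked window short.  Volume names are made up.
            subprocess.call(["lvcreate", "--snapshot", "--size", "2G",
                             "--name", "mysql-snap", "/dev/vg0/mysql"])
        finally:
            cur.execute("UNLOCK TABLES")
            conn.close()
        return binlog_file, binlog_pos

    if __name__ == "__main__":
        print(snapshot_with_coordinates())

Once the snapshot exists you can unlock and copy the data at your
leisure; the saved file/position pair is what you would feed to CHANGE
MASTER TO on the box being re-synced.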

So, it is scriptable; someone just needs to do the work to make it
happen.  I can't say that I've successfully found such a script on the
net yet.