I think the easiest and most powerful syncing app for Linux is the Dropbox client.
It offers 2 GB of free cloud space, and up to 18 GB if you refer friends.
They also run promotions when you buy a new phone (Samsung +50 GB, HTC +25 GB, etc.).
It has a great set of features, and you can always access your files from the web too.

I would use rsync + cron to back up files in such situations. cron runs rsync (or a bash script) at the specified times. For example, every 2 hours dump the database to a file, compress it, and upload it to the backup server. I think this is the way to go, unless there is an easier way of doing it.
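Those steps can be sketched as a small script (a sketch only: the mysqldump invocation, the backup host, and the paths are assumptions, not anything from this thread):

```shell
#!/bin/sh
# Sketch of the dump/compress/upload steps as small functions.
# The database command, host name, and paths are illustrative assumptions.

# compress_dump SRC DEST: gzip SRC into DEST.
compress_dump() {
    gzip -c "$1" > "$2"
}

# run_backup: dump all databases, compress the dump,
# push it to the backup box, then clean up the local copies.
run_backup() {
    stamp=$(date +%Y%m%d-%H%M)
    mysqldump --all-databases > "/tmp/db-$stamp.sql"
    compress_dump "/tmp/db-$stamp.sql" "/tmp/db-$stamp.sql.gz"
    rsync -az "/tmp/db-$stamp.sql.gz" backup@backup.example.com:/srv/backups/
    rm -f "/tmp/db-$stamp.sql" "/tmp/db-$stamp.sql.gz"
}
```

The "every 2 hours" part is then a single crontab entry, e.g. `0 */2 * * * /usr/local/bin/backup.sh`.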

I cannot provide a guide unless I know what exactly you want. I've read through your post several times and I'm torn between two readings:

1. You want to build a failover setup that automatically switches to the next live server in a pool of available servers.
2. You want to sync a whole server to a second server.

Option 2 is actually quite pointless: it could easily break the destination server and would waste a lot of resources. If you still want it, you can use the search function and look for the rsync guide by @rudra. I was never fond of syncing whole servers.

For option 1 you need to do some research and find a solution for your own HA failover setup. I can't point you in a particular direction, though, because I've never bothered with such setups myself. However, a group I'm in has developed such a tool and currently uses it successfully on our own servers for failover in case of downtime.

The latest commit of picored seems to be a year old. Another important point: it works by monitoring signals from each server in the pool and then running a script one or more times to change the DNS entries. It does not clone a backup system or keep it updated incrementally.
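That monitor-then-act pattern looks roughly like this (a hypothetical illustration, not picored's actual code; the health URL, hostnames, and function names are made up):

```shell
#!/bin/sh
# Hypothetical sketch of a pool monitor: probe each server, report the
# first one that answers. A real tool would then run a DNS-update script.

# probe_host HOST: return 0 if HOST answers its (assumed) health endpoint.
probe_host() {
    curl -fs --max-time 5 "http://$1/health" > /dev/null
}

# next_alive HOST...: print the first host in the pool that responds,
# or "none" if every probe fails.
next_alive() {
    for host in "$@"; do
        if probe_host "$host"; then
            echo "$host"
            return 0
        fi
    done
    echo "none"
    return 1
}
```

A real setup would run this from cron or a loop and invoke the DNS-change script whenever the answer differs from the current record.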

I read somewhere that a program called Heartbeat, combined with an incremental backup tool, might do a nice job here if what you want is a fail-safe.

Note that backups, incremental or not, work much better when applied only to the relevant application files.

If you want a fast way to set up a clone system, then maybe rsync. A fresh install is always better unless we are talking about a very compatible or identical system here.

Oh, I forgot to mention that in addition to picored we use rsync to sync only the website files (instead of the whole server), and we use a Galera MySQL cluster for database HA failover.

I was looking for a mechanism for cloning entire filesystems. One option was the OpenVZ dump, in the case of an OpenVZ container, but it requires the provider's intervention, and not all providers are willing to provide the VZ dump. I then learnt about rsync, but I didn't find it convincing. Apart from that, I didn't find any way to sync an entire container as such.