Project: Automated offsite backups for an NSLU2 – part 11

I’m setting up automated offsite backups from my NSLU2 to Amazon S3. With surprisingly little effort, I’ve managed to get a tool called s3sync running on the “slug” (as it’s known). s3sync is a Ruby script, so in order to run it, I had to install Ruby, which in turn meant that I had to replace the slug’s firmware with a different version of Linux, called Unslung. All of this worked pretty much as advertised in the tools’ respective documentation – for the details, see the previous posts in this series.

Having confirmed that s3sync worked as I’d expect it to, I needed to install it in a sensible place – I’d previously just put it in /tmp – set it up so that I could use SSL to encrypt the data while it was on its way to Amazon, and then write a script to synchronise at least one of the directories I want backed up. I’d then be able to test the script, schedule it, test the scheduling, and then I’d be done!

First things first – I was getting annoyed with not having some of my favourite packages installed on the slug, so:
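On Unslung, extra packages come from the Optware feeds via ipkg, so the installs would have been along these lines (the specific package names here are illustrative, not a record of what was actually installed):

```shell
# Refresh the Optware package list, then install a few creature comforts.
# (These particular packages are examples only.)
ipkg update
ipkg install bash less nano
```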

The chmod was required to stop non-root users (of whom I naturally have hordes on the slug :-) from being able to read the private key. Better to be safe than sorry. The directory I was syncing is a very small subdirectory of the area I want to back up to S3.
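Concretely, the move out of /tmp and the key-tightening would look something like the sketch below – the /home/s3sync path and the key-file name are assumptions on my part, and the `<my key ID>`-style placeholders stand in for the real credentials:

```shell
# Give s3sync a permanent home instead of /tmp (path is an assumption).
mkdir -p /home/s3sync
cp /tmp/s3sync/*.rb /home/s3sync/

# Keep the AWS credentials in a file only root can read.
cat > /home/s3sync/aws_keys <<'EOF'
export AWS_ACCESS_KEY_ID=<my key ID>
export AWS_SECRET_ACCESS_KEY=<my secret key>
EOF
chmod 600 /home/s3sync/aws_keys
```

With mode 600, any non-root account that pokes around in /home/s3sync gets “Permission denied” on the key file.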

Next, I created the <my key ID>.Backups bucket using jets3t Cockpit, and then ran the upload script:

-bash-3.1# ./upload.sh
-bash-3.1#

A quick check confirmed that the data had been uploaded. However, I found myself thinking – I’d like the tool to log a bit more than that. s3sync’s usage said that there was a “-v” option to run it in verbose mode, so I set that in the upload script and reran it. There was still no output, but I suspected that that was simply because there were no changes to upload… so I deleted the data from S3 using jets3t Cockpit, and reran. This time I got output:

Hooray! So, finally, I decided to try syncing up my entire “user data” share via a cron job, set to execute very soon. I modified the upload.sh script to point to the correct directory, and then edited /etc/crontab, adding a line saying:

42 22 * * * root /home/s3sync/upload.sh &> /tmp/s3sync.log

And then I waited until 10:42pm by the slug’s time (which, incidentally, seemed to have drifted a minute or so since the previous evening). At 10:42pm, I checked what processes were running:
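On the slug that check is just the stripped-down BusyBox ps piped through grep – something like:

```shell
# BusyBox ps takes few options, so just grep its output for the
# ruby process running s3sync; the second grep drops the grep itself.
ps | grep 'ruby\|s3sync' | grep -v grep
```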

Excellent. The logfile was there; nothing had been written yet, but checking the bucket showed that data was already being copied up. My best guess was that the logfile would be flushed at a later point.

At this point, all I could really do was wait – so it was time to leave the slug for the day and check on it the next. If everything had synchronised up correctly – and a download to another machine worked – then I would be able to say that I’d completed the project :-)