As frequent readers may or may not remember, I rebuilt my primary server last year, and in the process set up a fairly hefty RAID-5 array (24 terabytes) to store data. As one might reasonably expect, backing all of that stuff up is fairly difficult. I'd need to buy enough external hard drives to fit a copy of everything on there, plus extra space to store incremental backups for some length of time. Another problem is that both Leandra and the backup drives would be in the same place at the same time, so if anything happened at the house I'd not only not have access to Leandra anymore, but there's an excellent chance that the backups would be wrecked, leaving me doubly screwed.

Here are the requirements I had for making offsite backups:

Backups of Leandra had to be offsite, i.e., not in the same state, ideally not on the same coast.

Reasonably low cost. I ran the numbers on a couple of providers and paying a couple of hundred dollars a month to back up one server was just too expensive.

Linux friendly.

My data gets encrypted with a key only I know before it gets sent to the backup provider.

A number of different backup applications had to support the provider, so that if my software of choice was ever abandoned I could switch to another without migrating all of my data.

Easy to restore data from backup.

After a week or two of research and experimentation, as well as pinging various people to get their informed opinions, I decided to go with Backblaze as my offsite backup provider, and Duplicity as my backup software. Here's how I went about it, as well as a few gotchas I ran into along the way.
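To give a sense of what the end result looks like, here's roughly the shape of a Duplicity run against Backblaze's B2 service. Everything here (the bucket name, the B2 application key, the GPG key ID, the paths) is a placeholder, not my actual configuration:

```shell
# Sketch of a GPG-encrypted Duplicity backup to a Backblaze B2 bucket.
# BUCKET, B2_KEY_ID, B2_APP_KEY, and the GPG key ID are all placeholders.
BUCKET="leandra-backups"
B2_KEY_ID="0123456789ab"
B2_APP_KEY="K001notarealkey"

# Incremental backup of /home, forcing a fresh full backup every 30 days.
# --encrypt-key encrypts everything locally, with my key, before a single
# byte leaves the machine.
duplicity --encrypt-key DEADBEEF --full-if-older-than 30D \
    /home "b2://${B2_KEY_ID}:${B2_APP_KEY}@${BUCKET}/home"

# Restoring is more or less the same command in reverse:
duplicity restore "b2://${B2_KEY_ID}:${B2_APP_KEY}@${BUCKET}/home" /tmp/restored-home
```

That hits all of the requirements above: the provider is cheap, Duplicity is one of several tools that speak B2, and the encryption happens on my end with a key only I hold.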

Let's say there's a website that you want to make a local mirror of. This means that you can refer to it offline, and you can make offline backups of it for archival. Let's further state that you have access to some server someplace with enough disk space to hold the copy, and that you can start a task, disconnect, and let it run to completion some time later, with GNU Screen for example. Let's further state that you want the local copy of the site to not be broken when you load it in a browser; all the links should work, all the images should load, and so forth. One of the quickest and easiest ways to do this is with the wget utility.
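Here's the sort of invocation I mean, with the URL and destination directory as placeholders:

```shell
# Mirror a site for offline browsing. --mirror turns on recursion and
# timestamping, --convert-links rewrites links so they work locally,
# --page-requisites pulls in the images and stylesheets each page needs,
# --adjust-extension adds .html where needed, and --no-parent keeps wget
# from wandering up the directory tree. --wait=1 is just politeness to
# the remote server.
wget --mirror --convert-links --adjust-extension --page-requisites \
     --no-parent --wait=1 --directory-prefix="$HOME/mirrors" \
     https://www.example.com/
```

Kick that off inside a screen session, detach, and check back on it later.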

A couple of weeks back, somebody I know asked me how I went about deploying SSL certificates from the Let's Encrypt project across all of my stuff. Without going into too much detail about what SSL and TLS are (but here's a good introduction to them), the Let's Encrypt project will issue SSL certificates to anyone who wants one, provided that they can prove somehow that they control the domain they're cutting a certificate for. You can't use Let's Encrypt to generate a certificate for google.com because they'd try to verify the request against the server actually answering for google.com, which you don't control, and error out. The actual process is complex and kind of involved (it's crypto, so this isn't surprising), but the nice thing is that there are a couple of software packages out there that automate practically everything. All you have to do is run a handful of commands (which you can then copy into a shell script) and turn the result into a cron job. The software I use on my systems is called Acme Tiny, and here's what I did to set everything up...
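Before getting into the details, here's the rough shape of the whole Acme Tiny workflow, so you can see the moving parts. The filenames, paths, and domain are placeholders:

```shell
# Generate an account key (this identifies you to Let's Encrypt) and a
# separate private key for the site itself.
openssl genrsa 4096 > account.key
openssl genrsa 4096 > domain.key

# Build a certificate signing request for the domain.
openssl req -new -sha256 -key domain.key -subj "/CN=www.example.com" > domain.csr

# Run acme_tiny.py; it proves control of the domain by dropping challenge
# files into a directory your webserver serves at
# http://www.example.com/.well-known/acme-challenge/
python acme_tiny.py --account-key ./account.key --csr ./domain.csr \
    --acme-dir /var/www/challenges/ > ./signed_chain.crt
```

Those commands are exactly the kind of thing that goes straight into a shell script for a cron job to run every month or two, because Let's Encrypt certificates are deliberately short-lived.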

Regular readers have probably been wondering what's been going on that I haven't posted much. The short form, and the honest answer, is that I haven't had it in me to really post, aside from some stuff that I copy-and-pasted out of my notes, polished up a bit, and saved. The holiday season is always a busy time, and my life is no different from anyone else's in that regard.

Lyssa and I flew back to Pennsylvania at more or less the last minute about halfway through the month to celebrate an early Yule with our respective parents. Some last minute jiggery-pokery landed us a pair of get-seats-at-the-gate redeye flights to and from the other coast, which left us with the peculiar combination of jet lag and sleep deprivation. This resulted in my getting sick again not long after arrival, and I was in a fair amount of pain for several weeks because of it. Frequent readers are somewhat acquainted with my dental history, which reads like a classic farce as written by Hunter S. Thompson. Suffice it to say that I was living in a haze of pain that took most of the wind out of my sails without actually being overtly incapacitating. At least Lyssa and I spent some quality time with our nieces and nephews, and everyone seemed to enjoy their gifts.

I know I haven't posted much this month. The holiday season is in full effect and life, as I'm sure you know, has been crazy. I wanted to take the time to throw a quick tip up that I just found out about which, if nothing else, will make it easier to get up and running on a Raspberry Pi that you've received as a gift. Here's the situation:

You have a new account on a machine that you want to SSH into easily. So, you want to quickly and easily transfer over one or more of your SSH public keys to make it easier to log in automatically, and maybe make running Ansible a bit faster. Now, you could do it manually (which I did for many, many years), but if you're anything like me you'll probably mess it up at least once. Or, you could use the ssh-copy-id utility (which comes for free with SSH) to do it for you. Assuming that you already have SSH authentication keys, this is all you have to do:

[drwho@windbringer ~]$ ssh-copy-id -i .ssh/id_ecdsa.pub pi@jukebox
/bin/ssh-copy-id: INFO: Source of key(s) to be installed: ".ssh/id_ecdsa.pub"
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
pi@jukebox's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'pi@jukebox'"
and check to make sure that only the key(s) you wanted were added.

You can run this command again and again with a different pubkey, and it'll append it to the appropriate file on the other machine (~/.ssh/authorized_keys). And there you have it; your SSH pubkey has been installed all in one go. I wish I'd known about this particular trick... fifteen years ago?

As you may or may not be aware, I've been a customer of Dreamhost for many years now (if you want to give them a try, here's my referral link). Both professionally and personally, I've been hosting stuff with them without many complaints (their grousing about my websites being too large is entirely reasonable, given that I'm on their shared hosting plan). One thing that always got me about their SSL support, though, was that you had to buy a unique IP address from them if you wanted to use it. That cost a pretty penny, almost as much as I pay every year for hosting service. After all, there's the SNI extension to TLS, which essentially lets you put SSL on multiple websites hosted at the same IP address. It's been around since 2006 and has been supported by Apache since v2.2.12, so there wasn't any real reason not to offer it. On the other hand, IPv4 addresses are getting pretty thin on the ground, so paying for the privilege of having SSL on my website seemed worth it. Plus, Dreamhost has to sell services to stay in business, and sometimes that means paying for perks, as much as you or I might be annoyed by it.
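If you want to watch SNI doing its thing, openssl's s_client subcommand can show it (example.com here stands in for any site you like):

```shell
# -servername sends the hostname inside the TLS handshake itself, which
# is what lets a single IP address hand back different certificates for
# different sites. Pipe the result through x509 to see whose certificate
# actually came back.
openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null \
    | openssl x509 -noout -subject
```

Run it twice against the same IP with two different -servername values and you'll see two different certificates, which is the whole point.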

A couple of years ago Dreamhost started offering customers free SSL certificates through their partnership with the Let's Encrypt project. The idea is that you could click a couple of buttons in their control panel and they'd hook you up with an automatically renewing SSL cert for your website. So, of course I jumped at the opportunity, because I got tired of the self-signed certificate errors everybody was getting. Comes with the territory.

Last weekend, for whatever reason I got it in my head to e-mail customer support and ask them if I had to keep paying for a unique IP address if I was using a Let's Encrypt certificate on my website. I use acme-tiny to maintain the certs on my servers (I should write up how I do that one of these days), so... I figured the worst they could do was say "No."

As it turns out, if you use Let's Encrypt on Dreamhost, you do not have to keep paying for a unique IP address. It's safe to go into your control panel, click that tiny little 'x' button, and save yourself some money every year. I did so earlier today (about a week ago, as you'll reckon it) and everything seems copacetic. This also means it's safe to turn on SSL for every site you have there, and it won't cost you any more money. Though it would be good to donate to the Let's Encrypt project to support their work.

Let's assume that your management workstation has SSH, the Tor Browser Bundle and Ansible installed. Ansible does all of its work over an SSH connection, so there's no agent to install on any of your servers.

Let's assume that you only use SSH public key authentication to log into those servers. Password authentication is disabled with the directive PasswordAuthentication no in the /etc/ssh/sshd_config file.

Let's assume that you have sudo installed on all of those servers, and at least one account can use sudo without needing to supply a password. Kind of dodgy, kind of risky, mitigated by only being able to log in with the matching public key. That seems to be the devopsy way to do stuff these days.

Problem: How to use Ansible to log into and run commands on those servers over the Tor network?
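Here's the broad shape of one way to do it: point SSH's ProxyCommand at Tor's SOCKS proxy, then hand Ansible the same option. Treat the details as assumptions to verify on your own setup: the Tor Browser Bundle's SOCKS proxy usually listens on 127.0.0.1:9150 (a standalone tor daemon listens on 9050 instead), the netcat syntax below is the OpenBSD flavor, and the hostname and username are placeholders.

```shell
# First, prove the SSH leg works by hand. OpenBSD netcat's -X 5 -x
# options tunnel the connection through a SOCKS5 proxy -- here, Tor.
ssh -o ProxyCommand="nc -X 5 -x 127.0.0.1:9150 %h %p" admin@abcdefghijklmnop.onion

# Then hand the same option to Ansible for an entire play:
ansible-playbook -i inventory site.yml \
    --ssh-common-args='-o ProxyCommand="nc -X 5 -x 127.0.0.1:9150 %h %p"'
```

If you'd rather not type that every time, the same string can live in your inventory as ansible_ssh_common_args, set per host or per group.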

Regular readers no doubt noticed that my site was offline for a little while a few days ago (today, by the timestamp, because Bolt doesn't let me postdate articles, only set when they go live) because I was upgrading the software to the latest stable version. It went remarkably smoothly this time, modulo the fact that I had to manually erase the disk cache so the upgrade process could finish and not error out. Deleting the cache alone took nearly an hour, and in the process I discovered something I wish I'd known about when I first started using Bolt.