Sysadmin #2 - Meet the Flustercluck

5 May 2018 11:57 pm

What a flustercluck

Proxmox is clever: a lot of the fiddly console incantations required to keep LXC containers working have been placed behind a rather nice web GUI. Under the old system it took effort to create a new container, and I could never quite remember the process. This led to old containers lying around, as I couldn't remember the correct way to dispose of their filesystems, and, even worse, couldn't figure out an automated way to back them up.
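For the curious, the incantations the GUI wraps are roughly these (the container ID, template name and options here are illustrative, not what I actually typed):

```shell
# Create container 101 from a downloaded Debian template
pct create 101 local:vztmpl/debian-9-standard_9.3-1_amd64.tar.gz \
    --hostname webserver --memory 512 \
    --net0 name=eth0,bridge=vmbr0,ip=dhcp

# Dispose of it cleanly, filesystem and all -- the bit I could never remember
pct stop 101
pct destroy 101
```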

The solution to this was to make Proxmox automatically back the containers up to my NAS every night, and to set up a second machine onto which I can replicate my containers, should the main machine go down. Doing this requires the creation of a cluster, the first step being to give it a name, so I chose a suitable one.
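On the command line, naming the cluster and scheduling the backups boil down to something like this (cluster name aside, the IDs, IP and storage name are illustrative):

```shell
# Name the cluster on the first node
pvecm create flustercluck

# Run this on the second machine to join it to the cluster
pvecm add 192.168.1.10

# The nightly job the GUI schedules is essentially a vzdump run per container
vzdump 101 --storage nas-backup --mode snapshot --compress lzo
```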

Replication

For the past week I have been slowly recreating each LXC container within Proxmox. For some this just required a basic installation of software and a "reboot" of the container. I'm pretty good at installing Apache and PHP now; I've got it down to a five-minute job. Even setting up MySQL was easy, once I figured out how to dump the database to a file. Setting up my mail was a task I wasn't looking forward to. Email is complicated, full of arcane protocols and bolted-on newer ones that try to limit the amount of spam zooming across the Internet. A few years ago my machine accidentally turned into a spam factory, and getting it off the blacklists took a fair amount of effort.
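The MySQL move, once figured out, is a two-liner (usernames and the file name are illustrative):

```shell
# On the old machine: dump everything to a single file
mysqldump --all-databases --single-transaction -u root -p > all-databases.sql

# On the new container: feed it straight back in
mysql -u root -p < all-databases.sql
```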

Strangely, I followed a very complex set of instructions and ended up with an IMAP and SMTP server that worked first time. Well, they worked for receiving email, and I could read it so long as my clients were connected to the LAN. Trying to use my mobile phone on a 4G connection didn't work at all. Fixing this required some fussing in Proxmox to set a static IP address for my mail server; without it, the machine wouldn't report its FQDN, meaning the SSL cert was incorrect, meaning email clients went "ooh no, we barely like your self-signed certificate. We're not having an incorrect domain name in there. Nope". System admin is full of weird chains of consequences like this.
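The fussing amounts to pinning the container's address rather than leaving it on DHCP, something like this (the container ID, addresses and gateway are illustrative):

```shell
# Give the mail container a static address so it always reports the same FQDN
pct set 102 --net0 name=eth0,bridge=vmbr0,ip=192.168.1.25/24,gw=192.168.1.1

# Inside the container: check what the machine thinks it is called,
# because this is the name the SSL cert has to match
hostname --fqdn
```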

Finally, after a week of gradual deployment, I had Proxmox working properly and serving everything, so I could turn off the old server, install Proxmox on that too, and then migrate the containers across.

"Sign here to say you accept this has no guarantee"

Being of a moderately cautious nature, I didn't just stick a USB boot stick in my old server and wipe out the drive. Instead I found a random unused 500GB HDD (because everyone has those lying around, right?) and installed Proxmox onto that. If it later turns out something very important was on my old server, I can still get the data back.

To see what was on this random HDD, I needed a SATA-USB caddy. I used to have one, but it caught fire, frying a drive in the process. Fortunately our local Maplin hasn't quite shut down yet - they're still selling stock, but are also selling the shop fittings. A rummage through their back shelves, which now resemble something like a jumble sale, resulted in a USB-SATA dongle, a PSU and a VGA cable that would be handy for my KVM switcher. I also found a plastic box designed to hide power strips and wires. To be allowed to buy this unboxed stuff, I had to sign a thing to say I accepted the items might not work, and that I couldn't return them.

The VGA cable had the wrong connectors on it. I need what is called a "VGA extension" lead; the one I bought was male/male. Into the box of purgatory that will go.

The USB-SATA dongle doesn't work. The PC recognises it, the PC sort of knows an HDD is connected... but it thinks the HDD is zero bytes in size. I guess I'm proof that you can sell any old crap to people if you make it cheap enough. The 12V PSU might be useful... maybe.

The plastic box fits perfectly under the TV and hides a horrid mess of wires... so at least my trip wasn't a complete waste.

Migration Successful

Migrating the containers from one node of the cluster to another was easy. I don't quite know how it works, but it does. Now I have a server that I feel confident using, and a backup strategy that I know works. Mostly, though, I now have a setup that can cope with total hardware failure without requiring me to spend hours reinstalling everything and hand-copying files out of tar archives.
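For the record, the whole migration appears to be one command per container (IDs and the node name are illustrative; the --restart flag bounces a running container rather than requiring it to be stopped first):

```shell
# Move container 101 from this node to the other one
pct migrate 101 node2 --restart
```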

Virtualisation is cool

The servers I have running are in containers, which are best thought of as chroot jails with their own network stack. They conveniently turn entire Linux installations into folders of files that are easy to transport. Combine this with a filesystem that allows datasets and snapshots, and moving machines is easy - they neither know nor care that they've been moved from one host to another. If things go very bad, I can just fire up VirtualBox on my desktop PC and run them off that.

Virtual machines are also cool. Before starting this journey on the real hardware and live data that I own, I tried it all out using VirtualBox instances on my desktop PC. I had a good play around with creating containers, replicating them across a cluster, and restoring backups. Being able to fire up a machine, play with it and just delete the whole thing afterwards really speeds up the learning process. I used to do things like this by swapping HDDs in my desktop PC, and it wasn't very good.

Now I have a nicely working server, I can get on with using it.

Not included in this wonderful sounding arrangement: Four hours of fruitless debugging, fussing and temporarily breaking it all because I couldn't upload the banner image for this post to the server. Turns out I was missing the php-gd library, not that anything told me this. The best I got was "502 Bad Gateway" which is the Internet's way of saying "it's broken...". I got to sit in the sun and do this though, which was nice. And there's beer in the fridge, and tomorrow is a Bank Holiday.
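Once diagnosed, the fix itself was the easy bit; on a Debian-based container it's roughly this (the PHP-FPM service name depends on which PHP version is installed - 7.0 here is illustrative):

```shell
# Install the missing GD image library for PHP...
apt-get install php-gd

# ...and restart PHP so it picks the extension up
systemctl restart php7.0-fpm
```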