Category: Adventures with VPS

Continuing my series on the fun I’m having with GitLab CI, it’s time to discuss using it to automate nginx builds. I’ve previously written about how and why I use a custom build, and a few months ago, I decided to remove the monotony from the process.

Now, when a new mainline release is available, or OpenSSL releases a new version, I update a few CI variables, run a pipeline, and a new nginx binary is available within a few minutes. I’m also able to build for multiple versions of Debian.
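
As a rough illustration of that workflow, a pipeline with overridden variables can be kicked off through GitLab’s pipeline triggers API. The sketch below assumes a trigger token and uses placeholder variable names (NGINX_VERSION, OPENSSL_VERSION) and a placeholder project ID, not necessarily what my pipeline actually uses:

```go
// Minimal sketch: trigger a GitLab pipeline, passing new version numbers
// as CI variables. Variable names and the project ID are placeholders.
package main

import (
	"fmt"
	"log"
	"net/http"
	"net/url"
	"os"
)

func main() {
	form := url.Values{
		"token": {os.Getenv("TRIGGER_TOKEN")}, // pipeline trigger token
		"ref":   {"master"},
		// The pipeline would read these to decide which nginx and OpenSSL
		// tarballs to download and compile.
		"variables[NGINX_VERSION]":   {"1.15.8"},
		"variables[OPENSSL_VERSION]": {"1.1.1a"},
	}

	// Substitute the numeric project ID of the nginx-build repository.
	endpoint := "https://git.ethitter.com/api/v4/projects/PROJECT_ID/trigger/pipeline"

	resp, err := http.PostForm(endpoint, form)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	fmt.Println("pipeline trigger response:", resp.Status)
}
```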

I got started in WordPress development as many do, by writing a plugin. It, along with nearly a dozen others that I’ve released in the intervening years, doesn’t require much attention, so I’ve generally neglected even the most basic of maintenance: confirming each is compatible with the latest WordPress release and updating the readme accordingly. I’ve felt guilty about this for some time now, but it wasn’t until this weekend that the guilt compelled me to act.

After I set up GitLab’s continuous integration using DigitalOcean Droplets for autoscaling, I noticed that sometimes a runner would fail, abandoning a Droplet that I’d need to destroy manually. While this problem was infrequent, it was troublesome enough to warrant an automated solution. Not finding anything readily available, and thanks to DigitalOcean’s godo library, I put together a Go program to periodically cull stale Droplets.
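
The core of the program is a single pass over the account’s Droplets. This is a minimal sketch of that approach, assuming abandoned runners can be recognized by a name prefix and an age threshold; both values, and the token handling, are stand-ins for the real configuration:

```go
// Sketch: cull autoscaling Droplets that a failed runner left behind,
// identified here by an assumed name prefix and an age threshold.
package main

import (
	"context"
	"log"
	"os"
	"strings"
	"time"

	"github.com/digitalocean/godo"
)

const (
	namePrefix = "runner-"        // assumed prefix used by the autoscaled runners
	maxAge     = 90 * time.Minute // anything older is treated as abandoned
)

func main() {
	client := godo.NewFromToken(os.Getenv("DIGITALOCEAN_TOKEN"))
	ctx := context.Background()

	// A single page is enough for a small account; a real run would paginate.
	droplets, _, err := client.Droplets.List(ctx, &godo.ListOptions{PerPage: 200})
	if err != nil {
		log.Fatal(err)
	}

	for _, d := range droplets {
		if !strings.HasPrefix(d.Name, namePrefix) {
			continue
		}

		created, err := time.Parse(time.RFC3339, d.Created)
		if err != nil || time.Since(created) < maxAge {
			continue
		}

		if _, err := client.Droplets.Delete(ctx, d.ID); err != nil {
			log.Printf("failed to delete %s (%d): %v", d.Name, d.ID, err)
			continue
		}
		log.Printf("deleted stale Droplet %s (%d)", d.Name, d.ID)
	}
}
```

Run from cron or a systemd timer, something like this is enough to keep abandoned Droplets from accumulating.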

I recently wondered how long it had been since I added ethitter.com to the Strict Transport Security preload list, as it’s been several years. I added my domain without fully understanding the consequences, though I’ve been fortunate to avoid any problems that could’ve resulted.

Getting back to my original question, version control made it easy to pinpoint August 18, 2014 as the date my domain landed in Chromium’s list: https://src.chromium.org/viewvc/chrome?view=revision&revision=290306. At the time, it was one of fewer than 1,000 domains in the preload list, the bulk of which consisted of Google’s own domains. There are currently over 40,000 preloaded domains in Chrome/Chromium. It’s a good thing preloading hasn’t been an issue, because it takes a while to get off the list.
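
For anyone considering preloading their own domain, the submission requirements boil down to serving a Strict-Transport-Security header with a long max-age, includeSubDomains, and the preload directive on every HTTPS response. A minimal Go sketch of a server sending such a header (in practice the header would typically be added in the web server’s config):

```go
// Sketch: serve the Strict-Transport-Security header that HSTS preloading
// expects: a long max-age, includeSubDomains, and the preload directive.
package main

import (
	"log"
	"net/http"
)

// hsts wraps a handler and adds the STS header to every response.
func hsts(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Two-year max-age; once preloaded, browsers refuse plain HTTP for
		// the domain and all of its subdomains, which is the consequence
		// to understand before submitting.
		w.Header().Set("Strict-Transport-Security",
			"max-age=63072000; includeSubDomains; preload")
		next.ServeHTTP(w, r)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello over HTTPS\n"))
	})

	// cert.pem and key.pem are placeholders; the header only matters over TLS.
	log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", hsts(mux)))
}
```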

Two weeks ago, the VPS that hosts this site moved to a machine that had been patched for the Spectre vulnerabilities. Immediately, I began receiving warnings about high load, and these alerts continued unabated for over a week. I tried moving services to other hosts, and I reduced the resources allocated to nginx and php-fpm, all to no avail.

As I continued to monitor and debug the situation, fail2ban regularly appeared among the top resource consumers, but I didn’t think much of it; fail2ban has always been a voracious resource user, but it’s an indispensable tool, so removing it wasn’t an option.

Earlier today, I switched from custom PHP builds to sury.org's PHP 7.2 builds. I've been using his builds on my Photon server for some time now, and I've lost interest in maintaining my own builds, so this seemed like a natural progression.

Previously, I used curl to trigger dyndnsd updates via my Raspberry Pis. This worked well for many months, but lacked IPv6 support as dyndnsd was interpreting my IP from the request. Fortunately, the daemon accepts parameters for IPv4 and IPv6 addresses, so I wrote a Go program to handle regular updates. It still relies on cron, but passes explicit IP values and moves all options to a configuration file.
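
Stripped of the config handling, each update amounts to a single authenticated GET against the dyndnsd endpoint. A minimal sketch, with the parameter names (myip, myip6) and all values as illustrative stand-ins for what the real program reads from its configuration file:

```go
// Sketch: push explicit IPv4 and IPv6 addresses to dyndnsd rather than
// letting it infer an address from the request. Endpoint, credentials,
// and parameter names are illustrative placeholders.
package main

import (
	"io"
	"log"
	"net/http"
	"net/url"
)

func main() {
	// Stand-ins for values normally loaded from the configuration file.
	endpoint := "https://dyn.example.com/nic/update"
	user, pass := "home", "secret"

	params := url.Values{
		"hostname": {"home.example.com"},
		"myip":     {"203.0.113.10"}, // IPv4 address to publish
		"myip6":    {"2001:db8::10"}, // IPv6 address to publish
	}

	req, err := http.NewRequest(http.MethodGet, endpoint+"?"+params.Encode(), nil)
	if err != nil {
		log.Fatal(err)
	}
	// dyndnsd protects updates with HTTP basic auth.
	req.SetBasicAuth(user, pass)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	log.Printf("dyndnsd response: %s %s", resp.Status, body)
}
```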

After a few successful months of testing Packet.net, I've once again moved git.ethitter.com. The decision was purely financial: my GitLab instance doesn't receive enough traffic to warrant Packet.net's pricing. As far as reliability and value were concerned, Packet.net was excellent. I would've appreciated built-in backups, but otherwise, I have no complaints about the service.

It will likely come as little surprise that git.ethitter.com is back on Linode. Compared to DigitalOcean, Linode is slightly more generous with its resources, and GitLab wants all the resources it can get.

The migration itself was quite easy, with most of the time spent preparing the server; GitLab's backup/restore process did most of the hard work. Now I just have to finish the ancillary setup, like monitoring.