The good people at EFF have done some amazing work and integrated automatic wildcard certificate creation/renewal in their API client, certbot. The ability to handle wildcard certificates was finally released with ACME API v2 in March 2018 and certbot v0.22.0 and newer. This made a lot of people on the Internet very happy 🙂

Another great thing about certbot is its modular structure, which allows you to use plugins to install the newly created certificates in your web server or to automatically perform the domain validation that proves you own the domain.

Instructions

Let’s talk about an example where you have multiple domains with DigitalOcean, one of the leading cloud providers in the market. I’m expecting you to be on a Debian-based system, maybe Ubuntu 16.04. Ubuntu 16.04 is a good example because the version of certbot that ships with it is newer than v0.22.0 and thus can handle wildcard certificates.

The next thing you will want to do is create a new dedicated API key for this. Call it something like “letsencrypt” so you will know in the future what the key is used for. You need to store that key on your machine – replace “KEY” with your actual DigitalOcean API key in the commands below:
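The DNS plugin reads the key from a credentials file. A minimal sketch – the path under ~/.secrets is my choice, not something certbot mandates:

```shell
# Store the DigitalOcean API key where the certbot DNS plugin can read it.
# The path is an assumption -- any location only readable by you works.
mkdir -p ~/.secrets/certbot
cat > ~/.secrets/certbot/digitalocean.ini <<'EOF'
# DigitalOcean API credentials used by certbot-dns-digitalocean
dns_digitalocean_token = KEY
EOF
# Keep the key private; certbot complains about loose permissions:
chmod 600 ~/.secrets/certbot/digitalocean.ini
```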

If you’re like me, and you like to be in charge of the certificate installation yourself, certbot lets you pass in the sub-command certonly, which will only fetch the certificates for the domains you’ve specified and not modify your web server configuration. That said, I did do a test run with nginx and certbot, and the nginx installer plugin did a magnificent job of updating the nginx configuration. I recommend that you have a look at the other certbot sub-commands; for this example we’ll be using certonly.
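Putting it together, a certonly run for a wildcard might look like this – domain1.tld and the credentials path are placeholders, and the certbot-dns-digitalocean plugin needs to be installed:

```shell
# Fetch one certificate covering the bare domain and all first-level
# subdomains. Requires root and the certbot-dns-digitalocean plugin.
sudo certbot certonly \
  --dns-digitalocean \
  --dns-digitalocean-credentials ~/.secrets/certbot/digitalocean.ini \
  -d 'domain1.tld' -d '*.domain1.tld'
```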

You can find your certificate files in /etc/letsencrypt/live/domain1.tld/* which are symbolically linked to /etc/letsencrypt/archive/domain1.tld/. These certificates are now valid for 90 days which isn’t too long, surely you don’t want to have to log on to your server every 3 months and renew your SSL/TLS certificates. For that reason, certbot installs a cron-job in /etc/cron.d/certbot which will automatically renew your certificates based on a configuration file /etc/letsencrypt/renewal/domain1.tld.conf. You’re advised to have a look at these files and make sure they look alright.

You will see that /etc/cron.d/certbot runs the sub-command renew, which luckily comes with a flag --dry-run. This can be your final test before forgetting about SSL/TLS certificates for a long time 🙂

sudo certbot renew --dry-run

The output of this should show the DigitalOcean DNS challenge being performed, regardless of whether the certificates are due for renewal or not. Your certificates will be marked for renewal after 60 days (30 days before they expire), which gives the cron-job /etc/cron.d/certbot more than enough time to generate new certificates for you.

Every now and then a Linux-newbie approaches me while trying to learn the ways of the force, i.e. to become more familiar with Linux. This post is the start of a hopefully comprehensive collection of easy ways to get deeper into Linux.

Podcasts are a great way of getting the latest and greatest information about various topics in an easily digestible format. I personally used to listen to a lot of podcasts but don’t have the time any more, so I only stick to a few. At the moment I’m listening to The Linux Action Show, which I recommend. Also, especially if you’re new to Linux, subscribe to “Going Linux” in your favourite podcast player.

About

Intro

Many, many moons ago I bought myself a Netgear ReadyNAS – a small 2-bay unit for not much money – and at first I was very happy with it. But I’m a nerd! So naturally, over time, I wanted to play with things and get more out of the unit than the manufacturer intended.

What it can do

It really helped to install the root extension and be able to ssh into the unit. That meant I could install dnsmasq and define some hosts in /etc/ethers and /etc/hosts, so DHCP and DNS were sorted. I also installed transmission, which made the unit that little bit more useful.
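For illustration, the sort of entries I mean – the MACs, names and addresses are made up, and dnsmasq needs its read-ethers option set to pick up /etc/ethers:

```
# /etc/ethers -- MAC address to hostname, used for static DHCP leases
00:11:22:33:44:55 mediapc

# /etc/hosts -- dnsmasq serves these names over DNS
192.168.0.10 mediapc
```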

What it can’t do

I was always hoping for an easy way to install OpenVPN on it – after all, my unit is an x86 box running Linux; how hard can it be?! Turns out it ain’t easy – not that I tried 🙂 I read up on it and gave up before starting. So that bugged me. Friends around me started buying things like the HP ProLiant N54L MicroServer, which sits in the same price range but totally wins in every category when compared to my ReadyNAS.

The straw that broke the camel’s back was when I moved into my new house. I now share the house with non-nerds who probably should be on a separate network to me. Also, it’s always bugged me that the WiFi is bridged into my home network. So, for some added security and safety, I was going to separate the house into 2 networks using the 2x 1GigE NICs in the NAS – only to learn that the unit can’t do NAT! There is no support for it in the kernel, and I was surely not going to compile a kernel on this unit without the ability to fix things that might break.

The solution

After a bit of reading on the web I found out that you can use a serial console port on the back of the unit to get a console connection – effectively a keyboard and a screen. You can then also boot from a USB drive and install your operating system of choice. I had previously played with FreeNAS, and I do think the filesystems are better in BSD-land, but my BSD experience is non-existent, so I opted for Ubuntu server.

Ideas started to form

With the possibility of installing Ubuntu on the ReadyNAS so many things seemed suddenly possible.

Like so many others I’m affected by a buggy out-of-the-box modem/router which can’t do port forwarding – it can go into bridge mode instead, and the NAS can take over the routing.

Separate my home network into “WiFi and others” and “privileged” 🙂

Install an ntpd, transmission, OpenVPN

…

How?

The serial connection

Buy the hardware

Buy yourself a little serial to USB adapter if you haven’t got one already. I didn’t and opted for the naked version (pl2303HX) which cost me just AU$6.20 including shipping! (I bought mine from top_electronics_au on ebay but am not affiliated with them in any way).

Connect the hardware

On my unit the serial port was covered by a sticker – peel that off and connect your serial adapter to the pins underneath this.

Note that you will have to connect the RX/TX lines crossed. From right to left, you’ll have +5V, then TX, RX and Ground. My serial-to-USB adapter mentions 3.3V, and since all components are powered in some way I didn’t hook up the 5V and ground lines – only RX/TX (crossed).

Set up the software

I was able to get minicom to work using 9600 baud and 8N1 on /dev/ttyUSB0; YMMV. After plugging in the USB-to-serial adapter, have a look at `dmesg | tail` to see which device has been assigned to it. Then run `minicom -c on -s` to enter setup mode and configure your connection. Exit setup mode and hit return a few times; your screen should update.

A word of warning: like any serial connection, it has issues when you hammer it – so don’t hold down the backspace key and wait for your line to be deleted. You’ll have to feed in one keystroke after the other.

Back things up

Ever since I bricked a Samsung Galaxy SIII in a similar operation without having a backup, I can’t stress enough how important it is to back things up! I ran the following commands first while ssh’d into the NAS:
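The command listing hasn’t survived here, but a minimal sketch of the helper described next could look like this – the log file location is my assumption:

```shell
# Hypothetical reconstruction: log each command and its output to one file
log_command() {
    echo "### $*" >> "$HOME/nas-backup.log"    # record the command line
    "$@" >> "$HOME/nas-backup.log" 2>&1        # record its output
}

# capture the current layout before touching anything
log_command uname -a
log_command cat /proc/partitions
```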

This defines a new command, “log_command”, which simply writes its arguments to a text file, then executes them and appends the output to the same file. This comes in really handy in case you’d like to restore.

Boot from the thumb drive

Now that we’re a bit more familiar with the NAS, let’s boot off a USB thumb drive. The NAS won’t boot off a USB CD-ROM, btw.

Create the thumb drive
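The original commands are gone from this page. One common way at the time was to simply write the installer image to the key – the device name /dev/sdX and the ISO filename below are placeholders, and a tool like usb-creator or unetbootin works just as well and also leaves a syslinux.cfg at the root of the key:

```shell
# DANGER: this overwrites /dev/sdX completely.
# Double-check the device name with lsblk before running dd!
sudo dd if=ubuntu-server-amd64.iso of=/dev/sdX bs=4M
sync
```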

This gives you a fairly standard bootable USB key, but we want the console output redirected to our serial cable, so we have to set up syslinux for it first.

Redirect output to the serial console

Edit the syslinux.cfg on the root of the USB key and add the following 2 lines as the first lines:

SERIAL 0
CONSOLE 0

“SERIAL 0” tells syslinux to print the output to the first serial console (0) and needs to be the first line. The second line stops syslinux from printing anything to the standard console. With those 2 lines you will get a boot menu over the serial console. The next step is to change the boot entries to also redirect output to the serial console.

Removing “quiet” will give you the output that’s otherwise suppressed, and replacing vga=788 with console=ttyS0,9600n8 tells the kernel which serial port and connection parameters to use.
I went through and did that for all stanzas so I could easily boot into the rescue image or the memtest.
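For reference, a stanza after those edits might look like this – the label and paths vary with the image you used:

```
label install
  menu label ^Install Ubuntu Server
  kernel /install/vmlinuz
  append initrd=/install/initrd.gz console=ttyS0,9600n8 --
```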

Boot the NAS off the thumb drive

Plug the prepared USB drive into any of the USB ports and hold down the “backup” button at the front of the unit as you power it on.
In minicom, immediately start hitting the ESC key until the boot prompt appears.

Hit return for the boot menu, or [tab] to list all entries in plain text. After you’ve hit return you should see the menu of boot entries.

Boot into Rescue mode

I opted to boot into rescue mode first and take a backup of all partitions. Booted from the USB key in rescue mode, I found the following partitions:

/dev/sda1 == 126MB vfat bootable partition
/dev/sdb1 == the USB key I booted from
/dev/sdc & /dev/sdd == the 2 drive bays
/dev/md125 == 2TB (c VG)
/dev/md126 == 0.5GB (swap)
/dev/md127 == 4GB (root)

/dev/sda1 is the bootable partition the NAS starts from but it’s inaccessible when booted normally.

Back things up

In case something went horribly wrong I wanted to be able to restore the partitions as they currently were. I never had to restore anything, so this is untested, but dd’ing the partitions away seemed reasonable.
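The commands themselves are missing from this page; what I mean is a plain dd of each partition onto some mounted external storage, roughly like this (paths are assumptions):

```shell
# Booted from the rescue image; /mnt/backup is externally attached storage
mkdir -p /mnt/backup
dd if=/dev/sda1 of=/mnt/backup/sda1.img bs=4M         # hidden boot partition
dd if=/dev/md127 of=/mnt/backup/md127-root.img bs=4M  # 4GB root
```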

This worked, although something weird is going on: for some reason the reported filesystem capacity and disk usage don’t match up. “df” shows a total capacity of 3.7TB with 3.4TB in use, while “du” shows 1.7TB of total usage on my 2TB RAID1.

So I’ve finally found the motivation and the time to set up OTP for my ssh logins!

DISCLAIMER: Let me say upfront that there are plenty of articles on using Google Authenticator, which I didn’t want to use. Since the whole PRISM thing was leaked I try to decentralise my data as much as possible, and I distrust the big names more since they are surely a more attractive target than the small players.

After a bit of searching I found oathtool, a piece of software that seems to be relatively well maintained compared to S/KEY and others. The concept is simple: you can choose between two types of OTPs, event-based and time-based. Event-based OTPs expire once they’ve been used for an event (e.g. a log-in), while time-based OTPs expire every x seconds (default: x = 30 seconds).

I’ve chosen time-based OTPs since I already use this method to protect other online accounts like github.com. I use my Android device and the “FreeOTP” app to keep track of my time-based OTPs. With TOTPs, the following parameters play a role at creation time:

The start time from which to count the TOTPs (see below why I’m not using this).
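The actual setup listing didn’t survive here. With the OATH PAM module the pieces look roughly like this – the module options and the hex secret are illustrative assumptions, not my real values:

```
# /etc/users.oath -- one line per user; HOTP/T30 selects 30-second TOTPs
HOTP/T30 myuser - 3132333435363738393031323334353637383930

# /etc/pam.d/sshd -- ask for the OTP before the usual password checks
auth requisite pam_oath.so usersfile=/etc/users.oath window=5 digits=6
```

sshd also needs ChallengeResponseAuthentication set to yes for the PAM prompt to appear.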

If you’ve followed the instructions correctly, you will be asked for a TOTP first, and then for your user password afterwards. \o/

I’d have liked to set a different start time for my TOTPs. oathtool supports ‘--start-time’, however there is no way to enter that in FreeOTP, so it always assumes the default of ‘1970-01-01 00:00:00 UTC’. It took me a while to work that out – I had assumed that the start time was somehow mixed into the base32 secret, but that’s not the case.
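You can see the effect on the command line – the base32 secret below is a made-up example:

```shell
# Default epoch (1970-01-01), which is what FreeOTP assumes:
oathtool --totp -b JBSWY3DPEHPK3PXP
# Shifted epoch -- produces different codes that FreeOTP can't reproduce:
oathtool --totp --start-time "2014-01-01 00:00:00 UTC" -b JBSWY3DPEHPK3PXP
```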

Remember the days when people put up their .rc files on their blogs? I used to do that too, but these days I *would* like to link people to my puppet manifest on github. (EDIT: Despite being secure, i.e. not containing any passwords, there was some information leakage and my employer encouraged me to make the repository private. I use bitbucket for that.) If you don’t know what Puppet is, you should definitely check out puppetlabs.com.

The only reason this page still exists is that back in the day I worked for a lady who got angry when I changed global settings on the server to sensible defaults – even “alias grep=’grep --color’” got me into trouble o.O So I created my own little environment that stayed persistent even when I sudo’d. This was mainly done via the following lines:
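The lines themselves haven’t survived; a sketch of the idea – the grep alias is from the text, the extra alias and the trailing-space sudo trick are one common way to keep aliases working under sudo:

```shell
# ~/.bash_aliases -- personal settings, kept out of the global profile
alias grep='grep --color'
alias ll='ls -la'          # illustrative extra alias
# A trailing space makes the shell also expand the word after "sudo",
# so "sudo grep ..." still picks up the grep alias above.
alias sudo='sudo '
```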