All posts by roger

We’ve been using the free ESXi ghettoVCB backup utility for the last 5 years to back up about 150 VMs daily without a glitch. ghettoVCB snapshots the VM, copies away the files (with a configurable retention period) and then removes the snapshot. The resulting backup is a complete point-in-time copy of the VM, which means that when you need it, you can run the backup copy directly with ESXi without having to restore it first. ghettoVCB is fast and reliable (it copies sparse disks correctly to an NFS backup share, so the resulting backup is as compact as the original VM disks).

ghettoVCB has no deduplication capabilities, so it’s usually not appropriate for offsite backup of VMs. We use borg to archive the ghettoVCB backups offsite. This has the advantage that you get indefinite retention offsite (since borg does very efficient deduplication). You can also mount any borg backup (using its FUSE mounting), so you can run any backed-up VM straight out of the archive.
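For example, mounting an archive and browsing it looks something like this (the repository path and archive name here are illustrative):

$ borg mount /backup/vm_backups::vm1-2017-01-15 /mnt/borg
$ ls /mnt/borg
$ borg umount /mnt/borg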

Borg backup (https://github.com/borgbackup) is an open source backup tool which, in addition to the usual backup features like strong client-side encryption and compression, has several important characteristics which make it particularly suitable for handling large offsite backups (like virtual machine backups):

Deduplication: this ensures that even if the source files move or change names, they will not be re-backed up unnecessarily.

The backup can be moved. Borg backups are just directories – this means that you can make the first, full backup locally, copy it to the destination via a USB disk and then continue incremental backups over the network.

We are currently using borg for several offsite backups, including a weekly offsite backup of VMware ghettoVCB local backups (VM image clones), and the resulting backup traffic is less than 10% of the file size of the local backups (this obviously depends on how much change is occurring between backups, but 90% deduplication is the average over the 15 VMs backed up offsite).

The steps to do an initial backup of the directory (data_dir) locally, move it to a remote server (server1) and resume incremental backups to that server (via ssh as user1@server1) are as follows:
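Something along these lines (repository paths and archive names are illustrative):

# on the client: create the repository locally and make the first, full backup
$ borg init --encryption=repokey /mnt/usb/backup_repo
$ borg create --stats /mnt/usb/backup_repo::initial data_dir

# move the repository to server1 (e.g. by attaching the USB disk there, or copying it over the network)
$ scp -r /mnt/usb/backup_repo user1@server1:/backup/backup_repo

# subsequent incremental backups go to the server over ssh
$ borg create --stats user1@server1:/backup/backup_repo::{now} data_dir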

Notes:
(1) borg should be installed on both the client machine and the server machine.
(2) When you move the backup (repository), the name of the directory you move it to does not need to match the original directory name.

Overview:

We recently carried out a short introductory Docker workshop, starting from scratch, installing Docker and taking it through to the point where a software stack, consisting of several linked containers, is deployed using docker-compose. Here’s what we covered.

Docker concepts:

Docker containers are easy-to-deploy units of software, analogous to the shipping containers used by the transport industry, which simplify the job of shipping diverse goods around the world.

Docker images are the templates for the containers. Every Docker container is started from an image. Images are defined by a Dockerfile which contains instructions for building the image, based on an existing image (for instance, a web-server image will be based on an OS image, simply adding a layer of web-server software to it).

A Docker registry is where images are stored. Every machine where Docker is installed has a local registry. Additionally, Docker provides a central registry (from which images are fetched if they aren’t available locally). And finally, you can host your own private registries.

Starting point:

A freshly installed Ubuntu 16.04 server, called docker-test.

Installing Docker:

We won’t use the standard Docker package available from the Ubuntu repositories because Docker is changing fast – instead we’ll add the Docker apt-repository and install the latest version from there.
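The commands look roughly like this (the repository URL and the docker-ce package name reflect what was current for Ubuntu 16.04 at the time):

$ sudo apt-get update
$ sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
$ sudo apt-get update
$ sudo apt-get install docker-ce
$ sudo groupadd docker
$ sudo usermod -aG docker $USER

(Log out and back in for the group change to take effect.)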

The last two commands ensure that we can run Docker commands without sudo.

Check that Docker is correctly installed with:

$ docker run hello-world

If everything is OK, Docker will download the hello-world image from the central Docker registry and run it.

Building a simple website container:

The purpose of Docker is to allow you to build and deploy something like a website without having to worry about details like which OS is used, which web-server, etc. What we want is an image called, say, “website” which takes some files we give it and publishes them via a web-server.

First create a directory “website” where we will work on creating our container.
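In that directory, add a simple index.html and a Dockerfile. A minimal sketch (the lighttpd paths are the defaults of the Alpine package; the index.html content is just an example):

$ mkdir website && cd website
$ echo "<h1>Hello from the website container</h1>" > index.html

Dockerfile:

FROM alpine
RUN apk add --no-cache lighttpd
COPY index.html /var/www/localhost/htdocs/index.html
CMD ["lighttpd", "-D", "-f", "/etc/lighttpd/lighttpd.conf"]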

This defines a new image, based on the existing image “alpine”, a compact Linux OS image. A web-server, “lighttpd”, is installed. Then our website content (index.html) is copied to the web-server content folder, and finally the web-server is started.
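Build the image and start a container from it (the image and container name “website” is our choice; the host port 8088 is the one referred to below):

$ docker build -t website .
$ docker run -d --name website -p 8088:80 website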

This will bring up the website container based on our “website” image and publish the web content on port 8088 (the -p parameter maps the host port 8088 to the standard web-server port 80 within the container).

Persistent data (stateful containers):

When a container is rebuilt from an image, it loses any changes which were made to it since it was last built. In order to preserve state (for instance, a database container will usually need to preserve its database contents, even if the container is rebuilt), this state must be maintained by the host and provided to the container by means of “volumes”.

To illustrate this, we’ll create a database container using the postgresql database server. First we create a data directory on the docker host, which will maintain the persistent state of the database.

$ cd
$ mkdir data

Then we create a database container based on a standard “postgres” image from the main Docker registry. We pass the -v parameter instructing docker to map the host directory (~/data) to the container path /var/lib/postgresql/data (where postgresql stores its database contents).
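The run command could look like this (the container name “database”, the published port 5432 and the postgres/postgres credentials match what’s used later in the post):

$ docker run -d --name database -p 5432:5432 -e POSTGRES_PASSWORD=postgres -v ~/data:/var/lib/postgresql/data postgres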

If you now look in the host directory ~/data, you will see that postgresql has created a set of database files there. Note: you’ll need to use sudo to list the files because postgresql has modified the file permissions.

$ sudo ls data

Now connect to the new database server (docker-test:5432, user: postgres, password: postgres) with a postgres client (e.g. pgadmin) and create a database called “test”.

To demonstrate that the host is maintaining state for the container, we’ll now recreate the container and image from scratch.

$ docker rm -f database
$ docker rmi postgres

Then we’ll run the container again (which will again fetch the image from the main Docker registry because we removed it from the local Docker registry with the “docker rmi” command).
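This is the same run command as before:

$ docker run -d --name database -p 5432:5432 -e POSTGRES_PASSWORD=postgres -v ~/data:/var/lib/postgresql/data postgres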

Reconnect with the postgres client – our new “test” database is still there even though we rebuilt the container (because we specified a persistent volume).

docker-compose – deploying a whole stack

The philosophy of a container is that it’s supposed to do just one thing well – this reduces complexity and increases reusability. So you shouldn’t use a single container to deploy several components. For instance, if you have a web application stack which consists of a database, a REST server, a client web application and a proxy server, then this stack should be deployed as four containers.

For this workshop, we’ll deploy a database and a REST server as a stack, using docker-compose to deploy the stack in a single operation.

We first need to install docker-compose (it’s an add-on tool for Docker).

$ sudo apt-get install docker-compose

Now we’ll create a small REST server in python and deploy it in a container.
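A minimal sketch of such a server using Flask (the file names and Dockerfile details are illustrative; the workshop code may have differed):

rest.py:

from flask import Flask

app = Flask(__name__)

@app.route("/hello")
def hello():
    return "Hello from the rest server!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)

Dockerfile:

FROM python:3
RUN pip install flask
COPY rest.py /rest.py
CMD ["python", "/rest.py"]

Build and run it:

$ docker build -t rest .
$ docker run -d --name rest -p 5000:5000 rest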

Point a browser at http://docker-test:5000/hello and the REST server should return “Hello from the rest server!”

So now we have a REST server running as a container. The next step is to hook up the REST server to the database, so instead of always returning a fixed text string, it can do a more real-world task of returning the result of a query against the database.

Using the postgres client, connect to the database and create a database “test”, with a table “test” and one int column “test”. Insert a few values which our REST server will sum.
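To do that, the /hello endpoint is changed to query postgres instead of returning a fixed string. A sketch of the change using psycopg2 (which would also need to be installed in the image, e.g. via pip install psycopg2-binary); note the host is still hard-coded to the docker host for now:

import psycopg2
from flask import Flask

app = Flask(__name__)

@app.route("/hello")
def hello():
    # connect to the postgres container via the port published on the docker host
    conn = psycopg2.connect(host="docker-test", user="postgres",
                            password="postgres", dbname="test")
    cur = conn.cursor()
    cur.execute("SELECT SUM(test) FROM test")
    result = cur.fetchone()[0]
    conn.close()
    return str(result)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)

Rebuild the image and restart the container:

$ docker build -t rest .
$ docker rm -f rest
$ docker run -d --name rest -p 5000:5000 rest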

Point a browser at http://docker-test:5000/hello and the REST server should now return “20” (the sum of the values in the database).

So we now have two containers, one of which uses the other.

However, we are still starting both containers separately and in a specific order. We also currently have ports 5000 (REST server HTTP) and 5432 (postgres) open, and we have a hard-coded reference to “docker-test” in the REST server code. We could of course allow the database server address to be passed in as a command-line argument or as an environment variable, but docker-compose provides a better way: it links the containers, so that the database is started first, a private network is created to link the two containers, and the address of the database server on that network is passed to the rest container, which can then communicate privately with its database container.

Let’s update our REST server code to reference the database as “database” instead of the host name “docker-test”. We’ll then use docker-compose to ensure that the host name “database” points to the database container.
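In the sketch above, that’s a one-line change to the connection:

conn = psycopg2.connect(host="database", user="postgres", password="postgres", dbname="test")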

Now we’ll create a docker-compose.yml file to link the REST server to the database server, by means of the host-name “database” (defined by the name of the service in the docker-compose.yml file). We no longer need to expose the postgresql port (5432), since docker-compose will provide a private network between the containers, allowing the REST server to access port 5432 inside the database container without it being exposed to the host.
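A docker-compose.yml along these lines (the service names “database” and “rest” are what the rest of the post refers to; the exact contents are an approximation, assuming the REST server’s Dockerfile lives in a ./rest sub-directory):

version: "2"
services:
  database:
    image: postgres
    environment:
      POSTGRES_PASSWORD: postgres
    volumes:
      - ~/data:/var/lib/postgresql/data
  rest:
    build: ./rest
    ports:
      - "5000:5000"
    depends_on:
      - database

Bring the whole stack up with:

$ docker-compose up -d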

The depends_on instruction ensures that the REST server container is started only after the database server container has been started.

Check it again by browsing to http://docker-test:5000/hello (the REST server should still return “20”).

If you run the “docker ps” command, you’ll see that the containers are no longer called “rest” and “database”, but that docker-compose has constructed names based on the service names in the docker-compose.yml file.

You’ll also notice that the database container is no longer publishing port 5432 – it’s only available on the private network which docker-compose creates between the containers. So this means that only port 5000 – the port published by the REST server – is now exposed to the outside world.

Note that we’re doing everything (building the image and deploying the container) on a single machine – in the real world, the Docker images are built during the development cycle on a developer workstation (or a continuous integration server like Jenkins) and pushed to a remote registry (like Docker Hub or a private registry). During deployment, the images are pulled from the remote registry and the containers started on the production server.

Docker swarm

Docker swarm is a very interesting extension to Docker which allows you to deploy your software stacks across a cluster of worker machines. It’s easy to set up – one Docker host creates the swarm and becomes the swarm manager, and other hosts join the swarm. Using an extension to the docker-compose.yml format, software stacks can be deployed across the cluster with replicas for fault tolerance and load balancing. We didn’t have time to cover Docker swarm in this workshop, but we’ll cover it soon.

Contact us!

We’ve already gathered a lot of experience using Docker to help our customers efficiently deploy their software stacks. If you’re interested in having us help your organisation get up and running with containers, get in touch with us at info@armstrongconsulting.com

We’ve recently been running REST APIs on active-active server pairs (Docker containers running on pairs of VMs on separate hosts) with postgres-BDR (multi-master bidirectional replication) for fault-tolerant storage and a pair of fault-tolerant HAProxy instances for incoming request routing. This is a robust setup which provides zero downtime during rolling updates or hardware maintenance or failure.

However, clustering scheduled jobs (i.e. ensuring that scheduled jobs execute exactly once) becomes a problem in this configuration. Multi-master replicated databases avoid a single point of failure but they are not suitable for use with database-based clustered schedulers like Quartz, so we needed to consider other options. There are complex clustered job schedulers, but we wanted to keep it simple and use Linux crond for scheduling. We finally settled on using keepalived to maintain a single master across the cluster and schedule the cron jobs identically on all servers, using a small if_master.sh script to ensure that a crontab entry only runs if the server is currently the keepalived master. The crontab entries then look like:

0 * * * * ./if_master.sh "./command.sh"

if_master.sh checks whether the server is currently the keepalived master (by looking for the floating IP address):
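A sketch of such a script (the floating IP address is a placeholder for whatever virtual_ipaddress keepalived is configured with):

#!/bin/sh
# if_master.sh - run the given command only if this host currently
# holds the keepalived floating IP (i.e. is the master)
FLOATING_IP="192.168.1.100"    # placeholder: your keepalived virtual_ipaddress
if ip addr | grep -q " $FLOATING_IP/"; then
    eval "$1"
fi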

We live in an old house, on three levels. It’s always been a challenge to achieve consistent wifi coverage throughout the house. We neglected to install ethernet cabling when we renovated and have been struggling with wifi issues ever since. We tried power-line networking (Devolo, TP-Link) and, although it worked most of the time, it provided very inconsistent performance and it was impossible to figure out why. We then reverted to a central wireless router and range extenders (Apple, TP-Link). Coverage was pretty bad in many parts of the house. Last weekend, we installed the AmplifiHD mesh networking system from Ubiquiti and we finally have the full performance of our internet provider (40-60Mbps LTE, depending on the time of day) from any or all devices, anywhere in the house.

I initially installed ESX on the NUC and ran an ubuntu VM with iptables, DNS, DHCP etc. However, I wanted to put the firewall between the home network and the LTE router, so I needed two network interfaces. The NUC only has one, so I thought I’d use VLANs to split the network. That turned out to be pretty complicated to manage so I ended up buying a USB3 ethernet adapter (AX88179) for the NUC instead.

Getting that to work with ESX was a pain (I tried pass-through, but couldn’t get it to work reliably), so in the end I replaced ESX on the NUC with KVM. Worked great – the USB ethernet adapter installed out of the box with Ubuntu and the whole configuration only took an hour to set up.

Like with ESX, I still need an extra VM on my MacBook (Ubuntu desktop under Fusion) to run virt-manager to manage KVM (instead of the Windows VM where I used to run the VMware client to manage the ESX installation).

I used to back up the VMs with ghettoVCB to a Synology via NFS (which worked completely reliably). Now I’ll use LVM snapshots and file copy to back up the KVM VMs to the same Synology.
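A sketch of that approach for a single VM disk (the volume group, volume and mount point names are placeholders):

# create a temporary snapshot of the VM's logical volume
$ sudo lvcreate --snapshot --size 5G --name vm1-snap /dev/vg0/vm1
# copy the snapshot contents to the NFS-mounted Synology share
$ sudo dd if=/dev/vg0/vm1-snap of=/mnt/synology/vm1.img bs=64M status=progress
# remove the snapshot again
$ sudo lvremove -f /dev/vg0/vm1-snap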

We’ve been using Devolo powerline networking at home for years – we have an old house with wifi-proof walls. We’d been having plenty of problems with them, possibly due to overheating of the wifi plugs. After replacing the dLan 500 adapters with the more expensive Devolo dLan Pro adapters without any improvement, I finally decided to try TP-LINK AV500s instead.

The TP-LINKs are much cheaper (base plug plus two WLAN repeaters for < EUR 100). They are also compatible with the Devolo dLan 500s, so I’m still using a few of the Devolos (for a printer and in the cellar). Unfortunately they don’t perform better than the Devolos – I still get about 30Mbit throughput (measured with iperf between two devices connected by ethernet to the TP-LINKs). Should’ve installed CAT5 when wiring the house :(.

Addendum: the Devolos did not play so well with the TP-LINK network, so I ended up replacing them with some additional TP-LINKs. For the time being, the network seems to run reliably.

We used to wait for Jenkins to produce a Cobertura report and, of course, nobody read it until delivery time arrived and we realised we hadn’t met the SLAs. Enter the eCobertura Eclipse plugin, which provides visual code coverage directly within the Eclipse editor. Just run your unit tests and you immediately see the source lines in green and red. Wow! How did we ever do without this?