Problem

The VPN client built into Mac OS X has a checkbox labeled “Send all traffic over VPN connection”. Turning it on causes all traffic to be routed over the VPN; turning it off means that only the VPN’s own IP block is routed over the VPN. If there are additional IP networks behind the VPN gateway, they won’t be reachable unless you manually add static routes.

Solution

Mac OS X uses a program called pppd to negotiate a point-to-point connection. pppd is in charge of performing mutual authentication and creating a ppp network interface. It is used by, at least, PPTP and L2TP over IPsec VPNs in Mac OS X.

When a PPP connection is established, pppd looks for a script named /etc/ppp/ip-up and, if it exists and is executable, runs it. This file does not exist in a default, clean installation of Mac OS X, but it can easily be created and customized to add static routes whenever a VPN connection is established.

When pppd executes this script, it passes several pieces of information on the command line: the interface name, the tty device, the connection speed, the local and remote IP addresses, and the optional ipparam value.
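A minimal /etc/ppp/ip-up sketch that documents those arguments and adds a static route could look like the following (the 192.168.10.0/24 network is only a placeholder; adjust it to whatever networks actually sit behind your VPN gateway):

#!/bin/sh
#
# /etc/ppp/ip-up -- run by pppd once the PPP link is up.
# Arguments passed by pppd:
#   $1  interface name (e.g. ppp0)
#   $2  tty device
#   $3  speed
#   $4  local IP address
#   $5  remote IP address
#   $6  ipparam value, if any
#
# Add a static route to a network behind the VPN gateway.
/sbin/route -n add -net 192.168.10.0 -netmask 255.255.255.0 -interface "$1"

Remember to make the script executable (e.g. chmod 755 /etc/ppp/ip-up), otherwise pppd will ignore it.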

MAAS 2.x relies on Ephemeral images during commissioning of nodes. Basically, an Ephemeral image consists of a kernel, a RAM disk and a squashfs filesystem that is booted over the network (PXE) and relies on cloud-init to perform discovery of a node’s hardware (e.g. number of CPUs, RAM, disks, etc.).

There are times when, for some reason, the commissioning process fails and you need to perform some troubleshooting. Typically, the node boots over PXE but cloud-init fails and you are left at the login screen with a non-configured host (e.g. the hostname is “ubuntu”). But Ephemeral images don’t allow anyone to log in interactively. The solution consists of injecting some backdoor into the Ephemeral image; such a backdoor could be, for example, setting a password for the root user. Next, I will explain how to do this.

Ephemeral images are downloaded from the Internet by the MAAS region controller and synchronized to MAAS rack controllers. These files are kept on disk under:

The squashfs filesystem is shared among the different kernel/ramdisk combinations (GA, which stands for General Availability, HWE, or HWE Edge). As mentioned before, these files are downloaded and kept up to date on MAAS rack controllers under:

The on-disk layout is different from the Web layout, as each kernel/ramdisk combination has its own subdirectory together with the squashfs filesystem. But let’s not digress. To introduce a backdoor, such as a password for the root user, let’s do the following:
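A minimal sketch of the procedure, assuming /path/to/squashfs stands for the actual squashfs file on the rack controller and s3cr3t is the password you want to set, could be:

# Unpack the squashfs filesystem to a scratch directory
$ sudo unsquashfs -d /tmp/ephemeral-root /path/to/squashfs

# Set a root password inside the unpacked root filesystem
$ echo 'root:s3cr3t' | sudo chroot /tmp/ephemeral-root chpasswd

# Repack it and swap it in, keeping a copy of the original
$ sudo mksquashfs /tmp/ephemeral-root /tmp/squashfs.new -comp xz
$ sudo cp /path/to/squashfs /path/to/squashfs.orig
$ sudo mv /tmp/squashfs.new /path/to/squashfs

Bear in mind that MAAS periodically re-synchronizes images, so a locally modified squashfs may eventually be overwritten.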

I’ve been experimenting with deploying OpenStack using Nova/LXD (instead of Nova/KVM) for quite some time, using conjure-up as the deployment tool. It is simple, easy to set up and use, and produces a usable OpenStack cluster.

However, I’ve been unable to run Docker inside a Nova instance (implemented as an LXD container) using an out-of-the-box installation deployed by conjure-up. The underlying reason is that the LXD container where nova-compute is hosted lacks some privileges. Also, inside this nova-compute container Nova/LXD spawns nested LXD containers, one for each Nova instance, and these in turn lack additional privileges required by Docker.

Long story short, you can apply the docker LXD profile to both the nova-compute container and the nested LXD containers inside it where you want to run Docker, and Docker will run fine.
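Start by locating the LXD container that hosts nova-compute; juju status shows where each unit, and the machine backing it, is placed:

$ juju status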

From the juju status output, notice how the nova-compute/0 unit is running on machine #4, and that the underlying LXD container is named juju-59ffc3-4. Now, let’s see the LXD profiles used by this container:
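On the LXD host, the profiles attached to the container are listed in the profiles: section of its configuration:

$ lxc config show juju-59ffc3-4 | grep -A 5 '^profiles:'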

The docker LXD profile is missing from this container, and this will cause any nested container that tries to run Docker to fail. Entering the nova-compute/0 container, we initially see no nested containers; that is, since there are no Nova instances, there are no LXD containers. Remember that when using Nova/LXD, there is a 1:1 mapping between a Nova instance and an LXD container.
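Once a Nova instance is launched, a nested LXD container shows up inside nova-compute/0, and the docker profile can be applied by hand to both containers. A rough sketch follows; instance-00000001 is just an example name (substitute whatever lxc list reports), newer LXD releases use lxc profile add while older ones use lxc profile apply with the full comma-separated list, and a restart may be needed for the new privileges to take effect:

# On the LXD host: attach the docker profile to the nova-compute container
$ lxc profile add juju-59ffc3-4 docker

# Inside the nova-compute container: find the nested container backing
# the Nova instance and attach the docker profile to it as well
$ juju ssh nova-compute/0
$ lxc list
$ lxc profile add instance-00000001 docker
$ lxc restart instance-00000001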

But this, besides being a manual process, is not elegant. There’s another solution which requires no operator intervention. It consists of a Python code patch to the Nova/LXD driver that allows selectively adding additional LXD profiles to Nova containers:

I’ve been playing quite a lot lately with Juju and other related software projects, like conjure-up and LXD. They make it so easy to spin complex software stacks like OpenStack up and down that you don’t even notice until your hosting provider starts alerting you about high traffic consumption. And guess where most of this traffic comes from? From installing packages.

So I decided to save on bandwidth by using apt-cacher. It is straightforward to set up and get running. In the end, if you follow the steps described in the previous link or this one, you will end up with a Perl program listening on port 3142 of your machine that you can use as an APT cache.
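Pointing clients at the cache is then just a matter of an APT proxy setting; for machines managed by Juju, the same can be done once at the model level. In the sketch below, <cache-host> is a placeholder for the machine running apt-cacher:

# On any individual client
$ echo 'Acquire::http::Proxy "http://<cache-host>:3142";' | sudo tee /etc/apt/apt.conf.d/01proxy

# For every machine deployed in the current Juju model
$ juju model-config apt-http-proxy=http://<cache-host>:3142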

Background

This post is about deploying a minimal OpenStack Newton cluster atop LXD on a single machine. Most of what is mentioned here is based on OpenStack on LXD.

Introduction

The rationale behind using LXD is simplicity and feasibility: it doesn’t require more than one x86_64 server with 8 CPU cores, 64GB of RAM and an SSD drive large enough to perform an all-in-one deployment of OpenStack Newton.

According to Canonical, “LXD is a pure-container hypervisor that runs unmodified Linux guest operating systems with VM-style operations at incredible speed and density.” Instead of using full virtual machines to run the different OpenStack components, LXD is used, which allows for a higher “machine” (container) density. In practice, an LXD container behaves pretty much like a virtual or bare-metal machine.

For all purposes, I will be using Ubuntu 16.04.2 for this experiment on a machine with 128GB of RAM, 12 CPU cores and 4x240GB SSD drives configured as software RAID0. For increased performance and efficiency, ZFS is also used (on a dedicated partition, separate from the base OS) as the backing store for LXD.

It is important to run all the following commands inside the openstack-on-lxd directory where the Git repository has been cloned locally.
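If you haven’t cloned it yet, the repository can be fetched like this (assuming the openstack-charmers GitHub location; adjust the URL if yours differs):

$ git clone https://github.com/openstack-charmers/openstack-on-lxd.git
$ cd openstack-on-lxd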

LXD setup

$ sudo lxd init

The relevant part here is the network configuration. IPv6 is not properly supported by Juju, so make sure not to enable it. For IPv4, use the 10.0.8.0/24 subnet and assign the 10.0.8.1 address to LXD itself. The DHCP range could be something like 10.0.8.2 to 10.0.8.200.
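With a recent enough LXD, those same settings can also be applied (or corrected later) directly on the bridge; lxdbr0 is the default bridge name created by lxd init:

$ lxc network set lxdbr0 ipv4.address 10.0.8.1/24
$ lxc network set lxdbr0 ipv4.dhcp.ranges 10.0.8.2-10.0.8.200
$ lxc network set lxdbr0 ipv6.address none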

NOTE: Having LXD listen on the network is also an option for remotely managing LXD, but beware of security issues when exposing it over a public network. Using ZFS (or Btrfs) should also increase performance and efficiency (e.g. copy-on-write saves disk space by avoiding duplicating the bits shared by all the containers running the same base image).

Using an MTU of 9000 for container interfaces will likely increase performance.
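One way to do that, assuming a stock setup where the default profile’s eth0 device is attached to lxdbr0, is:

$ lxc network set lxdbr0 bridge.mtu 9000
$ lxc profile device set default eth0 mtu 9000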

NOTE: One can’t use a flavor other than m1.tiny, as this is the only flavor that does not request any ephemeral disk. For reasons I do not yet understand, as soon as an ephemeral disk is requested, the LXD subsystem inside the nova-compute container complains with the following error:

And, for the record, it is totally possible to have multiple services like the one before, one for each additional IPv4 address. Just make sure to name the .plist files differently, as well as the service names in their Label tags.
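As a sketch of what such an additional service could look like (the Label, file name, command path and address below are placeholders; the actual program is whatever the original service runs, just parameterized for the extra address):

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Unique label, matching the file name, e.g.
         /Library/LaunchDaemons/com.example.extra-ip2.plist -->
    <key>Label</key>
    <string>com.example.extra-ip2</string>
    <key>ProgramArguments</key>
    <array>
        <string>/path/to/command</string>
        <string>203.0.113.12</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
</dict>
</plist>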