Blog by Jens Langhammer (https://beryju.org/blog)

Automating Ubuntu Server 20.04 with Packer (2020-04-27)
https://beryju.org/blog/automating-ubuntu-server-20-04-with-packer
Ubuntu Server 20.04 has been out for a few days, which I think is the perfect time to start my migration from Debian to Ubuntu. With Debian, I had a nice Packer setup that automatically builds base images. These images have some default packages installed, a few miscellaneous settings and a default user. They are used by an Ansible workflow that creates new VMs on the fly and deploys whatever tools I need into the VM.

To build this setup with Debian, I used a preseed file. This solution has been around for ages. However, with the release of Ubuntu 20.04, they've introduced a new Installer called the "Live Installer".

This installer is based on curtin, netplan and cloud-init. Whilst this is great in theory, especially for cloud-environments, it is a bit more difficult for on-prem installs.

Whereas a preseed file is based on d-i ... statements, this new flow is completely YAML based. An example file could look like this:
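(The following is only a minimal sketch of an autoinstall user-data file; the hostname, username, password hash and SSH key are placeholders, and only a handful of the available options are shown.)

```yaml
#cloud-config
autoinstall:
  version: 1
  locale: en_US.UTF-8
  keyboard:
    layout: de
  identity:
    hostname: ubuntu-base
    username: ubuntu
    # crypted password hash, e.g. generated with: mkpasswd -m sha-512
    password: "$6$placeholder$hash..."
  ssh:
    install-server: true
    allow-pw: false
    authorized-keys:
      - ssh-rsa AAAA... user@host
  packages:
    - open-vm-tools
  late-commands:
    # runs inside the freshly installed system at the end of the install
    - curtin in-target --target=/target -- apt-get -y autoremove
```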

There are equivalents for all d-i Options, which are listed here: https://wiki.ubuntu.com/FoundationsTeam/AutomatedServerInstalls/ConfigReference.
In my opinion this is a much cleaner overview of all the options. With preseed files, you'd often run into some cryptic option that would result in 30 minutes of googling.

Now that we have this installer file, we need to somehow tell Ubuntu to use it for the install. This has also changed slightly from previous Ubuntu versions, with the main change being that floppy drives are no longer mounted by default.

According to the cloud-init documentation, a floppy drive with the label of cidata should work, but Packer has no option to set the floppy label for vSphere.

The only other options for loading the cloud-init configuration are via HTTP (directly from Packer or some other URL), or building a custom ISO.

Since I didn't want to build a custom ISO, I ended up uploading my cloud-config YAML to my S3 bucket and referencing it from the Packer file.
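As a rough sketch, the relevant part of a vsphere-iso builder might look like this. The bucket URL, ISO path, credentials and the boot key sequence are assumptions: the exact keystrokes depend on the firmware and ISO revision, and the semicolon in the datasource string may need escaping or quoting depending on the bootloader. The URL prefix has to serve both a user-data and a meta-data file for the NoCloud datasource.

```hcl
source "vsphere-iso" "ubuntu-2004" {
  # vCenter connection and VM hardware settings omitted for brevity
  iso_paths = ["[datastore1] iso/ubuntu-20.04-live-server-amd64.iso"]

  boot_wait = "3s"
  boot_command = [
    # interrupt the boot menu and append the autoinstall parameters
    "<enter><enter><f6><esc><wait>",
    " autoinstall ds=nocloud-net;s=https://my-bucket.s3.amazonaws.com/ubuntu-2004/",
    "<enter>"
  ]

  # matches the user created by the identity section of the user-data
  ssh_username = "ubuntu"
  ssh_password = "changeme"
}
```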

Upgrading to ESXi 6.5 on HP gear (2020-04-26)
https://beryju.org/blog/upgrading-to-esxi-6-5-on-hp-gear
It's been a day since vSphere 6.5 came out, and sysadmins all over the world have been updating their test systems. This works really well if you update to vCenter 6.5 first, since it has Update Manager integrated.

Upgrading to ESXi 6.5 worked fine on my Dell R710, which was previously running ESXi 6.0u2 (Dell customized). My DL380 G6s however just threw the error Software or system configuration of host <hostname> is incompatible. Check scan results for details. They also have ESXi 6.0u2 (HP customized) installed, and it turns out that there's a VIB in the old HP image that conflicts with 6.5.

To remove it, you just have to enable SSH and execute esxcli software vib remove --vibname=char-hpcru. If you have access to the offline depot for 6.5, you can continue with the update right away:
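A sketch of what that looks like on the host; the depot path and profile name are placeholders and depend on the depot zip you downloaded:

```sh
# remove the conflicting HP VIB
esxcli software vib remove --vibname=char-hpcru

# upgrade straight from the 6.5 offline depot (path and profile are placeholders)
esxcli software profile update \
  -d /vmfs/volumes/datastore1/VMware-ESXi-6.5.0-offline-depot.zip \
  -p ESXi-6.5.0-4564106-standard
```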

After that command has finished, you can reboot the host via vCenter and it'll come back up on 6.5. To update with Update Manager instead, you have to reboot the host after removing the VIB and then Remediate it.

The experimenting

I've recently started to mess around with IPv6, mostly for the sake of being (somewhat) future-proof, the huge amount of free addresses, and because it seemed interesting. At home I already have IPv6, at least in theory. My home connection is a UnityMedia cable connection running DS-Lite, so the whole apartment complex shares one external IPv4 address and every flat gets its own IPv6 space. Sounds pretty easy to deal with, right? No. (But that is also not the point of this post.)

The first step in implementing IPv6 is finding out which implementation the service provider uses. The best implementation I have seen (in my opinion) is at our work colo, where we have Init7, a Swiss ISP. You get a /48, and they simply send Router Advertisements on your ports, so everything configures itself.

Online.net's implementation I would describe as the exact opposite. You also get a /48, which is nice, but nothing is automatic. For each server you have with them, you get a /56 subnet out of your /48. So if you only have one server, you only get to use a /56 of your /48.

As if that wasn't bad enough, you can't directly route that /56 to your server. You will get a DHCPv6 lease if you set it up correctly, and you will be able to ping your gateway, but it won't route you anywhere or route anything to you.

So now we're down to a /64 for WAN, which luckily you can create endless amounts of. Just finding out all of these facts took dantho and me around 4 to 6 hours.

Now of course your router doesn't get these DHCPv6 leases without setting a DUID, basically a client identifier proving that you actually want the server to have IPv6. The only way to set this on pfSense is by encoding the data and writing it to a special file. On VyOS you can easily set it as a parameter on the interface (set interfaces ethernet eth0 dhcpv6-options duid <duid>), but only if you're on a recent beta version (I am running VyOS 1.2.0-beta1 (lithium) in this case).
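For reference, in VyOS 1.2 config mode that boils down to roughly the following; the DUID below is just a placeholder:

```
configure
set interfaces ethernet eth0 address 'dhcpv6'
set interfaces ethernet eth0 dhcpv6-options duid '00:03:00:01:aa:bb:cc:dd:ee:ff'
commit
save
exit
```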

But the fun doesn't end there. Since Online.net uses Router Advertisements for their gateways, you're pretty much limited to VyOS or some other software router that can accept RAs on WAN interfaces.

And since we can only route a /64 to WAN, and can't route a separate /64 to LAN since it would be missing the DUID, we have to use a subnet smaller than a /64, which means no EUI-64 for you!

Getting Started with Foreman: Part 3 (2020-04-12)
https://beryju.org/blog/getting-started-foreman-part-3

What we're going to do in this Part

Continuing on from the last part, we're going to provision VMware's ESXi. Since the ESXi installer boots over PXE much like a Linux system, we can actually do this without a separate server or special configuration; we just need a few files.

Preparation of the Source

First off, we're going to extract all the files from the installation ISO. To do that, we're going to mount it under /mnt and copy the files over to /srv/tftp/esxi/6. We also have to adjust the prefix in the bootloader configuration, since it wants to load files straight from the root. After that we have to make some adjustments to syslinux, since the version shipping with Foreman doesn't quite do what we need it to do.
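Roughly, that preparation looks like the following; the ISO file name and target directory are placeholders:

```sh
# mount the installer ISO and copy its contents into the TFTP root
mount -o loop VMware-VMvisor-Installer-6.x.iso /mnt
mkdir -p /srv/tftp/esxi/6
cp -r /mnt/* /srv/tftp/esxi/6/
umount /mnt

cd /srv/tftp/esxi/6
chmod -R u+w .
# boot.cfg references its modules with absolute paths; strip the leading
# slashes and point the prefix at our TFTP subdirectory instead
sed -i 's#^prefix=.*#prefix=esxi/6#' boot.cfg
sed -i '/^prefix=/!s#/##g' boot.cfg
```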

Now that we have all the files in place, we can close the SSH session and jump over to the web interface to create all the necessary objects there.

Creating templates

We're going to start with the Installation Media. This isn't actually used by ESXi during the installation, but it is required by Foreman.

(Screenshot: creating the Installation Media in Foreman.)

Afterwards, we have to create the operating system itself. Choose Red Hat as the family and SHA512 for the password hash. You can also go ahead and set the installation media to the one we just created in the Installation Media tab after you create the OS.

Now we need to create our PXELinux template, to tell Foreman where it can find our ESXi sources.
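A sketch of what such a template can look like, assuming the files were copied to esxi/6 as above. The kickstart answer file (for example via Foreman's <%= foreman_url('provision') %> template helper) is typically wired up through the kernelopt line in boot.cfg rather than on the APPEND line:

```
DEFAULT install
LABEL install
  KERNEL esxi/6/mboot.c32
  APPEND -c esxi/6/boot.cfg
  IPAPPEND 2
```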

It’s colo time baby!

the structure of this post was totally not stolen from MonsterMuffin (<3 bb)

After a recent power bill reminded me that servers are not free to run, but rather rack up some rather big power costs, I decided to downsize.

My initial plan involved selling three of my 1366-era servers and keeping the R410 as the sole VM host. This brought its own headaches, like having to deal with moronic eBay buyers and having to manually fiddle with the partition table since it was a partition in a partition (don't ask)...

So anyways, after having sold the three space-heaters, and having moved everything to the R410, I quickly realized something: This thing is loud as fuck.

I didn't have the money to go for something quieter though, like a DL360e G8 or similar. So I started looking at colo...

Colo at work

A few days later I asked my boss if we had some space left in our colo, and if I could use 1U of it. We rent a quarter rack locally with IWB in their BSL1 datacenter. After I had explained to him why I, a normal IT tech, have my own private servers, he accepted my request.

This meant I had to order rails, somehow squeeze two SSDs into an R410 and make sure I had a colleague to install it with.

The Pre-Preparation

Cleanup

Since this server has lived in my mildly dusty room for a good year, and I don't have a compressor at home, I used this opportunity to clean the server.

This should improve temperatures a little, while also keeping the colo rack dust-free.

The Rails

Rails were pretty easy to find on eBay, but they were sadly 70€ with shipping. There were a few auctions at the time, but since I didn't want the extra wait and the risk of spending more money, I just bit the bullet and forked out the 70€.

The SSDs

Pretty early after having fully migrated to the R410, I noticed that IOPS were atrocious with the local HDD RAID 10. I did have two Crucial 250 GB SSDs lying around, which I used to use for iSCSI. My first plan was to just use a slimline-optical-to-HDD adapter, but I couldn't find mine and didn't want to buy another one.

After playing around with a few other plans, I decided to get an adapter that converts the slimline SATA power to normal SATA power. This would be enough for one SSD. Shortly after having gotten the adapter though, I realized that 250 GB of flash wasn't going to cut it...

The R410 however only has one spare power connector, so I had to get creative. One SATA-power-to-Molex and one Molex-to-double-SATA adapter later, I had two SSDs running in my R410. They fit fairly well in the space where the optical drive would be (I only had to remove a few metal...

My Thoughts about Puppet 4 (2016-07-24)
https://beryju.org/blog/my-thoughts-about-puppet-4
This weekend I decided to upgrade my Foreman to 1.12, which finally supports Puppet 4. I was pretty excited about this, since I always try to run the latest software, which I've done since April 2015. I used this guide to upgrade my Puppet install, since Foreman still supports Puppet 3 and won't force you to upgrade. The guide itself wasn't too hard, so I was able to finish it within the hour. Shortly after finishing the guide, I started getting bombarded with mails from my Foreman, since nodes started to fail. Now it's Sunday evening, and I still haven't fixed all the issues that came up since the upgrade. That might be due to my (relative) inexperience with Puppet (about a year), but I'd still like to share my thoughts on Puppet 3 vs Puppet 4. So here's a list of thoughts in no particular order:

New AIO-Packages

Puppet 4 switched to so-called 'AIO' packages, which means there are fewer dependencies. That in itself is a good thing, but it also means Puppet installs itself to /opt/, as opposed to the usual locations in your OS. Since the puppet executable is also in there now, you can't just run puppet agent -t; you either have to add /opt/puppetlabs/puppet/bin to your PATH or run /opt/puppetlabs/puppet/bin/puppet agent -t every time you want to manually apply changes. Multiple places, as well as the guide above, mention that there are non-AIO packages, but I have yet to find them.
Another problem with this is that you no longer get an init script or systemd service for the agent, at least until you run /opt/puppetlabs/bin/puppet resource service puppet ensure=running enable=true.
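A quick workaround sketch, assuming the default AIO install locations (the profile.d file name is arbitrary):

```sh
# make the AIO binaries available for all users
echo 'export PATH=$PATH:/opt/puppetlabs/puppet/bin' > /etc/profile.d/puppet-aio.sh

# create and enable the agent service, since the package does not do it for you
/opt/puppetlabs/bin/puppet resource service puppet ensure=running enable=true
```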

New directory Structure

This is really just a little nitpick rather than an issue, but the change to the directory structure doesn't seem necessary compared to the old versions. I also didn't have to deal with it much, since the guide above mentions what has to be moved where. It doesn't concern the agent installs either, since I just copied the old config file over.

Puppetserver running in JRuby

So with the new version it's no longer a 'puppetmaster', it's a puppetserver. And whereas the old one could run either standalone in Ruby or under Apache with Passenger, the new one runs on JRuby inside the JVM. This is my main complaint in this post. I just don't understand why they had to drag Java into this. With ~50 hosts on Puppet 3, my master had a RAM usage of about 3 GB (with Foreman, foreman-proxy and a few other things). After I upgraded to Puppet 4, the RAM usage rose to 3.9 out of 4 GB, with ~1 GB swapped. After I gave the host 6 GB instead of 4, it settled down to about 4.2 GB of usage. WHY? Why did they have to drag Java into this? WHAT ADVANTAGE DOES JAVA BRING IN THIS MIX? IT WAS RUNNING SO WELL WITH PASSENGER AND APACHE2!
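For what it's worth, the puppetserver JVM heap can be capped; a sketch assuming the Debian/Ubuntu defaults file (on RedHat-family systems it lives in /etc/sysconfig/puppetserver instead), followed by a service restart:

```sh
# /etc/default/puppetserver
# limit the puppetserver heap to 1 GB instead of the 2 GB default
JAVA_ARGS="-Xms1g -Xmx1g"
```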

Module Compatibility

An even smaller nitpick than the directory structure above, since out of the ~35 modules I use only 2...

Getting Started with Foreman: Part 2 (2016-07-17)
https://beryju.org/blog/getting-started-foreman-part-2

What we're going to do in this Part

Continuing on from the last part, we're going to provision Windows (7/10/Server). There are two ways to do this: Wimaging by kireevco, or a WDS server. I am going to show you the WDS way, since it integrates with MDT; also, Wimaging hasn't been updated in a while.

Prerequisites

Since this is a continuation of the previous part, the hostnames are going to be the same as last time.

Here's a quick list of things you need to follow this tutorial:

a Windows Server ISO (I am using 2012R2 Datacenter here, but anything that has the ability to install WDS will work)

an ISO of the Windows you want to install (I am using Windows 10 Pro x64 here, but it's pretty much the same with Windows 7/8/8.1)

about 150 GB Free Space on your VM host for MDT and the provisioned Windows VM

about 1-2 hours of your spare time

(optional, but recommended) an existing Active Directory Domain. I am not going to be relying on this, but I'll highlight where the steps differ.

Enough rambling, let's get started with installing the WDS Server.

Installing and Configuring the WDS Server

The VM doesn't need a lot of power. I set mine up with one vCPU, 4 GB of RAM and two hard drives. I use the first drive for the Windows install and the second one for WDS/MDT. This should be enough for 99% of all environments, since it essentially only serves as a TFTP/SMB server.

Mount your Windows Server ISO and start the install. There's nothing special to do here, just format both drives, select the first one as installation target and lean back for 15-30 mins while it installs itself.

After we're done installing the OS, we're going to change the hostname and a few other things. I am going to call my server war-dev-mdt01.beryju.org, but you can call it whatever you want. Additionally, I am going to assign it a static IP, since it shouldn't have to rely on Foreman's DHCP server. After having done that, reboot the machine so the hostname is applied. After the restart, you'd also join it to the domain if you have one.

Now begins the actual installation of our WDS server. Open the Server Manager, go to Manage, Add Roles and Features and click Next until you're presented with a list of roles to add. From that list we're going to select Windows Deployment...
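The same steps can also be done from an elevated PowerShell prompt instead of clicking through the GUI; this is only a sketch, and the hostname, interface name, addresses and domain below are placeholders for whatever fits your environment:

```powershell
# rename the server and reboot so the new name takes effect
Rename-Computer -NewName "war-dev-mdt01" -Restart

# assign a static IP and DNS server instead of relying on Foreman's DHCP
New-NetIPAddress -InterfaceAlias "Ethernet0" -IPAddress 10.0.0.20 -PrefixLength 24 -DefaultGateway 10.0.0.1
Set-DnsClientServerAddress -InterfaceAlias "Ethernet0" -ServerAddresses 10.0.0.10

# optional: join an existing Active Directory domain (prompts for credentials)
Add-Computer -DomainName "beryju.org" -Restart

# install the Windows Deployment Services role
Install-WindowsFeature WDS -IncludeManagementTools
```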

Getting Started with Foreman: Part 1 (2016-07-03)
https://beryju.org/blog/getting-started-foreman-part-1

What is Foreman

Foreman is a complete lifecycle management tool for physical and virtual servers. We give system administrators the power to easily automate repetitive tasks, quickly deploy applications, and proactively manage servers, on-premise or in the cloud.

This is a multi-part series about provisioning and automating things with Foreman. It's going to cover deploying Debian, Windows (7/10/Server) and ESXi, as well as automating things like package installs.

Since I am using VMware, this tutorial is going to involve integration with vCenter and ESXi. Foreman also supports bare metal, Amazon EC2, Google Compute Engine, OpenStack, Libvirt and oVirt, so if you use any of those, some of the instructions won't match up.

Installing Foreman

Installing the OS

In this case I am going to give the Foreman VM 1 vCPU, 2 GB of RAM and 25 GB of disk space. The specs depend heavily on the number of hosts you are managing with Puppet. For my production Foreman VM, which has about 50 hosts checking in, I provisioned 3 vCPUs and 4 GB of RAM.

Since this box will deploy machines over DHCP, I am going to set a static IP. Also, since we don't have a DNS server yet (let's assume), I am going to point it at the Google DNS servers.

The hostname for this test box is war-dev-puppet01.beryju.org, but don't let the puppet part throw you off. I chose puppet instead of foreman since it fits better into my naming scheme. For partitioning I am going to go with a single partition.
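The setup itself boils down to a few commands along these lines; this is only a sketch for Debian jessie and Foreman 1.12, so adjust the release and version for your environment:

```sh
# add the Foreman APT repository and its GPG key
echo "deb http://deb.theforeman.org/ jessie 1.12" > /etc/apt/sources.list.d/foreman.list
echo "deb http://deb.theforeman.org/ plugins 1.12" >> /etc/apt/sources.list.d/foreman.list
wget -q https://deb.theforeman.org/pubkey.gpg -O- | apt-key add -

# install and run the installer, which sets up Foreman and its Smart Proxies
apt-get update && apt-get install -y foreman-installer
foreman-installer
```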

Depending on your distribution, you might need to adjust that jessie. This adds the Foreman APT repository, installs their public GPG key and installs the foreman-installer package. That package installs Foreman, the Foreman Smart Proxies and everything else needed.

This should be the result we get after running the above commands. Now we can access Foreman's web interface, which is listening on https://<ip>. First-time authentication happens with the credentials provided after the installation. You probably want to change your password to something you can actually remember. To do that, you click...

Over the weekend I've been renaming my Domain Controllers to fit in with the other servers (dc1 -> dc01). The next day, I couldn't log into vCenter anymore with my domain account, neither with Windows session credentials nor direct input. I got this very cryptic error "N3Sso5Fault13InternalFault9ExceptionE":

It took me a bit of tinkering, but then I remembered I had renamed my DCs and hadn't updated them in vCenter. So I logged in with the vCenter SSO administrator account, re-added the authentication source, and all was well; even the Windows session credentials worked again!