Since WordPress came up with its new editor, writing has not been fun anymore. I cannot really say why I do not like the new editor. Therefore, I recalled MarsEdit, which I used quite some time ago.

I am still not disappointed by the editor. Connecting it to the WordPress installation worked like a charm. It looks like I can start writing blog articles in a “traditional” manner again.

That said, this is going to be the first article written with MarsEdit in a long time.

In my current position, my co-workers are very disrespectful of my time. Meetings are often scheduled over the lunch break, or someone intercepts you in front of the elevator with the words “Do you have a few moments…?”, absorbing your lunch break.

Eventually, I started to bring my own kind of “fast food” in the form of smoothies. As I am very bad at memorizing recipes, I started writing them down in my blog. Feel free to experiment and comment on them.

For the logistics, I started to recycle true fruits bottles, which are available in various sizes.

My first try, which I call Blue Dragon, has the following ingredients:

1 handful of frozen or fresh blueberries

2 apples

2 bananas

1 carrot

1 slice of honey melon

1 tbsp of almond butter

100 ml of almond milk

some water

As tomorrow is my first working day after the Christmas break, I am hoping this one gets me through the day…

2018 was not bad at all. Quite a few things happened, so here are some of the major achievements and events of last year.

After buying a 4K TV, I got my Xbox One X thanks to a great deal at Amazon. Finally, I was able to fully utilize my TV.

In February, I started teaching Interactive Systems and MCI at the Baden-Württemberg Cooperative State University in Karlsruhe. It looks like the course went quite well, as the university decided to offer it a second time in autumn, where I gave the lecture again. Fun fact: they forgot to tell me until I got a note some days before the course was scheduled.

In May, I started teaching a second lecture covering data structures and algorithms, also at the Baden-Württemberg Cooperative State University in Karlsruhe, which will be given annually.

As a result of the lecture, I started to write a book covering this topic using .NET Core. However, I am afraid I won’t be able to finish it in 2019.

I was also appointed a member of the examination board of the Baden-Württemberg Cooperative State University in Karlsruhe, where I advised two students writing their bachelor theses.

Also in May, I became a certified Product Manager, after having become a Certified Scrum Product Owner in 2017. The course, given by Product Focus, based in London, UK, is highly recommended.

Our son celebrated his very first birthday and started walking this year. Not really my personal achievements, but something you are proud of as a dad.

In September, we switched from HomeKit to Amazon Alexa and have meanwhile changed almost all our lights to Hue and IKEA TRÅDFRI.

There is only little I was able to achieve in my day job; however, our team managed to finish a five-year, multi-million migration project.

As some side projects, I started learning Ansible as well as Docker and moved my 10-year-old handcrafted, homebrewed server to an automated, container-based setup of the latest generation.

I also started blogging again, especially about my experiences with Docker, Ansible and further automation.

Unfortunately, our rabbits died of rabbit hemorrhagic disease, which was a rather sad event. The vaccine is not easily available for this particular variant of the virus. It was an event from which I learned a lot about vaccines, viruses and diseases in general.

Also, it seems we experienced one of the longest droughts in Germany. We did not have any rain for weeks, or even months.

In December, I was appointed to a teaching assignment for Software Engineering of Complex Systems at the University of Heilbronn. I will start this lecture in February ’19, together with a lab covering software development. I am really looking forward to these new courses.

Throughout the year, I suffered from an overall sleep deficit. I wish I knew where our 16-month-old gets his energy from.

One highlight was definitely visiting my former manager and old friend from Microsoft Research, who is meanwhile head of a university.

Eventually, I started using Twitter again.

Although I thought 2018 was a rather boring year, it looks like there was a lot going on. Unfortunately, I was not able to read as many books as I am used to, but this might change in 2019. So I am looking forward to an exciting next year and many new things to learn.

On my way to setting up proper monitoring for my server, I just installed Logwatch to receive a daily summary of what happened recently on the machine. I will have a look into Prometheus, Grafana, Icinga etc. later. However, for now I just wanted a quick daily summary of “what’s going on on the machine”. Eventually, I had to fix a No such file or directory error that occurred.

Therefore, I decided to use Logwatch as a lightweight solution for my needs.

Configuration & Troubleshooting

I basically set up two parameters: the e-mail address as well as the detail level I want for the report. It is important to know the order in which Logwatch applies your configuration settings. Following the recommendations, I did not change anything in the configuration file at

/usr/share/logwatch/default.conf/logwatch.conf

rather I decided to copy the file to

/etc/logwatch/conf/

The reason is the order in which Logwatch scans for configuration parameters. Each step actually overwrites the previous one:

/usr/share/logwatch/default.conf/*

/etc/logwatch/conf/dist.conf/*

/etc/logwatch/conf/*

The script’s command line arguments

Eventually, I ended up with the following error:

/etc/cron.daily/00logwatch: /var/cache/logwatch No such file or directory at /usr/sbin/logwatch line 634.

run-parts: /etc/cron.daily/00logwatch exited with return code 2

To fix this, avoid copying the original configuration to one of the other places. I did this because I followed a recommendation I had received. Instead, I now touch a new configuration file and set only the two parameters MailTo= and Detail=. Both are set using Ansible variables in my scripts. The additional configuration file now looks pretty boring, though:

MailTo = mail@example.org
Detail = Low

You can also provide these parameters when calling the script in the cron job. Using Ansible, the modification would look like the following:
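A minimal sketch of the resulting cron script; the mail address and detail level are example values, and the --mailto, --detail and --output switches are documented in the Logwatch man page:

```shell
#!/bin/sh
# /etc/cron.daily/00logwatch (sketch) – pass the settings on the command line
/usr/sbin/logwatch --output mail --mailto mail@example.org --detail low
```

In the Ansible task, the two values can then be filled in from the same variables used for the configuration file.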

I decided to change the cron job call, as one can never be safe from the file being changed during package updates. The same should be valid for the configuration file in its original place.

tl;dr

Setting up Logwatch using Ansible might cause strange “No such file or directory” errors during the cron job call. These can be avoided by applying additional configuration settings at the appropriate configuration locations.

As I started with a minimal Ubuntu 18.04 LTS installation, there is simply no Python 2 installed. However, to run Ansible tasks on the node, Python is required. I made use of the raw task in Ansible to update the package lists as well as to install the package python-minimal. In addition, I added the package python2.7-apt in this bootstrapper, as it is needed later on. Once Python was installed, the ping playbook worked without any problems.
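As ad-hoc calls, the bootstrap could look like this (the inventory name hosts is a placeholder); the raw module sends plain shell over SSH and therefore works without Python on the node, and the final ping is the ad-hoc equivalent of the ping playbook:

```shell
> ansible all -i hosts -m raw -a "apt-get update"
> ansible all -i hosts -m raw -a "apt-get install -y python-minimal python2.7-apt"
> ansible all -i hosts -m ping
```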

Even though I wanted to approach this in a structured way, beginning with this topic is quite a mess. You have to set up everything and pull all the pieces together before you can start working properly.

Getting a Server

First of all, I picked a new root server, on which I will install Ubuntu 18.04 LTS. I found a great offer at Netcup, where I just ordered a new root server with unlimited traffic.

That way, I can start moving bit by bit from my old, handcrafted and home-brewed server to the new one instead of replacing the old server. Once everything works fine, I will migrate the data and change the DNS entries to the new server.

macOS as Ansible Control Machine

When starting such a project, one should expect problems from the very first moment. For me, they started already when I wanted to install Ansible on macOS. Why I chose Ansible over Chef and Puppet will be covered later.

There is no native Ansible package available for macOS. Instead, you can use Homebrew. Assuming Homebrew is already installed, the following command should do the job.
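At the time of writing, a single package is all that is needed:

```shell
> brew install ansible
```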

If the command fails, the culprit might – as so often – be the latest macOS version, Mojave in this case. Running

> xcode-select --install

installs the command line tools as part of Xcode. Once accomplished, the Homebrew command should work like a charm. Eventually, Ansible is available on my laptop, which is now my control machine.

Setting up the SSH Access

Usually, I deactivate all password-based logins on a server and allow only RSA-key-based logins. To generate a key, you can follow the instructions on ssh.com.

To upload the key, ssh-copy-id is needed. However, once again, this is not available on macOS, and you have to install it using

> brew install ssh-copy-id

Now you can upload the key using

> ssh-copy-id -i ~/.ssh/key user@host

As far as I have planned, this was the only step necessary on the server itself. All future settings, including deactivating password-based access, should be applied using Ansible scripts.
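As a sketch of what those Ansible-managed settings will eventually enforce in /etc/ssh/sshd_config (standard OpenSSH directives; the concrete values below are my intended targets, not yet applied):

```
# /etc/ssh/sshd_config – to be enforced later via Ansible (sketch)
PubkeyAuthentication yes
PasswordAuthentication no
ChallengeResponseAuthentication no
```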

Setting up a Repository

As a (former) developer, I almost never do anything without putting my stuff into a version control system. For my Ansible scripts, I decided to set up a private GitHub repository. Although I use GitLab and Subversion at work, as well as running a Subversion server at home, I meanwhile put almost everything into GitHub repositories. Therefore, GitHub comes in quite handy.

Why Ansible?

For automation, there are several frameworks available. One major advantage of Ansible is the fact that only a single so-called control machine is needed; there is no further Ansible infrastructure required. Once you have installed Ansible as described above, you are ready to go. With Puppet, you again need to deal with a server and agents, and eventually you are stuck with running daemons. While this is a feasible approach, e.g. to manage my team’s 100 servers at work, it is overkill for personal use. The same holds for Chef. That is why I decided to use Ansible: it is a quite feasible approach for personal use with little overhead, while also being the approach with the fewest attack vectors.

If you are not familiar with Ansible, I recommend watching the following 14-minute video, which gives you a good overview of Ansible’s capabilities. You might want to skip the last four minutes, as they deal with Red Hat’s Ansible Tower, which targets enterprises.

I recently gave a talk introducing how DevOps adds value through fully automated toolchains. Following up on this at my daily work, I realized that my very personal servers are managed completely manually. Every single change is an epic battle. Therefore, I decided to get my hands dirty and set up a fully automated toolchain for my personal server.

Why!?

I want to learn about the tools, the automation and the processes necessary. Maybe you ask yourself why one should do this: I simply do it to catch up with current technology development. I meet a lot of technical managers who actually have no clue about the technology they are talking about. Personally, I don’t want to be one of those managers.

On the other hand, I have to move to a more recent version of my server’s OS, but I don’t want to set up the server by hand again.

About one year ago, I had to re-install the server completely after a configuration mistake; three months ago, I had to spend hours and hours manually cleaning my WordPress installations due to an infected site. So there is an actual need as well.

I decided to blog about this adventure, which is especially interesting as the blog itself has to move to the new server at some point. As usual, I blog to keep notes for myself. However, if you find the articles helpful, feel free to drop me a line; recommendations and tips are warmly welcome.

Goals

I thought about some goals I want to achieve. Ultimately, I want a fully automated deployment pipeline for at least my main server. To achieve this, I want to:

upgrade to a new version of my server’s OS

apply automation using tools such as Puppet, Chef or Ansible

containerize everything – at least Mail and Web-Server have to be deployed independently

If you connect a lot via SSH to remote servers, entering passphrases over and over might be a pain. As long as your hardware is secured, you might consider enabling the macOS keychain for your SSH connections as well.

Go to ~/.ssh and open or create the config file. Then just add the following entry to the file:

Host *
UseKeychain yes

In case you want to enable the keychain only for a certain server, simply replace * with your hostname, such as foo.com.
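For a single host, such an entry could look like this (foo.com and the key file name are placeholders; AddKeysToAgent additionally loads the key into the SSH agent on first use, so the passphrase is only asked for once per session):

```
Host foo.com
  UseKeychain yes
  AddKeysToAgent yes
  IdentityFile ~/.ssh/key
```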

Prior to macOS Sierra, macOS offered a dialog for entering the passphrase, where you were able to choose that the passphrase be remembered. For some reason, this dialog was removed. On the other hand, the keychain option was introduced with macOS 10.12.2.

Finally, I found some minutes to set up my website with SSL encryption. The issue here: many hosters demand a fortune for certificates.

Applying Let’s Encrypt

Let’s Encrypt is a free alternative, providing certificates accepted by most browsers.

While manually installing a certificate can be a real pain, Let’s Encrypt utilizes Certbot to do it on your behalf. Once installed, you can select the sites to protect and let Certbot do its work. There is a crisp description on the Let’s Encrypt page which explains how this actually works.

To be honest, applying an SSL certificate using this setup makes it absolutely easy for everybody to do so – as long as you have shell access to your server. After downloading the packages – which are provided for a variety of operating systems and web servers – Certbot even takes care of the configuration. You can also configure the sites so that all HTTP requests are automatically forwarded to HTTPS.
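Assuming an Apache setup on Ubuntu, the whole procedure boils down to a single call; Certbot detects the configured sites and offers the HTTP-to-HTTPS forwarding mentioned above:

```shell
> sudo certbot --apache
```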

Once done, the site was already available via HTTPS. Unfortunately, Chrome told me the connection was still not secure.

The help provided did not help much either.

Further investigation eventually showed that the images within posts did not use HTTPS, even after the base URL of the site had been changed in the WordPress settings.

The links are not created on the fly – they are actually stored in the text, in the database. At least for internal resources (i.e. images from your own server), I expected something like relative links. To be honest, I have never looked that closely at the WordPress internals.

Altering the Database

To change this quickly, I decided to alter all insecure URLs in the database. As changing the protocol from HTTP to HTTPS changes the base URL, just as when changing the domain, you can make use of existing tools to do so.
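The idea behind those tools is a plain search-and-replace on the stored URLs, sketched here on the text level (example.org stands in for the real domain; the actual tools generate corresponding SQL UPDATE statements against the WordPress tables):

```shell
# A stand-in for a single stored post_content value
echo '<img src="http://example.org/uploads/pic.jpg">' > dump.sql

# Rewrite the protocol for the own domain only; external links stay untouched
sed 's|http://example\.org|https://example.org|g' dump.sql > dump-https.sql

cat dump-https.sql
```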

Once done, the very next request already ended up as a valid and secure HTTPS request.

Warning: make a backup (e.g. using mysqldump) before altering your WordPress database, in case you brick it for whatever reason.

tl;dr

Using certificates issued by Let’s Encrypt, you can secure your website almost automatically with Certbot. While doing this, I experienced some issues with WordPress, as all URLs are stored as plain text in the database. With the generated scripts from Misha Rudrastyh’s query generator, altering the WordPress content to use HTTPS instead of HTTP is quite easy.