For a few years now I've been a happy Arch Linux user. I like the project's philosophy, which was one of the major reasons I made the switch back in the day.

I'm not only using it on my laptop; I also have several devices running it at home, from a thin client I use as a Docker node to some Raspberry Pis running Arch Linux ARM.

Since Arch is a rolling-release distro, several updates become available throughout the day. To stay on top of them I had to log in on all those devices at least …
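Checking each box by hand typically boils down to a loop like the sketch below (the host names are placeholders; it assumes SSH access to every device and the `checkupdates` script from the pacman-contrib package):

```shell
#!/bin/sh
# List pending package updates on every Arch box over SSH.
# Host names are made up; checkupdates ships with pacman-contrib
# and exits non-zero when there is nothing to update.
for host in laptop thinclient pi1 pi2; do
  echo "== $host =="
  ssh "$host" checkupdates || echo "(no updates or host unreachable)"
done
```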

As published a few months ago, I worked out a dockerized Jenkins farm where both master and slaves are Docker containers, working together with services like Nexus. Next to that setup I've dockerized my home services, where Pi-hole, Home Assistant and others run as Docker containers on a thin client I promoted to my home lab.

To get an overview of all those containers and the resources they consume, I pulled in the Git repo of Brian Christner, which spins up a whole Prometheus stack with some exporters and a Grafana instance to visualize …
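If you want to try the same stack, spinning it up is roughly the following (the repository path is my assumption based on Brian Christner's GitHub account; verify it before use):

```shell
# Clone and start the Prometheus + Grafana monitoring stack.
# The repo URL is an assumption; check Brian Christner's GitHub profile.
git clone https://github.com/vegasbrianc/prometheus.git
cd prometheus
docker-compose up -d   # starts Prometheus, the exporters and Grafana
```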

A few months ago I configured a thin client as my home server, replacing the previous Raspberry Pi setup.

During that migration I moved all native services into Docker containers. One of those services is a Pi-hole setup, which blocks ad-serving domains at the DNS level and provides a DNS cache within our LAN to gain a bit of speed.
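Running Pi-hole as a container looks roughly like this sketch, using the official `pihole/pihole` image (the timezone, password and web port below are placeholders to adjust to your own setup):

```shell
# Run Pi-hole as the LAN's DNS server and cache (sketch).
# TZ, WEBPASSWORD and the 8080 web port are placeholder values.
docker run -d --name pihole \
  -p 53:53/tcp -p 53:53/udp \
  -p 8080:80/tcp \
  -e TZ=Europe/Brussels \
  -e WEBPASSWORD=changeme \
  -v pihole-config:/etc/pihole \
  --restart unless-stopped \
  pihole/pihole
```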

It has been running ever since without any issues and has worked pretty well.

When Cloudflare announced their fast, privacy-focused DNS resolver I was intrigued by its DNS-over-HTTPS feature, especially since our …
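For context, Cloudflare's `cloudflared` daemon can act as a local DNS-over-HTTPS forwarder in proxy-dns mode; a sketch (the flags below match the versions I've seen, but double-check them against the current docs):

```shell
# Listen on a local port and forward DNS queries over HTTPS to Cloudflare.
# Port 5053 is an arbitrary local choice; point your resolver at it.
cloudflared proxy-dns \
  --port 5053 \
  --upstream https://1.1.1.1/dns-query \
  --upstream https://1.0.0.1/dns-query
```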

Recently I started working on a new project where the infrastructure is maintained with Ansible. When I was asked to write some functionality in a playbook, I missed my Vagrant Puppet setup, in which I could easily test my Puppet code on my local machine.

Because of my previous project I figured I could use Docker for this purpose on the Ansible side. So I looked around a bit and stumbled on the docker-ansible GitHub repository of William Yeh. He has already done a great job creating Docker images with Ansible preinstalled for a lot of Linux distributions.
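With such an image, testing a playbook locally becomes a one-liner; a sketch (the image tag and playbook name are assumptions, pick whichever distro tag matches your target):

```shell
# Syntax-check a playbook inside William Yeh's Ansible image.
# The centos7 tag and site.yml path are placeholders.
docker run --rm \
  -v "$(pwd)":/work -w /work \
  williamyeh/ansible:centos7 \
  ansible-playbook site.yml --syntax-check
```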

The next thing on the roadmap was to use this Jenkins setup to actually build new Docker images for specific software. Before going to the different teams to discuss how they currently build their software and how this could be done with the new containerized setup, I set up a new Jenkins job.

This Jenkins job builds a generic Jenkins slave Docker container, which the Jenkins master will use to build some …
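At its core such a job just builds and pushes an image; a hedged sketch (the registry host and image name are made up for illustration):

```shell
# Build the generic Jenkins slave image and push it to a private registry.
# registry.example.com and the image name are placeholders.
docker build -t registry.example.com/jenkins/generic-slave:latest .
docker push registry.example.com/jenkins/generic-slave:latest
```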

Today we bumped into an interesting issue in the Jenkins builds of some Android-based applications. The Gradle commands succeeded, but then the build suddenly failed with the most cryptic message ever:

BUILD SUCCESSFUL
Total time: 1 mins 20.492 secs
FAILURE: Build failed with an exception.
* What went wrong:
Already finished
* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.
[Pipeline] }

Since this came out of nowhere, without any modification to the build servers, we were flabbergasted, especially because the builds ran fine on our local machines.

Looking for a central repository store that could hold Maven artifacts, Yum repositories and Docker repositories, we bumped into Nexus Repository Manager. We used the official Docker image to see how it could be integrated into the dockerized CI environment.

Docker repository

As a first step, the Docker repository feature could be enabled so we can start building and storing Docker images for the different Jenkins build slaves and the Jenkins master, making our work reproducible and stored in a safe, central place.

We configured 3 repositories in Nexus for our Docker images, as recommended in the Nexus …
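Getting a throwaway Nexus 3 instance up for this kind of experiment is straightforward with the official image; a sketch (port 5000 is my assumption for the HTTP connector of a hosted Docker repository, which you configure in the Nexus UI afterwards):

```shell
# Start Nexus Repository Manager 3 with a persistent data volume.
# 8081 is the web UI; 5000 is a placeholder Docker connector port.
docker run -d --name nexus \
  -p 8081:8081 -p 5000:5000 \
  -v nexus-data:/nexus-data \
  sonatype/nexus3
```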

Having started at a new customer, we were looking for a more flexible way to have Jenkins spin up slaves on the fly, so that a slave is only started, and only consumes resources, while a specific job is running. That way resources are used more efficiently.

The fact that developers can take control of their build servers by managing the Dockerfiles themselves is a great advantage too. But that's for a later phase. Let's start at the beginning.

For the Docker host, a CentOS 7 server was provisioned and prepared to run the Docker daemon …
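That preparation boils down to installing and enabling the daemon; a sketch for CentOS 7 using Docker's own docker-ce repository (verify the repo URL against the current install docs before running this as root):

```shell
# Install and enable Docker CE on CentOS 7 (run as root).
# The repo URL follows Docker's documented install path for CentOS.
yum install -y yum-utils
yum-config-manager --add-repo \
  https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce
systemctl enable docker
systemctl start docker
```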