Month: February 2016

Hello internet and welcome to the last part of our tutorial series about Continuous Integration, Code Deployment and Automated Testing with Jenkins. If you arrived at this post having read all the others, we are very proud of you; we hope you enjoyed the journey and learned something along the way. In this post we are going to wrap up everything we have talked about so far and present an alternative to Jenkins.

Quality and Testing – one of the most discussed and valuable topics software engineering has to offer. This blog post covers the most relevant aspects of quality and testing in regard to Continuous Integration and Jenkins. We will show you in detail how you can automate your testing with Jenkins to ensure the best possible software quality.

In this blog post we will show you how to set up your first job using Jenkins CI and GitHub. We will guide you through every single step of the process – including all the rookie mistakes we made. Without further ado, let’s begin.

Hi, it’s us again, the guys with the strange idea of using Sesame Street characters in a blog series about CI. Since we didn’t really cover the reasons why you should use CI/CD, we want to catch up on that this time and talk a bit about popular buzzwords like “Deployment Pipeline” and “Branching”. This might be especially useful if you want to use the CI methodology for your next project but first have to convince your team members of its benefits. This post is more abstract than the others; if you just want to get down to the Jenkins business, you might skip it (although we don’t recommend that, since we put a lot of love into writing this).

Now, it’s finally time to start our first load test. We will be using ApacheBench. To install it, simply run apt-get install apache2-utils (with root privileges). To load test your website, enter

ab -n 200 -c 50 <URL_to_your_page>

This command sends 200 requests, with at most 50 running concurrently. The results are then displayed in the terminal.
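If you only care about the headline numbers in ab’s report, you can filter them out of the output. A minimal sketch (the helper name `ab_rps` is our own; we assume ab is installed and the URL is reachable):

```shell
# Hypothetical helper (not from the original post): run ab and extract
# the "Requests per second" line from its report.
ab_rps() {
  # $1 = total requests, $2 = concurrency, $3 = URL
  ab -n "$1" -c "$2" "$3" | awk -F: '/Requests per second/ {print $2}'
}

# Example (placeholder URL):
# ab_rps 200 50 http://<IP OF YOUR VM>/
```

The same awk pattern works for "Time per request" if you prefer latency over throughput.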

All good so far. We decided to run 10000 requests with a maximum of 1000 at the same time against 1, 5, 20 and 100 Docker containers serving our website, to see if the number of containers makes a difference. However, the results did not really vary at all, no matter whether we used 1 or 100 containers. The requests per second and the time per request ended up being the same (with only a slight, negligible variation) for every number of containers.
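A run like this is easy to script. A minimal sketch (the function name and URL are placeholders, and we assume the matching number of containers is already running behind the load balancer before each pass):

```shell
# Sketch of the benchmark loop over the container counts we tried.
# Bringing up the right number of containers is left out here.
run_benchmarks() {
  local url="$1"
  for containers in 1 5 20 100; do
    echo "== ${containers} container(s) =="
    ab -n 10000 -c 1000 "$url" | grep -E 'Requests per second|Time per request'
  done
}

# Example (placeholder URL):
# run_benchmarks http://<IP OF YOUR VM>/
```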

To benefit from a load balancer, we obviously need several machines to distribute the traffic across.
Thanks to Docker, we simply run

docker run -d -p 81:80 testwebsite:1

to get a second machine. This time the container port of the webserver is mapped to port 81. If you now visit <IP OF YOUR VM>:81, you should see your test website.
You can have as many machines as you want; just pay attention to the ports.
Of course we don’t want to type this command manually each time we want to create a new container, especially not when we want about 100 new containers. That’s why we wrote a small bash script that does the job for us.
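One possible sketch of such a script (not our original one; the function name and base port are assumptions, and it expects Docker plus the testwebsite:1 image from the steps above):

```shell
# Hypothetical sketch: start N containers of testwebsite:1, mapping host
# ports 81, 82, ... to container port 80 (one port per container).
start_containers() {
  count="$1"
  base_port=81
  for i in $(seq 0 $((count - 1))); do
    # Each container gets its own host port, just like the manual command.
    docker run -d -p "$((base_port + i)):80" testwebsite:1
  done
}

# Example: start 100 containers on host ports 81-180
# start_containers 100
```

The only real bookkeeping is the port arithmetic; everything else is the same docker run command as before, repeated in a loop.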