Deploying OpenShift to AWS with HashiCorp Terraform and Ansible

Tuesday, Oct 16, 2018, 6:15 PM

HAYS, 7 Castle St, Edinburgh EH2 3AH, GB

30 Members Went

In this session we look at Infrastructure as Code and Configuration as Code, as we demonstrate how to use these approaches to deploy RedHat OpenShift to AWS with HashiCorp Terraform and Ansible. We start off with configuring AWS credentials, then use HashiCorp Terraform to create the AWS infrastructure needed to deploy and run our own RedHat OpenSh…

Once that’s done, you can try it out with the customary “Hello World” example…

I’m running Docker on an Ubuntu VM, but the commands and the results are the same regardless of platform – that consistency is one of Docker’s core concepts.

You can then check which processes (docker containers) are running using the “docker ps” command – in my example you can see that there’s one Jenkins container running. If you run “docker ps -a” you will see all containers (including stopped ones, of which I have a few on this host).
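
For reference, the two commands are simply:

docker ps        # list running containers only
docker ps -a     # list all containers, including stopped ones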

and you can check your Docker version with:

root@ubuntud:~# docker --version
Docker version 1.13.0, build 49bf474

Now that the basic setup is done, we can move on to something a little more interesting – downloading and running a “Dockerised” Jenkins container.

I’m going to use my own Dockerised Jenkins Image, and there will be more detail on that in the next post – you’re welcome to try it out too, just run this command in your terminal:

docker run -d -p 8080:8080 donaldsimpson/dockerjenkins

If you don’t happen to have my docker image cached locally (like I do), docker will automatically download it for you from Docker Hub and then run it.

That command did quite a few important things, here’s a quick explanation of them all:

docker run -d

tells docker that we want to run the container in the background so that we can carry on and do other things while it runs. The alternative is -it, for an interactive/foreground session.

docker run -d -p 8080:8080

The -p 8080:8080 tells docker to map port 8080 on the local host to port 8080 in the running container. This means that when we visit localhost:8080 the request will be passed through to the container.

docker run -d -p 8080:8080 donaldsimpson/dockerjenkins

and finally, we have the namespace and name of the Docker image we want to run – my “donaldsimpson/dockerjenkins” one – more on this later!

You can now visit port 8080 on your Docker host and see that Jenkins is up and running….

That’s Jenkins up and running and being happily served from the Docker container that was just pulled from Docker Hub – how easy was that?!

And the best thing is, it’s entirely and reliably repeatable, it’s guaranteed to work the same on all platforms that can run Docker, and you can quickly and easily update, delete, replace, change or share it with others! Ok, that’s more than one thing, but the point is that there’s a lot to like here 🙂

That’s it for this post – in the next one we will look in to the various elements that came together to make this work – the code and configuration files in my Git repo, the automated build process on Docker Hub that builds and updates the Docker Image, and how the two are related.

I wrote a while back about my troubles with Carrier Grade NAT (CGNAT), and described a solution that involved tunneling out of CGNAT using a combination of SSH and an AWS server – the full article is here.

That worked ok, but it was pretty fragile and not ideal – connections could be dropped, sessions expired, hosts rebooted, and so on. Passing data through my EC2 host wasn’t great either.

My “new and improved” solution to this is to use a local tool like ngrok to create the tunnel for me. This is proving to be far simpler to manage, more reliable, and ngrok provides a load of handy additional features too.

Here’s a very quick run through of getting it up and running on my Ubuntu VM, which sits behind CGNAT and hosts a webserver I’d like to be able to access from the outside occasionally. This is the front end to my ZoneMinder CCTV interface, but it could be anything you want to host and on any port.
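
The ngrok side of it amounts to very little – something like the following, where the auth token is your own and 80 is the local port you want to expose (exact commands may vary by ngrok version):

./ngrok authtoken <your-ngrok-authtoken>   # one-off account setup
./ngrok http 80                            # tunnel local port 80 to a public ngrok URL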

Intro

After switching to a 4G broadband provider, who shall (pretty much) remain nameless, I discovered they were using Carrier-Grade NAT (aka CGNAT) on me.

There are more details on that here and here but in short, the ISP is ‘saving’ IPv4 addresses by sharing them out amongst several users and NAT’ing their connections – in much the same way as you do at home, when you port forward multiple devices using one external IP address: my home network is just one ‘device’ in a pool of their users, who are all sharing the same external IP address.

The impact of this for me is that I can no longer NAT my internal network services, as I have been given a shared public-facing IPv4 address. This approach may be practical for a bunch of mobile phone users wanting to check Twitter and Facebook, but it sucks big time for gamers or anyone else wanting to connect things from their home network to the internet. So, rather than having “Everything Everywhere” through my very expensive new 4G connection – with a 12-month contract – it turns out I get “not much to anywhere“.

The Aim

Point being: I would like to be able to check my internal servers and websites when I’m away – especially my ZoneMinder CCTV setup – but my home broadband no longer has its own internet address. So an alternative solution had to be found…

I basically use 2 servers, the one at home (unhelpfully now stuck behind my ISPs CGNAT) and one in the Amazon Cloud (my public facing AWS web server with DNS), and create a reverse SSH Tunnel between them. Plus a couple of essential tweaks you won’t find out about if you don’t read any further 🙂

The Steps

Step 1 – create the reverse SSH tunnel:

This is initiated on the internal/home server, and connects outwards to the AWS host on the internet, like so.
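
With placeholder identity file and hostname (substitute your own), the full command takes this shape:

ssh -N -R 8888:localhost:80 -i ~/.ssh/mykey.pem myuser@my-aws-host.example.com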

-N (from the SSH man page) “Do not execute a remote command. This is useful for just forwarding ports.”

-R (from the SSH man page) “Specifies that connections to the given TCP port or Unix socket on the remote (server) host are to be forwarded to the given host and port, or Unix socket, on the local side.”

8888:localhost:80 – this means: create the reverse tunnel so that port 8888 on the destination host forwards to localhost port 80 (my ZoneMinder web app). The ordering doesn’t look right to me, but it’s what’s needed for a reverse tunnel.

The -i and everything after it is just me connecting to my AWS host as my user with an identity file. YMMV; whatever you normally do should be fine.

When you run this command you should not see any issues or warnings. You need to leave it running using whatever method you like – personally I like screen for this kind of thing, and will also be setting up Jenkins jobs later (below).

Step 2 – check on the AWS host

With that SSH command still running on your local server you should now be able to connect to the web app from your remote AWS Web Server, by reading from port 8888 with curl or wget.

This is a worthwhile check to perform at this point, before moving on to the next 2 steps – for example:
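
Run on the AWS host itself, a check along these lines should return your app’s HTML (the URL and port are whatever you chose above):

curl -s http://localhost:8888/ | head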

This shows that port 8888 on my AWS server is currently connected to the ZoneMinder application that’s running on port 80 of my home web server. A good sign.

Step 3 – configure AWS Security & Ports

Progress is being made, but in order to be able to hit that port with a browser and have things work as I’d like, I still need to configure AWS to allow incoming connections to the newly chosen port 8888.

This is done through the Amazon EC2 Management Console using the left hand menu item “Network & Security” then “Security Groups”.

This should load your current Security Groups, which you can click on to Edit. You may have a few to check.

Now select Add and configure a new Inbound rule something like so:

It’s the “Custom TCP Rule” second from the bottom, with port 8888 and “Anywhere” and “0.0.0.0/0” as the source in my picture. Don’t go for the HTTP option – unless you’re sure that’s what you want 🙂
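
If you prefer the command line, the same rule can be added with the AWS CLI – the security group ID below is a placeholder for your own:

aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 8888 --cidr 0.0.0.0/0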

Step 4 – configure SSH on AWS host

At this point I thought I was done… but it didn’t work and I couldn’t immediately see why, as the wget check was all good.

Some head scratching and checking of firewalls later, I realised it was most likely to be permissions on the port I was tunneling – it’s not very likely to be exposed and world readable by default, is it? Doh.

The fix is to enable the GatewayPorts option for the SSH daemon on the AWS host. Add the line

GatewayPorts yes

to /etc/ssh/sshd_config, checking that there’s no existing reference to GatewayPorts – edit this file carefully and at your own risk.

As I understand it – which may best be described as ‘loosely’ – the reason this worked when I tested with wget earlier is because I was connecting to the loopback interface; this change to sshd binds the port to all interfaces. See the detailed answer on this post for more, including ways to limit this to specific users.

Once that’s done, restart sshd with

service ssh restart

and you should now be able to connect by pointing a web browser at port 8888 (or whatever you set) of your AWS web server and see your app responding from the other end.

Step 5 – automate it with Jenkins

The final step for me is to wrap this (the ssh tunnel creation part) up in a Jenkins job running on my home server.

This is useful for a number of reasons, such as detecting and resetting defunct/stale connections, and enabling scheduling – i.e. I can have the port forwarded when I want it, and have it shut down during the hours I don’t.
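
As a rough sketch, the job’s “Execute shell” build step can be as simple as this – the key path and hostname are placeholders, and the match pattern assumes the tunnel command from Step 1:

# drop any stale tunnel from a previous run, then re-create it
pkill -f "ssh -N -R 8888" || true
ssh -N -R 8888:localhost:80 -i /var/lib/jenkins/.ssh/mykey.pem myuser@my-aws-host.example.com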

About This Book

Find out how to interact with Jenkins from within Eclipse, NetBeans, and IntelliJ IDEA

Develop custom solutions that act upon Jenkins information in real time

A step-by-step, practical guide to help you learn about extension points in existing plugins and how to build your own plugin

Discover how Jenkins customization can help improve quality and reduce costs

Who This Book Is For

This book is aimed primarily at developers and administrators who are interested in taking their interaction and usage of Jenkins to the next level.

The book assumes you have a working knowledge of Jenkins and programming in general, and an interest in learning about the different approaches to customizing and extending Jenkins so it fits your requirements and your environment perfectly.


In Detail

Jenkins CI is the leading open source continuous integration server. It is written in Java and has a wealth of plugins to support the building and testing of virtually any project. Jenkins supports multiple Software Configuration Management tools such as Git, Subversion, and Mercurial.

This book explores and explains the many extension points and customizations that Jenkins offers its users, and teaches you how to develop your own Jenkins extensions and plugins.

First, you will learn how to adapt Jenkins and leverage its abilities to empower DevOps, Continuous Integration, Continuous Deployment, and Agile projects. Next, you will find out how to reduce the cost of modern software development, increase the quality of deliveries, and thereby reduce the time to market. We will also teach you how to create your own custom plugins using Extension points.

Finally, we will show you how to combine everything you learned over the course of the book into one real-world scenario.

About This Video

Build and manage your own custom Docker Containers to set up data sources, filesystems, and networking

Build your own personal Heroku PaaS with Dokku

Who This Video Is For

If you’re a developer who wants to learn about Docker, a powerful tool to manage your applications effectively on various platforms, this course is perfect for you! It assumes basic knowledge of Linux but supplies everything you need to know to get your own Docker environment up and running.

What You Will Learn

Build new Docker containers and find and manage existing ones

Use the Docker Index, and create your own private one by using containers

Discover ways to automate Docker, and harness the power of containers!

Build your own Docker powered mini-Heroku PaaS with Dokku

Set up Docker on your environment based on your application’s custom requirements

Master Docker patterns and enhancements using the Ambassador and Minimal containers

In Detail

One of the major challenges while creating an application is adapting your application to run smoothly on all of the plethora of operating systems available. Docker is an extremely efficient technology that allows you to wrap all your code along with its supporting files into a single bundle; it also guarantees that your application will behave in the same way on any host powered by Docker. You can also easily reuse existing Docker containers or create and publish your own. Unlike Virtual Machines, Docker containers are lightweight and more efficient.

Beginning Docker starts with the fundamentals of Docker—explaining how it works, how to set it up, and how to get started on leveraging the benefits of this technology. The course goes on to cover more advanced features and shows you how to create and share your own Docker images.

You will learn how to install Docker on your own machine, then how to manage it effectively, and then progress to creating and publishing your very own application. You will then learn a bit more about Docker containers – built-in features and commands such as volumes, mounts, ports, and linking and constraining containers – before diving into running a web application.

Docker has functionality such as the Docker web API to handle complex automation processes which will be explained in detail. You will also learn how to use the Docker Hub to fetch and share containers, before running through the creation of your own Docker powered mini-Heroku.

Beginning Docker covers everything required to get you up and running with Docker, with detailed real-world examples and helpful tips to make sure you get the most from it.

Style and Approach

An easy-to-follow and structured video tutorial with practical examples of Docker to help you get to grips with each and every aspect.

The course will take you on a journey from the basics to the advanced application of Docker containers, and includes several real-world scenarios to learn from.

And that’s that done – all very simple, quick and straightforward. Now it’s time to pull down an Ubuntu image…

docker pull ubuntu

Once that’s done, you are ready to run a command (bash, in the case of this quick test I found suggested elsewhere) in a Docker container like so:

docker run -i -t ubuntu /bin/bash

(-i keeps stdin open for an interactive session and -t allocates a pseudo-terminal)

That’s it for this post – the next will look at dynamic Jenkins Slave provisioning using Docker Containers, Jenkins plugins for Docker and ways to use Docker for a variety of Jenkins build, deployment and test tasks for Continuous Integration, Continuous Build and DevOps purposes.

In Part I, Information Radiators, I covered what they are, what the main benefits are, and the approach I usually use to set them up. This post goes into more technical detail on how I extract this data from Jenkins.

The Jenkins XML API is very useful for automating tasks like this – if you simply append “/api/xml” to a Jenkins job URL, it will serve up an XML version. Note there is also a JSON API and a CLI and plenty of other options, but I’m using what suits me.

The Jenkins XML API

For example, if you go to one of your Jenkins jobs and add /api/xml like this (substituting your own server and job name):

http://myjenkins:8080/job/myjob/api/xml

you get that job’s details served up as XML. Older versions of Jenkins also accepted an xpath query parameter, letting you pull back a single value, e.g. “true” or “false” – this was very useful and easy to do, but the functionality was removed/disabled in recent versions of Jenkins for security reasons, meaning that my processes that used it needed to be rewritten 🙁

Extracting the data – Plan B…

So, here’s the new solution I went for – the real scripts/methods do some error handling and cleaning up etc, but I’m just highlighting the main functions and the high-level logic behind each of them here:

get_urls method:

query a table in MySQL that contains a list of the job names and URLs to monitor
for each $JOB_NAME found, it calls the get_file method, passing it the URL as a parameter.
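
A minimal sketch of that logic – the database, table and column names here are illustrative, not the real ones:

get_urls () {
  # read each job name and URL to monitor from MySQL (-N suppresses the header row)
  mysql -N -e "SELECT job_name, job_url FROM jobs_to_monitor" radiator_db |
  while read -r JOB_NAME JOB_URL; do
    get_file "$JOB_URL"
  done
}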

get_file method:

this takes a URL param, and uses curl to fetch and save the XML data from that URL to a temporary file (“xmlfile”):

curl -sL "$1" | xmllint --format - > xmlfile

Note I’m using “xmllint --format” there to nicely format the XML data, which makes processing it later much easier.

get_data method:

this first calls “get_if_building” (see the sketch below) to see if the job is currently running or not, then it extracts the job’s latest result from the saved XML and writes both values to a status table in MySQL.
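
A loose sketch of the pair – this assumes the stored URLs point at each job’s lastBuild/api/xml, and the table layout here is illustrative:

get_if_building () {
  # the <building> element is "true" while a build is in progress
  BUILDING=$(xmllint --xpath 'string(//building)' xmlfile)
}

get_data () {
  get_if_building
  RESULT=$(xmllint --xpath 'string(//result)' xmlfile)  # e.g. SUCCESS or FAILURE
  # $JOB_NAME is set by the calling loop in get_urls
  mysql -e "REPLACE INTO job_status (job_name, building, result) VALUES ('$JOB_NAME', '$BUILDING', '$RESULT')" radiator_db
}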

I then have JSP pages that read data from that table, and translate things like true/false into HTML that sets the background colours (Red, Amber, Green), and shows the appropriate blocks and details per job.

If you have a few browsers, TVs or monitors showing these strategically placed around the office, developers get rapid feedback on the result of their code changes, which increases quality and reduces development time and costs – and they can be fun to watch and set up too 🙂

Information Radiators are used to provide people with feedback on the current status of code builds and automated tests in Continuous Integration and Agile development environments.

The basic idea is that when developers commit a code change, they can easily and quickly see that it has been picked up by the automated build process, and then (ideally within 10 minutes) see the result of their change: did the build succeed and did the automated tests pass?

Martin Fowler’s description goes into more detail on the ideas behind this approach and the function that Information Radiators serve.

The normal convention for these is to use colour coded blocks per build, using:

Green for good/passing jobs

Amber for currently building/running jobs (or sometimes for unstable ones)

Red for failed jobs

Generally you want to keep things as clean, simple and uncluttered as possible, but sometimes it is helpful to add in a bit more info.

Details I have occasionally found worth adding include things like:

name and/or picture of the user who triggered (or broke!) the build

commit message from the code change that triggered the build

build number

history – number of recent fails or passes

date/time last failed and last checked

if you use amber for “unstable” rather than “build in progress”, you may want to add text to say if the job is currently building – I often use the “spinning wheel” icon thingy from Jenkins itself.

Why build your own?

There are tons of readily available plugins that allow you to quickly and easily produce a Radiator or Wall panel from Jenkins, so why go to the bother of making your own?

Plugins are usually linked to one Jenkins instance (the one they are running on), and I have often found that alone to be too restrictive. Having too many different radiators all over the place makes things cluttered and uncertain, and people can easily start to “switch off” from them all – having one screen that people can understand at a glance usually works best.

Changing requirements – developers are constantly looking to improve processes, and often come up with ideas and requests for things to try that may help them do their jobs: adding a bit of information from another source, for example, or changing the colours used, or adding curved borders, and so on…

What I have found often works best is to get all of the data I am interested in inside a database, then write my own simple but flexible presentation layer from scratch. This gives me all the flexibility I could want (or may find myself wanting or needing later…) and, importantly, it also allows me to leverage additional benefits by combining data from Jenkins with data from elsewhere. This means I can easily produce reports, charts and metrics that present a view drawn from multiple data sources throughout an organisation – for example, you can easily create reports that combine:

version control – the actual code change can be extracted from version control and linked to both the developer and bug details – the “svn log” command is useful for this (see the example after this list)

environment monitors – state and health of environments; database and app server health, deployed patch and code levels etc (again, these are usually other Jenkins jobs!)

and you can add in anything else you can get your hands on 🙂
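
As a tiny example of the version control piece mentioned above – the repository URL and revision number are placeholders – svn log can pull back the commit message and changed paths for the revision that triggered a build:

svn log -v -r 1234 http://svnserver/repos/myproject/trunk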

This allows you to track the flow of a change right through the development life cycle – from the initial change/requirement/defect to the change itself at the code/file level, then the testing and building of it and the eventual release. This is far more than you need for a simple Information Radiator, but using this approach means that you can easily reuse much of the work in different ways.