Introduction

This series of blog posts aims to help students at the University of Geneva develop their first application following microservice principles.
Besides explaining the concepts and implementation details of microservice architecture, we will also discuss software development practices such as software
factories and innovative deployment options such as containers and container composition. All samples and a complete working application can be found on GitHub.

The following diagram represents the end state of our microservice architecture. From a business perspective, it delivers RegTech services.
More specifically, it manages counterparties and financial instruments, valuates a portfolio, and finally provides some regulatory reporting.
You do not need deep financial knowledge; suffice it to say that:

A counterparty is an individual or a company that participates in a financial transaction.

A financial instrument is an asset that can be traded, such as stocks, loans, and so on.

Portfolio valuation is the action of evaluating the net value of a set of assets.

Financial institutions must comply with a set of regulations, such as delivering monthly reports to state their financial health.

Network topology and high level component view of the micro-service architecture

Besides these “business” services, the architecture delivers a set of non-functional services such as:

A central logging mechanism to deal with the distributed nature of the architecture. It relies on a Logspout companion container that forwards the logs from all the containers to a concentrator called Logstash, which in turn
sends them to Elasticsearch, a database optimized for searching. Finally, Kibana provides visualization and analysis of the logs.

A message broker, Kafka in this case, to increase service decoupling and scalability.

An API gateway, Kong in this case, that provides routing, load balancing, and API-gateway services such as security and API composition and aggregation. It delivers SSO to the microservices by integrating an identity manager called Keycloak.
The API gateway also shields the user from the ugly details of the network topology, and it protects the backend by establishing a clear front vs. back network separation.
Furthermore, it exposes static resources and, finally, it provides TLS termination.

From a technology perspective, the microservices are implemented using Java EE 8 and its MicroProfile, more specifically Thorntail [3].
Furthermore, the microservices are packaged as Docker [1][2] containers, using Maven [4] as the build tool.

This chapter describes step by step how to compile and deploy the microservices themselves.
Part 2 describes how to set up non-functional services such as SSO (Single Sign-On), API concentration, and logging. Because of its
distributed nature, in
a microservice architecture the non-functional infrastructure is as important as the actual services.
Part 3 dives deeper into what a microservice architecture actually is, its benefits and drawbacks, and some details on the related technologies.
Part 5 focuses on the software factory, putting everything together and testing the result.
Then Part 6 performs the autopsy of a microservice, detailing the associated design patterns.

Pre-requisites

Note: This series of blog posts leverages a lot of different technologies. Please take the time to install everything properly; it will save time later on.

To execute the samples, you will need to install and configure the following tools:

a “reasonably” powerful computer with Linux (any recent distribution) or Windows (min. Windows 10) to support Docker. A Mac is fine as well, but it requires some additional steps that will not be described here.

Note: We will start a lot of containers; please grant at least 6 GB of RAM and 6 GB of swap to your docker-machine.

Getting the backend components to run

First things first, let’s check out the code and compile everything. Before you start complaining:
yes, this section is tedious, but we have to set up the environment before diving into the wonderful world of microservices.
Let’s start by cloning the code from GitHub.
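The repository URL below is a placeholder, since the actual address is only linked from the post; substitute the real URL of the project’s GitHub repository:

```shell
# Clone the project sources (replace the placeholder with the real repository URL)
git clone https://github.com/<organization>/<repository>.git
cd <repository>
```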

The next step is to compile the project to produce the required artifacts (i.e., binaries). To that end,
we use Apache Maven. Maven is an opinionated build tool:

Opinionated Software is a software product that believes a certain way of approaching a business process is inherently
better and provides software crafted around that approach.

Namely, following its opinion makes our lives easier and requires less effort. For more information and tutorials, please refer to this Maven Tutorial. The output of the build process is a set of “JAR” files (i.e., Java libraries) that are stored in your local ~/.m2 repository for later use.
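Assuming a standard multi-module Maven project, a single command builds every microservice and installs the resulting JARs into the local repository:

```shell
# Build all modules and install the JARs into ~/.m2 for later packaging steps
mvn clean install
```

If a test fails and blocks the build, `mvn clean install -DskipTests` skips the test phase, though running the tests at least once is a good sanity check.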

At this point, you have compiled all of the Java code and created Maven artifacts for each microservice (Java Archives, a.k.a. JARs). But as we will see in the next chapters, a microservice architecture is much more than a bunch of microservices. We will need a lot of additional third-party tools and services.
These additional services (e.g., logging, security) are usually provided as container images that run on Docker.
To be able to run the microservices alongside these third-party tools, we need to package the microservices as Docker images.

Simply put, Docker provides lightweight virtualization. It has a smaller footprint than the usual virtual machine approaches.
Compared to VirtualBox, VMware, and the others, the main difference is that the OS layer is not replicated in each container but rather shared.

Docker containers run Docker images, which are merely lightweight Linux systems with additional software.
Let’s first check whether Docker is properly installed.
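If the installation is healthy, the first command prints the client version and the second reports details about the running daemon (it fails if the daemon is not reachable):

```shell
# Verify the Docker client installation
docker --version

# Verify that the Docker daemon is up and reachable
docker info
```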

So the Docker daemon is up and running. Let’s create the Docker images for the microservices. This step reuses the
JAR files created previously and packages them alongside a Linux system so that every image can run independently.
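The exact build mechanism depends on the project (a Maven Docker plugin or a plain Dockerfile per module); as a sketch, with a Dockerfile in each module directory and illustrative image and directory names:

```shell
# Build one image per microservice from its module's Dockerfile
# (the image tag and directory name are illustrative, not the project's actual names)
docker build -t unige/counterparty-service counterparty-service/
```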

Tip: At this point, you have Docker images for all the microservices and for the API gateway.

Let’s start a Docker container with the counterparty microservice, mapping port 8080 of the container to port 10080 of the host.
In principle, this starts a Linux OS and then launches the microservice as the first process (PID 1). This container provides all the services you would
expect from any Linux system, such as networking, security, and isolation.
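Assuming the image is named unige/counterparty-service (an illustrative name, to be replaced by the project’s actual image tag), the run command looks like this:

```shell
# Start the counterparty microservice in the background,
# mapping container port 8080 to host port 10080
docker run -d --name myCounterpartyService -p 10080:8080 unige/counterparty-service
```

The `-d` flag detaches the container so the console stays free, `--name` gives it a stable name to reference later, and `-p host:container` publishes the port.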

Tip: open a browser and navigate to http://localhost:10080/counterparties. It will display a long list of counterparties.

This demonstrates that a web service is listening on port 10080 of localhost. More specifically, we started a container with the image of the counterparty microservice. Port 8080 is mapped to port 10080 so that we can test it.
Furthermore, we named the container myCounterpartyService.

As it is a fully running Linux system, you can connect to the container to inspect it. In another console, we can run the docker ps command to list the running containers.
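The listing can be narrowed down to our container by name:

```shell
# List running containers, filtered to the one we just started
docker ps --filter name=myCounterpartyService
```

The output should show the image, the uptime, and the 10080→8080 port mapping.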

So there is one running container named myCounterpartyService that listens on port 10080 of localhost.

Let’s test it by connecting to http://localhost:10080/counterparties/724500J4K3Q60O9QLF45, either using a browser or the curl command line. counterparties is the context name of the service, and 724500J4K3Q60O9QLF45 is the id of the particular counterparty we want the details of.
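With curl, the same request looks like this:

```shell
# Fetch the details of a single counterparty by its id
curl http://localhost:10080/counterparties/724500J4K3Q60O9QLF45
```

The service should answer with the JSON representation of that counterparty.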

So far we have only run one service. To run all the microservices (plus the message broker), we will compose the images using docker-compose.
docker-compose is a way to script a series of complex Docker configurations to provide a coherent ecosystem.
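Assuming the repository ships a docker-compose.yml at its root describing all the services, the whole ecosystem can be started and inspected with:

```shell
# Start every service defined in docker-compose.yml in the background
docker-compose up -d

# Follow the aggregated logs of all the composed containers
docker-compose logs -f
```

When you are done, `docker-compose down` stops and removes all the containers in one go.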