In this blog post you will get a brief overview of how to quickly set up a log management solution with the ELK stack (Elasticsearch, Logstash, Kibana) for Spring Boot based microservices. I will show you two ways to parse your application logs and transport them to the Elasticsearch instance. Basically, you can replace Spring Boot with any other application framework that uses Logback, Log4j or any other well-known Java logging framework, so this is also interesting for people who are not using Spring Boot.

This post doesn't contain detailed insights into the technologies used, but you will find plenty of information about them on the web. So, before we start, have a short look at Elasticsearch, Logstash and Kibana. A good starting point is the website elasticsearch.org, with a lot of resources and interesting webinars. My codecentric colleagues have also already blogged about some topics in this area. The reason I selected Spring Boot for this demo is that we are actually using it in some projects, and I believe it will help to make the next big step in the area of enterprise Java architectures. With this microservice-based approach there will be many more logfiles you have to monitor, so a solution is definitely needed here.

First of all, clone the example repository into your workspace and go into the root of this directory.

git clone http://github.com/denschu/elk-example
cd elk-example


The Spring Boot example application is a small batch job which is located in the directory “loggging-example-batch”. Start the JVM with the following commands:

cd loggging-example-batch/
mvn spring-boot:run


Take a look inside "/tmp/server.log". There you will find some log statements.
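The exact statements depend on the run, but with Spring Boot's default Logback pattern they look roughly like this (an illustrative line, not copied from the job — timestamp, level, PID, thread, logger and message in fixed columns):

```
2014-10-10 17:21:10.358  INFO 11871 --- [           main] o.s.batch.core.job.SimpleStepHandler     : Executing step: [step1]
```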

Kibana

In another shell, download Kibana and extract the contents of the archive. It contains the JavaScript-based dashboard, which you can simply serve with any HTTP server. In this example we use a lightweight Python-based HTTP server.
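The download and serve steps might look like the following sketch (the Kibana version and archive name are assumptions — adjust them to the release you actually download; the port matches the dashboard URL used below):

```
# version number is an assumption; use the Kibana 3.x release you downloaded
curl -O https://download.elasticsearch.org/kibana/kibana/kibana-3.1.0.tar.gz
tar -xzf kibana-3.1.0.tar.gz
cd kibana-3.1.0

# Python 2's built-in HTTP server; on Python 3 use "python3 -m http.server 8087"
python -m SimpleHTTPServer 8087
```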

Open the preconfigured Logstash dashboard in Kibana and check whether it successfully connects to your running Elasticsearch server. By default it uses the URL "http://localhost:9200" (see config.js to modify it).

http://localhost:8087/index.html#/dashboard/file/logstash.json


Logstash Agent

To collect the logfiles and transport them to our log server we use Logstash. Open a new shell for the following steps.

Method 1: Parse unstructured logfiles with Grok

The most commonly used method to parse the logs is to create a grok filter that extracts the relevant data from each log statement. I have created a grok filter for the standard Logback configuration that Spring Boot actually uses.
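Pulled together, a minimal logstash.conf might look like the following sketch (the grok pattern is my reconstruction of the Spring Boot default Logback layout, and the syntax targets the Logstash 1.x config format — verify the pattern against your own log lines, e.g. with a grok debugger):

```
input {
  file {
    # the logfile written by the example batch job
    path => "/tmp/server.log"
  }
}
filter {
  grok {
    # sketch of a pattern for the default layout:
    # timestamp, level, PID, "---", [thread], logger, ":", message
    match => [ "message", "%{TIMESTAMP_ISO8601:timestamp}\s+%{LOGLEVEL:level}\s+%{NUMBER:pid} --- \[%{DATA:thread}\] %{DATA:logger}\s*: %{GREEDYDATA:msg}" ]
  }
}
output {
  elasticsearch {
    # Logstash 1.x option; points at the local Elasticsearch instance
    host => "localhost"
  }
}
```

With Logstash 1.x you would then start the agent with something like `bin/logstash agent -f logstash.conf`.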

Open the preconfigured Logstash dashboard in Kibana again and you will see the incoming log statements.

http://localhost:8087/index.html#/dashboard/file/logstash.json


Method 2: Use JSON Logback Encoder

One big disadvantage of Method 1 is that it's sometimes not so easy to create a fully working grok pattern that can parse the unstructured logfiles. The Spring Boot default log format is one of the better ones, because it uses fixed columns. An alternative is to create the log statements directly in JSON format. To achieve that, you have to add an additional encoder artifact (it's already included in the sample application!) to the pom.xml.
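A common choice for this is the logstash-logback-encoder; the dependency declaration would look roughly like this (the version below is an assumption — pick a current release):

```
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <!-- version is an assumption; use a current release -->
    <version>3.0</version>
</dependency>
```

Wired into logback.xml, it might look like this (the output path is a placeholder):

```
<appender name="json" class="ch.qos.logback.core.FileAppender">
    <file>/tmp/logstash.json</file>
    <!-- writes each log event as a JSON document Logstash can ingest directly -->
    <encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
</appender>
```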

Alternative Log Shippers

The Logstash agent runs with a memory footprint (up to 1 GB) that is not well suited for small servers (e.g. EC2 micro instances). For our demo here it doesn't matter, but especially in microservice environments it is recommended to switch to another log shipper, e.g. the Logstash Forwarder (aka Lumberjack). For more information about it please refer to this link. By the way, for the JS guys there is also a Node.js implementation of Logstash available.

To sum it up, the ELK stack (Elasticsearch, Logstash, Kibana) is a good combination for setting up a complete log management solution with open source technologies only. For larger environments with a high volume of logs, it may be useful to add an additional transport like Redis to decouple the components (log server, log shipper) and make the setup more reliable. In the near future I will post about some other topics in the area of microservices. So stay tuned, and give some feedback 🙂

I’ve been using AWS ElasticBeanstalk and I haven’t found a nice way to also start up a logstash log shipping agent on my application servers. To overcome this, I’ve used log4j’s SocketAppender – which logstash seems to handle very well. Specifically, I wrap the SocketAppender in an AsyncAppender for performance reasons, but everything is working well so far.
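The wiring looks roughly like this in log4j XML configuration (host and port are placeholders):

```
<!-- SocketAppender ships serialized LoggingEvents to the Logstash log4j input -->
<appender name="socket" class="org.apache.log4j.net.SocketAppender">
    <param name="RemoteHost" value="logstash-host"/> <!-- placeholder host -->
    <param name="Port" value="4560"/>                <!-- placeholder port -->
</appender>

<!-- AsyncAppender decouples application logging from the network I/O -->
<appender name="async" class="org.apache.log4j.AsyncAppender">
    <appender-ref ref="socket"/>
</appender>
```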

Would you consider the SocketAppender an appropriate alternative log shipper? Anything that I might be missing?

Thanks, very interesting and informative. I've been using log4j and lately started using Stackify's log management (http://stackify.com). They have an appender for log4j, so getting started was really simple and easy. While it is not free, it is very low cost, and there is real benefit in having integrated log and error management together (so, for example, I can get in one click all the logs written while a specific error was reported). We love it here and use it on a daily basis. Actually, I find watching their real-time tailing addicting 🙂