Tracing messages in Choreography with Sleuth and Zipkin

One of the challenges in building distributed systems is having good visibility into what is happening inside them. This challenge is only magnified when dealing with choreography: microservices, loosely coupled, communicating via messaging. In this article you will see how Sleuth and Zipkin help to solve that problem.

One of the most important requirements for production-ready microservices is being able to correlate logs. What does that mean? Having some sort of id that links logs from different services together. Of course you don’t want to link everything; you want to focus on a single request or process that is happening in the system. This was often done with MDC (Mapped Diagnostic Context) in slf4j. There is nothing wrong with using these technologies directly, but here I want to show you something better…

Meet Spring Cloud Sleuth

Spring Cloud Sleuth is a project designed to make tracing requests in microservices easy. It succeeds spectacularly in that goal. If you are using Spring Boot (and you should!), enabling Sleuth only requires adding a single dependency:
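As a sketch, assuming you manage versions with the Spring Cloud BOM, that dependency looks like this:

```xml
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-sleuth</artifactId>
</dependency>
```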

I am using the https://github.com/bjedrzejewski/food-order-publisher project as an example here. If you are interested in how messaging works and in Spring Cloud Stream, check my earlier post about it. There is another blog post that explains the error handling used in the code that we will use here. Now, assuming you have the basics, let’s take a closer look at the log that is being created:

[food-order-publisher,888114b702f9c3aa,888114b702f9c3aa,true]

What you are seeing there are the respective parameters:

appname – the name of the application that logged the span

traceId – the id of the latency graph that contains the span

spanId – the id of a specific operation

exportable – whether the log should be exported to Zipkin or not (more about Zipkin later)

Here, the traceId is the same as the spanId, because this is the beginning of a trace.

You can think of the traceId as a sort of parent of the spanId. I could not have described it better than the official documentation does with the following picture:

Sleuth correlation explained

As you can see in the picture above, the traceId always stays the same for the whole time, while each spanId is a sort of black box covering a single operation, running from request to response.

These traceId and spanId values propagate through REST calls automatically. You really don’t need to do anything special and you will see the same traceId across multiple Spring Boot servers, as long as they have Spring Cloud Sleuth of course. Spans are also automatically created for a number of different interactions, including interacting with data sources and messaging.
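To make the traceId/spanId relationship concrete, here is a toy, stdlib-only sketch. This is not Sleuth’s actual implementation; the id generation and log prefix are merely modeled on what Sleuth prints:

```java
import java.util.Random;

// Toy illustration only: Sleuth generates real ids differently, but the
// shape is the same — 64-bit ids rendered as 16 hex characters.
public class TraceDemo {
    public static String newId(Random rnd) {
        // %016x prints the long as zero-padded, 16-character hex
        return String.format("%016x", rnd.nextLong());
    }

    public static void main(String[] args) {
        Random rnd = new Random();
        String traceId = newId(rnd);
        // At the start of a trace the spanId equals the traceId...
        String rootSpanId = traceId;
        System.out.printf("[food-order-publisher,%s,%s,true]%n", traceId, rootSpanId);
        // ...while a downstream operation keeps the traceId but gets a fresh spanId
        String childSpanId = newId(rnd);
        System.out.printf("[food-order-consumer,%s,%s,true]%n", traceId, childSpanId);
    }
}
```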

Just with that, you suddenly have the power to trace your logs expertly. If you add Logstash, Elasticsearch and Kibana, you can easily filter by traceId and build up a holistic view of the system. It is incredible how much you get with Sleuth for such little effort. But wait, there is more…

Meet Zipkin

Zipkin is a project whose main use for us is to visualize the traces that you have collected with Sleuth. Zipkin Server used to be part of Spring Cloud (enabled by an annotation placed on a Spring Boot application), but it is currently a standalone project. Since I am quite a big fan of Docker, I recommend running the Zipkin server with the following command (provided you have Docker installed):

docker run -d -p 9411:9411 --name zipkin openzipkin/zipkin

It runs by default on port 9411, but this can be changed by passing different environment variables. If you are not keen on Docker, you can run Zipkin Server in multiple different ways, as listed in the official Quickstart. After starting the Zipkin server and visiting port 9411 on your localhost, you should see something like this:

Looks great, just a bit empty…

To make use of this brand new Zipkin Server, we need to tell Spring Boot to actually use it. To do this, you can replace the Sleuth dependency with the Zipkin dependency (Zipkin includes Sleuth), pasting the following into your pom file:
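A sketch of what belongs here — the artifact name and property keys are assumptions matching the Sleuth 1.x era of this post, with versions managed by the Spring Cloud BOM:

```xml
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-zipkin</artifactId>
</dependency>
```

and in application.properties:

```properties
spring.zipkin.sender.type=web
spring.sleuth.sampler.percentage=1.0
```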

Here the sender type is set to web, as we want to report data to Zipkin via HTTP calls rather than a message queue (RabbitMQ, for example, is another option). sampler.percentage defines how many traces will be sent to Zipkin. The default is 0.1, which means 10%; here, for demo purposes, I went with 1.0, i.e. 100%.

Example output from a working Zipkin should look something like this:

Zipkin example screenshot from the official website

Sleuth, Zipkin and Spring Cloud Stream working together – Example

After discussing these technologies, I will show you how seamlessly they work together. For this demonstration I will use the code that I created for the previous two blog posts on Spring Cloud Stream (starting with Spring Cloud Stream and dead letter queue in Spring Cloud Stream). The finished code for the three projects used can be found on my GitHub in these three repositories:

When making a REST call to the publisher, I should now be able to see the interaction between the two services:

Here you can see the whole trace starting from Publisher and ending with Consumer reading the message.

Step 3: Adding Sleuth and Zipkin to the DLQ Handler

This is where it gets a little difficult. My DLQ handler does not use Spring Cloud Stream for handling the messages; rather, it has its own RabbitMQ connection. In order to get it connected into the trace, I have to add the Zipkin maven dependency and the standard set of properties:
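As a sketch, assuming the same Sleuth 1.x-era coordinates and keys as earlier, the handler’s pom gains the spring-cloud-starter-zipkin dependency and its application.properties points at the Zipkin server:

```properties
spring.zipkin.baseUrl=http://localhost:9411
spring.sleuth.sampler.percentage=1.0
```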

The whole file can be seen here. This is a temporary workaround. I wanted to demonstrate the ability to add to the Span arbitrarily, as this may not be the only occasion when you need to do this. The crucial parts are:

//Manually extracting the Span properties from the message and using
//HeaderBasedMessagingExtractor from Spring to create the Span
//(this could be done manually)
HeaderBasedMessagingExtractor headerBasedMessagingExtractor = new HeaderBasedMessagingExtractor();
MySpanTextMap entries = new MySpanTextMap(failedMessage.getMessageProperties().getHeaders());
Span span = headerBasedMessagingExtractor.joinTrace(entries);
//using the manually created Span to add it to the tracer
Span mySpan = tracer.createSpan(":rePublish", span);
//closing the Span
tracer.close(mySpan);
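The MySpanTextMap used above is a small custom adapter that is not shown in this post. Here is a hypothetical, stdlib-only sketch of what it might look like; the real class would additionally declare implements SpanTextMap (from Sleuth 1.x), which requires exactly these two methods:

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

// Hypothetical sketch: adapts the AMQP message headers (a Map<String, Object>)
// to the Map<String, String> entry view that Sleuth's SpanTextMap expects,
// so HeaderBasedMessagingExtractor can read the X-B3-* tracing headers.
public class MySpanTextMap {
    private final Map<String, Object> headers;

    public MySpanTextMap(Map<String, Object> headers) {
        this.headers = headers;
    }

    // SpanTextMap requires an iterator over String/String entries
    public Iterator<Map.Entry<String, String>> iterator() {
        Map<String, String> view = new HashMap<>();
        headers.forEach((k, v) -> view.put(k, String.valueOf(v)));
        return view.entrySet().iterator();
    }

    // SpanTextMap requires put so Sleuth can write headers back
    public void put(String key, String value) {
        headers.put(key, value);
    }
}
```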

To do this in a cleaner fashion you should make use of Spring’s Aspect Oriented Programming (AOP) capabilities, but that is beyond the scope of this blog post. If you want to know the details, I recommend reading the Customisation chapter of the official documentation, which explains it in more detail. The people involved in the Sleuth and Zipkin projects are actively working on adding new automated tracing to these projects. There is a good chance that by the time you read this, if you use the latest versions of the respective libraries, you won’t have to do it manually.

Let’s make a few calls that will fail directly and make use of the DLQ handler. You will see how much easier the flow is to understand when you have a good visualization.

You can see a trace through a DLQ process that worked here – how great is that!

Here is an example of a trace through a DLQ process that fails three attempts at reprocessing the message. Having time scales and a visualization greatly helps in understanding it:

You can even get the details of the Exceptions by clicking on the spans:

There is a lot of value in being able to click on any part of the pipeline and see what the exceptions were!

As you can see, this is a truly useful tool when investigating exceptions and understanding the different flows in your choreography.

Summary and what to do next

I consider Sleuth an invaluable addition to any serious microservices built around the Spring Cloud project. You don’t need to use Zipkin, but with the ease of integration I don’t see why you wouldn’t want to! Once you have your tracing figured out, it is very important to be able to easily search through your logs. To deal with this I recommend getting familiar with the ELK stack: Elasticsearch, Logstash and Kibana. Together with Sleuth and Zipkin, they give you the ultimate insight into your logs and microservices communication!

