Microservices are small programs that each handle one task. A microservice on its own is of little use, though; it's the system as a whole that provides value to the user. Microservices work together by passing messages back and forth so that they can accomplish the larger task.

Communication is key, but there are a variety of ways to accomplish it. A fairly standard way is through a RESTful API, passing JSON back and forth over HTTP. In one sense, this is great; it's a form of communication that's well understood. However, this method isn't without costs: it pulls in concerns of its own, such as HTTP status codes and building, sending, and parsing requests and responses.

What other ways might microservices communicate back and forth? In this article, we’re going to explore the use of a queue, more specifically RabbitMQ.

What Does RabbitMQ Do?

RabbitMQ provides a language-agnostic way for programs to send messages to each other. In simple terms, it allows a “Publisher/Producer” to send a message and allows for a “Consumer” to listen for those messages.

In one of its simpler models, it resembles what many Rails developers are used to with Sidekiq: the ability to distribute asynchronous tasks among one or more workers. Sidekiq is one of the first things I install on all my Rails projects. I don’t think RabbitMQ would necessarily take its place, especially for things that work more easily within a Rails environment: sending emails, interacting with Rails models, etc.

It doesn’t stop there though. RabbitMQ can also handle Pub/Sub functionality, where a single “event” can be published and one or more consumers can subscribe to that event. You can take this further where consumers can subscribe only to specific events and/or events that match the pattern they’re watching for.

Finally, RabbitMQ can allow for RPCs (Remote Procedure Calls), where you’re looking for an answer right away from another program… basically calling a function that exists in another program.

In this article, we'll take a look at the "Topic" (pattern-based) Pub/Sub approach, as well as how an RPC can be accomplished.

Event-based and asynchronous

The first example we’ll be working with today is a sports news provider who receives incoming data about scores, goals, players, teams, etc. It has to parse the data, store it, and perform various tasks depending on the incoming data.

To make things a little clearer, let’s imagine that, in one of the incoming data streams, we’ll be notified about soccer goals.

When we discover that a goal has happened, there are a number of things that we need to do:

Parse and normalize the information

Store the details locally

Update the “box-score” for the game that the goal took place in

Update the league leaderboard showing who the top goal scorer is

Notify all subscribers (push notification) of a particular league, team, or player

And any number of other tasks or analysis that we need to do

Do we need to do all of those tasks in order? Should the program in charge of processing incoming data need to know about all of these tasks and how to accomplish them? I suggest that other than parsing/normalizing the incoming data and maybe even saving it locally, the rest of the tasks can be done asynchronously and that the program shouldn’t really know or care about all of these other tasks.

What we can do is have the parser program emit an event (soccer.mls.goal for example), along with its accompanying information:
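The event and its payload might look something like this (a sketch; the field names and values here are my assumptions for illustration, not from the original article):

```ruby
# The event slug identifies what happened and where: sport.league.event
event = "soccer.mls.goal"

# The accompanying payload carries the details consumers will need
# (illustrative field names only)
payload = {
  "league" => "mls",
  "team"   => "Montreal Impact",
  "player" => "Didier Drogba",
  "minute" => 64
}
```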

The parser can then forget about it! It’s done its work of emitting the event. The rest of the work will be done by any number of consumers who have subscribed to this specific event.

Producing in Ruby

To produce or emit events in Ruby, the first thing we need to do is install the bunny client, which allows Ruby to communicate with RabbitMQ. As an example, here is some fake incoming data that needs to trigger the goal event for soccer.
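The original snippet isn't reproduced in this extract; a producer along these lines would do the job (a sketch: the data values and field names are invented, and the call is commented out because it needs a running RabbitMQ and the helper module described in the text):

```ruby
# Fake incoming data that should trigger the goal event
# (values and field names are made up for illustration)
goal_data = {
  "sport"  => "soccer",
  "league" => "mls",
  "player" => "Didier Drogba",
  "minute" => 64
}

# Publish it to the 'live_events' exchange with a topic-style routing key.
# `emit` is the helper method described in the text; uncomment with a
# running RabbitMQ instance.
# emit("live_events", "soccer.mls.goal", goal_data)
```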

The 'live_events' string has to do with which Exchange to publish the event to. An Exchange is basically like a router that decides which Queue(s) the event should be placed into. The emit method is inside of a Module I created to simplify emitting events:
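The module itself isn't shown in this extract, but a minimal version might look like the following (a sketch: the module name and connection handling are my assumptions, and the bunny `require` is deferred inside the method so the file loads even without the gem installed):

```ruby
require "json"

# Hypothetical EventEmitter module, as described in the text
module EventEmitter
  # topic:   the exchange name (e.g. 'live_events')
  # slug:    the routing key (e.g. 'soccer.mls.goal')
  # payload: a Hash of event data, serialized to JSON before publishing
  def emit(topic, slug, payload)
    require "bunny"  # deferred so this sketch loads without the gem
    connection = Bunny.new
    connection.start
    channel  = connection.create_channel
    exchange = channel.topic(topic)  # a 'topic' exchange routes by pattern
    exchange.publish(payload.to_json, routing_key: slug)
    connection.close
  end
end
```

A producer class would `include EventEmitter` and call `emit` whenever new data arrives. Opening a connection per event is the simplest possible version; real code would likely reuse a connection.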

It receives the topic, event slug, and event payload and sends that information to RabbitMQ.

Consuming in Ruby

So far we have produced an event, but without a consumer to consume it, the event will be lost. Let’s create a Ruby consumer that is listening for all soccer goal events.

You may have noticed that what I was calling the event slug (or the routing_key) looked like "soccer.mls.goal". Picking a pattern to follow is important, because consumers can choose which events to listen for based on a pattern such as "soccer.*.goal": all soccer goals regardless of the league.
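For intuition, AMQP-style topic matching can be approximated in plain Ruby. This is only an illustration: RabbitMQ performs this matching server-side, and the real `#` wildcard also matches zero words, which this sketch simplifies.

```ruby
# Approximate AMQP topic matching: '*' matches exactly one dot-separated
# word, '#' matches one or more words (simplified from the real semantics).
def topic_match?(pattern, routing_key)
  regex = Regexp.new(
    "\\A" + pattern.split(".").map { |part|
      case part
      when "*" then "[^.]+"   # exactly one word
      when "#" then ".*"      # a run of words (simplified)
      else Regexp.escape(part)
      end
    }.join("\\.") + "\\z"
  )
  regex.match?(routing_key)
end

topic_match?("soccer.*.goal", "soccer.mls.goal")  # => true
topic_match?("soccer.*.goal", "soccer.mls.card")  # => false
topic_match?("soccer.#", "soccer.mls.goal")       # => true
```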

The consumer in this case will be some code which updates the leaderboard for the top goal scorers in the league. It is kicked off by running a Ruby file with this line:

SoccerLeaderboard.new.live_updates

The SoccerLeaderboard class has a method called live_updates, which calls a receive method provided by an included Module. It provides the topic, the pattern of event slug/routing_key to listen for, and a block of code to be called any time there is a new event to process.

class SoccerLeaderboard
  include EventReceiver

  def live_updates
    receive('live_events', 'soccer.*.goal') do |payload|
      puts "#{payload['player']} has scored a new goal."
    end
  end
end

The EventReceiver Module is a little larger, but for the most part it’s just setting up a connection to RabbitMQ and telling it what it wants to listen for.
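The full module isn't included in this extract, but the essentials might look like this (a sketch: names and queue options are my assumptions, and the bunny `require` is deferred so the file loads without the gem installed):

```ruby
require "json"

# Hypothetical EventReceiver module, as described in the text
module EventReceiver
  # topic:   the exchange name (e.g. 'live_events')
  # pattern: the routing-key pattern to bind to (e.g. 'soccer.*.goal')
  def receive(topic, pattern)
    require "bunny"  # deferred so this sketch loads without the gem
    connection = Bunny.new
    connection.start
    channel  = connection.create_channel
    exchange = channel.topic(topic)

    # An exclusive, server-named queue bound only to the events we care about
    queue = channel.queue("", exclusive: true)
    queue.bind(exchange, routing_key: pattern)

    # Block forever, handing each decoded payload to the caller's block
    queue.subscribe(block: true) do |_delivery_info, _properties, body|
      yield JSON.parse(body)
    end
  end
end
```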

Consuming in Elixir

I mentioned that RabbitMQ is language agnostic. What I mean by this is that we can have a consumer in Ruby listening for events and, at the same time, a consumer in Elixir listening for the same events.

In Elixir, the package I used to connect to RabbitMQ was amqp. One gotcha was that it relies on amqp_client which was giving me problems with Erlang 19. To solve that, I had to link directly to the GitHub repository because it doesn’t appear that the fix has been published to Hex yet.

The code to listen for events in Elixir looks like the following. Most of the code inside the start_listening function just creates a connection to RabbitMQ and tells it what to subscribe to; wait_for_messages is where the event processing takes place.
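The Elixir listener itself isn't reproduced in this extract. Based on the description above and the amqp package's API, it might look roughly like this (a sketch: the module, function, exchange, and routing-key names are my assumptions):

```elixir
defmodule SoccerGoalListener do
  # Connect, declare the topic exchange, and bind a server-named queue
  # to the pattern we care about (needs a running RabbitMQ)
  def start_listening do
    {:ok, connection} = AMQP.Connection.open()
    {:ok, channel} = AMQP.Channel.open(connection)
    AMQP.Exchange.declare(channel, "live_events", :topic)
    {:ok, %{queue: queue}} = AMQP.Queue.declare(channel, "", exclusive: true)
    AMQP.Queue.bind(channel, queue, "live_events", routing_key: "soccer.*.goal")
    AMQP.Basic.consume(channel, queue, nil, no_ack: true)
    wait_for_messages()
  end

  # Receive deliveries from the channel process and recurse to keep listening
  def wait_for_messages do
    receive do
      {:basic_deliver, payload, _meta} ->
        IO.puts("Received goal event: #{payload}")
        wait_for_messages()
    end
  end
end
```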

RPC… when you need an answer right away

Remote Procedure Calls can be accomplished with RabbitMQ, but I’ll be honest: It’s more involved than the examples above for more of a typical Pub/Sub approach. To me, it felt like each side (producer/consumer) has to act as both a producer and a consumer.

The flow is a little like this:

Program A asks Program B for some information, providing a unique ID for the request

Program A listens for responses that match the same unique ID

Program B receives the request, does the work, and provides a response with the same unique ID

Program A runs callback once matching unique ID is found in response from Program B

In this example, we’ll be talking about a product’s inventory… an answer we need to know right away to be sure that there is stock available for a customer to purchase.
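The inventory code isn't included in this extract, but the client half of the flow might be sketched like this (the class, queue name, and timeout handling are my assumptions; it needs the bunny gem and a running RabbitMQ, so the `require` is deferred):

```ruby
require "securerandom"

# Hypothetical RPC client for the inventory check described above
class InventoryClient
  def check_stock(product_id)
    require "bunny"  # deferred so this sketch loads without the gem
    connection = Bunny.new
    connection.start
    channel = connection.create_channel

    # Program A listens on an exclusive reply queue for its answer
    reply_queue = channel.queue("", exclusive: true)
    correlation_id = SecureRandom.uuid
    response = nil

    reply_queue.subscribe do |_delivery, properties, body|
      # Only accept the reply that carries our unique ID
      response = body if properties[:correlation_id] == correlation_id
    end

    # Ask Program B (the inventory service), telling it where to reply
    channel.default_exchange.publish(
      product_id.to_s,
      routing_key:    "inventory_requests",  # queue name is an assumption
      correlation_id: correlation_id,
      reply_to:       reply_queue.name
    )

    sleep 0.1 until response  # naive wait; real code would add a timeout
    connection.close
    response
  end
end
```

The server side mirrors this: consume from the request queue, do the lookup, and publish the answer to `reply_to` with the same `correlation_id`.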

Wow… that’s a lot of work to make an RPC! RabbitMQ has a great guide explaining how this works in a variety of different languages.

Conclusion

Microservices don't always need to communicate synchronously, and they don't always need to communicate over HTTP/JSON either. They can, but next time you're thinking about how they should speak to each other, why not consider doing it asynchronously using RabbitMQ? It comes with a great interface for monitoring queue activity and has fantastic client support in a variety of popular languages. It's fast, reliable, and scalable.

Microservices aren’t free though… I think it’s worthwhile considering whether the extra complexity involved in setting up separate services and providing them a way to communicate couldn’t be better handled using something like Sidekiq and writing clean, modular code.


Join the Discussion


Regarding using RabbitMQ with microservices, I would like to share my experience and opinions.

I love RabbitMQ for Pub/Sub, mostly because of its flexibility when you use topic exchanges.

I prefer to avoid using RabbitMQ for RPC, unless you want to keep the messages within Rabbit in case a request fails or something.

Using HTTP/JSON scales better and is simpler to maintain. Adding RabbitMQ between two services adds an extra dependency to the architecture of the app, and when you experience latency issues, this will kill you.

Another thing that I consider good practice is to use different RabbitMQ layers per microservice, somewhat like mailboxes for processes in Erlang. That way, if one layer fails, the problem is isolated and won't affect other parts of your system. This also gives you the flexibility to perform all other kinds of operations.

Again, thanks for writing this article.

Tiago Cardoso

It is an interesting post indeed. I developed a variation of this "microservice" approach, where one service would live waiting for certain requests on a requests RabbitMQ channel, enqueue a Sidekiq job to Redis, fetch the job from Redis, process the job in a worker, and enqueue it to a responses channel in RabbitMQ to be consumed by the other service that requested it in the first place. What's obvious in this setup is the network indirection overhead: "pub to mq, sub from mq, enqueue to redis/sidekiq, dequeue from redis/sidekiq". The reason for this setup is the guarantees offered by Sidekiq (retry jobs queue, dead jobs queue, running jobs queue, nice web UI), which just can't be reproduced with a plain "mq + worker pool" setup without a significant amount of work. So this setup is based only on feature richness. One could stay with the queue-only approach if all those other guarantees are negligible. But even that can be too much sometimes. What if I could have my producers talk directly with the consumers and vice versa? That would be the ZeroMQ approach. It eliminates the overhead of enqueueing to a third party, but it's probably harder to guarantee availability and integration of the messages. Again, a trade-off.

randito

I used Ruby and RabbitMQ on my last project. We ended up using the library Hutch — https://github.com/gocardless/hutch — to add some nice syntactic sugar on top of everything to avoid the nitty gritty of setting up queues, communication, etc.

It ended up being very nice. There was a consumer class that used a worker class — passing along messages to a #work method.

We also used a convention that I came up with (but probably stole from somewhere). Where if events were sent over a ‘project.function.action’ type queue then results were sent over the ‘project.function.action.results’ queue and errors on the ‘project.function.action.error’ queue with logs sent over ‘project.function.action.log’. It let us create simple ‘*.error’ error handlers and ‘*.log’ loggers. Plus, you can easily chain message processors together by listening to a specific ‘.result’ channel.

But the sad part is… we struggled with the "query" or "RPC" side of things. We never ended up implementing it; the RabbitMQ side was eclipsed by HTTP for queries.

Sure. There's a lot more than I could reasonably cover in this forum, but in a nutshell, like all tech choices, async messaging solves some problems and creates others. Is reacting to/capturing your events critical, or best effort, for example? The answer to that will have a significant impact on how you configure your queues. And then there is the configuration and management of RabbitMQ, not a trivial task. I'm not saying don't, I'm saying don't take that decision lightly. As well as looking at RabbitMQ, you might also want to take a look at ZeroMQ and perhaps even event sourcing (I'm taking a wild punt here; this may be well out of scope for what you are trying to do).

Douwe de vries

Thank you for your insights. I'm going to use this in our discussion of whether we really need/want it.

Perhaps start with some proof-of-concepts to see what works and doesn’t work.

Leigh, thank you for taking the time to explain microservice communication with queues.

I have a question:

When you have a few microservices that need to interact via queues/REST, do you use any tool to manage the communication and monitoring? Is there a best-practice pattern (a keyword to search for)?

Background: I’ve thought of

* Manually configure each service's communication – hard to maintain (monitoring each service when it breaks), and it means only one person (designer/developer) knows how to configure it. I want anyone to be able to configure communication easily through a front end, not through code.

* A BPM like WSO2 to do the orchestration (BPEL) – a new learning curve, and not much training material that I can find :(

* An NGINX API gateway (when I am ready for production). There may be better options or recommendations that I haven't read about yet.

I've read somewhere that each microservice needs to have a front end (HapiJs may not be a good choice, but I found that there is a Swagger plugin for HapiJs that makes REST API documentation easy).

What I have in mind is creating microservices exposed as REST APIs (for direct requests) that can also be linked via queues in a Pub/Sub style (RabbitMQ / Service Bus), and then creating workflows that orchestrate the microservices to build a useful service.