Introduction

1. Overview

1.1. RabbitMQ

With more than 35,000 production deployments world-wide
at small startups and large enterprises, RabbitMQ is the most popular
open source message broker.

RabbitMQ is lightweight and easy to deploy on premises and in the cloud.
It supports multiple messaging protocols. RabbitMQ can be deployed in
distributed and federated configurations to meet high-scale, high-availability requirements.

1.2. Project Reactor

Reactor is a highly optimized reactive library for
building efficient, non-blocking applications on the JVM based on the
Reactive Streams Specification.
Reactor-based applications can sustain very high throughput message rates
and operate with a very low memory footprint,
making them suitable for building efficient event-driven applications using
the microservices architecture.

Reactor implements two publishers,
Flux&lt;T&gt; and
Mono&lt;T&gt;,
both of which support non-blocking back-pressure.
This enables exchange of data between threads with well-defined memory usage,
avoiding unnecessary intermediate buffering or blocking.

1.3. Reactive API for RabbitMQ

Reactor RabbitMQ is a reactive API for RabbitMQ
based on Reactor and RabbitMQ Java Client.
The Reactor RabbitMQ API enables messages to be published to
RabbitMQ and consumed from RabbitMQ using functional APIs
with non-blocking back-pressure and very low overhead.
This enables applications using Reactor to use RabbitMQ as a message bus or
streaming platform and integrate with other systems to provide an end-to-end
reactive pipeline.

2. Motivation

2.1. Functional interface for RabbitMQ

Reactor RabbitMQ is a functional Java API for RabbitMQ.
For applications that are written in functional style,
this API enables RabbitMQ interactions to be integrated
easily without requiring non-functional produce or consume APIs to be
incorporated into the application logic.

2.2. Non-blocking Back-pressure

The Reactor RabbitMQ API benefits from non-blocking back-pressure
provided by Reactor. For example, in a pipeline, where
messages received from an external source (e.g. an HTTP proxy) are published
to RabbitMQ, back-pressure can be applied easily to the
whole pipeline, limiting the number of messages in-flight and controlling memory usage.
Messages flow through the pipeline as they are available,
with Reactor taking care of limiting the flow rate to avoid overflow,
keeping application logic simple.

2.3. End-to-end Reactive Pipeline

The value proposition for Reactor RabbitMQ is the efficient utilization of resources
in applications with multiple external interactions where RabbitMQ is one of the
external systems. End-to-end reactive pipelines benefit from
non-blocking back-pressure and efficient use of threads, enabling a
large number of concurrent requests to be processed efficiently.
The optimizations provided by Project Reactor enable development of reactive applications
with very low overheads and predictable capacity planning to deliver low-latency,
high-throughput pipelines.

2.4. Comparisons with other RabbitMQ Java libraries

Reactor RabbitMQ is not intended to replace any of the existing Java libraries.
Instead, it is aimed at providing an alternative API for reactive event-driven applications.

2.4.1. RabbitMQ Java Client

For non-reactive applications, RabbitMQ Java Client
provides the most complete API to manage resources, publish messages to and
consume messages from RabbitMQ. Note that Reactor RabbitMQ is based on the RabbitMQ Java Client.

Applications using RabbitMQ as a message bus using this API may consider
switching to Reactor RabbitMQ if the application is implemented in a functional style.

2.4.2. Spring AMQP

Spring AMQP applies core
Spring Framework
concepts to the development of AMQP-based messaging solutions.
It provides a "template" as a high-level abstraction for sending and receiving messages.
It also provides support for Message-driven POJOs with a "listener container".
These libraries facilitate management of AMQP resources while promoting the use of
dependency injection and declarative configuration. Spring AMQP is based on
RabbitMQ Java Client.

The SampleSender sends 20 messages to the demo-queue queue, with publisher
confirms enabled. The log line for a given message is printed to the console
when the publisher confirmation is received from the broker.

6.2. Reactive RabbitMQ Sender

Outbound messages are sent to RabbitMQ using reactor.rabbitmq.Sender.
A Sender is associated with one RabbitMQ Connection that is used
to transport messages to the broker. A Sender can also manage resources
(exchanges, queues, bindings).

A Sender is created with an instance of sender configuration options
reactor.rabbitmq.SenderOptions.
The properties of SenderOptions include the ConnectionFactory that creates
connections to the broker and the Reactor Schedulers used by the Sender.

In the snippet above the connection can be created from 2 different nodes (useful for
failover) and the connection name is set.

Once the required options have been configured on the options instance,
a new Sender instance can be created with the options already
configured in senderOptions.

Sender sender = RabbitFlux.createSender(senderOptions);

The Sender is now ready to send messages to RabbitMQ.
At this point, a Sender instance has been created,
but no connections to RabbitMQ have been made yet.
The underlying Connection instance is created lazily
when the first call is made to create a resource or to send messages.

Let’s now create a sequence of messages to send to RabbitMQ.
Each outbound message to be sent to RabbitMQ is represented as an OutboundMessage.
An OutboundMessage contains routing information (exchange to send to and routing key)
as well as the message itself (properties and body).
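For instance, a short sequence of messages routed through the default exchange to an illustrative demo-queue could be built and sent like this:

```java
import reactor.core.publisher.Flux;
import reactor.rabbitmq.OutboundMessage;

Flux<OutboundMessage> messages = Flux.range(1, 10)
    .map(i -> new OutboundMessage(
        "",                               // exchange: the default exchange
        "demo-queue",                     // routing key: here, the queue name
        ("Message_" + i).getBytes()));    // message body

// send returns a Mono<Void> that completes when all messages have been sent
sender.send(messages).subscribe();
```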

Note the Sender#declare* methods return their respective AMQP results
wrapped into a Mono.
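As an illustration (the exchange, queue, and routing-key names are arbitrary), an exchange, a queue, and a binding can be declared by chaining the returned Mono instances:

```java
import reactor.rabbitmq.BindingSpecification;
import reactor.rabbitmq.ExchangeSpecification;
import reactor.rabbitmq.QueueSpecification;

sender.declareExchange(ExchangeSpecification.exchange("my.exchange"))
    .then(sender.declareQueue(QueueSpecification.queue("my.queue")))
    .then(sender.bind(BindingSpecification.binding("my.exchange", "a.b", "my.queue")))
    .subscribe(bindOk -> System.out.println("Exchange and queue declared and bound"));
```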

For queue creation, note that if a queue specification has a
null name, the queue to be created will have a server-generated name
and will be non-durable, exclusive, and auto-delete. If you want
a queue to have a server-generated name but other parameters,
specify an empty name "" and set the parameters accordingly on
the QueueSpecification instance. For more information about queues,
see the official documentation.

One can also use the ResourcesSpecification factory class
with a static import to reduce boilerplate code. Combined with
Mono chaining and Sender#declare shortcuts, it allows for condensed syntax:
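A sketch of the condensed form (resource names are arbitrary):

```java
import static reactor.rabbitmq.ResourcesSpecification.binding;
import static reactor.rabbitmq.ResourcesSpecification.exchange;
import static reactor.rabbitmq.ResourcesSpecification.queue;

sender.declare(exchange("my.exchange"))
    .then(sender.declare(queue("my.queue")))
    .then(sender.bind(binding("my.exchange", "a.b", "my.queue")))
    .subscribe(r -> System.out.println("Exchange and queue declared and bound"));
```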

Sender#sendWithPublishConfirms returns a Flux&lt;OutboundMessageResult&gt;
that can be subscribed to in order to know whether outbound messages
have successfully reached the broker.
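For example, given a Flux&lt;OutboundMessage&gt; named messages, logging confirmations could be sketched as:

```java
sender.sendWithPublishConfirms(messages)
    .subscribe(result -> {
        if (result.isAck()) {  // the broker confirmed this message
            System.out.println(
                "Confirmed: " + new String(result.getOutboundMessage().getBody()));
        }
    });
```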

6.2.3. Threading model

By default, Reactor RabbitMQ configures the Java client to use NIO, i.e. a single
thread deals with all IO. This can be changed by specifying a ConnectionFactory
in the SenderOptions.

The Sender uses two Reactor Schedulers: one for the subscription when creating the
connection and another one for resource management. The Sender defaults
to two elastic schedulers; this can be overridden in the SenderOptions. The Sender
takes care of disposing the default schedulers when closing. When not using the default
schedulers, it is the developer's responsibility to dispose the schedulers they passed in to the
SenderOptions.

6.2.4. Closing the Sender

When the Sender is no longer required, the instance can be closed.
The underlying Connection is closed, as well as the default
schedulers if none have been explicitly provided.

sender.close();

6.2.5. Error handling during publishing

The send and sendWithPublishConfirms methods can take an additional
SendOptions parameter to specify the behavior to adopt if the publishing of a message
fails. The default behavior is to retry every 200 milliseconds for 10 seconds
in case of connection failure. As automatic connection recovery
is enabled by default,
the connection is likely to be re-opened after a network glitch and the flux of
outbound messages should stall only during connection recovery before restarting automatically.
This default behavior tries to find a trade-off between reactivity and robustness.

You can customize the retry by setting your own instance of RetrySendingExceptionHandler
in the SendOptions, e.g. to retry for 20 seconds every 500 milliseconds:
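A sketch of such a configuration:

```java
import java.time.Duration;
import reactor.rabbitmq.ExceptionHandlers;
import reactor.rabbitmq.SendOptions;

SendOptions sendOptions = new SendOptions()
    .exceptionHandler(new ExceptionHandlers.RetrySendingExceptionHandler(
        Duration.ofSeconds(20),                           // total retry timeout
        Duration.ofMillis(500),                           // delay between attempts
        ExceptionHandlers.CONNECTION_RECOVERY_PREDICATE   // which exceptions are retryable
    ));

sender.send(messages, sendOptions).subscribe();
```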

The RetrySendingExceptionHandler uses a Predicate<Throwable> to decide whether
an exception should trigger a retry or not. If the exception isn’t retryable, the exception
handler wraps the exception in a RabbitFluxException and throws it.

For consistency's sake, the retry exception handler used with ExceptionHandlers.CONNECTION_RECOVERY_PREDICATE
(the default) triggers retry attempts under the same conditions as those that trigger connection recovery.
This means that if connection recovery has kicked in, publishing will be retried at least for the retry
timeout configured (10 seconds by default).

Note the exception handler is a BiConsumer<Sender.SendContext, Exception>, where Sender.SendContext
is a class providing access to the OutboundMessage and the underlying AMQP Channel. This makes it
easy to customize the default behavior: logging BiConsumer#andThen retrying, only logging, trying to
send the message somewhere else, etc.

6.2.6. Request/reply

RPC (request/reply) is a popular pattern to implement with a
messaging broker like RabbitMQ. […​] The typical way to do
this is for RPC clients to send requests that are routed to a
long lived (known) server queue. The RPC server(s)
consume requests from this queue and then send replies
to each client using the queue named by the client in the reply-to header.

For performance reasons, Reactor RabbitMQ builds on top of
direct reply-to. The next
snippet shows the usage of the RpcClient class:
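A sketch, assuming an RPC server is already consuming from rpc.server.queue and replying to requests:

```java
import java.nio.charset.StandardCharsets;
import com.rabbitmq.client.Delivery;
import reactor.core.publisher.Mono;
import reactor.rabbitmq.RpcClient;

RpcClient rpcClient = sender.rpcClient(
    "",                 // exchange: the default exchange
    "rpc.server.queue"  // routing key: requests are routed to the server queue
);

Mono<Delivery> reply = rpcClient.rpc(Mono.just(
    new RpcClient.RpcRequest("hello".getBytes(StandardCharsets.UTF_8))));

reply.subscribe(delivery ->
    System.out.println("Reply: " + new String(delivery.getBody(), StandardCharsets.UTF_8)));

// close the RpcClient when it is no longer needed:
// rpcClient.close();
```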

In the example above, a consumer waits on the rpc.server.queue to
process requests. An RpcClient is created from a Sender; it
sends requests to a given exchange with a given routing key. The RpcClient
handles the machinery to send the request and wait on a reply queue for the
result processed on the server side, wrapping everything up with a reactive API.
Note an RpcClient isn't meant to be used for only one request: it can be a long-lived object
handling different requests, as long as they're directed to the same destination (defined
by the exchange and the routing key passed in when the RpcClient is created).

An RpcClient uses a sequence of Long values for correlation, but this can be changed
by passing in a Supplier&lt;String&gt; when creating the RpcClient:
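For example, using random UUIDs as correlation IDs:

```java
import java.util.UUID;
import reactor.rabbitmq.RpcClient;

RpcClient rpcClient = sender.rpcClient(
    "", "rpc.server.queue",
    () -> UUID.randomUUID().toString()  // custom correlation ID supplier
);
```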

This can be useful e.g. when the RPC server can make sense of the correlation ID.

6.3. Reactive RabbitMQ Receiver

Messages stored in RabbitMQ queues are consumed using the reactive
receiver reactor.rabbitmq.Receiver.
Each instance of Receiver is associated with a single instance
of Connection created by the options-provided ConnectionFactory.

A receiver is created with an instance of receiver configuration options
reactor.rabbitmq.ReceiverOptions. The properties of ReceiverOptions
include the ConnectionFactory that creates connections to the broker
and a Reactor Scheduler used for the connection creation.
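A minimal sketch (node addresses and connection name are illustrative):

```java
import com.rabbitmq.client.Address;
import com.rabbitmq.client.ConnectionFactory;
import reactor.rabbitmq.ReceiverOptions;

ConnectionFactory connectionFactory = new ConnectionFactory();
connectionFactory.useNio();

ReceiverOptions receiverOptions = new ReceiverOptions()
    .connectionFactory(connectionFactory)          // factory used to open connections
    .connectionSupplier(cf -> cf.newConnection(    // custom connection creation logic
        new Address[] {new Address("192.168.0.1"), new Address("192.168.0.2")},
        "reactive-receiver"));                     // connection name
```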

In the snippet above the connection can be created from 2 different nodes (useful for
failover) and the connection name is set.

Once the required configuration options have been configured on the options instance,
a new Receiver instance can be created with these options to consume inbound messages.
The code snippet below creates a receiver instance and an inbound Flux for the receiver.
The underlying Connection and Consumer instances are created lazily
later when the inbound Flux is subscribed to.
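A sketch, consuming without acknowledgment from an illustrative demo-queue:

```java
import com.rabbitmq.client.Delivery;
import reactor.core.publisher.Flux;
import reactor.rabbitmq.RabbitFlux;
import reactor.rabbitmq.Receiver;

Receiver receiver = RabbitFlux.createReceiver(receiverOptions);

Flux<Delivery> inboundFlux = receiver.consumeNoAck("demo-queue");

// the Connection and Consumer are created when the flux is subscribed to
inboundFlux.subscribe(delivery ->
    System.out.println("Received: " + new String(delivery.getBody())));
```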

6.3.1. Consuming options

The Receiver class has different flavors of the receive* method and each of them
can accept a ConsumeOptions instance. Here are the different options:

overflowStrategy: the OverflowStrategy
used when creating the Flux of messages. Default is BUFFER.

qos: the prefetch count used when message acknowledgment is enabled. Default is 250.

hookBeforeEmitBiFunction: a BiFunction<Long, ? super Delivery, Boolean> to decide
whether a message should be emitted downstream or not. Default is to always emit.

stopConsumingBiFunction: a BiFunction<Long, ? super Delivery, Boolean> to decide
whether the flux should be completed after the emission of the message. Default is to never complete.

6.3.2. Acknowledgment

Receiver has several receive* methods that differ in the way messages are acknowledged
back to the broker. The acknowledgment mode can have a profound impact on performance and memory
consumption.

consumeNoAck: the broker forgets about a message as soon as it has sent it to the consumer.
Use this mode if downstream subscribers are very fast, at least faster than the flow of inbound
messages. Messages will pile up in the JVM process memory if subscribers are not
able to cope with the flow of messages, leading to out-of-memory errors. Note this mode
uses the auto-acknowledgment mode when registering the RabbitMQ Consumer.

consumeAutoAck: with this mode, messages are acknowledged right after their arrival,
in the Flux#doOnNext callback. This can help to cope with the flow of messages, preventing
downstream subscribers from being overwhelmed. Note this mode
does not use the auto-acknowledgment mode when registering the RabbitMQ Consumer.
In this case, consumeAutoAck means messages are automatically acknowledged by the library
in one of the Flux hooks.

consumeManualAck: this method returns a Flux<AcknowledgableDelivery> and messages
must be manually acknowledged or rejected downstream with AcknowledgableDelivery#ack
or AcknowledgableDelivery#nack, respectively. This mode lets the developer
acknowledge messages in the most efficient way, e.g. by acknowledging several messages
at the same time with AcknowledgableDelivery#ack(true) and letting Reactor control
the batch size with one of the Flux#buffer methods.
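A sketch of such batched acknowledgment (the queue name and batch parameters are illustrative):

```java
import java.time.Duration;

receiver.consumeManualAck("demo-queue")
    .bufferTimeout(10, Duration.ofSeconds(1))  // batches of up to 10 deliveries
    .subscribe(deliveries -> {
        if (!deliveries.isEmpty()) {
            // ... process the batch ...
            // acknowledge all messages up to the last delivery tag in one frame
            deliveries.get(deliveries.size() - 1).ack(true);
        }
    });
```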

To learn more on how the ConsumeOptions#qos setting can impact the behavior
of Receiver#consumeAutoAck and Receiver#consumeManualAck, have a look at
this
post about queuing theory.

6.3.3. Closing the Receiver

When the Receiver is no longer required, the instance can be closed.
The underlying Connection is closed, as well as the default scheduler
if none has been explicitly provided.

receiver.close();

6.3.4. Connection failure

The network connection between the broker and the client can fail. This
is transparent for consumers thanks to the RabbitMQ Java client's
automatic connection recovery.
Connection failures affect sending though, and acknowledgment is a sending operation.

When using Receiver#consumeAutoAck, acknowledgments are retried for 10 seconds every 200 milliseconds
in case of connection failure. This can be changed by setting the
BiConsumer<Receiver.AcknowledgmentContext, Exception> exceptionHandler in the
ConsumeOptions, e.g. to retry for 20 seconds every 500 milliseconds:
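A sketch of such a configuration:

```java
import java.time.Duration;
import reactor.rabbitmq.ConsumeOptions;
import reactor.rabbitmq.ExceptionHandlers;

ConsumeOptions consumeOptions = new ConsumeOptions()
    .exceptionHandler(new ExceptionHandlers.RetryAcknowledgmentExceptionHandler(
        Duration.ofSeconds(20),                           // total retry timeout
        Duration.ofMillis(500),                           // delay between attempts
        ExceptionHandlers.CONNECTION_RECOVERY_PREDICATE   // retry only on connection-related failures
    ));

receiver.consumeAutoAck("demo-queue", consumeOptions).subscribe();
```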

When using Receiver#consumeManualAck, acknowledgment is handled by the developer, who
can do pretty much anything they want on acknowledgment failure.

The AcknowledgableDelivery#ack and AcknowledgableDelivery#nack methods handle retry internally
based on the BiConsumer&lt;Receiver.AcknowledgmentContext, Exception&gt; exceptionHandler in the ConsumeOptions.
The developer does not have to implement retry explicitly on acknowledgment failure and benefits from Reactor RabbitMQ's
retry support when acknowledging a message:
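For example (queue name and handler settings are illustrative):

```java
import java.time.Duration;
import reactor.rabbitmq.ConsumeOptions;
import reactor.rabbitmq.ExceptionHandlers;

ConsumeOptions options = new ConsumeOptions()
    .exceptionHandler(new ExceptionHandlers.RetryAcknowledgmentExceptionHandler(
        Duration.ofSeconds(20), Duration.ofMillis(500),
        ExceptionHandlers.CONNECTION_RECOVERY_PREDICATE));

receiver.consumeManualAck("demo-queue", options)
    .subscribe(msg -> {
        // ... process the delivery ...
        msg.ack();  // retried internally on failure, per the exception handler
    });
```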

Note the exception handler is a BiConsumer<Receiver.AcknowledgmentContext, Exception>. This means
acknowledgment failure can be handled in any way, here we choose to retry the acknowledgment.
Note also that by using ExceptionHandlers.CONNECTION_RECOVERY_PREDICATE, we choose
to retry only on unexpected connection failures and rely on the AMQP Java client to automatically re-create
a new connection in the background. The decision to retry on a given
exception can be customized by providing a Predicate<Throwable> in place of
ExceptionHandlers.CONNECTION_RECOVERY_PREDICATE.

6.4. Advanced features

This section covers advanced uses of the Reactor RabbitMQ API.

6.4.1. Creating a connection with a custom Mono

It is possible to specify only a ConnectionFactory in the Sender/ReceiverOptions and
let Reactor RabbitMQ create connections from this ConnectionFactory.
If you want more control over the creation of connections, you can use
Sender/ReceiverOptions#connectionSupplier(ConnectionFactory). This is fine for most cases
and doesn't involve any reactive API. Both Sender and Receiver internally use a Mono&lt;Connection&gt;
to open the connection only when needed. It is possible to provide this Mono&lt;Connection&gt;
through the appropriate *Options class:
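For example, assuming an already configured connectionFactory, a cached Mono&lt;Connection&gt; can be supplied like this:

```java
import reactor.core.publisher.Mono;
import reactor.rabbitmq.SenderOptions;

SenderOptions senderOptions = new SenderOptions()
    .connectionMono(
        Mono.fromCallable(() -> connectionFactory.newConnection("reactive-sender"))
            .cache());  // cache so the connection is created once and reused
```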

Providing your own Mono<Connection> lets you take advantage of all the Reactor API
(e.g. for caching).

6.4.2. Sharing the same connection between Sender and Receiver

Sender and Receiver instances create their own Connection, but it is possible to use
only one or a few Connection instances, either to use exclusive resources between a Sender
and a Receiver or simply to control the number of connections created.

Both SenderOptions and ReceiverOptions have a connectionMono method that can encapsulate
any logic to create the Mono<Connection> the Sender or Receiver will end up using. Reactor
RabbitMQ provides a way to share the exact same connection instance from a Mono<Connection>:
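A sketch using Utils#singleConnectionMono:

```java
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import reactor.core.publisher.Mono;
import reactor.rabbitmq.RabbitFlux;
import reactor.rabbitmq.Receiver;
import reactor.rabbitmq.ReceiverOptions;
import reactor.rabbitmq.Sender;
import reactor.rabbitmq.SenderOptions;
import reactor.rabbitmq.Utils;

ConnectionFactory connectionFactory = new ConnectionFactory();
connectionFactory.useNio();

// both Sender and Receiver will use the exact same Connection instance
Mono<? extends Connection> connectionMono =
    Utils.singleConnectionMono(connectionFactory, cf -> cf.newConnection());

Sender sender = RabbitFlux.createSender(new SenderOptions().connectionMono(connectionMono));
Receiver receiver = RabbitFlux.createReceiver(new ReceiverOptions().connectionMono(connectionMono));
```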

Be aware that closing the first Sender or Receiver will close the underlying
AMQP connection for all the others.

6.4.3. Creating channels with a custom Mono in Sender

SenderOptions provides a channelMono property that is called when creating the Channel used
in sending methods. This is a convenient way to provide any custom logic when
creating the Channel, e.g. retry logic.
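As an illustration, assuming an existing Mono&lt;Connection&gt; connectionMono, a channel Mono with simple retry logic could be sketched as:

```java
import reactor.core.publisher.Mono;
import reactor.rabbitmq.SenderOptions;

SenderOptions senderOptions = new SenderOptions()
    .connectionMono(connectionMono)
    .channelMono(connectionMono
        .flatMap(connection -> Mono.fromCallable(connection::createChannel))
        .retry(2));  // retry channel creation a couple of times on failure
```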

6.4.4. Threading considerations for resource management

A Sender instance maintains a Mono<Channel> to manage resources and by default
the underlying Channel is cached. A new Channel is also automatically created in case of error.
Channel creation is not a cheap operation, so this default behavior fits most use
cases. Each resource management method provides a counterpart method with an additional
ResourceManagementOptions argument. This makes it possible to provide a custom Mono&lt;Channel&gt;
for a given resource operation, which can be useful when multiple threads are using
the same Sender instance, to avoid using the same Channel from multiple threads.
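For example, a dedicated, cached Channel can be used for a batch of declarations (assuming an existing Mono&lt;Connection&gt; connectionMono; resource names are arbitrary):

```java
import com.rabbitmq.client.Channel;
import reactor.core.publisher.Mono;
import reactor.rabbitmq.ResourceManagementOptions;

import static reactor.rabbitmq.ResourcesSpecification.binding;
import static reactor.rabbitmq.ResourcesSpecification.exchange;
import static reactor.rabbitmq.ResourcesSpecification.queue;

Mono<Channel> channelMono = connectionMono
    .flatMap(connection -> Mono.fromCallable(connection::createChannel))
    .cache();  // dedicated Channel, created once, for this batch of operations

ResourceManagementOptions options = new ResourceManagementOptions()
    .channelMono(channelMono);

sender.declare(exchange("my.exchange"), options)
    .then(sender.declare(queue("my.queue"), options))
    .then(sender.bind(binding("my.exchange", "a.b", "my.queue"), options))
    .subscribe(r -> System.out.println("Exchange and queue declared and bound"));
```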

In the example above, each operation will use the same Channel, as it is cached.
This way these operations won't interfere with any other thread using the default
resource management Mono&lt;Channel&gt; in the Sender instance.

6.4.5. Channel pooling in Sender

By default, Sender#send* methods open a new Channel for every call. This is
fine for long-running calls, e.g. when the flux of outbound messages is
infinite. For workloads where Sender#send* is called often for finite,
short fluxes of messages, opening a new Channel every time may not be optimal.

It is possible to use a pool of channels as part of the SendOptions when
sending outbound messages with Sender, as illustrated in the following snippet:
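A sketch, assuming an existing Mono&lt;Connection&gt; connectionMono and a Flux&lt;OutboundMessage&gt; messages:

```java
import reactor.rabbitmq.ChannelPool;
import reactor.rabbitmq.ChannelPoolFactory;
import reactor.rabbitmq.ChannelPoolOptions;
import reactor.rabbitmq.SendOptions;

ChannelPool channelPool = ChannelPoolFactory.createChannelPool(
    connectionMono,
    new ChannelPoolOptions().maxCacheSize(5)  // cache up to 5 channels
);

sender.send(messages, new SendOptions().channelPool(channelPool)).subscribe();

// at application shutdown:
// channelPool.close();
```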

Note it is the developer's responsibility to close the pool when it is no longer
necessary, typically at application shutdown.

Micro-benchmarks
revealed that channel pooling performs much better for sending short sequences
of messages (1 to 10) repeatedly, without publisher confirms. With longer sequences
of messages (100 or more), channel pooling can perform worse than
no pooling at all. According to the same micro-benchmarks, channel pooling
does not make sending with publisher confirms perform better; it appears to perform
even worse. Don't take these conclusions for granted: always run your own
benchmarks with your own workloads.