Spring Kafka - Consumer Producer Example

The Spring for Apache Kafka (spring-kafka) project applies core Spring concepts to the development of Kafka-based messaging solutions. It provides a ‘template’ as a high-level abstraction for sending messages. It also contains support for Message-driven POJOs with @KafkaListener annotations and a listener container.

In the following tutorial, we will configure, build and run a Hello World example in which we will send/receive messages to/from Apache Kafka using Spring Kafka, Spring Boot, and Maven.

General Project Setup

We will be building and running our example using Apache Maven. Shown below is the XML representation of our Maven project in a POM file. It contains the needed dependencies for compiling and running the example.

In order to run the Kafka consumer and producer, we will use the Spring Boot project. To facilitate the management of the different Spring dependencies, Spring Boot Starters are used which are a set of convenient dependency descriptors that you can include in your application.

The spring-boot-starter dependency is the core starter; it includes auto-configuration, logging, and YAML support. The spring-boot-starter-test dependency includes what is needed for testing Spring Boot applications, with libraries such as JUnit, Hamcrest, and Mockito.

To avoid having to manage the version compatibility of the different Spring dependencies, we will inherit the defaults from the spring-boot-starter-parent parent POM.

A dependency on spring-kafka is added, in addition to a property that specifies the version. At the time of writing, the latest stable release was '1.3.2.RELEASE'.

We also include spring-kafka-test in order to have access to an embedded Kafka broker when running our unit test.

In the plugins section, we included the spring-boot-maven-plugin Maven plugin so that we can build a single, runnable “uber-jar”, which is convenient to execute and transport our written code. In addition, the plugin allows us to start the example via a Maven command.
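As a sketch, the relevant parts of such a POM could look like the fragment below. The artifact coordinates follow the starters named above; the spring-boot-starter-parent version shown is an assumption (a Boot 1.5.x release matching spring-kafka 1.3.x), so adjust it to your setup.

```xml
<!-- illustrative POM fragment; the parent version is an assumption -->
<parent>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-parent</artifactId>
  <version>1.5.9.RELEASE</version>
</parent>

<properties>
  <spring-kafka.version>1.3.2.RELEASE</spring-kafka.version>
</properties>

<dependencies>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter</artifactId>
  </dependency>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-test</artifactId>
    <scope>test</scope>
  </dependency>
  <dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <version>${spring-kafka.version}</version>
  </dependency>
  <dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka-test</artifactId>
    <version>${spring-kafka.version}</version>
    <scope>test</scope>
  </dependency>
</dependencies>

<build>
  <plugins>
    <plugin>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-maven-plugin</artifactId>
    </plugin>
  </plugins>
</build>
```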

Spring Boot is used in order to make a Spring Kafka example application that you can “just run”. We start by creating a SpringKafkaApplication class which contains the main() method that uses Spring Boot’s SpringApplication.run() to launch the application.

Note that the @SpringBootApplication annotation is a convenience annotation that adds: @Configuration, @EnableAutoConfiguration and @ComponentScan.
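Such an application class is minimal; assuming the base package used by the rest of this example, it could look like:

```java
package com.codenotfound.kafka;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// convenience annotation that adds @Configuration, @EnableAutoConfiguration and @ComponentScan
@SpringBootApplication
public class SpringKafkaApplication {

  public static void main(String[] args) {
    // bootstraps the Spring application context and starts the embedded runtime
    SpringApplication.run(SpringKafkaApplication.class, args);
  }
}
```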

The below sections will detail how to create a sender and receiver together with their respective configurations. It is also possible to have Spring Boot autoconfigure Spring Kafka using default values so that actual code that needs to be written is reduced to a bare minimum.

This example will send/receive a simple String. If you would like to send more complex objects you could, for example, use an Avro Kafka serializer or the JsonSerializer that ships with Spring Kafka.

We also create an application.yml YAML properties file under src/main/resources. Properties from this file are injected by Spring Boot into our configuration beans using the @Value annotation.

kafka:
  bootstrap-servers: localhost:9092
  topic:
    helloworld: helloworld.t

Create a Spring Kafka Message Producer

For sending messages we will be using the KafkaTemplate which wraps a Producer and provides convenience methods to send data to Kafka topics. The template provides asynchronous send methods which return a ListenableFuture.

In the Sender class, the KafkaTemplate is auto-wired; its creation is handled further below in a separate SenderConfig class.

For this example we will use the send() method that takes as input a topic name and a String payload that needs to be sent.

Note that the Kafka broker default settings cause it to auto-create a topic when a request for an unknown topic is received.
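A minimal Sender along those lines could look as follows (the exact class layout is an assumption based on the description above):

```java
package com.codenotfound.kafka.producer;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;

public class Sender {

  // created in the separate SenderConfig class and auto-wired here
  @Autowired
  private KafkaTemplate<String, String> kafkaTemplate;

  public void send(String topic, String payload) {
    // asynchronously sends the String payload to the given topic
    kafkaTemplate.send(topic, payload);
  }
}
```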

The creation of the KafkaTemplate and Sender is handled in the SenderConfig class. The class is annotated with @Configuration which indicates that the class can be used by the Spring IoC container as a source of bean definitions.

In order to be able to use the Spring Kafka template, we need to configure a ProducerFactory and provide it in the template’s constructor.

The producer factory needs to be set with a number of mandatory properties amongst which the 'BOOTSTRAP_SERVERS_CONFIG' property that specifies a list of host:port pairs used for establishing the initial connections to the Kafka cluster. Note that this value is configurable as it is fetched from the application.yml configuration file.

A message in Kafka is a key-value pair with a small amount of associated metadata. As Kafka stores and transports byte arrays, we need to specify the format from which the key and value will be serialized. In this example we send a String payload, so we specify the StringSerializer class, which takes care of the needed transformation.

package com.codenotfound.kafka.producer;

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

@Configuration
public class SenderConfig {

  @Value("${kafka.bootstrap-servers}")
  private String bootstrapServers;

  @Bean
  public Map<String, Object> producerConfigs() {
    Map<String, Object> props = new HashMap<>();
    // list of host:port pairs used for establishing the initial connections to the Kafka cluster
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);

    return props;
  }

  @Bean
  public ProducerFactory<String, String> producerFactory() {
    return new DefaultKafkaProducerFactory<>(producerConfigs());
  }

  @Bean
  public KafkaTemplate<String, String> kafkaTemplate() {
    return new KafkaTemplate<>(producerFactory());
  }

  @Bean
  public Sender sender() {
    return new Sender();
  }
}

Create a Spring Kafka Message Consumer

Like with any messaging-based application, you need to create a receiver that will handle the published messages. The Receiver is nothing more than a simple POJO that defines a method for receiving messages. In the below example we named the method receive(), but you can name it anything you like.

The @KafkaListener annotation creates a ConcurrentMessageListenerContainer message listener container behind the scenes for each annotated method. To do so, it expects a factory bean named kafkaListenerContainerFactory, which we will configure in the next section.

Using the topics element, we specify the topics for this listener. The name of the topic is injected from the application.yml properties file.

For more information on the other available elements on the KafkaListener, you can consult the API documentation.

For testing convenience, we added a CountDownLatch. This allows the POJO to signal that a message is received. This is something you are not likely to implement in a production application.
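Putting the above together, the Receiver POJO could be sketched as follows (the latch accessor name is an assumption):

```java
package com.codenotfound.kafka.consumer;

import java.util.concurrent.CountDownLatch;

import org.springframework.kafka.annotation.KafkaListener;

public class Receiver {

  // testing convenience: lets a test wait until a message has arrived
  private CountDownLatch latch = new CountDownLatch(1);

  public CountDownLatch getLatch() {
    return latch;
  }

  // the topic name is injected from the application.yml properties file
  @KafkaListener(topics = "${kafka.topic.helloworld}")
  public void receive(String payload) {
    // signal that a message was received
    latch.countDown();
  }
}
```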

The creation and configuration of the different Spring Beans needed for the Receiver POJO are grouped in the ReceiverConfig class. Similar to the SenderConfig it is annotated with @Configuration.

Note the @EnableKafka annotation which enables the detection of the @KafkaListener annotation that was used on the previous Receiver class.

The kafkaListenerContainerFactory() is used by the @KafkaListener annotation from the Receiver in order to configure a MessageListenerContainer. In order to create it, a ConsumerFactory and accompanying configuration Map is needed.

In this example, a number of mandatory properties are set amongst which the initial connection and deserializer parameters.

We also specify a 'GROUP_ID_CONFIG' which identifies the group this consumer belongs to. Messages are effectively load-balanced over consumer instances that have the same group id.

On top of that, we also set 'AUTO_OFFSET_RESET_CONFIG' to "earliest". This ensures that our consumer reads from the beginning of the topic, even if some messages were sent before it was able to start up.

package com.codenotfound.kafka.consumer;

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.config.KafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;

@Configuration
@EnableKafka
public class ReceiverConfig {

  @Value("${kafka.bootstrap-servers}")
  private String bootstrapServers;

  @Bean
  public Map<String, Object> consumerConfigs() {
    Map<String, Object> props = new HashMap<>();
    // list of host:port pairs used for establishing the initial connections to the Kafka cluster
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    // allows a pool of processes to divide the work of consuming and processing records
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "helloworld");
    // automatically reset the offset to the earliest offset
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

    return props;
  }

  @Bean
  public ConsumerFactory<String, String> consumerFactory() {
    return new DefaultKafkaConsumerFactory<>(consumerConfigs());
  }

  @Bean
  public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
        new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());

    return factory;
  }

  @Bean
  public Receiver receiver() {
    return new Receiver();
  }
}

Testing the Spring Kafka Template & Listener

A basic SpringKafkaApplicationTest is provided in order to verify that we are able to send and receive a message to and from Apache Kafka. It contains a testReceiver() unit test case that uses the Sender bean to send a message to the 'helloworld.t' topic on the Kafka bus.

We then check if the CountDownLatch from the Receiver was lowered from 1 to 0 as this indicates a message was processed by the receive() method.

An embedded Kafka broker is automatically started by using a @ClassRule. Check out following Spring Kafka test example for more detailed information on this topic.

As the embedded server is started on a random port, we provide a dedicated src/test/resources/application.yml properties file for testing which uses the spring.embedded.kafka.brokers system property that the @ClassRule sets to the address of the broker(s).

The below test case can also be executed after you install Kafka and ZooKeeper on your local system. Just comment out the @ClassRule and change the 'bootstrap-servers' property of the application properties file located in src/test/resources to the address of the local broker.
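A test along those lines could be sketched as follows; it needs a running (embedded) broker, and the latch timeout value is an assumption:

```java
package com.codenotfound.kafka;

import static org.assertj.core.api.Assertions.assertThat;

import java.util.concurrent.TimeUnit;

import org.junit.ClassRule;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.kafka.test.rule.KafkaEmbedded;
import org.springframework.test.context.junit4.SpringRunner;

import com.codenotfound.kafka.consumer.Receiver;
import com.codenotfound.kafka.producer.Sender;

@RunWith(SpringRunner.class)
@SpringBootTest
public class SpringKafkaApplicationTest {

  private static final String HELLOWORLD_TOPIC = "helloworld.t";

  // starts an embedded Kafka broker on a random port before the tests run
  @ClassRule
  public static KafkaEmbedded embeddedKafka = new KafkaEmbedded(1, true, HELLOWORLD_TOPIC);

  @Autowired
  private Sender sender;

  @Autowired
  private Receiver receiver;

  @Test
  public void testReceiver() throws Exception {
    sender.send(HELLOWORLD_TOPIC, "Hello Spring Kafka!");

    // wait up to 10 seconds for the Receiver to process the message
    receiver.getLatch().await(10000, TimeUnit.MILLISECONDS);
    // the latch reaching 0 indicates receive() was invoked
    assertThat(receiver.getLatch().getCount()).isEqualTo(0);
  }
}
```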