In the above example, a KafkaConsumer instance is created using
a map in order to specify the list of Kafka nodes to connect to (just one) and
the deserializers to use for extracting the key and the value from each received message.

More advanced creation methods allow you to specify the class types of the key and the value used for sending messages
or provided by received messages; this is an alternative way of setting the key and value serializers/deserializers instead of
using the related properties.
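For instance, a minimal sketch of this creation style, assuming a vertx instance in scope and an illustrative local broker address (the deserializer classes are passed as parameters instead of properties):

```kotlin
import io.vertx.kafka.client.consumer.KafkaConsumer

// consumer config without key/value deserializer properties;
// the deserializers are derived from the class types passed to create()
val config = mutableMapOf(
  "bootstrap.servers" to "localhost:9092", // illustrative broker address
  "group.id" to "my_group"
)

// key and value types specified as class parameters instead of properties
val consumer = KafkaConsumer.create(vertx, config, String::class.java, String::class.java)
```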

Here the KafkaProducer instance is created using a Properties instance to
specify the list of Kafka nodes to connect to (just one) and the acknowledgment mode; the key and value serializers are
specified as parameters of KafkaProducer.create.
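A sketch of this producer creation style, assuming a vertx instance in scope (broker address and acknowledgment mode are illustrative):

```kotlin
import io.vertx.kafka.client.producer.KafkaProducer
import java.util.Properties

val config = Properties()
config["bootstrap.servers"] = "localhost:9092" // illustrative broker address
config["acks"] = "1"                           // wait for the leader acknowledgment only

// key and value serializer types passed as class parameters
val producer = KafkaProducer.create(vertx, config, String::class.java, String::class.java)
```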

Receiving messages from a topic joining a consumer group

In order to start receiving messages from Kafka topics, the consumer can use the
subscribe method to
subscribe to a set of topics, being part of a consumer group (specified by the properties used on creation).

You also need to register a handler for incoming messages using the
handler() method.

The handler can be registered before or after the call to subscribe(); messages won’t be consumed until both
methods have been called. This allows you to call subscribe(), then seek() and finally handler() in
order to only consume messages starting from a particular offset, for example.
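A minimal sketch of this pattern, assuming a consumer instance created as above and an illustrative topic name:

```kotlin
// register the record handler first...
consumer.handler { record ->
  println("key=${record.key()}, value=${record.value()}, " +
    "partition=${record.partition()}, offset=${record.offset()}")
}

// ...then subscribe: messages start flowing once both calls have been made
consumer.subscribe("test")
```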

A completion handler can also be passed during subscription to be aware of the subscription result and to be notified when the operation
completes.
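For example, a sketch of subscribing with a completion handler (topic name illustrative):

```kotlin
consumer.subscribe("test") { ar ->
  if (ar.succeeded()) {
    println("subscribed")
  } else {
    println("Could not subscribe: ${ar.cause().message}")
  }
}
```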

When using consumer groups, the Kafka cluster assigns partitions to the consumer taking into account the other connected
consumers in the same consumer group, so that partitions can be spread across them.

The Kafka cluster handles partition re-balancing when a consumer leaves the group (its assigned partitions are freed
up for other consumers) or when a new consumer joins the group (and wants partitions to read from).
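The consumer also exposes handlers to be notified when partitions are assigned or revoked as re-balancing happens; a sketch, assuming a consumer created as above:

```kotlin
// called when the cluster assigns partitions to this consumer
consumer.partitionsAssignedHandler { partitions ->
  for (tp in partitions) {
    println("assigned: ${tp.topic} / ${tp.partition}")
  }
}

// called when previously assigned partitions are revoked (e.g. on re-balancing)
consumer.partitionsRevokedHandler { partitions ->
  for (tp in partitions) {
    println("revoked: ${tp.topic} / ${tp.partition}")
  }
}
```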

Receiving messages from a topic requesting specific partitions

Besides being part of a consumer group for receiving messages from a topic, a consumer can ask for a specific
topic partition. When the consumer is not part of a consumer group, the overall application cannot
rely on the re-balancing feature.

As with subscribe(), the handler can be registered before or after the call to assign();
messages won’t be consumed until both methods have been called. This allows you to call
assign(), then seek() and finally handler() in
order to only consume messages starting from a particular offset, for example.
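A sketch of requesting a specific partition, assuming a consumer created without a consumer group (topic name, partition number, and seek offset are illustrative):

```kotlin
import io.vertx.kafka.client.common.TopicPartition

// request partition 0 of topic "test" explicitly (no re-balancing involved)
val topicPartition = TopicPartition()
  .setTopic("test")
  .setPartition(0)

consumer.handler { record ->
  println("value=${record.value()}, offset=${record.offset()}")
}

consumer.assign(topicPartition) { done ->
  if (done.succeeded()) {
    // optionally seek to an illustrative offset before records start flowing
    consumer.seek(topicPartition, 10L)
  }
}
```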

Calling assignment() provides
the list of the currently assigned partitions.
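For example, a sketch of listing the current assignment:

```kotlin
consumer.assignment { done ->
  if (done.succeeded()) {
    for (tp in done.result()) {
      println("${tp.topic} / ${tp.partition}")
    }
  }
}
```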

Getting topic partition information

You can call partitionsFor() to get information about the
partitions of a specified topic.
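A sketch of retrieving partition information (topic name illustrative):

```kotlin
consumer.partitionsFor("test") { done ->
  if (done.succeeded()) {
    for (partitionInfo in done.result()) {
      println(partitionInfo)
    }
  }
}
```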

Manual offset commit

In Apache Kafka the consumer is responsible for handling the offset of the last read message.

By default this is done by the commit operation, executed automatically every time a batch of messages is read
from a topic partition; for this, the configuration parameter enable.auto.commit must be set to true when the
consumer is created.

Manual offset commit can be achieved with commit().
It can be used to achieve at-least-once delivery, making sure that the read messages are processed before committing
the offset.
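A sketch of this pattern, assuming enable.auto.commit was set to "false" when the consumer was created (committing per record is kept deliberately simple here; a real application would typically commit per batch):

```kotlin
consumer.handler { record ->
  // process the record first...
  println("Processing value=${record.value()}")

  // ...then commit the offset manually for at-least-once delivery
  consumer.commit { done ->
    if (done.succeeded()) {
      println("Last read message offset committed")
    }
  }
}
```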

You can use the offsetsForTimes API introduced in Kafka 0.10.1.1 to look up an offset by
timestamp, i.e. the search parameter is an epoch timestamp and the call returns the lowest offset
with ingestion timestamp >= given timestamp.
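A sketch of this lookup, assuming a consumer created as above (topic, partition, and the 60-second window are illustrative):

```kotlin
import io.vertx.kafka.client.common.TopicPartition

val topicPartition = TopicPartition()
  .setTopic("test")
  .setPartition(0)

// look up the lowest offset whose ingestion timestamp is >= 60 seconds ago
val timestamp = System.currentTimeMillis() - 60000

consumer.offsetsForTimes(topicPartition, timestamp) { done ->
  if (done.succeeded()) {
    val offsetAndTimestamp = done.result()
    println("offset=${offsetAndTimestamp.offset}, timestamp=${offsetAndTimestamp.timestamp}")
  }
}
```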


Message flow control

A consumer can control the incoming message flow and pause/resume the read operation from a topic, e.g. it
can pause the message flow when it needs more time to process the current messages and then resume
to continue message processing.
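A sketch of this flow control, assuming a consumer and a vertx instance in scope (the 5-second delay stands in for some longer-running processing):

```kotlin
consumer.handler { record ->
  // suspend the flow of incoming records while doing heavy processing
  consumer.pause()

  // simulate asynchronous processing, then resume the flow
  vertx.setTimer(5000) {
    consumer.resume()
  }
}
```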

Sending messages to a topic

The simplest way to send a message is to specify only the destination topic and the related value, omitting its key
and partition; in this case the messages are sent in a round-robin fashion across all the partitions of the topic.
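For example, a sketch of sending keyless messages, assuming a producer created as above (topic name illustrative):

```kotlin
import io.vertx.kafka.client.producer.KafkaProducerRecord

for (i in 0 until 5) {
  // no key specified: records are distributed round-robin across partitions
  val record = KafkaProducerRecord.create<String, String>("test", "message_$i")
  producer.write(record)
}
```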

Since the producer identifies the destination partition using key hashing, you can use that to guarantee that all
messages with the same key are sent to the same partition and retain the order.

for (i in 0 until 10) {
  // i.e. defining different keys for odd and even messages
  val key = i % 2
  // a key is specified, so all messages with the same key will be sent to the same partition
  val record = KafkaProducerRecord.create("test", key.toString(), "message_$i")
  producer.write(record)
}

Sharing a producer

Sometimes you want to share the same producer from within several verticles or contexts.

Note

The shared producer is created on the first createShared call and its configuration is defined at that moment;
every subsequent createShared call must use the same configuration.
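A sketch of obtaining a shared producer, assuming a vertx instance and a config map in scope (the producer name is illustrative):

```kotlin
import io.vertx.kafka.client.producer.KafkaProducer

// every createShared call with the same name returns the same underlying producer;
// the configuration only takes effect on the first call
val producer = KafkaProducer.createShared<String, String>(vertx, "the-shared-producer", config)

// closing releases this usage; the underlying producer is freed once all usages are closed
producer.close()
```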