Once you have deployed your cluster in production, there are some tools and best practices to keep your cluster running in good shape. This section talks about configuring settings dynamically, tweaking logging levels, partition reassignment and deleting topics.

Many config settings in Kafka are static and are wired through the properties file. However, there are several settings that you can tweak per topic. These settings can be changed dynamically using the kafka-topics tool without having to restart the brokers.

When changed using this tool, each change is persistent and lives through broker restarts.
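For example, a per-topic override can be set and later removed with the topic tool. This is a hedged sketch; the topic name, ZooKeeper address, and override value are illustrative:

$ bin/kafka-topics --zookeeper localhost:2181 --alter --topic my-topic \
    --config max.message.bytes=128000

$ bin/kafka-topics --zookeeper localhost:2181 --alter --topic my-topic \
    --delete-config max.message.bytes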

unclean.leader.election.enable
Indicates whether unclean leader election is enabled. If it is, leadership may be moved to a replica that is not in sync with the leader when no in-sync replicas are available, leading to possible data loss.

Type: boolean

Default: true

Importance: high

min.insync.replicas
If the number of in-sync replicas drops below this number, we stop accepting writes that use request.required.acks of -1 (or all)

Type: int

Default: 1

Importance: high

max.message.bytes
The maximum size of a message

Type: int

Default: Integer.MAX_VALUE

Importance: medium

cleanup.policy
Should old segments in this log be deleted or deduplicated?

Type: string

Default: delete

Importance: medium

flush.messages
The number of messages that can be written to the log before a flush is forced

Type: long

Default: Long.MAX_VALUE

Importance: medium

flush.ms
The amount of time the log can have dirty data before a flush is forced

Type: long

Default: Long.MAX_VALUE

Importance: medium

segment.bytes
The hard maximum for the size of a segment file in the log

Type: int

Default: 1048576

Importance: low

segment.ms
The soft maximum on the amount of time before a new log segment is rolled

Type: long

Default: Long.MAX_VALUE

Importance: low

retention.bytes
The approximate total number of bytes this log can use

Kafka emits a number of logs. The location of the logs depends on the packaging format: kafka_logs_dir will be /var/log/kafka for the rpm/debian packages and $base_dir/logs for the archive format. The default logging level is INFO. It provides a moderate amount of information, but is designed to be rather light so that your logs are not enormous.

When debugging problems, particularly problems with replicas falling out of ISR, it can be helpful to bump up the logging level to DEBUG.
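If you do this by editing log4j.properties, the change is typically a one-line logger override. As a hedged sketch (the kafka package logger below is present in the default config; the exact logger you raise depends on what you are debugging):

log4j.logger.kafka=DEBUG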

The logs from the server go to logs/server.log.

You could modify the log4j.properties file and restart your nodes — but that is both tedious and leads to unnecessary downtime.

Kafka elects one broker in the cluster to be the controller. The controller is responsible for cluster management and handles events like broker failures, leader election, topic deletion and more.

Since the controller is embedded in the broker, the logs from the controller are separated from the server logs in logs/controller.log. Any ERROR, FATAL or WARN in this log indicates an important event that should be looked at by the administrator.

The controller does state management for all resources in the Kafka cluster. This includes topics, partitions, brokers and replicas. As part of state management, when the state of any resource is changed by the controller, it logs the action to a special state change log stored under logs/state-change.log. This is useful for troubleshooting purposes. For example, if some partition is offline for a while, this log can provide useful information as to whether the partition is offline due to a failed leader election operation.

Kafka has the facility to log every request served by the broker. This includes not only produce and consume requests, but also requests sent by the controller to brokers and metadata requests.

If this log is enabled at the DEBUG level, it contains latency information for every request along with the latency breakdown by component, so you can see where the bottleneck is. If this log is enabled at TRACE, it further logs the contents of the request.

We do not recommend you set this log to TRACE for a long period of time as the amount of logging can affect the performance of the cluster.
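As a hedged sketch, the request log is controlled through log4j.properties as well; the logger and appender names below are the ones used in the default config that ships with Kafka:

# DEBUG records per-request latency; TRACE additionally records request contents
log4j.logger.kafka.request.logger=DEBUG, requestAppender
log4j.additivity.kafka.request.logger=false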

This section covers the various admin tools that you can use to administer a Kafka cluster in production. There are still a number of useful operations that are not automated and have to be triggered using one of the tools that ship with Kafka under bin/.

You have the option of either adding topics manually or having them be created automatically when data is first published to a non-existent topic. If topics are auto-created then you may want to tune the default topic configurations used for auto-created topics.
Topics are added and modified using the topic tool:
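For example, a topic could be created as follows (the topic name, partition count, replication factor, and ZooKeeper address are illustrative):

$ bin/kafka-topics --zookeeper localhost:2181 --create --topic my-topic \
    --partitions 20 --replication-factor 3 --config retention.ms=86400000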

The replication factor controls how many servers will replicate each message that is written. If you have a replication factor of 3 then up to 2 servers can fail before you will lose access to your data. We recommend you use a replication factor of 2 or 3 so that you can transparently bounce machines without interrupting data consumption.

The partition count controls how many logs the topic will be sharded into. The partition count has several impacts. First, each partition must fit entirely on a single server, so if you have 20 partitions the full data set (and read and write load) will be handled by no more than 20 servers (not counting replicas). Second, the partition count determines the maximum parallelism of your consumers.

The configurations added on the command line override the default settings the server has for things like the length of time data should be retained. The complete set of per-topic configurations is documented here.

Be aware that one use case for partitions is to semantically partition data. Adding partitions does not change the partitioning of existing data, so this may disturb consumers that rely on that partitioning. That is, if data is partitioned by hash(key) % number_of_partitions, then adding partitions will potentially shuffle that partitioning, but Kafka will not attempt to automatically redistribute data in any way.

The Kafka cluster will automatically detect any broker shutdown or failure and elect new leaders for the partitions on that machine. This will occur whether a server fails or it is brought down intentionally for maintenance or configuration changes. For the latter case, Kafka supports a more graceful mechanism for stopping a server than just killing it. When a server is stopped gracefully it has two optimizations it will take advantage of:

It will sync all its logs to disk to avoid needing to do any log recovery when it restarts (i.e. validating the checksum for all messages in the tail of the log). Log recovery takes time so this speeds up intentional restarts.

It will migrate any partitions the server is the leader for to other replicas prior to shutting down. This will make the leadership transfer faster and minimize the time each partition is unavailable to a few milliseconds.

Syncing the logs will happen automatically whenever the server is stopped other than by a hard kill, but the controlled leadership migration requires using a special setting: controlled.shutdown.enable=true

Note that controlled shutdown will only succeed if all the partitions hosted on the broker have replicas (i.e. the replication factor is greater than 1 and at least one of these replicas is alive). This is generally what you want since shutting down the last replica would make that topic partition unavailable.
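As a hedged sketch, the setting goes into each broker's server.properties, after which stopping the broker normally (for example with the kafka-server-stop script, which sends SIGTERM rather than SIGKILL) triggers a controlled shutdown:

# server.properties
controlled.shutdown.enable=true

$ bin/kafka-server-stop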

Adding servers to a Kafka cluster is easy: just assign them a unique broker id and start up Kafka on the new servers. However, these new servers will not automatically be assigned any data partitions, so unless partitions are moved to them they won't be doing any work until new topics are created. So usually when you add machines to your cluster you will want to migrate some existing data to these machines. Other common reasons for migrating data are decommissioning brokers and rebalancing data across the cluster (when it becomes unbalanced).

The process of migrating data is manually initiated but fully automated. Under the covers, when Kafka moves a partition, it will add a new replica on the destination machine as a follower of the partition it is migrating. The new replica is allowed to replicate and, when it is fully caught up, it will be marked as in-sync. Then one of the existing replicas on the original server will be deleted, completing the move.

Confluent Enterprise includes the confluent-rebalancer tool, while Confluent Open Source includes the kafka-reassign-partitions tool. The former has the following advantages:

Minimises data movement

Balances data at both cluster and topic level (instead of just topic level)

Balances disk usage across brokers (in addition to balancing the number of leaders and replicas across racks and brokers)

The open source partition reassignment tool can run in 3 mutually exclusive modes (a typical end-to-end sequence is sketched after the list):

--generate: In this mode, given a list of topics and a list of brokers, the tool generates a candidate reassignment to move all partitions of the specified topics to the new brokers. This option merely provides a convenient way to generate a partition reassignment plan given a list of topics and target brokers.

--execute: In this mode, the tool kicks off the reassignment of partitions based on the user-provided reassignment plan (using the --reassignment-json-file option). This can be either a custom reassignment plan hand-crafted by the admin or one generated using the --generate option.

--verify: In this mode, the tool verifies the status of the reassignment for all partitions listed during the last --execute. The status can be successfully completed, failed, or in progress.
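As a hedged sketch of that end-to-end sequence (the topic name, broker ids, and file names are illustrative), first list the topics to move in a JSON file, say topics-to-move.json:

{"version": 1, "topics": [{"topic": "foo"}]}

$ bin/kafka-reassign-partitions --zookeeper localhost:2181 --generate \
    --topics-to-move-json-file topics-to-move.json --broker-list "5,6"

Save the proposed assignment printed by the tool to a file such as expand-cluster-reassignment.json, then execute it and check on its progress:

$ bin/kafka-reassign-partitions --zookeeper localhost:2181 --execute \
    --reassignment-json-file expand-cluster-reassignment.json

$ bin/kafka-reassign-partitions --zookeeper localhost:2181 --verify \
    --reassignment-json-file expand-cluster-reassignment.json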

The partition reassignment tool does not yet have the ability to automatically generate a reassignment plan for decommissioning brokers. As such, the admin has to come up with a reassignment plan to move the replicas for all partitions hosted on the broker being decommissioned to the rest of the brokers. This can be relatively tedious as the reassignment needs to ensure that the replicas are not all moved from the decommissioned broker to only one other broker. As stated previously, the confluent-rebalancer has built-in support for this.

Increasing the replication factor can be done via the kafka-reassign-partitions tool. Specify the extra replicas in the custom reassignment json file and use it with the --execute option to increase the replication factor of the specified partitions. For instance, the following example increases the replication factor of partition 0 of topic foo from 1 to 3. Before increasing the replication factor, the partition's only replica existed on broker 5. As part of increasing the replication factor, we will add more replicas on brokers 6 and 7.

The first step is to hand craft the custom reassignment plan in a JSON file:
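A sketch of such a file for the example above, here named increase-replication-factor.json as in the verification step below:

{"version": 1,
 "partitions": [{"topic": "foo", "partition": 0, "replicas": [5, 6, 7]}]}

The plan is then run with the --execute option:

$ bin/kafka-reassign-partitions --zookeeper localhost:2181 --execute \
    --reassignment-json-file increase-replication-factor.json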

The --verify option can be used with the tool to check the status of the partition reassignment. Note that the same increase-replication-factor.json (used with the --execute option) should be used with the --verify option.

Kafka lets you apply a throttle to replication traffic, setting an upper bound on the bandwidth used to move replicas from machine to machine. This is useful when rebalancing a cluster, bootstrapping a new broker or adding or removing brokers, as it limits the impact these data-intensive operations will have on users.

There are three interfaces that can be used to engage a throttle. The simplest, and safest, is to apply a throttle when invoking confluent-rebalancer or kafka-reassign-partitions, but kafka-configs can also be used to view and alter the throttle values directly.

So, for example, if you were to execute a rebalance with the command below, it would move partitions at no more than 50MB/s.
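A hedged sketch of such a command, reusing the reassignment file name from the later example (the file name and ZooKeeper address are illustrative):

$ bin/kafka-reassign-partitions --zookeeper localhost:2181 --execute \
    --reassignment-json-file bigger-cluster.json --throttle 50000000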

…
The throttle limit was set to 50000000 B/s
Successfully started reassignment of partitions.

Should you wish to alter the throttle during a rebalance, say to increase the throughput so it completes more quickly, you can do so by re-running the execute command, passing the same reassignment-json-file:

$ bin/kafka-reassign-partitions --zookeeper localhost:2181 --execute \
    --reassignment-json-file bigger-cluster.json --throttle 700000000
There is an existing assignment running.
The throttle limit was set to 700000000 B/s

Once the rebalance completes, the administrator can check its status using the --verify option. If the rebalance has completed and --verify is run, the throttle will be removed. It is important that administrators run the command with the --verify option in a timely manner once rebalancing completes; failure to do so could cause regular replication traffic to be throttled.

When the --verify option is executed, and the reassignment has completed, the script will confirm that the throttle was removed:

The administrator can also validate the assigned configs using kafka-configs. There are two pairs of throttle configurations used to manage the throttling process. The first pair sets the throttle value itself: the throttled rate, configured at the broker level using dynamic properties. The second pair lists the throttled replicas, and is configured per topic. All four config values are automatically assigned by kafka-reassign-partitions (discussed below).

The throttle mechanism works by measuring the received and transmitted rates, for partitions in the replication.throttled.replicas lists, on each broker. These rates are compared to the replication.throttled.rate config to determine if a throttle should be applied. The rate of throttled replication (used by the throttle mechanism) is recorded in the below JMX metrics, so they can be externally monitored.

Here we see the leader throttle is applied to partition 1 on broker 102 and partition 0 on broker 101. Likewise the follower throttle is applied to partition 1 on broker 101 and partition 0 on broker 102.

By default kafka-reassign-partitions will apply the leader throttle to all replicas that exist before the rebalance, any one of which might be leader. It will apply the follower throttle to all move destinations. So if a partition with replicas on brokers 101,102 is being reassigned to 102,103, a leader throttle for that partition would be applied to 101,102 (the possible leaders during the rebalance) and a follower throttle would be applied to 103 only (the move destination).

If required, you can also use the --alter switch on kafka-configs to alter the throttle configurations manually.
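A hedged sketch (the broker id and ZooKeeper address are illustrative; the property names are the full leader/follower forms of the throttle configs referred to above and should be checked against your Kafka version):

$ bin/kafka-configs --zookeeper localhost:2181 --describe \
    --entity-type brokers --entity-name 101

$ bin/kafka-configs --zookeeper localhost:2181 --alter \
    --entity-type brokers --entity-name 101 \
    --add-config 'leader.replication.throttled.rate=700000000,follower.replication.throttled.rate=700000000'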

Some care should be taken when using throttled replication. In particular:

Throttle Removal:

The throttle should be removed in a timely manner once reassignment completes (by running confluent-rebalancer finish or kafka-reassign-partitions --verify).

Ensuring Progress:

If the throttle is set too low, in comparison to the incoming write rate, it is possible for replication to not make progress. This occurs when:

max(BytesInPerSec) > throttle

Where BytesInPerSec is the metric that monitors the write throughput of producers into each broker.

The administrator can monitor whether replication is making progress, during the rebalance, using the metric:

The lag should constantly decrease during replication. If the metric does not decrease the administrator should increase the throttle throughput as described above.

Avoiding long delays during replication:

The throttled throughput should be large enough that replicas cannot be starved for extended periods. A good, conservative rule of thumb is to keep the throttle above #brokers MB/s, where #brokers is the number of brokers in your cluster.

Administrators wishing to use lower throttle values can tune the response size used for replication based on the relation between the throttle and the worst-case fetch delay (illustrated in the example below):

worst-case delay ≈ (#brokers × replica.fetch.response.max.bytes) / throttle

Here, the admin should tune the throttle and/or replica.fetch.response.max.bytes appropriately to ensure the delay is never larger than replica.lag.time.max.ms (as it is possible for some partitions, particularly smaller ones, to enter the ISR before the rebalance completes), the outer throttle window (replication.quota.window.size.seconds × replication.quota.window.num), or the connection timeout replica.socket.timeout.ms.

As the default for replica.fetch.response.max.bytes is 10MB and the delay should be less than 10s (replica.lag.time.max.ms), this leads to the rule of thumb that throttles should never be less than #brokers MB/s.

To better understand the relation let’s consider an example. Say we have a 5 node cluster, with default settings. We set a throttle of 10MB/s, cluster-wide, and add a new broker. The bootstrapping broker would replicate from the other 5 brokers with requests of size 10MB (default replica.fetch.response.max.bytes). The worst case payload, arriving at the same time on the bootstrapping broker, is 50MB. In this case the follower throttle, on the bootstrapping broker, would delay subsequent replication requests for (50MB / 10MB/s) = 5s, which is acceptable. However if we set the throttle to 1MB/s the worst-case delay would be 50s which is not acceptable.

The rack awareness feature spreads replicas of the same partition across different racks. This extends the guarantees Kafka provides for broker-failure to cover rack-failure, limiting the risk of data loss should all the brokers on a rack fail at once. The feature can also be applied to other broker groupings such as availability zones in EC2.

You can specify that a broker belongs to a particular rack by adding a property to the broker config:

broker.rack=my-rack-id

When a topic is created, modified or replicas are redistributed, the rack constraint will be honoured, ensuring replicas span as many racks as they can (a partition will span min(#racks, replication-factor) different racks).

The algorithm used to assign replicas to brokers ensures that the number of leaders per broker will be constant, regardless of how brokers are distributed across racks. This ensures balanced throughput.

However if racks are assigned different numbers of brokers, the assignment of replicas will not be even. Racks with fewer brokers will get more replicas, meaning they will use more storage and put more resources into replication. Hence it is sensible to configure an equal number of brokers per rack.

Starting in 0.9, the Kafka cluster has the ability to enforce quotas on produce and fetch requests. Quotas are basically byte-rate thresholds defined per client-id. A client-id logically identifies an application making a request. Hence a single client-id can span multiple producer and consumer instances and the quota will apply for all of them as a single entity i.e. if client-id=”test-client” has a produce quota of 10MB/sec, this is shared across all instances with that same id.

Quotas protect brokers from producers and consumers who produce/consume very high volumes of data and thus monopolize broker resources and cause network saturation. This is especially important in large multi-tenant clusters where a small set of badly behaved clients can degrade user experience for the well behaved ones. In fact, when running Kafka as a service quotas make it possible to enforce API limits according to an agreed upon contract.

By default, each unique client-id receives a fixed quota in bytes/sec as configured by the cluster (quota.producer.default, quota.consumer.default). This quota is defined on a per-broker basis. Each client can publish/fetch a maximum of X bytes/sec per broker before it gets throttled.

When a broker detects a quota violation, it does not return an error. Rather, it attempts to slow down the client exceeding its quota. The broker computes the amount of delay needed to bring the quota-violating client under its quota and delays the response for that time. This approach keeps the quota violation transparent to clients (outside of client-side metrics). It also keeps them from having to implement any special backoff and retry behavior, and ensures quotas are enforced regardless of the client implementation. JMX metrics on the clients and brokers can reveal when clients are throttled.

Client byte rate is measured over multiple small windows (e.g. 30 windows of 1 second each) in order to detect and correct quota violations quickly. Typically, having large measurement windows (e.g. 10 windows of 30 seconds each) leads to large bursts of traffic followed by long delays, which is not great in terms of user experience.

It is possible to override the default quota for client-ids that need a higher (or even lower) quota. The mechanism is similar to the per-topic log config overrides.

By default, each client-id receives an unlimited quota. The following sets the default quota per producer and consumer client-id to 10MB/sec:
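A hedged sketch of both levers (the numbers are illustrative; the per-client byte-rate property names should be checked against your Kafka version):

# server.properties: default quotas applied to every client-id
quota.producer.default=10485760
quota.consumer.default=10485760

# Override the quota for a single client-id
$ bin/kafka-configs --zookeeper localhost:2181 --alter \
    --entity-type clients --entity-name test-client \
    --add-config 'producer_byte_rate=20971520,consumer_byte_rate=20971520'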

There isn't really a right answer; we expose this as an option because it is a tradeoff. The simple answer is that the partition count determines the maximum consumer parallelism, so you should set the partition count based on the maximum consumer parallelism you expect to need (i.e. over-provision). Clusters with up to 10k total partitions are quite workable. Beyond that we don't aggressively test (it should work, but we can't guarantee it).

Here is a more complete list of tradeoffs to consider:

A partition is basically a directory of log files.

Each partition must fit entirely on one machine. So if you have only one partition in your topic you cannot scale your write rate or retention beyond the capability of a single machine. If you have 1000 partitions you could potentially use 1000 machines.

Each partition is totally ordered. If you want a total order over all writes you probably want to have just one partition.

Each partition is not consumed by more than one consumer thread/process in each consumer group. This allows each process to consume in a single-threaded fashion, guaranteeing ordering to the consumer within the partition (if we split up a partition of ordered messages and handed them out to multiple consumers, the messages would at times be processed out of order even though they were stored in order).

Many partitions can be consumed by a single process, though. So you can have 1000 partitions all consumed by a single process. Another way to say the above is that the partition count is a bound on the maximum consumer parallelism.

More partitions will mean more files and hence can lead to smaller writes if you don't have enough memory to properly buffer the writes and coalesce them into larger writes.

Each partition corresponds to several znodes in zookeeper. Zookeeper keeps everything in memory so this can eventually get out of hand.

More partitions means longer leader fail-over time. Each partition can be handled quickly (milliseconds) but with thousands of partitions this can add up.

When we checkpoint the consumer position we store one offset per partition so the more partitions the more expensive the position checkpoint is.

It is possible to later expand the number of partitions BUT when we do so we do not attempt to reorganize the data in the topic. So if you are depending on key-based semantic partitioning in your processing you will have to manually copy data from the old low partition topic to a new higher partition topic if you later need to expand.

Note that I/O and file counts are really about #partitions/#brokers, so adding brokers will fix problems there; but zookeeper handles all partitions for the whole cluster so adding machines doesn’t help.

ISR is the set of replicas that are fully synced up with the leader. In other words, every replica in the ISR has written all committed messages to its local log. In steady state, the ISR should always include all replicas of the partition. Occasionally, some replicas fall out of the in-sync replica list. This could be due to either failed replicas or slow replicas.

A replica can be dropped out of the ISR if it diverges from the leader beyond a certain threshold. This is controlled by 2 parameters:

replica.lag.time.max.ms

This is typically set to a value that reliably detects the failure of a broker. You can set this value appropriately by observing the replica's minimum fetch rate, which measures the rate of fetching messages from the leader (kafka.server:type=ReplicaFetcherManager,name=MinFetchRate,clientId=<Replica>, where Replica is the id of the replica broker). If that rate is n, set the value for this config to larger than 1/n * 1000 (the value is in milliseconds).

replica.lag.max.messages

This is typically set to the observed maximum lag, measured in number of messages, on the follower. The JMX bean for this is kafka.server:type=ReplicaFetcherManager,name=MaxLag,clientId=Replica. Note that if replica.lag.max.messages is too large, it can increase the time to commit a message. If latency becomes a problem, you can increase the number of partitions in a topic. If a replica constantly drops out of and rejoins the ISR, you may need to increase replica.lag.max.messages. If a replica stays out of the ISR for a long time, it may indicate that the follower is not able to fetch data as fast as it is accumulated at the leader. You can increase the follower's fetch throughput by setting a larger value for num.replica.fetchers.
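A hedged sketch of how these settings might look in a broker's server.properties (the values shown are illustrative starting points, not recommendations):

# Drop a follower from the ISR if it has not caught up within 10 seconds
replica.lag.time.max.ms=10000
# Drop a follower from the ISR if it falls more than 4000 messages behind
replica.lag.max.messages=4000
# Use more fetcher threads to increase follower fetch throughput
num.replica.fetchers=4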

First, try to figure out if the consumer is just slow or has stopped. To do so, you can monitor the maximum lag metric kafka.consumer:type=ConsumerFetcherManager,name=MaxLag,clientId=([-.\w]+) that indicates the number of messages the consumer lags behind the producer. Another metric to monitor is the minimum fetch rate kafka.consumer:type=ConsumerFetcherManager,name=MinFetchRate,clientId=([-.\w]+) of the consumer. If the MinFetchRate of the consumer drops to almost 0, the consumer is likely to have stopped. If the MinFetchRate is non-zero and relatively constant, but the consumer lag is increasing, it indicates that the consumer is slower than the producer. If so, the typical solution is to increase the degree of parallelism in the consumer. This may require increasing the number of partitions of a topic.

If you are still running on Kafka 0.7.x (released in 2012): the only time the aforementioned update instructions will not work is when upgrading from 0.7 to 0.8. In this case, please refer to the specific Migrating from 0.7 to 0.8 migration guide.

The best way to back up a Kafka cluster is to set up a mirror for the cluster. Depending on your setup and requirements, this mirror may be in the same data center or in a remote one. See the section on Mirroring data between clusters for more details.