For advanced use only. A string to be inserted into ssl.properties for this role only.

ssl.properties_role_safety_valve

false

Logs

Display Name

Description

Related Name

Default Value

API Name

Required

Kafka Broker Log Directory

The log directory for log files of the role Kafka Broker.

kafka.log4j.dir

/var/log/kafka

log_dir

false

Kafka Broker Logging Threshold

The minimum log level for Kafka Broker logs

INFO

log_threshold

false

Kafka Broker Maximum Log File Backups

The maximum number of rolled log files to keep for Kafka Broker logs. Typically used by log4j or logback.

10

max_log_backup_index

false

Kafka Broker Max Log Size

The maximum size, in megabytes, per log file for Kafka Broker logs. Typically used by log4j or logback.

200 MiB

max_log_size

false

Monitoring

Display Name

Description

Related Name

Default Value

API Name

Required

Enable Configuration Change Alerts

When set, Cloudera Manager will send alerts when this entity's configuration changes.

false

enable_config_alerts

false

Other

Display Name

Description

Related Name

Default Value

API Name

Required

Advertised Host

If set, this is the hostname given out to producers, consumers, and other brokers to use in establishing connections. Never set this
property at the group level; it should always be overridden at the instance level.

advertised.host.name

advertised.host.name

false

Authenticate Zookeeper Connection

Authenticate the SASL connection to ZooKeeper, if Kerberos authentication is enabled. This also allows a broker to set SASL ACLs on
ZooKeeper nodes, locking those nodes down so that only the Kafka broker can modify them.

authenticate.zookeeper.connection

true

authenticate.zookeeper.connection

false

Broker ID

ID that uniquely identifies each broker. Never set this property at the group level; it should always be overridden at the instance
level.

broker.id

broker.id

false

Additional Broker Java Options

These arguments are passed as part of the Java command line. Commonly, garbage collection flags or extra debugging flags are passed
here.

Maximum size for the Java process heap memory. Passed to Java -Xmx. Measured in megabytes. Kafka does not generally require setting
large heap sizes. It is better to let the file system cache utilize the available memory.

broker_max_heap_size

1 GiB

broker_max_heap_size

false

HTTP Metric Report Host

Host the HTTP metric reporter binds to.

kafka.http.metrics.host

0.0.0.0

kafka.http.metrics.host

false

Data Directories

A list of one or more directories in which Kafka data is stored. Each new partition created is placed in the directory that
currently has the fewest partitions. Each directory should be on its own separate drive.

log.dirs

/var/local/kafka/data

log.dirs

true

Data Retention Size

The amount of data to retain in the log for each topic-partition. This is the limit per partition: multiply by the number of
partitions to get the total data retained for the topic. The special value of -1 is interpreted as unlimited. If both log.retention.ms and log.retention.bytes are set, a segment is deleted when
either limit is exceeded.

log.retention.bytes

-1 B

log.retention.bytes

false
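
The interplay between the size and time retention limits described above can be sketched as a simple eligibility check. This is a hypothetical illustration of the stated policy (function and parameter names are invented), not Kafka's actual implementation:

```python
def segment_eligible_for_deletion(segment_age_ms, partition_bytes,
                                  retention_ms, retention_bytes):
    """A segment becomes eligible for deletion when EITHER retention
    limit is exceeded. A value of -1 means that limit is unlimited.
    Hypothetical sketch of the policy described above."""
    over_time = retention_ms != -1 and segment_age_ms > retention_ms
    over_size = retention_bytes != -1 and partition_bytes > retention_bytes
    return over_time or over_size
```

Note that log.retention.bytes is a per-partition limit, so `partition_bytes` here is the data held for one topic-partition, not the whole topic.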

Data Retention Check Interval

The frequency, in milliseconds, that the log cleaner checks whether any log segment is eligible for deletion, per retention
policies.

log.retention.check.interval.ms

5 minute(s)

log.retention.check.interval.ms

false

Data Retention Hours

The maximum time before a new log segment is rolled out (in hours). Secondary to the log.retention.ms property. The special value of
-1 is interpreted as unlimited. This property is deprecated in Kafka 1.4.0. Use log.retention.ms.

log.retention.hours

7 day(s)

log.retention.hours

false

Data Retention Time

The maximum time before a new log segment is rolled out. If both log.retention.ms and log.retention.bytes are set, a segment is
deleted when either limit is exceeded. The special value of -1 is interpreted as unlimited. This property is used in Kafka 1.4.0 and later in place of log.retention.hours.

log.retention.ms

log.retention.ms

false

Data Log Roll Hours

The maximum time before a new log segment is rolled out (in hours). This property is deprecated in Cloudera Kafka 1.4.0; use
log.roll.ms.

log.roll.hours

7 day(s)

log.roll.hours

false

Data Log Roll Time

The maximum time before a new log segment is rolled out. This property is used in Cloudera Kafka 1.4.0 and later in place of
log.roll.hours.

log.roll.ms

log.roll.ms

false

Segment File Size

The log for a topic partition is stored as a directory of segment files. This setting controls the size to which a segment file can
grow before a new segment is rolled over in the log. This value should be larger than message.max.bytes.

log.segment.bytes

1 GiB

log.segment.bytes

false
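
The segment-rolling behavior controlled by log.segment.bytes can be sketched as follows (an invented helper for illustration; `segments` models per-segment byte counts, not real segment files):

```python
def append_record(segments, record_size, segment_bytes=1 << 30):
    """Append to the active (last) segment, rolling a new segment
    first if the record would push it past log.segment.bytes.
    Hypothetical sketch of the rolling policy described above."""
    if not segments or segments[-1] + record_size > segment_bytes:
        segments.append(0)  # roll a new, empty segment
    segments[-1] += record_size
    return segments
```

This is also why the value should exceed message.max.bytes: a single maximum-size message must fit within one segment.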

Maximum Connections per IP Address

Maximum number of connections allowed from each IP address.

max.connections.per.ip

max.connections.per.ip

false

Number of I/O Threads

The number of I/O threads that the server uses for executing requests. You should have at least as many threads as you have
disks.

num.io.threads

8

num.io.threads

false

Inter Broker Protocol

Protocol to be used for inter-broker communication.

security.inter.broker.protocol

PLAINTEXT

security.inter.broker.protocol

false

SSL Client Authentication

Client authentication mode for SSL connections. The default is "none". Set to "required" to require client authentication, or to
"requested" to request client authentication while still allowing clients without certificates to connect.

ssl.client.auth

none

ssl.client.auth

false

Performance

Display Name

Description

Related Name

Default Value

API Name

Required

Maximum Process File Descriptors

If configured, overrides the process soft and hard rlimits (also called ulimits) for file descriptors to the configured value.

rlimit_fds

false

Ports and Addresses

Display Name

Description

Related Name

Default Value

API Name

Required

Advertised Port

The port to give out to producers, consumers, and other brokers to use in establishing connections. This only needs to be set if
this port is different from the port the server should bind to.

advertised.port

advertised.port

false

JMX Port

Port for JMX.

jmx_port

9393

jmx_port

false

HTTP Metric Report Port

Port the HTTP metric reporter listens on.

kafka.http.metrics.port

24042

kafka.http.metrics.port

false

TCP Port

Kafka broker port.

port

9092

port

false

TLS/SSL Port

Kafka broker secure port.

ssl_port

9093

ssl_port

false

Resource Management

Display Name

Description

Related Name

Default Value

API Name

Required

Cgroup CPU Shares

Number of CPU shares to assign to this role. The greater the number of shares, the larger the share of the host's CPUs that will be
given to this role when the host experiences CPU contention. Must be between 2 and 262144. Defaults to 1024 for processes not managed by Cloudera Manager.

cpu.shares

1024

rm_cpu_shares

true
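
Because cgroup CPU shares are relative weights, the fraction of CPU a role receives under contention is its shares divided by the sum of all contending groups' shares. A sketch of that arithmetic (illustrative only; function name invented):

```python
def cpu_fraction(role_shares, other_shares):
    """Under CPU contention, cgroup cpu.shares act as relative
    weights: a role receives role_shares / (sum of all contending
    groups' shares). Illustrative arithmetic only."""
    total = role_shares + sum(other_shares)
    return role_shares / total
```

For example, a role at 2048 shares contending with two default (1024-share) processes receives half the host's CPU.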

Cgroup I/O Weight

Weight for the read I/O requests issued by this role. The greater the weight, the higher the priority of the requests when the host
experiences I/O contention. Must be between 100 and 1000. Defaults to 1000 for processes not managed by Cloudera Manager.

blkio.weight

500

rm_io_weight

true

Cgroup Memory Hard Limit

Hard memory limit to assign to this role, enforced by the Linux kernel. When the limit is reached, the kernel will reclaim pages
charged to the process. If reclaiming fails, the kernel may kill the process. Both anonymous as well as page cache pages contribute to the limit. Use a value of -1 B to specify no limit. By default
processes not managed by Cloudera Manager will have no limit.

memory.limit_in_bytes

-1 MiB

rm_memory_hard_limit

true

Cgroup Memory Soft Limit

Soft memory limit to assign to this role, enforced by the Linux kernel. When the limit is reached, the kernel will reclaim pages
charged to the process if and only if the host is facing memory pressure. If reclaiming fails, the kernel may kill the process. Both anonymous as well as page cache pages contribute to the limit. Use
a value of -1 B to specify no limit. By default processes not managed by Cloudera Manager will have no limit.

memory.soft_limit_in_bytes

-1 MiB

rm_memory_soft_limit

true

Security

Display Name

Description

Related Name

Default Value

API Name

Required

Kafka Broker TLS/SSL Certificate Trust Store File

The location on disk of the trust store, in .jks format, used to confirm the authenticity of TLS/SSL servers that Kafka Broker might
connect to. This is used when Kafka Broker is the client in a TLS/SSL connection. This trust store must contain the certificate(s) used to sign the service(s) connected to. If this parameter is not
provided, the default list of well-known certificate authorities is used instead.

ssl.truststore.location

ssl_client_truststore_location

false

Kafka Broker TLS/SSL Certificate Trust Store Password

The password for the Kafka Broker TLS/SSL Certificate Trust Store File. This password is not required to access the trust store;
this field can be left blank. This password provides optional integrity checking of the file. The contents of trust stores are certificates, and certificates are public information.

For advanced use only. A string to be inserted into ssl_server.properties for this role only.

ssl_server.properties_role_safety_valve

false

Logs

Display Name

Description

Related Name

Default Value

API Name

Required

Kafka MirrorMaker Log Directory

The log directory for log files of the role Kafka MirrorMaker.

kafka_mirrormaker.log4j.dir

/var/log/kafka

log_dir

false

Kafka MirrorMaker Logging Threshold

The minimum log level for Kafka MirrorMaker logs

INFO

log_threshold

false

Kafka MirrorMaker Maximum Log File Backups

The maximum number of rolled log files to keep for Kafka MirrorMaker logs. Typically used by log4j or logback.

10

max_log_backup_index

false

Kafka MirrorMaker Max Log Size

The maximum size, in megabytes, per log file for Kafka MirrorMaker logs. Typically used by log4j or logback.

200 MiB

max_log_size

false

Monitoring

Display Name

Description

Related Name

Default Value

API Name

Required

Enable Configuration Change Alerts

When set, Cloudera Manager will send alerts when this entity's configuration changes.

false

enable_config_alerts

false

Other

Display Name

Description

Related Name

Default Value

API Name

Required

Abort on Send Failure

Stop the entire mirror maker when a send failure occurs.

abort.on.send.failure

true

abort.on.send.failure

false

Topic Blacklist

Regular expression that represents a set of topics to avoid mirroring. Note that the whitelist and blacklist parameters are mutually
exclusive. If both are defined, only the whitelist is used. WARNING: Does not work with Kafka 2.0 or later.

blacklist

blacklist

false

Destination Broker List

List of brokers on the destination cluster. The list should contain more than one broker, for high availability; there is no need to
list all brokers.

bootstrap.servers

bootstrap.servers

true

MirrorMaker Consumer Rebalance Listener

A consumer rebalance listener of type ConsumerRebalanceListener to be invoked when MirrorMaker's consumer rebalances.

Name of the consumer group used by MirrorMaker. When multiple role instances are configured with the same topics and the same group
ID, the role instances load-balance replication for those topics. When multiple role instances are configured with the same topics but different group IDs, each role instance replicates all the events
for those topics; this can be used to replicate the source cluster into multiple destination clusters.

group.id

cloudera_mirrormaker

group.id

false
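
The load-balancing behavior for instances sharing a group ID can be sketched as a partition split across instances (a round-robin toy model with invented names, not MirrorMaker's actual assignor):

```python
def partitions_per_instance(partitions, instances):
    """Role instances with the SAME group.id split a topic's
    partitions among themselves (load balancing); instances with
    DIFFERENT group IDs would each receive the full list.
    Hypothetical round-robin sketch."""
    assignment = {i: [] for i in instances}
    for n, p in enumerate(partitions):
        assignment[instances[n % len(instances)]].append(p)
    return assignment
```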

MirrorMaker Message Handler

A MirrorMaker message handler of type MirrorMakerMessageHandler that will process every record in-between producer and
consumer.

message.handler

message.handler

false

MirrorMaker Message Handler Arguments

Arguments used by MirrorMaker message handler.

message.handler.args

message.handler.args

false

Avoid Data Loss

Run with MirrorMaker settings that eliminate potential loss of data. This impacts performance, but is highly recommended. WARNING:
Does not work with Kafka 2.0 or later.

no.data.loss

true

no.data.loss

false

Number of Producers

Number of producer instances. WARNING: Does not work with Kafka 2.0 or later.

num.producers

1

num.producers

false

Number of Consumer Threads

Number of consumer threads.

num.streams

1

num.streams

false

Offset Commit Interval

Offset commit interval in milliseconds.

offset.commit.interval.ms

60000

offset.commit.interval.ms

false

Queue Size

Maximum number of bytes that can be buffered between producer and consumer. WARNING: Does not work with Kafka 2.0 or later.

queue.byte.size

100000000 B

queue.byte.size

false

Message Queue Size

Number of messages that are buffered between producer and consumer. WARNING: Does not work with Kafka 2.0 or later.

queue.size

10000

queue.size

false

Source Broker List

List of brokers on the source cluster. The list should contain more than one broker, for high availability; there is no need to list
all brokers.

source.bootstrap.servers

source.bootstrap.servers

true

Source Kafka Cluster's Security Protocol

Protocol to be used for communication with the source Kafka cluster.

source.security.protocol

PLAINTEXT

source.security.protocol

false

Source Kafka Cluster's Client Auth

Only required if source Kafka cluster requires client authentication.

source.ssl.client.auth

false

source.ssl.client.auth

false

Topic Whitelist

Regular expression that represents a set of topics to mirror. Note that the whitelist and blacklist parameters are mutually
exclusive. If both are defined, only the whitelist is used.

whitelist

whitelist

false
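
The whitelist/blacklist precedence described above can be sketched as a topic filter. This is an illustrative Python sketch (MirrorMaker itself evaluates Java regular expressions); the function name is invented:

```python
import re

def should_mirror(topic, whitelist=None, blacklist=None):
    """Whitelist and blacklist are mutually exclusive; when both
    are set, only the whitelist is consulted, per the description
    above. Hypothetical sketch."""
    if whitelist is not None:
        return re.fullmatch(whitelist, topic) is not None
    if blacklist is not None:
        return re.fullmatch(blacklist, topic) is None
    return True
```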

Performance

Display Name

Description

Related Name

Default Value

API Name

Required

Maximum Process File Descriptors

If configured, overrides the process soft and hard rlimits (also called ulimits) for file descriptors to the configured value.

rlimit_fds

false

Ports and Addresses

Display Name

Description

Related Name

Default Value

API Name

Required

JMX Port

Port for JMX.

jmx_port

9394

jmx_port

false

Resource Management

Display Name

Description

Related Name

Default Value

API Name

Required

Cgroup CPU Shares

Number of CPU shares to assign to this role. The greater the number of shares, the larger the share of the host's CPUs that will be
given to this role when the host experiences CPU contention. Must be between 2 and 262144. Defaults to 1024 for processes not managed by Cloudera Manager.

cpu.shares

1024

rm_cpu_shares

true

Cgroup I/O Weight

Weight for the read I/O requests issued by this role. The greater the weight, the higher the priority of the requests when the host
experiences I/O contention. Must be between 100 and 1000. Defaults to 1000 for processes not managed by Cloudera Manager.

blkio.weight

500

rm_io_weight

true

Cgroup Memory Hard Limit

Hard memory limit to assign to this role, enforced by the Linux kernel. When the limit is reached, the kernel will reclaim pages
charged to the process. If reclaiming fails, the kernel may kill the process. Both anonymous as well as page cache pages contribute to the limit. Use a value of -1 B to specify no limit. By default
processes not managed by Cloudera Manager will have no limit.

memory.limit_in_bytes

-1 MiB

rm_memory_hard_limit

true

Cgroup Memory Soft Limit

Soft memory limit to assign to this role, enforced by the Linux kernel. When the limit is reached, the kernel will reclaim pages
charged to the process if and only if the host is facing memory pressure. If reclaiming fails, the kernel may kill the process. Both anonymous as well as page cache pages contribute to the limit. Use
a value of -1 B to specify no limit. By default processes not managed by Cloudera Manager will have no limit.

memory.soft_limit_in_bytes

-1 MiB

rm_memory_soft_limit

true

Security

Display Name

Description

Related Name

Default Value

API Name

Required

Kafka MirrorMaker TLS/SSL Certificate Trust Store File

The location on disk of the trust store, in .jks format, used to confirm the authenticity of TLS/SSL servers that Kafka MirrorMaker
might connect to. This is used when Kafka MirrorMaker is the client in a TLS/SSL connection. This trust store must contain the certificate(s) used to sign the service(s) connected to. If this
parameter is not provided, the default list of well-known certificate authorities is used instead.

ssl.truststore.location

ssl_client_truststore_location

false

Kafka MirrorMaker TLS/SSL Certificate Trust Store Password

The password for the Kafka MirrorMaker TLS/SSL Certificate Trust Store File. This password is not required to access the trust
store; this field can be left blank. This password provides optional integrity checking of the file. The contents of trust stores are certificates, and certificates are public information.

service_wide

Advanced

For advanced use only, key-value pairs (one on each line) to be inserted into a role's environment. Applies to configurations of all
roles in this service except client configuration.

KAFKA_service_env_safety_valve

false

System Group

The group that this service's processes should run as.

kafka

process_groupname

true

System User

The user that this service's processes should run as.

kafka

process_username

true

Monitoring

Display Name

Description

Related Name

Default Value

API Name

Required

Enable Configuration Change Alerts

When set, Cloudera Manager will send alerts when this entity's configuration changes.

false

enable_config_alerts

false

Other

Display Name

Description

Related Name

Default Value

API Name

Required

Topic Auto Creation

Enables auto creation of topics on the server. If this is set to true, then attempts to produce, consume, or fetch metadata for a
non-existent topic automatically create the topic with the default replication factor and number of partitions.

auto.create.topics.enable

true

auto.create.topics.enable

false

Enable Automatic Leader Rebalancing

If automatic leader rebalancing is enabled, the controller tries to balance leadership for partitions among the brokers by
periodically returning leadership for each partition to the preferred replica, if it is available.

auto.leader.rebalance.enable

true

auto.leader.rebalance.enable

false

Enable Controlled Shutdown

Enables controlled shutdown of the broker. If enabled, the broker moves all leaders on it to other brokers before shutting itself
down. This reduces the unavailability window during shutdown.

controlled.shutdown.enable

true

controlled.shutdown.enable

false

Controlled Shutdown Maximum Attempts

Number of unsuccessful controlled shutdown attempts before executing an unclean shutdown. For example, the default value of 3 means
that the system will attempt a controlled shutdown 3 times before executing an unclean shutdown.

controlled.shutdown.max.retries

3

controlled.shutdown.max.retries

false

Default Replication Factor

The default replication factor for automatically created topics.

default.replication.factor

1

default.replication.factor

false

Enable Delete Topic

Enables topic deletion using admin tools. When delete topic is disabled, deleting topics through the admin tools has no effect.

delete.topic.enable

true

delete.topic.enable

false

List of Metric Reporters

List of metric reporter class names. HTTP reporter is included by default.

kafka.metrics.reporters

nl.techop.kafka.KafkaHttpMetricsReporter

kafka.metrics.reporters

false

Enable Kerberos Authentication

Enable Kerberos authentication for this KAFKA service.

kerberos.auth.enable

false

kerberos.auth.enable

false

Leader Imbalance Check Interval

The frequency with which to check for leader imbalance.

leader.imbalance.check.interval.seconds

5 minute(s)

leader.imbalance.check.interval.seconds

false

Leader Imbalance Allowed Per Broker

The percentage of leader imbalance allowed per broker. The controller rebalances leadership if this ratio goes above the configured
value per broker.

leader.imbalance.per.broker.percentage

10 %

leader.imbalance.per.broker.percentage

false
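
The per-broker imbalance check can be sketched as a ratio test: partitions whose preferred leader is this broker but whose current leader is another broker, divided by all partitions preferring this broker. An illustrative sketch with invented names:

```python
def needs_rebalance(not_led_by_preferred, preferred_total, threshold_pct=10):
    """Rebalance leadership for a broker when the fraction of its
    preferred partitions currently led elsewhere exceeds the
    configured percentage. Hypothetical sketch of the ratio check."""
    if preferred_total == 0:
        return False
    return 100 * not_led_by_preferred / preferred_total > threshold_pct
```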

Log Cleaner Deduplication Buffer Size

The total memory used for log deduplication across all cleaner threads. This memory is statically allocated and will not cause GC
problems.

log.cleaner.dedupe.buffer.size

128 MiB

log.cleaner.dedupe.buffer.size

false

Log Compaction Delete Record Retention Time

The amount of time to retain delete messages for log-compacted topics. Once a consumer has seen an original message, you need to
ensure it also sees the corresponding delete message; if the delete message is removed too quickly, this might not happen. As a result, the delete retention time is configurable.

log.cleaner.delete.retention.ms

7 day(s)

log.cleaner.delete.retention.ms

false

Enable Log Compaction

Enables the log cleaner to compact topics with cleanup.policy=compact on this cluster.

log.cleaner.enable

true

log.cleaner.enable

false

Log Cleaner Clean Ratio

Controls how frequently the log cleaner will attempt to clean the log. This ratio bounds the maximum space wasted in the log by
duplicates. For example, at 0.5, at most 50% of the log could be duplicates. A higher ratio means fewer, more efficient cleanings but more wasted space in the log.

log.cleaner.min.cleanable.ratio

0.5

log.cleaner.min.cleanable.ratio

false
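
The clean-ratio trigger above can be sketched as a comparison of "dirty" (not yet compacted) bytes against the total log size. Illustrative sketch only; names are invented:

```python
def should_clean(dirty_bytes, clean_bytes, min_cleanable_ratio=0.5):
    """The cleaner considers a log once the dirty fraction reaches
    log.cleaner.min.cleanable.ratio; at 0.5, at most half the log
    can be duplicates before cleaning starts. Hypothetical sketch."""
    total = dirty_bytes + clean_bytes
    return total > 0 and dirty_bytes / total >= min_cleanable_ratio
```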

Number of Log Cleaner Threads

The number of background threads to use for log cleaning.

log.cleaner.threads

1

log.cleaner.threads

false

Log Flush Message Interval

The number of messages written to a log partition before triggering an fsync on the log. Setting this lower syncs data to disk more
often, but has a major impact on performance. We recommend using replication for durability rather than depending on single-server fsync; however, this setting can be used to be extra certain. If
used in conjunction with log.flush.interval.ms, the log is flushed when either criterion is met.

log.flush.interval.messages

log.flush.interval.messages

false

Log Flush Time Interval

The maximum time between fsync calls on the log. If used in conjunction with log.flush.interval.messages, the log is flushed when
either criterion is met.

log.flush.interval.ms

log.flush.interval.ms

false

Log Flush Scheduler Interval

The frequency, in ms, with which the log flusher checks whether any log is eligible to be flushed to disk.

log.flush.scheduler.interval.ms

log.flush.scheduler.interval.ms

false

Maximum Message Size

The maximum size of a message that the server can receive. It is important that this property be in sync with the maximum fetch size
the consumers use, or else an unruly producer could publish messages too large for consumers to consume.

message.max.bytes

1000000 B

message.max.bytes

false

Minimum Number of Replicas in ISR

The minimum number of replicas in the in-sync replica set (ISR) needed to satisfy a produce request where required.acks=-1 (that is,
all).

min.insync.replicas

1

min.insync.replicas

false
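
The acks=-1 admission check described above can be sketched as a comparison of the current ISR size against min.insync.replicas (an invented helper; the real broker rejects the request with a not-enough-replicas error):

```python
def accept_produce(isr_size, min_insync_replicas, acks):
    """With acks=-1 ('all'), a produce request succeeds only while
    the ISR has at least min.insync.replicas members; weaker acks
    settings do not consult this value. Hypothetical sketch."""
    if acks != -1:
        return True
    return isr_size >= min_insync_replicas
```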

Enable Kafka Monitoring (Note: Requires Kafka-1.3.0 parcel or higher)

Enables Kafka monitoring.

monitoring.enabled

true

monitoring.enabled

false

Default Number of Partitions

The default number of partitions for automatically created topics.

num.partitions

1

num.partitions

false

Number of Replica Fetchers

Number of threads used to replicate messages from leaders. Increasing this value increases the degree of I/O parallelism in the
follower broker.

num.replica.fetchers

1

num.replica.fetchers

false

Offset Commit Topic Number of Partitions

The number of partitions for the offset commit topic. Since changing this after deployment is currently unsupported, we recommend
using a higher setting for production (for example, 100-200).

offsets.topic.num.partitions

50

offsets.topic.num.partitions

false

Offset Commit Topic Replication Factor

The replication factor for the offset commit topic. A higher setting is recommended in order to ensure higher availability (for
example, 3 or 4). If the offsets topic is created when there are fewer brokers than the replication factor, then the offsets topic is created with fewer replicas.

offsets.topic.replication.factor

3

offsets.topic.replication.factor

false

Default Consumer Quota

Any consumer distinguished by clientId/consumer group will get throttled if it fetches more bytes than this value per second. Only
respected by Kafka 2.0 or later.

quota.consumer.default

quota.consumer.default

false

Default Producer Quota

Any producer distinguished by clientId will get throttled if it produces more bytes than this value per second. Only respected by
Kafka 2.0 or later.

quota.producer.default

quota.producer.default

false

Replica Maximum Fetch Size

The maximum number of bytes to fetch for each partition in fetch requests replicas send to the leader. This value should be larger
than message.max.bytes.

replica.fetch.max.bytes

1 MiB

replica.fetch.max.bytes

false
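
Several of the descriptions above imply cross-property size constraints: both replica.fetch.max.bytes and log.segment.bytes should exceed message.max.bytes, or a maximum-size message could fail to replicate or to fit in a segment. A sketch of such a sanity check (an invented validation helper, not a Cloudera Manager validator):

```python
def check_size_configs(message_max, replica_fetch_max, segment_bytes):
    """Return a list of violated size constraints implied by the
    property descriptions above. Hypothetical sketch."""
    problems = []
    if replica_fetch_max <= message_max:
        problems.append("replica.fetch.max.bytes <= message.max.bytes")
    if segment_bytes <= message_max:
        problems.append("log.segment.bytes <= message.max.bytes")
    return problems
```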

Allowed Replica Message Lag

If a replica falls more than this number of messages behind the leader, the leader removes the follower from the ISR and treats it
as dead. This property is deprecated in Kafka 1.4.0; higher versions use only replica.lag.time.max.ms.

replica.lag.max.messages

4000

replica.lag.max.messages

false

Allowed Replica Time Lag

If a follower has not sent any fetch requests, nor has it consumed up to the leader's log end offset during this time, the leader
removes the follower from the ISR set.

replica.lag.time.max.ms

10 second(s)

replica.lag.time.max.ms

false
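
The ISR-shrink rule above can be sketched as dropping any follower whose last caught-up time is older than replica.lag.time.max.ms (invented names; a toy model of the eviction condition only):

```python
def shrink_isr(isr, last_caught_up_ms, now_ms, lag_max_ms=10000):
    """Keep only followers that have caught up to the leader's log
    end offset within replica.lag.time.max.ms; the rest are removed
    from the ISR and treated as dead. Hypothetical sketch."""
    return [r for r in isr
            if now_ms - last_caught_up_ms[r] <= lag_max_ms]
```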

Enable Unclean Leader Election

Enable replicas not in the ISR set to be elected as leader as a last resort, even though doing so might result in data loss.

unclean.leader.election.enable

false

unclean.leader.election.enable

false

ZooKeeper Root

ZNode in ZooKeeper that should be used as a root for this Kafka cluster.

zookeeper.chroot

zookeeper.chroot

false

ZooKeeper Session Timeout

If the server fails to send a heartbeat to ZooKeeper within this period of time, it is considered dead. If set too low, ZooKeeper
might falsely consider a server dead; if set too high, ZooKeeper might take too long to recognize a dead server.

zookeeper.session.timeout.ms

6 second(s)

zookeeper.session.timeout.ms

false

ZooKeeper Service

Name of the ZooKeeper service that this Kafka service instance depends on.

zookeeper_service

true

Suppressions

Display Name

Description

Related Name

Default Value

API Name

Required

Suppress Parameter Validation: Controlled Shutdown Maximum Attempts

Whether to suppress configuration warnings produced by the built-in parameter validation for the Controlled Shutdown Maximum
Attempts parameter.

false

service_config_suppression_controlled.shutdown.max.retries

true

Suppress Parameter Validation: Default Replication Factor

Whether to suppress configuration warnings produced by the built-in parameter validation for the Default Replication Factor
parameter.

false

service_config_suppression_default.replication.factor

true

Suppress Parameter Validation: List of Metric Reporters

Whether to suppress configuration warnings produced by the built-in parameter validation for the List of Metric Reporters
parameter.