The Avro compatibility type. Valid values are:

none: the new schema can be any valid Avro schema

backward: the new schema can read data produced by the latest registered schema

backward_transitive: the new schema can read data produced by all previously registered schemas

forward: the latest registered schema can read data produced by the new schema

forward_transitive: all previously registered schemas can read data produced by the new schema

full: the new schema is backward and forward compatible with the latest registered schema

full_transitive: the new schema is backward and forward compatible with all previously registered schemas

Type: string

Default: “backward”

Importance: high
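As a sketch, the compatibility level is set in the Schema Registry properties file; the value below is illustrative:

```properties
# schema-registry.properties (fragment)
# Reject a new schema version unless it can still read data
# written with the latest registered schema.
avro.compatibility.level=backward
```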

host.name

The host name advertised in ZooKeeper. Make sure to set this if running Schema Registry with multiple nodes.

Type: string

Default: “192.168.50.1”

Importance: high

kafkastore.ssl.key.password

The password of the key contained in the keystore.

Type: string

Default: “”

Importance: high

kafkastore.ssl.keystore.location

The location of the SSL keystore file.

Type: string

Default: “”

Importance: high

kafkastore.ssl.keystore.password

The password to access the keystore.

Type: string

Default: “”

Importance: high

kafkastore.ssl.truststore.location

The location of the SSL trust store file.

Type: string

Default: “”

Importance: high

kafkastore.ssl.truststore.password

The password to access the trust store.

Type: string

Default: “”

Importance: high

kafkastore.topic

The single-partition topic that acts as the durable log for the data.

Type: string

Default: “_schemas”

Importance: high

kafkastore.topic.replication.factor

The desired replication factor of the schema topic. The actual replication factor will be the smaller of this value and the number of live Kafka brokers.

Type: int

Default: 3

Importance: high

response.mediatype.default

The default response media type that should be used if no specific types are requested in an Accept header.

Type: string

Default: “application/vnd.schemaregistry.v1+json”

Importance: high

ssl.keystore.location

Used for HTTPS. Location of the keystore file to use for SSL. IMPORTANT: Jetty requires that the key’s CN, stored in the keystore, must match the FQDN.

Type: string

Default: “”

Importance: high

ssl.keystore.password

Used for HTTPS. The store password for the keystore file.

Type: password

Default: “”

Importance: high

ssl.key.password

Used for HTTPS. The password of the private key in the keystore file.

Type: password

Default: “”

Importance: high

ssl.truststore.location

Used for HTTPS. Location of the trust store. Required only to authenticate HTTPS clients.

Type: string

Default: “”

Importance: high

ssl.truststore.password

Used for HTTPS. The store password for the trust store file.

Type: password

Default: “”

Importance: high
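Taken together, the ssl.* settings above can be combined into an HTTPS setup along these lines; the listener address, file paths, and passwords are placeholders, not recommended values:

```properties
# schema-registry.properties (fragment) -- hypothetical HTTPS setup
listeners=https://0.0.0.0:8081
ssl.keystore.location=/etc/schema-registry/keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit
# Only needed if HTTPS clients must authenticate against the trust store:
ssl.truststore.location=/etc/schema-registry/truststore.jks
ssl.truststore.password=changeit
```

Remember that Jetty requires the CN of the key in the keystore to match the host's FQDN.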

response.mediatype.preferred

An ordered list of the server’s preferred media types used for responses, from most preferred to least.

Type: list

Default: “application/vnd.schemaregistry.v1+json, application/vnd.schemaregistry+json, application/json”

Importance: high

zookeeper.set.acl

Whether or not to set an ACL in ZooKeeper when znodes are created and ZooKeeper SASL authentication is configured. IMPORTANT: if set to true, the ZooKeeper SASL principal must be the same as the Kafka brokers’.

Type: boolean

Default: false

Importance: high

kafkastore.init.timeout.ms

The timeout for initialization of the Kafka store, including creation of the Kafka topic that stores schema data.

Type: int

Default: 60000

Importance: medium

kafkastore.security.protocol

The security protocol to use when connecting with Kafka, the underlying persistent storage. Values can be PLAINTEXT or SSL.

Type: string

Default: “PLAINTEXT”

Importance: medium

kafkastore.ssl.enabled.protocols

Protocols enabled for SSL connections.

Type: string

Default: “TLSv1.2,TLSv1.1,TLSv1”

Importance: medium

kafkastore.ssl.keystore.type

The file format of the keystore.

Type: string

Default: “JKS”

Importance: medium

kafkastore.ssl.protocol

The SSL protocol used.

Type: string

Default: “TLS”

Importance: medium

kafkastore.ssl.provider

The name of the security provider used for SSL.

Type: string

Default: “”

Importance: medium

kafkastore.ssl.truststore.type

The file format of the trust store.

Type: string

Default: “JKS”

Importance: medium
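A minimal sketch of an SSL connection to the Kafka store, combining the kafkastore.ssl.* settings documented above; paths and passwords are placeholders:

```properties
# schema-registry.properties (fragment) -- hypothetical SSL Kafka store
kafkastore.security.protocol=SSL
kafkastore.ssl.truststore.location=/etc/schema-registry/kafka.truststore.jks
kafkastore.ssl.truststore.password=changeit
# Only needed if the brokers require client authentication:
kafkastore.ssl.keystore.location=/etc/schema-registry/kafka.keystore.jks
kafkastore.ssl.keystore.password=changeit
kafkastore.ssl.key.password=changeit
```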

kafkastore.timeout.ms

The timeout for an operation on the Kafka store.

Type: int

Default: 500

Importance: medium

master.eligibility

If true, this node can participate in master election. In a multi-colo setup, turn this off for clusters in the slave data center.

Type: boolean

Default: true

Importance: medium

kafkastore.sasl.kerberos.service.name

The Kerberos principal name that the Kafka client runs as. This can be defined either in the JAAS config file or here.

Type: string

Default: “”

Importance: medium

kafkastore.sasl.mechanism

The SASL mechanism used for Kafka connections. GSSAPI is the default.

Type: string

Default: “GSSAPI”

Importance: medium
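A sketch of the Kerberos-related settings above for brokers secured with SASL/GSSAPI; the service name here is an assumption about how the brokers are configured:

```properties
# schema-registry.properties (fragment) -- hypothetical SASL/GSSAPI setup
kafkastore.sasl.mechanism=GSSAPI
# Must match the Kerberos principal name the Kafka brokers run as:
kafkastore.sasl.kerberos.service.name=kafka
```

The client's own principal and keytab are typically supplied through a JAAS configuration file passed to the JVM.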

access.control.allow.methods

Sets the value of the Jetty Access-Control-Allow-Methods header to the specified methods for cross-origin requests (a comma-separated list, for example GET,POST,HEAD).

Type: string

Default: “”

Importance: low

ssl.keystore.type

Used for HTTPS. The type of keystore file.

Type: string

Default: “JKS”

Importance: medium

ssl.truststore.type

Used for HTTPS. The type of trust store file.

Type: string

Default: “JKS”

Importance: medium

ssl.protocol

Used for HTTPS. The SSL protocol used to generate the SslContextFactory.

Type: string

Default: “TLS”

Importance: medium

ssl.provider

Used for HTTPS. The SSL security provider name. Leave blank to use Jetty’s default.

Type: string

Default: “” (Jetty’s default)

Importance: medium

ssl.client.auth

Used for HTTPS. Whether or not to require the HTTPS client to authenticate via the server’s trust store.

Type: boolean

Default: false

Importance: medium

ssl.enabled.protocols

Used for HTTPS. The list of protocols enabled for SSL connections. Comma-separated list. Leave blank to use Jetty’s defaults.

Type: list

Default: “” (Jetty’s default)

Importance: medium

kafkastore.bootstrap.servers

A list of Kafka brokers to connect to. For example, PLAINTEXT://hostname:9092,SSL://hostname2:9092

If this configuration is not specified, the Schema Registry’s internal Kafka clients will get their Kafka bootstrap server list
from ZooKeeper (configured with kafkastore.connection.url). Note that if kafkastore.bootstrap.servers is configured,
kafkastore.connection.url still needs to be configured, too.

This configuration is particularly important when Kafka security is enabled, because Kafka may expose multiple endpoints that
all will be stored in ZooKeeper, but the Schema Registry may need to be configured with just one of those endpoints.

Type: list

Default: []

Importance: medium

kafkastore.ssl.endpoint.identification.algorithm

The endpoint identification algorithm to validate the server hostname using the server certificate.

Type: string

Default: “”

Importance: low
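The kafkastore.bootstrap.servers guidance above can be sketched as a fragment; host names are placeholders:

```properties
# schema-registry.properties (fragment) -- hypothetical
# ZooKeeper is still required even when bootstrap servers are given:
kafkastore.connection.url=zookeeper-host:2181
# Pin the internal Kafka clients to the SSL endpoints only, so they do
# not pick up other endpoints the brokers register in ZooKeeper:
kafkastore.bootstrap.servers=SSL://kafka-host1:9092,SSL://kafka-host2:9092
```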

kafkastore.ssl.keymanager.algorithm

The algorithm used by key manager factory for SSL connections.

Type: string

Default: “SunX509”

Importance: low

kafkastore.ssl.trustmanager.algorithm

The algorithm used by the trust manager factory for SSL connections.

Type: string

Default: “PKIX”

Importance: low

kafkastore.zk.session.timeout.ms

ZooKeeper session timeout.

Type: int

Default: 30000

Importance: low

metric.reporters

A list of classes to use as metrics reporters. Implementing the MetricReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics.

Type: list

Default: []

Importance: low

metrics.jmx.prefix

Prefix to apply to metric names for the default JMX reporter.

Type: string

Default: “kafka.schema.registry”

Importance: low

metrics.num.samples

The number of samples maintained to compute metrics.

Type: int

Default: 2

Importance: low

metrics.sample.window.ms

The metrics system maintains a configurable number of samples over a fixed window size. This configuration controls the size of the window. For example, we might maintain two samples, each measured over a 30-second period. When a window expires, we erase and overwrite the oldest window.

Type: long

Default: 30000

Importance: low
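With the defaults above, each metric is computed over up to metrics.num.samples × metrics.sample.window.ms = 2 × 30000 ms = 60 s of recent data. A tuning sketch (values illustrative):

```properties
# schema-registry.properties (fragment) -- hypothetical metrics tuning
# 4 x 15000 ms: metrics still cover ~60 s of data, but old samples
# age out in 15 s increments instead of 30 s.
metrics.num.samples=4
metrics.sample.window.ms=15000
metrics.jmx.prefix=kafka.schema.registry
```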

port

DEPRECATED: port to listen on for new connections. Use listeners instead.

Type: int

Default: 8081

Importance: low
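Since port is deprecated, the equivalent listeners form might look like this (address illustrative):

```properties
# schema-registry.properties (fragment)
# Replaces the deprecated port=8081 setting:
listeners=http://0.0.0.0:8081
```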

request.logger.name

Name of the SLF4J logger to write the NCSA Common Log Format request log.

Type: string

Default: “io.confluent.rest-utils.requests”

Importance: low

schema.registry.zk.namespace

The string used as the ZooKeeper namespace for storing schema registry metadata. Schema Registry instances that are part of the same schema registry service should have the same ZooKeeper namespace.

Type: string

Default: “schema_registry”

Importance: low

shutdown.graceful.ms

Amount of time to wait after a shutdown request for outstanding requests to complete.

Type: int

Default: 1000

Importance: low

ssl.keymanager.algorithm

Used for HTTPS. The algorithm used by the key manager factory for SSL connections. Leave blank to use Jetty’s default.

Type: string

Default: “” (Jetty’s default)

Importance: low

ssl.trustmanager.algorithm

Used for HTTPS. The algorithm used by the trust manager factory for SSL connections. Leave blank to use Jetty’s default.

Type: string

Default: “” (Jetty’s default)

Importance: low

ssl.cipher.suites

Used for HTTPS. A list of SSL cipher suites. Comma-separated list. Leave blank to use Jetty’s defaults.

Type: list

Default: “” (Jetty’s default)

Importance: low

ssl.endpoint.identification.algorithm

Used for HTTPS. The endpoint identification algorithm to validate the server hostname using the server certificate. Leave blank to use Jetty’s default.

Type: string

Default: “” (Jetty’s default)

Importance: low

kafkastore.sasl.kerberos.kinit.cmd

The Kerberos kinit command path.

Type: string

Default: “/usr/bin/kinit”

Importance: low

kafkastore.sasl.kerberos.min.time.before.relogin

Login thread sleep time between refresh attempts.

Type: long

Default: 60000

Importance: low

kafkastore.sasl.kerberos.ticket.renew.jitter

The percentage of random jitter added to the renewal time.

Type: double

Default: 0.05

Importance: low

kafkastore.sasl.kerberos.ticket.renew.window.factor

Login thread will sleep until the specified window factor of time from last refresh to ticket’s expiry has been reached, at which time it will try to renew the ticket.