The Confluent Platform is available in a variety of formats. For most of the platform, including services like Kafka,
Confluent Control Center, Kafka Connect, and Schema Registry, you should use one of our easy-to-install packages:

Docker images for the Confluent Platform are currently available on Docker Hub. Alternatively, the source files for the images are available on GitHub if you would prefer to extend and/or rebuild the images and upload them to your own Docker Hub repository. If you are interested in deploying with Docker, please refer to our full Docker image documentation.

Confluent does not currently support Windows. Windows users can download and use
the zip and tar archives, but will have to run the jar files directly rather
than use the wrapper scripts in the bin/ directory.

Confluent Platform also includes client libraries for multiple languages that provide both low level access to Kafka as well
as higher level stream processing. These libraries are available through the native packaging systems for each language:

These zip and tar archives contain the jars, driver scripts, and configuration files. They should be used on OS X and as a simple way to evaluate the platform.

Start by downloading one of the archives. You can choose between Confluent Enterprise, which includes all of Confluent’s components, and Confluent Open Source, which includes the open source parts of the Confluent Platform. The complete list of downloads and their contents can be found at http://confluent.io/download/.

The number at the end of the package name specifies the Scala version. Currently
supported versions are 2.11 (recommended) and 2.10. Individual components of
the Confluent Platform are also available as standalone packages. See the
Available Packages section for a listing of packages.


The platform packages of CP (confluent-platform-<scala_version> and confluent-platform-oss-<scala_version>), which are umbrella packages that install all components of Confluent Enterprise or Confluent Open Source, respectively, i.e. all of the individual packages. See the next point.

The individual packages of the Confluent Platform, such as confluent-kafka-<scala_version> and confluent-schema-registry.

Here, the platform packages (confluent-platform-<scala_version>) as well as the individual Kafka packages (confluent-kafka-<scala_version>), and only these, are available in two variants, named after the Scala version of the Apache Kafka build they include.

Note

Why Scala, and why different Scala versions? Apache Kafka is implemented in Scala and built for multiple Scala versions. However, the Scala version only matters if you are implementing your applications in Scala and want a Kafka build for the same Scala version you use. Otherwise, any version should work, with 2.11 being recommended.

If you choose to install via an apt or yum repository, you can install
individual components instead of the complete Confluent Platform. Use these
packages when you only need one component on your server, e.g. on production
servers. The following are packages corresponding to each of the Confluent
Platform components:

The path for migrating to the Enterprise version of Confluent depends on how you originally installed the Open Source version.

DEB Packages via apt or RPM Packages via yum

If you installed Confluent Open Source with DEB packages via apt or RPM packages via yum, the upgrade is straightforward. Because the Enterprise packages are umbrella packages that contain everything you already have in Confluent Open Source plus the additional enterprise components, you can install those components into your existing deployment by running:

$ sudo yum install confluent-platform-2.11

or

$ sudo apt-get install confluent-platform-2.11

TAR or ZIP archives

If you installed Confluent Open Source from TAR or ZIP archives, you will download and install a new Confluent Enterprise archive that contains the entire platform (both the open source and the enterprise components) and start running Confluent from the new installation.

Next, extract the contents of the archive into a new Confluent install directory. For zip files, use a GUI to extract the contents or run this command in a terminal:

$ unzip confluent-3.1.2-2.11.zip

For tar files run this command:

$ tar xzf confluent-3.1.2-2.11.tar.gz

Copy all configuration files from ./etc, including ./etc/kafka, ./etc/kafka-rest, ./etc/schema-registry, ./etc/confluent-control-center and ./etc/camus, into the new directory you created in the previous step.

Stop all Kafka services running in the Confluent Open Source directory. Depending on what was running, this will include kafka-rest, schema-registry, connect-distributed, kafka-server and zookeeper-server.

Start the corresponding services in the Confluent Enterprise directory, and in addition start any new Enterprise services you wish to use, for example confluent-control-center.

Repeat these steps on all Confluent servers, one server at a time, to perform a rolling migration.

If at any time you want to move back to Confluent Open Source, simply stop the services in the Enterprise installation directory and start them in the Open Source directory.
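The archive-based migration steps above can be sketched as a short shell session. The directory names below are illustrative, not the official layout, and a mock layout is created first so the copy step is runnable as-is; the service stop/start steps are shown as comments because they require a running deployment:

```shell
# Illustrative directory names for the old and new installs (assumptions).
OSS_DIR=confluent-oss-3.1.2
ENT_DIR=confluent-3.1.2

# Mock layout standing in for a real Open Source install and a freshly
# extracted Enterprise archive:
mkdir -p "$OSS_DIR/etc/kafka" "$ENT_DIR/etc"
echo "broker.id=0" > "$OSS_DIR/etc/kafka/server.properties"

# Step 1: carry all existing configuration over to the new install directory.
cp -r "$OSS_DIR/etc/." "$ENT_DIR/etc/"

# Steps 2-3: stop services in the old directory, then start them in the new
# one (commented out since they need a live cluster):
#   "$OSS_DIR"/bin/kafka-server-stop
#   "$ENT_DIR"/bin/kafka-server-start "$ENT_DIR"/etc/kafka/server.properties
```

The copy preserves your broker, REST proxy, and Schema Registry settings, so the Enterprise services start with the same configuration the Open Source services were using.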

Version names of Kafka in Apache vs. Kafka in Confluent Platform:
Confluent always contributes patches back to the Apache Kafka open source project.
However, the exact versions (and version names) being included in Confluent Platform
may differ from the Apache artifacts when Confluent Platform and Apache
Kafka releases do not align. When they do differ, we keep the groupId
and artifactId identical but append the suffix -cpX (with X being a digit)
to the version identifier of the CP release in order to distinguish these
from the Apache artifacts.

For example, to reference the Kafka version 0.9.0.1 that is included with
Confluent Platform 2.0.1, you would use the following snippet in your pom.xml:
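The referenced snippet is not shown above; a sketch of what it would look like, assuming the CP-specific version identifier is 0.9.0.1-cp1 (the -cp1 suffix here is illustrative, following the -cpX convention described above):

```xml
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.11</artifactId>
    <!-- The -cp1 suffix marks the Confluent Platform build of Kafka;
         adjust it to match the actual CP release you are using. -->
    <version>0.9.0.1-cp1</version>
</dependency>
```

If the artifact does not resolve from Maven Central, you may also need Confluent’s Maven repository in your pom.xml’s <repositories> section.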

The C/C++ client, called librdkafka, is available in source form and as precompiled binaries for Debian and Redhat based
Linux distributions. Most users will want to use the precompiled binaries.

For Linux distributions, follow the instructions for Debian or Redhat distributions to set up the repositories, then use yum or apt-get
to install the appropriate packages. For example, a developer building a C application on a Redhat-based
distribution would use the librdkafka-devel package:

$ sudo yum install librdkafka-devel

And on a Debian-based distribution they would use the librdkafka-dev package:

$ sudo apt-get install librdkafka-dev

The Python client, called confluent-kafka-python, is available on PyPI. The
Python client uses librdkafka, the C client, internally. To install the Python client, first install the C client including its development package, then install the library with pip (for both Linux and macOS):

$ sudo pip install confluent-kafka

Note that this will install the package globally for your Python environment. You may also use a virtualenv to install it only for your project.
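The virtualenv route mentioned above might look like the following sketch, using the standard library's venv module; the environment directory name is arbitrary, and the install step needs network access:

```shell
# Install the client into a project-local virtual environment
# instead of globally ('kafka-env' is an arbitrary name).
python3 -m venv kafka-env
kafka-env/bin/pip install confluent-kafka   # requires network access
```

Running the venv's pip directly, rather than activating the environment first, keeps the install scoped to the project either way.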

The Go client, called confluent-kafka-go, is distributed via GitHub
and `gopkg.in <http://labix.org/gopkg.in>`_ to pin to specific versions. The Go client uses librdkafka, the C client,
internally and exposes it as Go library using cgo. To install the Go client, first install
the C client including its development package as well as a C build toolchain including
pkg-config. On Redhat-based Linux distributions install the following packages in addition to librdkafka:

$ sudo yum groupinstall "Development Tools"

On Debian-based distributions, install the following in addition to librdkafka:

$ sudo apt-get install build-essential pkg-config

If you would like to statically link librdkafka, add the flag -tags static to the go get commands. This will
statically link librdkafka itself so its dynamic library will not be required on the target deployment system. Note,
however, that librdkafka’s own dependencies (such as ssl, sasl2, lz4, etc.) will still be linked
dynamically and required on the target system. An experimental option for creating a completely statically linked binary is
available as well: use the flag -tags static_all. This requires all dependencies to be available as static libraries (e.g., libsasl2.a). Static libraries are
typically not installed by default but are available in the corresponding -dev or -devel packages (e.g.,
libsasl2-dev).