In this section, we provide a tutorial for running a secure three-node Kafka cluster and Zookeeper ensemble with SASL. By the end of this tutorial, you will have successfully installed and run a simple deployment with SSL and SASL security enabled on Docker. If you’re looking for a simpler tutorial, please refer to our quickstart guide, which is limited to a single node Kafka cluster.

Note

It is worth noting that we will be configuring Kafka and Zookeeper to store secrets locally in the Docker containers. For production deployments (or generally whenever you care about not losing data), you should use mounted volumes for persisting data in the event that a container stops running or is restarted. This is important when running a system like Kafka on Docker, as it relies heavily on the filesystem for storing and caching messages. Refer to our documentation on Docker external volumes for an example of how to add mounted volumes to the host machine.
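As a sketch of the mounted-volume approach, the snippet below persists Kafka's data directory on the host. The host path, container name, and image tag here are illustrative assumptions, not values from this tutorial:

```shell
# Persist Kafka data across container restarts by mounting a host directory.
# HOST_DATA_DIR and CONTAINER_DATA_DIR are illustrative; match the container
# path to your image's log.dirs setting.
HOST_DATA_DIR=/vol/kafka-data
CONTAINER_DATA_DIR=/var/lib/kafka/data
if command -v docker >/dev/null 2>&1; then
  docker run -d \
    --name kafka-persistent \
    -v "${HOST_DATA_DIR}:${CONTAINER_DATA_DIR}" \
    confluentinc/cp-kafka 2>/dev/null || true
fi
```

With the volume mounted, messages survive a `docker stop`/`docker start` cycle of the container.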

If you’re running on Windows or Mac OS X, you’ll need to use Docker Machine to start the Docker host. Docker runs natively on Linux, so if you go that route the Docker host will be your local machine. If you are running on Mac or Windows, be sure to allocate at least 4 GB of RAM to the Docker Machine.

Now that we have all of the Docker dependencies installed, we can create a Docker machine and begin starting up Confluent Platform.

Note

In the following steps we’ll run each Docker container in detached mode. However, we’ll also demonstrate how to access the logs for a running container. If you prefer to run the containers in the foreground, you can do so by replacing the -d flags with -it.
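For illustration, the two run modes look like this; the container name and image below are placeholders, not the services started in this tutorial:

```shell
# Illustration of the two run modes: detached (-d) versus interactive
# foreground (-it). Replace -d with -it on any run command to stay attached.
DETACHED_FLAGS="-d"      # run in the background
FOREGROUND_FLAGS="-it"   # interactive terminal in the foreground
if command -v docker >/dev/null 2>&1; then
  docker run $DETACHED_FLAGS --name zookeeper-demo confluentinc/cp-zookeeper 2>/dev/null || true
  docker logs zookeeper-demo 2>/dev/null | tail -n 5 || true   # inspect a detached container
fi
```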

Create and configure the Docker Machine. If you are running a Docker Machine VM in the cloud (e.g., on AWS), you will need to SSH into the VM to run these commands, possibly as root.
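A hedged sketch of creating the machine, assuming the VirtualBox driver and the machine name confluent (the name used by the later docker-machine ssh step), with the 4 GB of RAM noted above:

```shell
# Create a Docker Machine VM named "confluent" with 4 GB of RAM, then point
# the local Docker client at it. The virtualbox driver is an assumption;
# substitute the driver for your environment (e.g. amazonec2).
MACHINE_NAME=confluent
if command -v docker-machine >/dev/null 2>&1; then
  docker-machine create --driver virtualbox --virtualbox-memory 4096 "$MACHINE_NAME" || true
  eval "$(docker-machine env "$MACHINE_NAME" 2>/dev/null)" || true
fi
```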

You will need to generate CA certificates (or use your own if you already have one) and then generate keystores and truststores for the brokers and clients. You can use the create-certs.sh script in examples/kafka-cluster-sasl/secrets to generate them. For production, please use these scripts to generate certificates: https://github.com/confluentinc/confluent-platform-security-tools

For this example, we will use the create-certs.sh script available in the examples/kafka-cluster-sasl/secrets directory of cp-docker-images. See the “security” section for more details on security. Make sure that you have OpenSSL and a JDK installed.
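Before running the script, it is worth confirming the prerequisites are on your PATH. A minimal check, with the script invocation shown as a comment since it writes files into the secrets directory:

```shell
# Confirm the tools create-certs.sh relies on (OpenSSL for the CA, the JDK's
# keytool for keystores and truststores) are installed.
for tool in openssl keytool; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: missing"
  fi
done
# Then, from the cp-docker-images checkout:
#   cd examples/kafka-cluster-sasl/secrets && ./create-certs.sh
```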

To configure SASL, all your nodes will need to have a proper hostname. It is not advisable to use localhost as the hostname.

We need to create an entry in /etc/hosts with the hostname quickstart.confluent.io pointing to the eth0 IP address. On Linux, run the commands below directly on the host. If you are running Docker Machine (e.g., on Mac or Windows), you will need to SSH into the VM and run them as root. You can SSH into the Docker Machine VM by running docker-machine ssh confluent.
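A sketch of those commands, assuming a Linux host with an eth0 interface (the fallback address below is illustrative). Append the echoed line to /etc/hosts as root, for example with sudo tee -a /etc/hosts:

```shell
# Build the /etc/hosts entry that maps quickstart.confluent.io to the eth0 IP.
ETH0_IP=$(ip -4 addr show eth0 2>/dev/null | awk '/inet /{sub(/\/.*/, "", $2); print $2; exit}')
ETH0_IP=${ETH0_IP:-192.168.99.100}   # illustrative fallback if eth0 is absent
echo "${ETH0_IP} quickstart.confluent.io"
```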

You should see the following output (it might take some time for this command to return data; Kafka has to create the __consumer_offsets topic behind the scenes the first time you consume data, and this may take a while):

Name Command State Ports
-------------------------------------------------------------------------------
kafkaclustersasl_kafka-sasl-1_1 /etc/confluent/docker/run Up
kafkaclustersasl_kafka-sasl-2_1 /etc/confluent/docker/run Up
kafkaclustersasl_kafka-sasl-3_1 /etc/confluent/docker/run Up
kafkaclustersasl_kerberos_1 /config.sh Up
kafkaclustersasl_zookeeper-sasl-1_1 /etc/confluent/docker/run Up
kafkaclustersasl_zookeeper-sasl-2_1 /etc/confluent/docker/run Up
kafkaclustersasl_zookeeper-sasl-3_1 /etc/confluent/docker/run Up

Check the ZooKeeper logs to verify that ZooKeeper is healthy. For example, for the zookeeper-sasl-1 service:
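A hedged sketch of that check, using the container name from the listing above and the "binding to port" line ZooKeeper prints once it is serving client requests:

```shell
# Search the first ZooKeeper container's logs for the line indicating it has
# bound its client port and is accepting connections.
PATTERN="binding to port"
if command -v docker >/dev/null 2>&1; then
  docker logs kafkaclustersasl_zookeeper-sasl-1_1 2>&1 | grep -i "$PATTERN" || true
fi
```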