Installation and Configuration Tutorial

4.1 Lesson 4—Installation and Configuration

Hello and welcome to lesson 4 of the Apache Kafka Developer course offered by Simplilearn. This lesson provides steps to install and configure Kafka.

4.2 Objectives

After completing this lesson, you will be able to:
• Demonstrate how to install Kafka on an Ubuntu system
• List the recommended machine configurations to install Kafka
• Demonstrate how to configure Kafka
• Demonstrate how to run Kafka on an Ubuntu system
• Describe the steps required to install Kafka

4.3 Kafka Versions

Kafka has multiple versions, and you should choose the latest stable version for installation. Version 0.8.2.1 is the current stable version. Kafka is developed in the Scala programming language, so each release is built against particular Scala versions. If you already use Scala, get the Kafka build compatible with your version of Scala.
The stable version can be downloaded using the link given on the screen.

4.4 OS Selection

You can choose any of the following Linux operating systems for installation:
• Ubuntu 12.04 or later
• Red Hat Enterprise Linux, which is also referred to as RHEL
• CentOS, which is a free version of RHEL
• Debian systems

4.6 Preparing for Installation

The prerequisite software to install Kafka is:
• Java JRE 1.7 or higher.
• Oracle JRE is recommended; however, the OpenJDK JRE also works well.
• ZooKeeper need not be installed separately as Kafka comes with its own ZooKeeper version.

4.7 Demo 1—Kafka Installation and Configuration

In this demo, we will learn how to install and configure Kafka.
Type wget http://mirrors.advancedhosters.com/apache/kafka/0.8.2.1/kafka_2.9.1-0.8.2.1.tgz and press Enter to download Kafka directly from the Apache Kafka website.
You may choose a different mirror based on your location at: https://www.apache.org/dyn/closer.cgi?path=/kafka/0.8.2.1/kafka_2.9.1-0.8.2.1.tgz.
Download Kafka on each machine that requires it.
The file with the .tgz extension is called a tarball, a compressed tar archive. Tar stands for the tape archive command on Linux.
After the download, the archives have to be unzipped and moved to an appropriate location.
Type tar -xzf kafka_2.9.1-0.8.2.1.tgz and press Enter to extract the package using the tar utility.
Type sudo mv kafka_2.9.1-0.8.2.1 /usr/local/kafka and press Enter to move the installation to an appropriate directory.
Note that sudo may ask for the Simplilearn password.
Type vi .bashrc and press Enter to edit the .bashrc file in your home directory.
In vi, add export KAFKA_PREFIX=/usr/local/kafka and export PATH=$PATH:$KAFKA_PREFIX/bin at the end of the file using i to go to insert mode and escape to get out of insert mode.
Type :wq and press Enter to save the file.
Note that all the commands are case sensitive; so, you need to type exactly as shown.
To restart bash for changes to take effect, type exec bash and press Enter. This will set up the path to include the Kafka directory.
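The two export lines added to .bashrc can be sketched and checked in a subshell. This is a minimal sketch assuming the /usr/local/kafka install path used in this demo:

```shell
# The .bashrc additions from the demo; KAFKA_PREFIX is the
# install path chosen earlier.
export KAFKA_PREFIX=/usr/local/kafka
export PATH=$PATH:$KAFKA_PREFIX/bin

# Verify that the Kafka bin directory is now on the PATH.
case ":$PATH:" in
  *":$KAFKA_PREFIX/bin:"*) echo "Kafka bin is on PATH" ;;
  *) echo "Kafka bin is NOT on PATH" ;;
esac
```

Running this in a fresh shell (or after exec bash) confirms the tools in $KAFKA_PREFIX/bin are reachable by name.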
Some development systems have low memory, so the default heap settings do not work on them. A few changes are therefore required for a development cluster with low memory.
Type cd /usr/local/kafka/bin and press Enter to change the directory to bin directory of Kafka installation.
Type vi zookeeper-server-start.sh and press Enter to edit the zookeeper-server-start.sh file using vi editor.
You can use i to enter insert mode in vi, and escape to get out of insert mode. Escape key is generally located at the top left corner of the keyboard.
Change the line,
export KAFKA_HEAP_OPTS="-Xmx512M -Xms512M"
to
export KAFKA_HEAP_OPTS="-Xmx64M -Xms64M"
Press Escape; type :wq and press Enter to save the file.
Type vi kafka-server-start.sh and press Enter to edit the kafka-server-start.sh file using vi.
Change the line,
export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
to
export KAFKA_HEAP_OPTS="-Xmx128M -Xms128M"
Press Escape; type :wq and press Enter to save the file.
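As a non-interactive alternative to the vi edits above, both heap changes can be applied with sed. The sketch below runs on scratch copies so it is safe to try anywhere; pointing WORK at the real /usr/local/kafka/bin would apply the same edits for real:

```shell
# Scratch copies carrying the stock heap settings from the two
# start scripts; replace WORK with /usr/local/kafka/bin to edit
# the real files (with sudo).
WORK=$(mktemp -d)
printf 'export KAFKA_HEAP_OPTS="-Xmx512M -Xms512M"\n' > "$WORK/zookeeper-server-start.sh"
printf 'export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"\n' > "$WORK/kafka-server-start.sh"

# Shrink ZooKeeper to 64M and the Kafka broker to 128M,
# matching the manual edits in the demo.
sed -i 's/-Xmx512M -Xms512M/-Xmx64M -Xms64M/' "$WORK/zookeeper-server-start.sh"
sed -i 's/-Xmx1G -Xms1G/-Xmx128M -Xms128M/' "$WORK/kafka-server-start.sh"

grep KAFKA_HEAP_OPTS "$WORK"/*.sh
```

Note that sed -i as written assumes GNU sed, which is the default on Ubuntu.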
Since Kafka uses ZooKeeper for distributed coordination, ZooKeeper needs to be configured.
Type cd /usr/local/kafka/config and press Enter to modify the zookeeper.properties file in the kafka configuration directory.
Type vi zookeeper.properties and press Enter to edit the file.
If the lines are not already present,
add
initLimit=5
syncLimit=2
maxClientCnxns=0
server.1=localhost:2888:3888
Press Escape; type :wq and press Enter to save and exit the editor.
Use the command, sudo mkdir /tmp/zookeeper to create a directory.
Use the commands echo 1 > /tmp/myid and sudo cp /tmp/myid /tmp/zookeeper/myid to create a myid file for ZooKeeper.
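The myid step above can be sketched as a small script, parameterised on a data directory so it can be tested without sudo (the demo uses /tmp/zookeeper; ZK_DATA here is an illustrative variable):

```shell
# Create the ZooKeeper data directory and write this node's id.
# On a single-node setup the id is 1; on a multi-node cluster
# each server writes the number matching its server.N line.
ZK_DATA=${ZK_DATA:-$(mktemp -d)}
mkdir -p "$ZK_DATA"
echo 1 > "$ZK_DATA/myid"
cat "$ZK_DATA/myid"
```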
To make the necessary changes required for Kafka configuration,
Type cd /usr/local/kafka/config and press Enter.
Type vi server.properties and press Enter.
Change the line, broker.id=0 to broker.id=1.
Check if the default port is set to 9092.
Check if the zookeeper is set to connect at port 2181.
In case of multiple zookeeper instances, specify each of them separated by commas.
A few changes are required in the server.properties file.
In insert mode, add the lines queued.max.requests=1000 and auto.create.topics.enable=false at the end of the file.
The auto.create.topics.enable=false setting ensures topics have to be created explicitly before a message can be produced to them.
Press escape to exit insert mode; Type :wq and press Enter to save the file.
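The two server.properties additions can also be appended idempotently, so rerunning a setup script does not duplicate them. A sketch on a scratch copy (substitute /usr/local/kafka/config/server.properties for the real edit):

```shell
# Scratch server.properties with the settings already discussed.
PROPS=$(mktemp)
printf 'broker.id=1\nport=9092\n' > "$PROPS"

# Append each setting only if its key is not already present.
for setting in 'queued.max.requests=1000' 'auto.create.topics.enable=false'; do
  key=${setting%%=*}
  grep -q "^$key=" "$PROPS" || echo "$setting" >> "$PROPS"
done
cat "$PROPS"
```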
The zookeeper server needs to be started. Type sudo nohup /usr/local/kafka/bin/zookeeper-server-start.sh /usr/local/kafka/config/zookeeper.properties > /tmp/zk.out 2>/tmp/zk.err & and press Enter.
Enter Simplilearn password if asked.
The command ‘sudo’ is used to ensure you have permissions. The & (ampersand) is added at the end so that the process runs in the background. For background processes, nohup is added at the beginning so that the background process does not end, even if your session is terminated. The standard output from the server is sent to /tmp/zk.out file and the standard error is sent to /tmp/zk.err file with the 2> option.
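The nohup/redirect pattern described above can be demonstrated with a harmless command in place of the ZooKeeper server:

```shell
# Same shape as the server-start command: stdout goes to one
# file, stderr (via 2>) to another, & backgrounds the job, and
# nohup detaches it from the session.
OUT=$(mktemp) ERR=$(mktemp)
nohup sh -c 'echo "server started"; echo "a warning" >&2' < /dev/null > "$OUT" 2> "$ERR" &
wait  # wait for the background job before reading the files
cat "$OUT"
cat "$ERR"
```

After it finishes, the "server started" line is in $OUT and the "a warning" line is in $ERR, mirroring /tmp/zk.out and /tmp/zk.err in the real command.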
To start the kafka server, type sudo nohup /usr/local/kafka/bin/kafka-server-start.sh /usr/local/kafka/config/server.properties > /tmp/kafka.out 2>/tmp/kafka.err & and press Enter.

4.8 Demo 2—Creating and Sending Messages

In this demo, we will learn how to create and send messages in Kafka.
Before we send or receive messages using kafka, a topic needs to be created. For example, let us create a topic called ‘test.’
Type kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test and press Enter. This command creates a topic called test with a replication factor of 1 and a single partition.
To check the topic,
Type kafka-topics.sh --list --zookeeper localhost:2181 and press Enter.
Type kafka-topics.sh --describe --zookeeper localhost:2181 --topic test and press Enter.
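The --describe output can also be checked from a script. The sample line below is illustrative of the 0.8.x output format; in practice you would pipe the real kafka-topics.sh --describe output into the same awk:

```shell
# Illustrative describe output for the 'test' topic (tab-separated,
# as printed by kafka-topics.sh in 0.8.x).
describe_output=$(printf 'Topic:test\tPartitionCount:1\tReplicationFactor:1\tConfigs:')

# Extract the partition count: split on the label, then take the
# value up to the next tab.
partitions=$(printf '%s\n' "$describe_output" | awk -F'PartitionCount:' '{split($2, a, "\t"); print a[1]}')
echo "partitions=$partitions"
```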
We can add a message to the topic with the default producer provided by kafka.
Type kafka-console-producer.sh --broker-list localhost:9092 --topic test and press Enter.
Type This is first message and press Enter.
Type This is second message and press Enter.
Type This is third message and press Enter.
Press Ctrl-D.
Note that Ctrl-D at the end is entered by pressing CTRL key and letter D together and indicates the end of a file in Linux.
We can check the received message with the default consumer provided by kafka.
Type kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning and press Enter.
The consumer displays the messages sent earlier:
This is first message
This is second message
This is third message
Note that the consumer uses the ZooKeeper address to connect to the Kafka cluster. The option --from-beginning is used to read all the messages from the beginning.

4.11 Stop the Kafka Server

We may need to stop the servers and start again when some configuration parameters are changed. We have to stop the Kafka server first and then the ZooKeeper servers.
To stop the Kafka server, type sudo /usr/local/kafka/bin/kafka-server-stop.sh and press Enter.
To stop the ZooKeeper server, type sudo /usr/local/kafka/bin/zookeeper-server-stop.sh and press Enter.
To start the servers, follow the same steps as described earlier.

4.12 Setting up Multi-Node Kafka Cluster—Step 1

To set up a multi-node cluster, let us take the example of a 3-node cluster whose machines have the addresses node1, node2, and node3.
Kafka needs to be installed on each machine as specified earlier. Download the kafka tarball, unzip the compressed archive, and move the expanded directory to /usr/local/kafka.

4.13 Setting up Multi-Node Kafka Cluster—Step 2

Set up ZooKeeper on each node:
Type cd /usr/local/kafka/config and press Enter.
Type vi zookeeper.properties and press Enter.
If the lines are not already present, add the following lines:
initLimit=5
syncLimit=2
maxClientCnxns=0
server.1=node1:2888:3888
server.2=node2:2888:3888
server.3=node3:2888:3888
Press Escape and type :wq to save and exit the editor.
Note that node1, node2, and node3 are the addresses of the three servers.
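The server.N lines above can be generated from a node list, so one script configures every machine identically. A sketch using the example hosts from this step:

```shell
# Build the ZooKeeper ensemble lines from an ordered node list;
# the index i becomes both the server number and, on each host,
# the value to write into its myid file.
NODES="node1 node2 node3"
lines=$(i=1; for n in $NODES; do
  echo "server.$i=$n:2888:3888"
  i=$((i + 1))
done)
echo "$lines"
```

Appending $lines to zookeeper.properties on each node reproduces the three lines shown above.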

4.15 Setting up Multi-Node Kafka Cluster—Step 4

Set up the Kafka broker properties:
Type cd /usr/local/kafka/config and press Enter.
Type vi server.properties and press Enter.
The changes required for Kafka configuration on each machine are as follows:
Change broker.id=0 to:
broker.id=1 on node1,
broker.id=2 on node2, and
broker.id=3 on node3.
Check if the default port is set to 9092.
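The per-node broker.id assignment above can be derived from the node name, so the same script works on all three machines. NODE would normally come from hostname; it is hard-coded here so the sketch is self-contained:

```shell
# Map each example node name to its broker id, matching the
# manual edits described in this step.
NODE=node2   # in practice: NODE=$(hostname)
case "$NODE" in
  node1) BROKER_ID=1 ;;
  node2) BROKER_ID=2 ;;
  node3) BROKER_ID=3 ;;
esac
echo "broker.id=$BROKER_ID"
```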

4.16 Setting up Multi-Node Kafka Cluster—Step 5

A few changes are required to server.properties.
Check if zookeeper.connect is set to port 2181 on all the nodes.
In the zookeeper.connect parameter, you can specify the local node's address first in the list for faster ZooKeeper access. For example, on node3, specify node3:2181 first in the list.
Press escape; type :wq and press Enter to save the changes.

4.17 Setting up Multi-Node Kafka Cluster—Step 6

Start the ZooKeeper server on each node.
Type sudo nohup /usr/local/kafka/bin/zookeeper-server-start.sh /usr/local/kafka/config/zookeeper.properties > /tmp/zk.out 2>/tmp/zk.err & and press Enter.
Start the kafka server on each node.
Type sudo nohup /usr/local/kafka/bin/kafka-server-start.sh /usr/local/kafka/config/server.properties > /tmp/kafka.out 2>/tmp/kafka.err & and press Enter.
Now that the cluster is set up, you can create a topic and verify sending and receiving messages.
This completes setting up the multi-node Kafka cluster.

4.18 Quiz

A few questions will be presented in the following screens. Select the correct option and click Submit to see the feedback.

4.19 Summary

Let us summarize the topics covered in this lesson:
• Kafka has multiple versions. We need to choose the latest stable version for installation.
• Version 0.8.2.1 is the current stable version of Kafka.
• Kafka can be installed by downloading the latest tarball.
• The recommended machine configurations to install Kafka are: Minimum 2GB RAM, 1 CPU for Kafka and ZooKeeper, and 1 TB hard disk.
• The ZooKeeper server has to be started before starting Kafka.

4.20 Conclusion

This concludes ‘Installation and Configuration.’
The next lesson is ‘Kafka Interfaces.’
