This topic describes how to install community connectors that are not available from Confluent Hub.
If a connector is not available on Confluent Hub, you must first obtain or build its JARs, and then install them
into your Apache Kafka® installation.

Important

Confluent Hub hosts many popular connectors developed by companies, open source organizations, and
individuals. If a connector is available on Confluent Hub, you can skip this topic.

In the following example, the HDFS Sink Connector is installed manually.

Clone the GitHub repo for the connector.

git clone https://github.com/confluentinc/kafka-connect-hdfs.git

Navigate to your cloned repo, check out the version you want, and build the JAR with Maven.
Typically, you will want to check out a released version. This example uses the v3.0.1 release tag:

cd kafka-connect-hdfs; git checkout v3.0.1; mvn package

Locate the connector's uber JAR or plugin directory,
and copy it into one of the directories on the Kafka Connect worker's plugin path.
For example, if the plugin path includes the /usr/local/share/kafka/plugins directory, you can use the following
technique to make the connector available as a plugin.
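The worker's plugin path is configured with the plugin.path property in its configuration file (for example, connect-distributed.properties); a minimal fragment might look like:

```properties
# Comma-separated list of directories that Kafka Connect scans for plugins
plugin.path=/usr/local/share/kafka/plugins
```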

If the connector build creates an uber JAR file named kafka-connect-hdfs-3.0.1-package.jar, you can
copy that file into the /usr/local/share/kafka/plugins directory:
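Assuming the Maven build placed the uber JAR under the project's target/ directory, the copy might look like this:

```shell
# Copy the uber JAR into a directory on the worker's plugin path
# (adjust the source path to wherever your build produced the JAR)
cp target/kafka-connect-hdfs-3.0.1-package.jar /usr/local/share/kafka/plugins/
```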

If you're running Kafka Connect distributed worker processes, you must repeat these steps on every machine.
Each connector must be available on all workers, since Kafka Connect can assign connector tasks to any
worker in the cluster.
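One way to distribute the JAR, assuming SSH access and using hypothetical hostnames (worker1, worker2) with the same plugin directory on each machine, is a simple loop:

```shell
# Hypothetical worker hostnames; adjust the list to your environment
for host in worker1 worker2; do
  scp target/kafka-connect-hdfs-3.0.1-package.jar "$host":/usr/local/share/kafka/plugins/
done
```

After copying, restart each worker process so it rescans the plugin path and picks up the new connector.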