Installation

Unzip the package in the path that we selected for the installation; in my particular case I chose /opt for all my Big Data packages.

cd /opt
tar -xvzf nifi-1.4.0-bin.tar.gz

In the case that I describe, the user who owns my Big Data infrastructure is hadoop, so I will assign it the necessary permissions; you can use whatever user you have available, or create a dedicated user called nifi.

chown -R hadoop: /opt/nifi-1.4.0/

As a recommendation, I propose creating a symbolic link so the installation has a simple, version-independent name.

This makes it easier to reference the path in scripts and environment variables, while the link target still records the specific version we have installed.

ln -s /opt/nifi-1.4.0/ /opt/nifi
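The layout can be sketched and verified in a scratch directory first (using a temporary path here instead of /opt, so nothing on the real system is touched):

```shell
# Recreate the versioned-directory-plus-symlink layout in a scratch
# location (the temp path is illustrative; the real install lives under /opt).
tmp=$(mktemp -d)
mkdir -p "$tmp/nifi-1.4.0"
ln -s "$tmp/nifi-1.4.0" "$tmp/nifi"

# The short name resolves to the versioned directory:
readlink "$tmp/nifi"
```

A later upgrade then only needs the link repointed, e.g. `ln -sfn /opt/nifi-1.5.0 /opt/nifi` (version 1.5.0 is hypothetical here).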

Let’s Configure NiFi

Before starting the application, we should review the main configuration file:

vi /opt/nifi/conf/nifi.properties

Here we can change, for example, the default HTTP port, which is 8080, as well as the settings related to cluster mode, Kerberos, and ZooKeeper.
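For reference, these are the relevant entries as they appear in a stock nifi.properties from a 1.4.0 install (the cluster and ZooKeeper values shown are the unconfigured defaults):

```properties
# Web UI port
nifi.web.http.port=8080

# Cluster mode (single-node installs leave this false)
nifi.cluster.is.node=false

# ZooKeeper connect string, used when clustering is enabled
nifi.zookeeper.connect.string=

# Kerberos configuration file, if Kerberos is in use
nifi.kerberos.krb5.file=
```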

NiFi Prerequisites

Have Java installed.

Have the Java environment variables set in the .bash_profile file.
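A minimal .bash_profile fragment might look like the following (the JDK path is an assumption; point it at wherever your Java is actually installed):

```shell
# Illustrative Java environment setup for ~/.bash_profile.
# The JDK location below is an assumption; adjust it to your own install.
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk
export PATH="$JAVA_HOME/bin:$PATH"
```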

Starting the Service


To start the NiFi service, we can either run it manually or install it as a service; let’s explore both options.
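For the manual option, the nifi.sh script in the bin directory handles start, stop, and status directly (these require an actual NiFi install, so they are shown for illustration):

```shell
cd /opt/nifi
bin/nifi.sh start    # launch NiFi in the background
bin/nifi.sh status   # check whether it is running
bin/nifi.sh stop     # shut it down
```

There is also `bin/nifi.sh run`, which keeps NiFi in the foreground, handy for watching the log output during a first start.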

Starting NiFi as a Linux Service

Let’s start with the installation as a service by going to the NiFi home, where the binaries are located.

cd /opt/nifi/
bin/nifi.sh install

Remember that we can choose the service name that will identify our NiFi instance. Since it is a dataflow tool, we could register it under that name, or under whatever name we find convenient as administrators.

cd /opt/nifi/
bin/nifi.sh install dataflow

If you do not choose a name, remember that the service will use the default name, nifi.

Once installed, we can manage the service with the classic commands for it:
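With the init script registered, the standard service commands apply (shown with the default service name nifi; substitute dataflow, or whatever name you chose at install time):

```shell
service nifi start     # start the dataflow service
service nifi status    # verify it is running
service nifi stop      # stop it
```

On some distributions you may additionally want to enable the service at boot, e.g. with `chkconfig nifi on` (RHEL/CentOS) or `update-rc.d nifi defaults` (Debian/Ubuntu), depending on your init system.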