How To Install Elasticsearch, Logstash, and Kibana (ELK Stack) on CentOS/RHEL 7

If you are a person who is, or has been in the past, in charge of inspecting and analyzing system logs in Linux, you know what a nightmare that task can become if multiple services are being monitored simultaneously.

In days past, that task had to be done mostly manually, with each log type being handled separately. Fortunately, the combination of Elasticsearch, Logstash, and Kibana on the server side, along with Filebeat on the client side, makes that once difficult task look like a walk in the park today.

The first three components form what is called an ELK stack, whose main purpose is to collect logs from multiple servers at the same time (also known as centralized logging).

A browser-based interface (provided by Kibana) allows you to inspect logs quickly at a glance for easier comparison and troubleshooting. These client logs are sent to the central server by Filebeat, which can be described as a log shipping agent.

Let’s see how all of these pieces fit together. Our test environment will consist of the following machines:

Please note that the RAM values provided here are not strict prerequisites, but recommended values for a successful implementation of the ELK stack on the central server. Less RAM on the clients will make little difference, if any.

Installing ELK Stack on the Server

Let’s begin by installing the ELK stack on the server, along with a brief explanation on what each component does:

Elasticsearch stores the logs that are sent by the clients.

Logstash processes those logs.

Kibana provides the web interface that will help us to inspect and analyze the logs.

Install the following packages on the central server. First off, we will install Java JDK version 8 (update 102, the latest one at the time of this writing), which is a dependency of the ELK components.

You may want to check the Java downloads page first to see if a newer update is available.
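As a minimal sketch, assuming you choose OpenJDK 8 from the CentOS base repositories rather than the Oracle RPM (either satisfies the ELK dependency), the installation looks like this:

```shell
# Install OpenJDK 8 from the base repositories (run as root).
yum install -y java-1.8.0-openjdk

# Confirm that the expected Java version is now the default:
java -version
```

If you install the Oracle JDK RPM instead, verify afterwards with the same `java -version` check.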

Input: Create /etc/logstash/conf.d/input.conf and insert the following lines into it. This is necessary for Logstash to “learn” how to process beats coming from clients. Make sure the path to the certificate and key match the right paths as outlined in the previous step:
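A minimal input.conf sketch, assuming the certificate and key were generated under /etc/pki/tls/ in the earlier Logstash step and that Logstash listens on the conventional Beats port 5044 (adjust paths and port if yours differ):

```
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
```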

Configure Filebeat

A word of caution here. Filebeat configuration is stored in a YAML file, which requires strict indentation. Be careful with this as you edit /etc/filebeat/filebeat.yml as follows:

Under paths, indicate which log files should be “shipped” to the ELK server.

Under prospectors:

input_type: log
document_type: syslog

Under output:

Uncomment the line that begins with logstash.

Indicate the IP address of your ELK server and port where Logstash is listening in hosts.

Make sure the path to the certificate points to the actual file you created in Step I (Logstash section) above.
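Putting the three changes together, the relevant sections of /etc/filebeat/filebeat.yml would look roughly like the following. This uses Filebeat 1.x syntax (`prospectors`, `document_type`, `tls`); the IP address 192.168.0.29 and the certificate path are placeholders, so substitute your ELK server's address and the path from Step I:

```yaml
filebeat:
  prospectors:
    -
      paths:
        - /var/log/messages
        - /var/log/secure
      input_type: log
      document_type: syslog
output:
  logstash:
    hosts: ["192.168.0.29:5044"]
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
```

Remember that YAML is indentation-sensitive: use spaces, not tabs, and keep nesting levels consistent.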

The above steps are illustrated in the following image:

Configure Filebeat in Client Servers

Save changes, and then restart Filebeat on the clients:

# systemctl restart filebeat

Once we have completed the above steps on the clients, feel free to proceed.

Testing Filebeat

In order to verify that the logs from the clients can be sent and received successfully, run the following command on the ELK server:

# curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty'

The output should be similar to (notice how messages from /var/log/messages and /var/log/secure are being received from client1 and client2):
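If you only want to confirm that documents are arriving rather than read the full JSON, Elasticsearch's standard `_count` endpoint returns just the number of matching documents (this assumes Elasticsearch is listening on localhost:9200 as configured above):

```shell
# Count documents across all filebeat-* indices instead of dumping full hits.
curl -s -XGET 'http://localhost:9200/filebeat-*/_count?pretty'
# A non-zero "count" value confirms that client logs are being indexed.
```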

Testing Filebeat

Otherwise, check the Filebeat configuration file for errors. Running

# journalctl -xe

after attempting to restart Filebeat will point you to the offending line(s).
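Filebeat can also validate its configuration before you restart the service. The flag below is the Filebeat 1.x form; on 6.x and later releases the equivalent subcommand is `filebeat test config`:

```shell
# Validate /etc/filebeat/filebeat.yml without starting the service.
# Filebeat 1.x syntax; on 6.x+ use: filebeat test config
filebeat -e -configtest
```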

Testing Kibana

After we have verified that logs are being shipped by the clients and received successfully on the server, the first thing we will have to do in Kibana is configure an index pattern and set it as the default.

You can think of an index as the equivalent of a full database in a relational context. We will go with filebeat-* (or you can use more precise search criteria, as explained in the official documentation).

Enter filebeat-* in the Index name or pattern field and then click Create:

Testing Kibana

Please note that you will be allowed to enter more fine-grained search criteria later. Next, click the star inside the green rectangle to set it as the default index pattern:

Configure Default Kibana Index Pattern

Finally, in the Discover menu you will find several fields to add to the log visualization report. Just hover over them and click Add:

Add Log Visualization Report

The results will be shown in the central area of the screen as shown above. Feel free to play around (add and remove fields from the log report) to become familiar with Kibana.

By default, Kibana will display the records that were processed during the last 15 minutes (see upper right corner) but you can change that behavior by selecting another time frame:

Kibana Log Reports

Summary

In this article we have explained how to set up an ELK stack to collect the system logs sent by two clients, a CentOS 7 and a Debian 8 machine.


Gabriel Cánepa is a GNU/Linux sysadmin and web developer from Villa Mercedes, San Luis, Argentina. He works for a worldwide leading consumer product company and takes great pleasure in using FOSS tools to increase productivity in all areas of his daily work.
