How to configure Storage Connector health metrics using Splunk

This article will demonstrate how to configure the Syncplicity Storage Connector to send its health metrics data feed to Splunk, and how to set up some basic graphs which can be used to monitor the performance and health of the Storage Connector.

Overview

Splunk helps you search, monitor, and analyze machine-generated big data via a web-style interface. Splunk captures, indexes, and correlates real-time data in a searchable repository from which it can generate graphs, reports, alerts, dashboards and visualizations.

This guide will walk you through the process of connecting Storage Connectors to Splunk.

Universal forwarder

The Syncplicity Storage Connector supports sending metrics to Splunk via TCP in plain text format. Alternatively, since Storage Connectors expose the same health data via their JMX interface, you can set up a Splunk “universal forwarder” to collect health metrics and logs and forward them to Splunk.

Set up Splunk

We assume that a Splunk instance is already installed and configured, and that the Splunk UI is available at host “splunk.internaldomain.com” on port “8000”. For installation and troubleshooting guidance, consult the Splunk documentation.

Storage Connector configuration

The first step is to add several new configuration settings to the Storage Connector config file. The default location of this config file is /etc/syncp-storage/syncp-storage.conf. The file is written in HOCON format; please see Using HOCON for reference.

You will need to add the following five settings to your config file:

syncplicity.health.enable

syncplicity.health.external.enable

syncplicity.health.external.host

syncplicity.health.external.port

syncplicity.health.external.prefix

It is recommended that you add these to the very end of the file.

The following steps show how to set the value for each of these settings.

Step 1

On the Storage Connector node, open the config file for editing.

Step 2

Ensure that health monitoring is enabled.

syncplicity.health.enable = true

Step 3

Enable health metrics to be exported to an external port.

syncplicity.health.external.enable = true

Step 4

Specify the Splunk host name:

syncplicity.health.external.host = splunk.internaldomain.com

Step 5

Specify the Splunk TCP input port.

syncplicity.health.external.port = 4003

Step 6

Set a prefix for the health metrics. When a prefix such as “myCluster.myNode” is specified, all metrics roll up under the myCluster.myNode.syncp.compute.v1.* path.

syncplicity.health.external.prefix = myCluster.myNode

Alternatively, you can use UNIX environment variables to set the prefix. For example, if you have these environment variables set:

HOST=$(hostname -s)

CLUSTER="myCluster"

then you can configure this setting in the config file as:

syncplicity.health.external.prefix = $CLUSTER.$HOST

Step 7

Save your changes to the config file and restart the Storage Connector Service.
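Putting steps 2 through 6 together, the block added at the end of /etc/syncp-storage/syncp-storage.conf will look like this (host, port, and prefix are the example values used in this guide; substitute your own):

```
# Health metrics export to Splunk (example values)
syncplicity.health.enable = true
syncplicity.health.external.enable = true
syncplicity.health.external.host = splunk.internaldomain.com
syncplicity.health.external.port = 4003
syncplicity.health.external.prefix = myCluster.myNode
```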

Splunk configuration

Now that you have changed the settings on the Storage Connector to enable health metrics and send them to Splunk, you need to make some changes within Splunk to start visualizing those metrics.

Step 8

In the top right menu, click Data inputs, then add a new TCP input on port 4003 and name it “storage”.

Step 9

Proceed to the next screen. For the source type, select “New”, choose Custom as the category, and name it “storage”.

Step 10

Proceed to the next screen. You should see the message “TCP input has been created successfully.”
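To confirm that the new TCP input accepts data before the Storage Connector starts sending, you can push a hand-crafted metric line at it. The sketch below is illustrative, not part of any Syncplicity tooling; it assumes the host and port used in this guide and the Storage Connector's plain-text line format (a dot-separated metric path, a value, and a Unix timestamp, one metric per line).

```python
import socket
import time

def format_metric(path, value, ts=None):
    # One metric in the plain-text line protocol described in this guide:
    # "<dot.separated.path> <value> <unix_timestamp>\n"
    if ts is None:
        ts = int(time.time())
    return f"{path} {value} {ts}\n".encode("ascii")

def send_metric(host, port, line):
    # Plain TCP write to Splunk's TCP data input, then close the connection.
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(line)

# Example, using the host/port assumed throughout this guide:
# send_metric("splunk.internaldomain.com", 4003,
#             format_metric("myCluster.myNode.syncp.compute.v1.test", 1))
```

After sending a test line, searching for source type “storage” in Splunk should show the event.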

Step 11

Now we’ll need to extract useful fields from the received metrics. Click the “Extract fields” button, then click the “I prefer to write the regular expression myself” link.

The metrics will be received in the following format:

${dot.separated.path} ${value} ${unix_timestamp}\n

We’ll assume the custom prefix is specified as clusterName.hostName.

Put the following regex into the “Regular Expression” text input in order to extract the cluster name, host name, metric id, metric value, and metric timestamp fields from the input events:
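As an illustrative sketch of what such an expression can look like (Splunk field extractions use PCRE named groups of the form (?<name>…); the exact regex for your deployment may differ), a pattern of the following shape captures all five fields. Here it is exercised with Python's re module, whose named-group syntax is (?P<name>…):

```python
import re

# Matches "<cluster>.<host>.<metric path> <value> <timestamp>".
# Assumes the prefix is exactly two dot-separated components
# (clusterName.hostName), as in the example above.
METRIC_RE = re.compile(
    r"^(?P<cluster>[^.]+)\.(?P<host>[^.]+)\.(?P<metric>\S+)\s+"
    r"(?P<value>\S+)\s+(?P<timestamp>\d+)$"
)

m = METRIC_RE.match("myCluster.myNode.syncp.compute.v1.fs.reads 42 1700000000")
print(m.group("cluster"), m.group("host"), m.group("metric"),
      m.group("value"), m.group("timestamp"))
# → myCluster myNode syncp.compute.v1.fs.reads 42 1700000000
```

For Splunk itself, the same pattern would be written with (?<cluster>…)-style groups rather than Python's (?P<cluster>…).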