Logstash


Logstash is an open source, server-side data processing pipeline that ingests data from a multitude of sources simultaneously, transforms it, and then sends it to your favorite “stash.”

Best Practices

Release and tune this chart once per Logstash pipeline

To achieve multiple pipelines with this chart, current best practice is to maintain one pipeline per chart release. In this way, the configuration is simplified and pipelines are more isolated from one another.
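For example, two separate pipelines would be deployed as two separate chart releases, each with its own values file. A minimal sketch (the release names, chart reference `elastic/logstash`, and values filenames are illustrative, not prescribed by this chart):

```
# One chart release per pipeline, each configured by its own values file
helm install logstash-app-logs elastic/logstash -f app-logs-values.yaml
helm install logstash-audit-logs elastic/logstash -f audit-logs-values.yaml
```

Each release then runs in isolation, so a misbehaving pipeline can be upgraded, tuned, or rolled back without touching the others.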

Default Pipeline: Beats Input -> Elasticsearch Output

Current best practice for ELK logging is to ship logs from hosts using Filebeat to Logstash, where persistent queues are enabled. Filebeat supports both structured (e.g. JSON) and unstructured (e.g. raw log line) log shipment.
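The default pipeline described above can be sketched as a minimal Logstash pipeline configuration. The port 5044 is the conventional Beats port; the Elasticsearch host shown is an assumption and will differ per deployment:

```
# Beats input -> Elasticsearch output (hostname is illustrative)
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["http://elasticsearch-master:9200"]
  }
}
```

To enable the persistent queue mentioned above, set `queue.type: persisted` in `logstash.yml`, so in-flight events survive a Logstash restart.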

Load Beats-generated index template into Elasticsearch

To best utilize the combination of Beats, Logstash, and Elasticsearch, load Beats-generated index templates into Elasticsearch as described in the Beats documentation.

On a Linux instance outside the Kubernetes cluster, you might run the following command to load that instance’s Beats-generated index template into Elasticsearch (the Elasticsearch hostname will vary).
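A sketch of that command using Filebeat’s `setup` subcommand, which loads the index template directly into Elasticsearch while the normal Logstash output is temporarily disabled (the Elasticsearch hostname is an assumption and will vary per deployment):

```
filebeat setup --index-management \
  -E output.logstash.enabled=false \
  -E 'output.elasticsearch.hosts=["elasticsearch-master:9200"]'
```

This only needs to be run once per Beats version, since the index template is versioned alongside Filebeat.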

As data travels from source to store, Logstash filters parse each event, identify named fields to build the structure, and transform them to converge on a common format for easier, accelerated analysis and business value.
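As a sketch of such a filter stage, the following Logstash configuration parses unstructured Apache-style access log lines into named fields and normalizes the event timestamp (the log format and field names are illustrative assumptions):

```
# Parse a raw log line into named fields, then set @timestamp from the event itself
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
    target => "@timestamp"
  }
  mutate {
    remove_field => [ "timestamp" ]
  }
}
```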

Centrally Manage Deployments With a Single UI

Take the helm of your Logstash deployments with the Pipeline Management UI, which makes orchestrating and managing your pipelines a breeze. The management controls also integrate seamlessly with the built-in security features to prevent any unintended rewiring.