Logstash is a lightweight, open-source, server-side data processing pipeline that allows you to collect data from a wide variety of sources, transform it on the fly, and send it to your desired destination. It is most often used as a data pipeline for Elasticsearch, a popular analytics and search engine. Logstash is a popular choice for loading data into Elasticsearch because of its tight integration, powerful log processing capabilities, and over 200 pre-built open-source plugins that can help you get your data indexed the way you want it.

With over 200 plugins already available on GitHub, it is likely that someone has already built the plugin you need to customize your data pipeline. But if none suits your requirements, you can easily write your own plugin.
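A Logstash pipeline is defined in a configuration file with three sections: inputs collect the data, filters transform it, and outputs ship it to a destination. A minimal sketch (the `pipeline` field name here is purely illustrative):

```
# Minimal Logstash pipeline: read events from stdin,
# tag each event with an extra field, and print to stdout.
input {
  stdin { }
}

filter {
  mutate {
    add_field => { "pipeline" => "demo" }  # illustrative field, not required
  }
}

output {
  stdout { codec => rubydebug }  # pretty-print each event for inspection
}
```

Each section can hold any number of plugins; Logstash applies filters to every event in order before handing it to the outputs.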

Capture server logs and push them into your Elasticsearch cluster using Logstash. Elasticsearch indexes the data and makes it available for analysis in near real-time (less than one second). You can then use Kibana to visualize the data and perform operational analyses, such as identifying network issues and disk I/O problems. Your on-call teams can run statistical aggregations to identify root causes and fix issues.
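As a sketch of this use case, the pipeline below tails an Apache access log, parses each line with the built-in `COMBINEDAPACHELOG` grok pattern, and indexes the result into Elasticsearch. The log path and cluster endpoint are placeholders you would replace with your own:

```
# Hypothetical pipeline: tail Apache access logs and index them
# into Elasticsearch. Path and endpoint below are placeholders.
input {
  file {
    path => "/var/log/apache2/access.log"
    start_position => "beginning"   # read existing contents on first run
  }
}

filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }  # parse common web-log fields
  }
  date {
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]  # use the log's own timestamp
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]   # replace with your cluster endpoint
    index => "weblogs-%{+YYYY.MM.dd}"    # daily indices simplify retention
  }
}
```

With the fields parsed out (client IP, response code, bytes, user agent, and so on), Kibana dashboards and aggregations over them become straightforward.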

It’s easy to get started with Logstash on AWS. Amazon Elasticsearch Service supports integration with Logstash. Simply sign in to the AWS Management Console, launch your first Amazon Elasticsearch Service domain, and start loading your data from your Logstash server.
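Pointing an existing pipeline at an Amazon Elasticsearch Service domain is typically just a change to the output section; the domain endpoint below is a placeholder for the one shown on your domain's dashboard:

```
# Hypothetical output block targeting an Amazon Elasticsearch
# Service domain; the endpoint URL is a placeholder.
output {
  elasticsearch {
    hosts => ["https://my-domain.us-east-1.es.amazonaws.com:443"]  # your domain endpoint
    index => "app-logs-%{+YYYY.MM.dd}"
  }
}
```

If your domain requires IAM-signed requests rather than open or IP-based access, the community `logstash-output-amazon_es` plugin can sign requests with AWS credentials.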

Depending on your specific use case, there are a number of alternative solutions that can help you more easily ingest data into Elasticsearch. Amazon Elasticsearch Service offers built-in integrations with Amazon Kinesis Firehose, Amazon CloudWatch Logs, and AWS IoT for this purpose. You can also build your own data pipeline using open-source solutions such as Apache Kafka and Fluentd. For more information, see the Amazon Elasticsearch Service Data Ingestion page.