Logstash 5.3.0 released

We are pleased to announce the release of Logstash 5.3.0. If you can't wait to get your hands on it, head straight to our downloads page. You can also view the release notes here.

Persistent Queues

We've made a few important enhancements and resiliency improvements to the persistent queue feature. Among them:

Breaking Change: The default queue location on disk has been changed to include the pipeline ID in the path hierarchy. This change was made to accommodate an upcoming feature where multiple, isolated pipelines could be run on the same Logstash instance. In this upcoming feature, each pipeline will have its own queue; hence the new directory structure. Please consult the breaking changes docs for upgrade instructions.
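As an illustration (the default data path is assumed here), the queue for the default pipeline, whose ID is main, now lives in a pipeline-specific subdirectory:

```
# Before 5.3: queue files lived directly under the queue path
<path.data>/queue

# 5.3 and later: one subdirectory per pipeline ID
<path.data>/queue/main
```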

Added an automatic recovery process that runs when Logstash starts up to recover data that was written to the persistent queue but not yet checkpointed. This is useful when an input has written data to the queue, but Logstash crashed before updating the checkpoint file.

Added exclusive access to the persistent queue on disk, as defined by the path.queue setting. Using a file lock, we ensure that only a single Logstash instance can write to the queue at a given path, guarding against corruption.
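For reference, a minimal logstash.yml sketch that enables the persistent queue and sets an explicit queue path (the path shown is just an example):

```yaml
# logstash.yml
queue.type: persisted                # default is "memory"
path.queue: /var/lib/logstash/queue  # example path; defaults to a "queue" dir under path.data
```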

You can now safely reload the pipeline config when using persistent queues. Previously, reloading a pipeline could cause data corruption. In 5.3, the reload sequence has been changed to reliably shut down the first pipeline before a new one is started with the same settings. The file locks described above also ensure that multiple pipelines don't concurrently modify the same queue.
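If you rely on automatic config reloading, these are the relevant logstash.yml settings (the values shown are illustrative):

```yaml
# logstash.yml
config.reload.automatic: true   # watch config files and reload the pipeline on change
config.reload.interval: 3       # seconds between checks for config changes
```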

While the persistent queue feature is still in beta for 5.3.0, we've incorporated feedback from our users and fixed important bugs reported during this phase. We have a couple of features yet to complete before we can remove the beta tag from this feature. Please stay tuned for updates; as always, your feedback is very welcome!

X-Pack Monitoring

After releasing the monitoring UI for Logstash in 5.2 as part of the X-Pack basic license, we received lots of feedback from users. In this release, we've fixed a few defects and added more graphs!

Cgroup information: Version 5.2 added the ability to collect and report cgroup information in the monitoring API. These stats are useful when you are running Logstash in a container. Version 5.3 brings new charts that show cgroup information alongside the regular machine-level CPU metrics. There are also new graphs under the “Advanced” tab showing how often the container is throttled and for how long.

Persistent queue stats: Under the “Advanced” tab, a new graph shows the number of events queued on disk over time. This provides a better understanding of the event lag during ingestion.
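The underlying numbers are also available from the monitoring API on port 9600; for example, the pipeline stats endpoint reports queue information alongside event counts (the exact response shape may vary by version):

```shell
curl -XGET 'localhost:9600/_node/stats/pipeline?pretty'
```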

Plugins

To add to the awesomeness, this release is also packed with tons of plugin goodies. Here are some highlights:

Moar Lookup Enrichment

Have you ever dreamed that Logstash could conduct streaming joins against your content management databases or data warehouse dimension tables? Well, now you can! The new JDBC_streaming filter enables you to look up and enrich your events with database data. Since this plugin executes a JDBC query per event, the network round trip cost can make it a performance bottleneck, so we've added an LRU caching layer to mitigate this. A big thank you to our community maintainer Philippe Weber (wiibaa) for the initial contribution!
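A minimal sketch of the filter in action, assuming a hypothetical employees table and example connection details:

```conf
filter {
  jdbc_streaming {
    jdbc_driver_library => "/path/to/mysql-connector-java.jar"  # example path
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/mydb"
    jdbc_user => "logstash"
    jdbc_password => "secret"
    # :sender is bound from the event's user_id field on every lookup
    statement => "SELECT name, department FROM employees WHERE id = :sender"
    parameters => { "sender" => "user_id" }
    target => "employee"   # query results land under this event field
  }
}
```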

In the near future, we’ll introduce another filter plugin (JDBC_static) which will cache a full JDBC query result set natively in Logstash at startup time for increased throughput and scalability at lookup time. More details to come!

Lastly, the Elasticsearch filter has been a popular plugin for lookup enrichment against Elasticsearch. We’re doubling down on this use case and the plugin is now officially Elastic supported. For a full list of Elastic supported Logstash plugins, please see our support matrix.
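As a reminder of what lookup enrichment with the Elasticsearch filter looks like, here is a sketch along the lines of the plugin docs (the query and field names are illustrative):

```conf
filter {
  elasticsearch {
    hosts => ["localhost:9200"]
    # find the "start" event that shares this event's operation id
    query => "type:start AND operation:%{[opid]}"
    # copy fields from the matched document onto the current event
    fields => { "@timestamp" => "started" }
  }
}
```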

Elasticsearch Output Enhancements

Elasticsearch 5.3 introduced an option to require content type headers with any incoming HTTP request. If using Elasticsearch 5.3 with this option turned on...