Elasticsearch output plugin v9.1.3

Getting Help

For questions about the plugin, open a topic in the Discuss forums. For bugs or feature requests, open an issue on GitHub.
For the list of Elastic supported plugins, please consult the Elastic Support Matrix.

Description

Compatibility Note

Starting with Elasticsearch 5.3, there’s an HTTP setting
called http.content_type.required. If this option is set to true, and you
are using Logstash 2.4 through 5.2, you need to update the Elasticsearch output
plugin to version 6.2.5 or higher.

This plugin is the recommended method of storing logs in Elasticsearch.
If you plan on using the Kibana web interface, you’ll want to use this output.

This output only speaks the HTTP protocol. HTTP is the preferred protocol for interacting with Elasticsearch as of Logstash 2.0.
We strongly encourage the use of HTTP over the node protocol for a number of reasons. HTTP is only marginally slower,
yet far easier to administer and work with. When using the HTTP protocol one may upgrade Elasticsearch versions without having
to upgrade Logstash in lock-step.
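As a quick illustration, a minimal pipeline output using this plugin over HTTP might look like the following sketch (the host and index values are placeholders):

```conf
output {
  elasticsearch {
    # Placeholder host; any reachable HTTP endpoint of the cluster works
    hosts => ["http://127.0.0.1:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
```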

Template management for Elasticsearch 5.x

The index template for this version (Logstash 5.0) has been changed to reflect Elasticsearch’s mapping changes in version 5.0. Most importantly, the subfield for string multi-fields has changed from .raw to .keyword to match the ES default behavior.

Users installing ES 5.x and LS 5.x

This change will not affect you and you will continue to use the ES defaults.

Users upgrading from LS 2.x to LS 5.x with ES 5.x

LS will not force-upgrade the template if a logstash template already exists. This means you will still use
.raw for sub-fields coming from 2.x. If you choose to use the new template, you will have to reindex your data after
the new template is installed.

Retry Policy

The retry policy has changed significantly in the 8.1.1 release.
This plugin uses the Elasticsearch bulk API to optimize its imports into Elasticsearch. These requests may experience
either partial or total failures. The bulk API sends batches of requests to an HTTP endpoint. Error codes for the HTTP
request are handled differently than error codes for individual documents.

HTTP requests to the bulk API are expected to return a 200 response code. All other response codes are retried indefinitely.

Document-level errors are handled as follows:

400 and 404 errors are sent to the dead letter queue (DLQ), if enabled. If a DLQ is not enabled, a log message will be emitted, and the event will be dropped. See DLQ Policy for more info.

409 errors (conflict) are logged as a warning and dropped.

Note that 409 exceptions are no longer retried. Please set a higher retry_on_conflict value if you experience 409 exceptions.
It is more performant for Elasticsearch to retry these exceptions than this plugin.
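As a sketch, an update-action configuration that delegates conflict retries to Elasticsearch could look like this (the my_id event field is an assumption):

```conf
output {
  elasticsearch {
    hosts => ["http://127.0.0.1:9200"]
    action => "update"
    document_id => "%{[my_id]}"   # assumed event field carrying the document id
    retry_on_conflict => 5        # let Elasticsearch retry 409 conflicts server-side
  }
}
```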

DLQ Policy

Mapping (404) errors from Elasticsearch can lead to data loss. Unfortunately
mapping errors cannot be handled without human intervention and without looking
at the field that caused the mapping mismatch. If the DLQ is enabled, the
original events causing the mapping errors are stored in a file that can be
processed at a later time. Often, the offending field can be removed and the
event re-indexed to Elasticsearch. If the DLQ is not enabled, and a mapping error
happens, the problem is logged as a warning, and the event is dropped. See
dead-letter-queues for more information about processing events in the DLQ.
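For reference, the DLQ is switched on in logstash.yml; a separate pipeline using the dead_letter_queue input plugin can then read and repair the stored events.

```conf
# logstash.yml
dead_letter_queue.enable: true
```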

Batch Sizes

This plugin attempts to send batches of events as a single request. However, if
a request exceeds 20MB, it is broken up into multiple batch requests. If a single document exceeds 20MB, it is sent as a single request.

DNS Caching

This plugin uses the JVM to look up DNS entries and is subject to the value of networkaddress.cache.ttl,
a global setting for the JVM.

As an example, to set your DNS TTL to 1 second you would set
the LS_JAVA_OPTS environment variable to -Dnetworkaddress.cache.ttl=1.

Keep in mind that a connection with keepalive enabled will
not reevaluate its DNS value while the keepalive is in effect.
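For example, the TTL setting above can be exported before starting Logstash:

```shell
# Cap the JVM-wide DNS cache at 1 second so re-resolved
# entries are picked up quickly after a failover.
export LS_JAVA_OPTS="-Dnetworkaddress.cache.ttl=1"
```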

HTTP Compression

This plugin supports request and response compression. Response compression is enabled by default; for
Elasticsearch versions 5.0 and later, the user doesn’t have to set any configs in Elasticsearch for
it to send back a compressed response. For versions before 5.0, http.compression must be set to true in
Elasticsearch to take advantage of response compression when using this plugin.

For request compression, regardless of the Elasticsearch version, users have to enable the http_compression
setting in their Logstash config file.
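For instance, request compression is turned on per output, as sketched below:

```conf
output {
  elasticsearch {
    hosts => ["http://127.0.0.1:9200"]
    http_compression => true   # gzip request bodies sent to Elasticsearch
  }
}
```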

Elasticsearch Output Configuration Options

This plugin supports the following configuration options plus the Common Options described later.

action

create: indexes a document; fails if a document with that id already exists in the index.

update: updates a document by id. Update has a special case where you can upsert: update a
document if it is not already present. See the upsert option. NOTE: This does not work and is not supported
in Elasticsearch 1.x. Please upgrade to ES 2.x or greater to use this feature with Logstash!

A sprintf style string to change the action based on the content of the event. The value %{[foo]}
would use the foo field for the action.
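As a sketch, the es_action field name below is an assumption; any event field holding a valid action value would work:

```conf
output {
  elasticsearch {
    hosts => ["http://127.0.0.1:9200"]
    action => "%{[es_action]}"   # e.g. an event field set to "create" or "update" upstream
  }
}
```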

document_type

Note: This option is deprecated due to the removal of types in Elasticsearch 6.0.
It will be removed in the next major version of Logstash.
This sets the document type to write events to. Generally you should try to write only
similar events to the same type. String expansion %{foo} works here.
If you don’t set a value for this option:

for elasticsearch clusters 6.x and above: the value of doc will be used;

for elasticsearch clusters 5.x and below: the event’s type field will be used, if the field is not present the value of doc will be used.

healthcheck_path

HTTP Path where a HEAD request is sent when a backend is marked down.
The request is sent in the background to see if the backend has come back,
before it is once again eligible to service requests.
If you have custom firewall rules, you may need to change this path.
hosts

Sets the host(s) of the remote instance. If given an array, it will load balance requests across the hosts specified in the hosts parameter.
Remember the http protocol uses the http address (e.g. 9200, not 9300). Examples:

"127.0.0.1"
["127.0.0.1:9200","127.0.0.2:9200"]
["http://127.0.0.1"]
["https://127.0.0.1:9200"]
["https://127.0.0.1:9200/mypath"] (if using a proxy on a subpath)

It is important to exclude dedicated master nodes from the hosts list
to prevent LS from sending bulk requests to the master nodes. This parameter should therefore only reference data or client nodes in Elasticsearch.

Any special characters present in the URLs here MUST be URL escaped! This means # should be put in as %23 for instance.
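For example, a hosts list that load balances across two data nodes (hostnames are placeholders) might look like:

```conf
output {
  elasticsearch {
    # Data nodes only; dedicated master nodes deliberately excluded
    hosts => ["https://es-data-1.example.com:9200",
              "https://es-data-2.example.com:9200"]
  }
}
```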

index

The index to write events to. This can be dynamic using the %{foo} syntax.
The default value will partition your indices by day so you can more easily
delete old data or only search specific date ranges.
Indexes may not contain uppercase characters.
For weekly indexes, ISO 8601 format is recommended, e.g. logstash-%{+xxxx.ww}.
LS uses Joda to format the index pattern from the event timestamp.
Joda formats are defined here.
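For instance, a weekly index pattern using the week-based Joda tokens mentioned above:

```conf
output {
  elasticsearch {
    hosts => ["http://127.0.0.1:9200"]
    index => "logstash-%{+xxxx.ww}"   # week-based year + week of year, from the event timestamp
  }
}
```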

keystore_password

manage_template

From Logstash 1.3 onwards, a template is applied to Elasticsearch during
Logstash’s startup if one with the name template_name does not already exist.
By default, the contents of this template is the default template for
logstash-%{+YYYY.MM.dd} which always matches indices based on the pattern
logstash-*. Should you require support for other index names, or would like
to change the mappings in the template in general, a custom template can be
specified by setting template to the path of a template file.

Setting manage_template to false disables this feature. If you require more
control over template creation, (e.g. creating indices dynamically based on
field names) you should set manage_template to false and use the REST
API to apply your templates manually.
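A sketch of a custom-template setup (the template file path and name are placeholders):

```conf
output {
  elasticsearch {
    hosts => ["http://127.0.0.1:9200"]
    template => "/etc/logstash/templates/my-template.json"  # placeholder path to a template file
    template_name => "my-template"
    template_overwrite => true
  }
}
```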

parameters

Pass a set of key value pairs as the URL query string. This query string is added
to every host listed in the hosts configuration. If the hosts list contains
URLs that already have query strings, the one specified here will be appended.

password

path

HTTP Path at which the Elasticsearch server lives. Use this if you must run Elasticsearch behind a proxy that remaps
the root path of the Elasticsearch HTTP API.
Note that if you use paths as components of URLs in the hosts field, you may
not also set this field; that will raise an error at startup.
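For example, if a proxy exposes the cluster under a subpath (addresses are placeholders):

```conf
output {
  elasticsearch {
    hosts => ["http://proxy.example.com:8080"]  # placeholder proxy address
    path => "/elasticsearch"                    # subpath the proxy remaps to the ES root
  }
}
```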

pool_max

While the output tries to reuse connections efficiently we have a maximum.
This sets the maximum number of open connections the output will create.
Setting this too low may mean frequently closing / opening connections
which is bad.

pool_max_per_route

While the output tries to reuse connections efficiently we have a maximum per endpoint.
This sets the maximum number of open connections per endpoint the output will create.
Setting this too low may mean frequently closing / opening connections
which is bad.

script_lang

Set the language of the used script. If not set, this defaults to painless in ES 5.0.
When using indexed (stored) scripts on Elasticsearch 6 and higher, you must set this parameter to "" (empty string).

scripted_upsert

sniffing

This setting asks Elasticsearch for the list of all cluster nodes and adds them to the hosts list.
For Elasticsearch 1.x and 2.x any nodes with http.enabled (on by default) will be added to the hosts list, including master-only nodes!
For Elasticsearch 5.x and 6.x any nodes with http.enabled (on by default) will be added to the hosts list, excluding master-only nodes.

sniffing_delay

sniffing_path

HTTP Path to be used for the sniffing requests.
The default value is computed by concatenating the path value and "_nodes/http".
If sniffing_path is set, it will be used as an absolute path.
Do not use a full URL here, only paths, e.g. "/sniff/_nodes/http".

ssl

Enable SSL/TLS secured communication to Elasticsearch cluster. Leaving this unspecified will use whatever scheme
is specified in the URLs listed in hosts. If no explicit protocol is specified plain HTTP will be used.
If SSL is explicitly disabled here, the plugin will refuse to start if an HTTPS URL is given in hosts.
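A sketch of an SSL-enabled output with basic authentication (credentials and host are placeholders; the password is assumed to come from the Logstash keystore or environment):

```conf
output {
  elasticsearch {
    hosts => ["es.example.com:9200"]   # no scheme given, so ssl decides the protocol
    ssl => true
    user => "logstash_writer"          # placeholder user
    password => "${ES_PWD}"            # placeholder secret reference
  }
}
```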

template_name

This configuration option defines how the template is named inside Elasticsearch.
Note that if you have used the template management features and subsequently
change this, you will need to prune the old template manually.

template_overwrite

The template_overwrite option will always overwrite the indicated template
in Elasticsearch with either the one indicated by template or the included one.
This option is set to false by default. If you always want to stay up to date
with the template provided by Logstash, this option could be very useful to you.
Likewise, if you have your own template file managed by puppet, for example, and
you wanted to be able to update it regularly, this option could help there as well.

Please note that if you are using your own customized version of the Logstash
template (logstash), setting this to true will make Logstash overwrite
the "logstash" template (i.e. remove all customized settings).

user

validate_after_inactivity

How long to wait before checking whether a connection is stale before executing a request on a connection using keepalive.
You may want to set this lower if you get connection errors regularly.
Quoting the Apache Commons docs (this client is based on Apache Commons):
Defines period of inactivity in milliseconds after which persistent connections must
be re-validated prior to being leased to the consumer. Non-positive value passed to
this method disables connection validation. This check helps detect connections that
have become stale (half-closed) while kept inactive in the pool.
See these docs for more info

id

Add a unique ID to the plugin configuration. If no ID is specified, Logstash will generate one.
It is strongly recommended to set this ID in your configuration. This is particularly useful
when you have two or more plugins of the same type, for example, if you have two elasticsearch outputs.
Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.
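For example, two outputs distinguished by explicit IDs (names and indices are placeholders):

```conf
output {
  elasticsearch {
    id => "app_logs_es"       # unique id surfaced by the monitoring APIs
    hosts => ["http://127.0.0.1:9200"]
    index => "app-%{+YYYY.MM.dd}"
  }
  elasticsearch {
    id => "audit_logs_es"
    hosts => ["http://127.0.0.1:9200"]
    index => "audit-%{+YYYY.MM.dd}"
  }
}
```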