Issue with Logstash 6.0.0 document_type when writing to Elasticsearch 6.x

We’d like to alert users to behavior in Logstash 6.0.0 that can cause errors when writing to Elasticsearch 6.0+ clusters. When Logstash attempts to index events that would result in more than one type value in the same index, Elasticsearch rejects the request and Logstash logs indexing errors. These errors look similar to the following example, which has been shortened from the full message:

[2017-11-21T14:26:01,991][WARN ][logstash.outputs.elasticsearch] Could not index
event to Elasticsearch. {:status=>400, :response=>{"error"=>{"reason"=>"Rejecting
mapping update to [myindex] as the final mapping would have more than 1 type:
[type1, type2]"}}}

Users are likely to encounter this error when Logstash is receiving data from:

multiple types of Beats

instances of Filebeat tailing multiple files with different types

multiple Logstash inputs that specify different type values
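As an illustration of the last case, two inputs that set different type values will produce documents with conflicting types in the same index. This is a sketch, not a configuration from the original report; the ports and type values are placeholders:

```
input {
  beats {
    port => 5044
    type => "type1"   # events from this input carry type "type1"
  }
  tcp {
    port => 5000
    type => "type2"   # events from this input carry type "type2"
  }
}
```

When both inputs write to the same Elasticsearch 6.0+ index, the second type triggers the mapping rejection shown above.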

To work around this problem in Logstash 6.0.0, add the setting document_type => "doc" to the Elasticsearch output configuration. We will soon issue a new version of Logstash with a patch for this issue.
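A minimal sketch of the workaround, assuming a typical Elasticsearch output (the hosts and index values here are placeholders):

```
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "myindex"
    # Force a single mapping type so Elasticsearch 6.0+ accepts all events
    document_type => "doc"
  }
}
```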

Logstash has historically used the value of the type field to set the Elasticsearch document type by default. Elasticsearch 6.0 no longer supports more than one type per index, which is why our upcoming fix applies the new behavior only when writing to Elasticsearch 6.0+ clusters.

Please read on for more information about document types with Logstash and Elasticsearch 6.0.

As of Elasticsearch 6.0, document types are on the way out, and only a single mapping type per index is supported. For Logstash users, this means transitioning to using the type field inside the document instead of the document type. The effect is the same, but the usage is slightly different. This may mean reconfiguring existing Kibana dashboards to use the new type field instead of the document type.
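As a hypothetical example of what that reconfiguration looks like, a Kibana query that previously matched on the Elasticsearch document type would instead match on the field stored inside the document (the field value "apache" is illustrative):

```
_type:apache   # old: matches the Elasticsearch document type
type:apache    # new: matches the type field inside the document
```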

If you are using the default mapping templates in Logstash, you will need to upgrade them. After migrating Elasticsearch to 6.0, override the existing template with the 6.x template by ensuring that all configured Elasticsearch outputs have the following setting specified: template_overwrite => true.
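Concretely, the template upgrade step above can be sketched as follows; the hosts value is a placeholder for your own cluster:

```
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # Replace the pre-6.x index template with the 6.x template shipped
    # with this version of the plugin
    template_overwrite => true
  }
}
```

Once the 6.x template has been written, this setting can be left in place; it only overwrites the template when it differs from the one the plugin would install.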

Fresh installations can and should start with the same version across the Elastic Stack.

Elasticsearch 6.0 does not require Logstash 6.0. An Elasticsearch 6.0 cluster will happily receive data from a
Logstash 5.x instance via the default HTTP communication layer. This provides some flexibility to decide when to upgrade
Logstash relative to an Elasticsearch upgrade. It may or may not be convenient for you to upgrade them together, and it
is not required to be done at the same time as long as Elasticsearch is upgraded first.

You should upgrade in a timely manner to get the performance improvements that come with Logstash 6.0, but do so in
the way that makes the most sense for your environment.

If any Logstash plugin that you require is not compatible with Logstash 6.0, then you should wait until it is ready
before upgrading.

Although we make great efforts to ensure compatibility, Logstash 6.0 is not completely backwards compatible. As noted
in the Elastic Stack upgrade guide, Logstash should not be upgraded to 6.0 before Elasticsearch 6.0. This is partly
practical, and partly because some Logstash 6.0 plugins may attempt to use features of Elasticsearch 6.0 that did not
exist in earlier versions.