From Elasticsearch 5.0 on, all settings are validated before they are applied.
Node-level and default index-level settings are validated on node startup;
dynamic cluster and index settings are validated before they are updated/added
to the cluster state.

Every setting must be a known setting. All settings must have been
registered with the node or transport client they are used with. This implies
that plugins that define custom settings must register all of their settings
during plugin loading using the SettingsModule#registerSettings(Setting)
method.
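
As an illustration only (the plugin class and setting name below are
hypothetical), a plugin targeting 5.x can declare its settings so that they are
registered during plugin loading:

    import java.util.Collections;
    import java.util.List;
    import org.elasticsearch.common.settings.Setting;
    import org.elasticsearch.common.settings.Setting.Property;
    import org.elasticsearch.plugins.Plugin;

    public class MyPlugin extends Plugin {
        // Hypothetical custom setting; NodeScope marks it as a node-level setting.
        static final Setting<String> GREETING =
            Setting.simpleString("my_plugin.greeting", Property.NodeScope);

        // Settings returned here are registered with the SettingsModule
        // when the plugin is loaded.
        @Override
        public List<Setting<?>> getSettings() {
            return Collections.singletonList(GREETING);
        }
    }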

In previous versions Elasticsearch allowed index-level settings to be specified
as defaults on the node level, inside the elasticsearch.yml file or even via
command-line parameters. From Elasticsearch 5.0 on, only selected settings such
as index.codec can be set on the node level. All other settings must be set on
each individual index. To set default values on every index, index templates
should be used instead.
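
For example, a default refresh interval could be applied to all newly created
indices with an index template. The sketch below uses a hypothetical template
name and an illustrative value:

    curl -XPUT 'localhost:9200/_template/index_defaults' -d '{
      "template": "*",
      "settings": {
        "index.refresh_interval": "30s"
      }
    }'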

Node-level attributes used for allocation filtering, forced awareness or other
node identification / grouping must be prefixed with node.attr. In previous
versions it was possible to specify node attributes with the node. prefix. All
node attributes except node.master, node.data and node.ingest must be moved to
the new node.attr. namespace.
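
In elasticsearch.yml terms, with rack as an illustrative attribute name:

    # No longer accepted:
    # node.rack: rack1
    # From 5.0 on:
    node.attr.rack: rack1
    # The attribute is then referenced as before, e.g. for allocation awareness:
    cluster.routing.allocation.awareness.attributes: rack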

The node.client setting has been removed. A node with this setting set will not
start up. Instead, each node role needs to be set separately using the existing
node.master, node.data and node.ingest static settings.
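
For example, what was previously a client node (node.client: true) is now
configured as a coordinating-only node by disabling each role explicitly:

    # elasticsearch.yml for a coordinating-only node:
    node.master: false
    node.data: false
    node.ingest: false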

All settings with a netty infix have been replaced by their already existing
transport synonyms. For instance transport.netty.bind_host is no longer
supported and should be replaced by the superseding setting
transport.bind_host.
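
In elasticsearch.yml, with an illustrative address:

    # No longer supported:
    # transport.netty.bind_host: 192.168.0.1
    # Superseding setting:
    transport.bind_host: 192.168.0.1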

The _non_loopback_ value for settings like network.host, which would
arbitrarily pick the first interface not marked as loopback, has been removed.
Instead, specify addresses by scope (e.g. _local_,_site_ for all loopback and
private network addresses) or by explicit interface names, hostnames, or
addresses.
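
For example (a sketch; the interface name and address are illustrative):

    # Removed:
    # network.host: _non_loopback_
    # Bind to all loopback and private (site-local) addresses instead:
    network.host: [_local_, _site_]
    # ...or name an interface, hostname, or address explicitly:
    # network.host: _eth0_
    # network.host: 192.168.0.1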

The netty.epollBugWorkaround setting has been removed. This setting allowed
people to enable a Netty workaround for a high CPU usage issue with early JVM
versions. This bug was fixed in Java 7, and since Elasticsearch 5.0 requires
Java 8 the setting is no longer needed. Note that if the workaround needs to
be reintroduced you can still set the org.jboss.netty.epollBugWorkaround
system property to control Netty directly.
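
Should that ever be necessary, the property can be passed to the JVM like any
other system property, for instance via the ES_JAVA_OPTS environment variable
(a sketch, not a recommendation):

    ES_JAVA_OPTS="-Dorg.jboss.netty.epollBugWorkaround=true" ./bin/elasticsearch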

Previously, thread pool types could be dynamically
adjusted. The thread pool type effectively controls the backing queue for the
thread pool and modifying this is an expert setting with minimal practical
benefits and high risk of being misused. The ability to change the thread pool
type for any thread pool has been removed. It is still possible to adjust
relevant thread pool parameters for each of the thread pools (e.g., depending
on the thread pool type, keep_alive, queue_size, etc.).
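
For example, the following elasticsearch.yml snippet remains valid because it
tunes parameters of the fixed pool types rather than the types themselves
(values are illustrative, assuming the 5.0 thread_pool settings namespace):

    # The search pool is of type fixed, so its queue size can be tuned:
    thread_pool.search.queue_size: 2000
    # The generic pool is of type scaling, so keep_alive applies:
    thread_pool.generic.keep_alive: 2m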

Previously, there were three settings for the ping timeout:
discovery.zen.initial_ping_timeout, discovery.zen.ping.timeout and
discovery.zen.ping_timeout. The former two have been removed and the only
setting key for the ping timeout is now discovery.zen.ping_timeout. The
default value for ping timeouts remains at three seconds.
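
In elasticsearch.yml, the surviving key and its default look like:

    discovery.zen.ping_timeout: 3s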

discovery.zen.master_election.filter_client and discovery.zen.master_election.filter_data have
been removed in favor of the new discovery.zen.master_election.ignore_non_master_pings. This
setting controls how ping responses are interpreted during master election; it should be used
with care and only in extreme cases. See the documentation for details.

index.shard.recovery.translog_size is superseded by indices.recovery.translog_size

index.shard.recovery.translog_ops is superseded by indices.recovery.translog_ops

index.shard.recovery.file_chunk_size is superseded by indices.recovery.file_chunk_size

indices.recovery.concurrent_streams is superseded by cluster.routing.allocation.node_concurrent_recoveries

index.shard.recovery.concurrent_small_file_streams is superseded by indices.recovery.concurrent_small_file_streams

indices.recovery.max_size_per_sec is superseded by indices.recovery.max_bytes_per_sec

If you are using any of these settings, please take the time to review their
purpose. All of the settings above are considered expert settings and should
only be used if absolutely necessary. If you have set any of the above settings
as persistent cluster settings, please use the settings update API and set
their superseding keys accordingly.
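
As a sketch (the value shown is illustrative), an old persistent key can be
unset and its superseding key set in a single cluster settings update;
assigning null removes a setting:

    curl -XPUT 'localhost:9200/_cluster/settings' -d '{
      "persistent": {
        "indices.recovery.max_size_per_sec": null,
        "indices.recovery.max_bytes_per_sec": "40mb"
      }
    }'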

The following settings have been removed without replacement:

indices.recovery.concurrent_small_file_streams - recoveries are now single-threaded. The number of concurrent outgoing recoveries is throttled via allocation deciders

indices.recovery.concurrent_file_streams - recoveries are now single-threaded. The number of concurrent outgoing recoveries is throttled via allocation deciders

The index.translog.flush_threshold_ops setting is not supported anymore. In
order to control flushes based on the transaction log growth, use
index.translog.flush_threshold_size instead.
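
Since this is an index-level setting, it is applied per index, for example via
the index settings API (the index name and value below are illustrative):

    curl -XPUT 'localhost:9200/my_index/_settings' -d '{
      "index.translog.flush_threshold_size": "512mb"
    }'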

Changing the translog type with index.translog.fs.type is not supported
anymore; the buffered implementation is now the only available option and
uses a fixed 8kb buffer.

The translog by default is fsynced after every index, create, update, delete,
or bulk request. The ability to fsync on every operation is not necessary
anymore. In fact, it can be a performance bottleneck and it's trappy since it
was enabled by a special value set on index.translog.sync_interval. Now,
index.translog.sync_interval doesn't accept a value less than 100ms, which
prevents fsyncing too often if async durability is enabled. The special value
0 is no longer supported.
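
For example, an index using async durability might be created as follows (a
sketch; the index name is hypothetical, and the sync interval sits at the 5s
default, well above the new 100ms floor):

    curl -XPUT 'localhost:9200/my_index' -d '{
      "settings": {
        "index.translog.durability": "async",
        "index.translog.sync_interval": "5s"
      }
    }'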

The indices.memory.min_shard_index_buffer_size and
indices.memory.max_shard_index_buffer_size settings have been removed, as
Elasticsearch now allows any one shard to use any amount of heap as long as the
total indexing buffer heap used across all shards is below the node's
indices.memory.index_buffer_size (defaults to 10% of the JVM heap).

The es.max-open-files system property, which when set to true caused
Elasticsearch to print the maximum number of open files for the Elasticsearch
process, has been removed. The same information can be obtained from the Nodes
Info API, and a warning is logged on startup if the limit is set too low.

Netty's use of NIO gathering writes could previously be disabled via the escape
hatch of setting the system property "es.netty.gathering" to "false". Time has
proven that enabling gathering by default is a non-issue, and this undocumented
setting has been removed.

The system property es.useLinkedTransferQueue could be used to
control the queue implementation used in the cluster service and the
handling of ping responses during discovery. This was an undocumented
setting and has been removed.

The two cache concurrency level settings
indices.requests.cache.concurrency_level and
indices.fielddata.cache.concurrency_level have been removed because they no
longer apply to the cache implementation used for the request cache and the
field data cache.

Elasticsearch could previously be configured on the command line by
setting settings via --name.of.setting value.of.setting. This feature
has been removed. Instead, use -Ename.of.setting=value.of.setting.
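
For example:

    # No longer supported:
    # ./bin/elasticsearch --cluster.name my_cluster --node.name node_1
    # From 5.0 on:
    ./bin/elasticsearch -Ecluster.name=my_cluster -Enode.name=node_1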

The discovery.zen.minimum_master_nodes setting must be set on nodes that have
network.host, network.bind_host, network.publish_host,
transport.host, transport.bind_host, or transport.publish_host
configuration options set. We see those nodes as being in "production" mode
and thus require the setting.
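
For example, for a cluster with three master-eligible nodes (an illustrative
size), the quorum is (3 / 2) + 1 = 2:

    # Binding to a non-local address puts the node in "production" mode...
    network.host: 192.168.0.1
    # ...so the master quorum must be configured explicitly:
    discovery.zen.minimum_master_nodes: 2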

The action.get.realtime setting has been removed. This setting was
a fallback realtime setting for the get and mget APIs when realtime
wasn’t specified. Now if the parameter isn’t specified we always
default to true.

Previous versions of Elasticsearch defaulted to allowing multiple nodes to share the same data
directory (up to 50). This can be confusing when users accidentally start up multiple nodes and end
up thinking that they've lost data because the second node starts with an empty data directory.
While the default of allowing multiple nodes is friendly to playing with forming a small cluster on
a laptop, and end-users do sometimes run multiple nodes on the same host, this tends to be the
exception. Keeping with Elasticsearch's continual movement towards safer out-of-the-box defaults,
and optimizing for the norm instead of the exception, the default for
node.max_local_storage_nodes is now one.

Previously script mode settings (e.g., "script.inline: true",
"script.engine.groovy.inline.aggs: false", etc.) accepted a wide range of
"truthy" or "falsy" values. This is now much stricter and supports only the
true and false options.
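
In elasticsearch.yml terms:

    # Accepted from 5.0 on:
    script.inline: true
    script.engine.groovy.inline.aggs: false
    # No longer accepted: "truthy"/"falsy" variants such as yes, no, on, off, 0, 1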