_delete_by_query gets a snapshot of the index when it starts and deletes what
it finds using internal versioning. That means you’ll get a version
conflict if a document changes between the time the snapshot is taken
and the time the delete request is processed. When the versions match, the document
is deleted.

Since internal versioning does not support the value 0 as a valid
version number, documents with version equal to zero cannot be deleted using
_delete_by_query and will fail the request.

During the _delete_by_query execution, multiple search requests are executed
sequentially in order to find all the matching documents to delete. Every time a batch
of documents is found, a corresponding bulk request is executed to delete all
these documents. If a search or bulk request is rejected, _delete_by_query
relies on a default policy to retry rejected requests (up to 10 times, with
exponential backoff). Reaching the maximum retries limit causes the _delete_by_query
to abort, and all failures are returned in the failures field of the response.
The deletions that have been performed still stick. In other words, the process
is not rolled back, only aborted. While the first failure causes the abort, all
failures returned by the failing bulk request are returned in the failures
element; therefore it’s possible for there to be quite a few failed entities.

If you’d like to count version conflicts rather than have them abort the request, then
set conflicts=proceed on the URL or "conflicts": "proceed" in the request body.
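
For example, a minimal sketch using the request-body form (the twitter index and
match_all query are illustrative):

POST twitter/_delete_by_query
{
  "conflicts": "proceed",
  "query": {
    "match_all": {}
  }
}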

Back to the API format, you can limit _delete_by_query to a single type. This
will only delete tweet documents from the twitter index:
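
A minimal sketch of such a request (the match_all query is illustrative):

POST twitter/tweet/_delete_by_query
{
  "query": {
    "match_all": {}
  }
}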

In addition to the standard parameters like pretty, the Delete By Query API
also supports refresh, wait_for_completion, wait_for_active_shards, and timeout.

Sending the refresh parameter will refresh all shards involved in the delete by query
once the request completes. This is different from the Delete API’s refresh
parameter, which causes just the shard that received the delete request
to be refreshed.

If the request contains wait_for_completion=false then Elasticsearch will
perform some preflight checks, launch the request, and then return a task
which can be used with the Tasks APIs
to cancel or get the status of the task. Elasticsearch will also create a
record of this task as a document at .tasks/task/${taskId}. This is yours
to keep or remove as you see fit. When you are done with it, delete it so
Elasticsearch can reclaim the space it uses.
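
As a sketch, launching the request asynchronously might look like this (the index
and query are illustrative); the response carries the task id to use with the
Tasks APIs:

POST twitter/_delete_by_query?wait_for_completion=false
{
  "query": {
    "match_all": {}
  }
}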

wait_for_active_shards controls how many copies of a shard must be active
before proceeding with the request. See the wait_for_active_shards documentation
for details. timeout controls how long each write request waits for unavailable
shards to become available. Both work exactly how they work in the
Bulk API.
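
For instance, a sketch combining both parameters (the values here are illustrative):

POST twitter/_delete_by_query?wait_for_active_shards=2&timeout=5m
{
  "query": {
    "match_all": {}
  }
}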

requests_per_second can be set to any positive decimal number (1.4, 6,
1000, etc.) and throttles the number of requests per second that the delete-by-query
issues, or it can be set to -1 to disable throttling. The throttling is done by
waiting between bulk batches so that the scroll timeout can be padded to account
for the wait. The wait time is the difference between the time the batch would
take at the target rate (requests_in_the_batch / requests_per_second) and the
time the batch actually took to complete. Since each batch is issued as a single
bulk request, large batch sizes cause Elasticsearch to issue large requests and
then wait for a while before starting the next set.
This is "bursty" instead of "smooth". The default is -1.

The status object in the task response contains the actual status. It is just like
the response JSON with the important addition of the total field. total is the
total number of operations that the delete by query expects to perform. You can
estimate the progress by adding the updated, created, and deleted fields. The request
will finish when their sum is equal to the total field.
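
You can fetch the status of any running delete by query with the Tasks API; a
sketch (this action filter is the pattern the Tasks API uses for delete by query):

GET _tasks?detailed=true&actions=*/delete/byquery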

With the task id you can look up the task directly:

GET /_tasks/taskId:1

The advantage of this API is that it integrates with wait_for_completion=false
to transparently return the status of completed tasks. If the task is completed
and wait_for_completion=false was set on it, then it will come back with
a results or an error field. The cost of this feature is the document that
wait_for_completion=false creates at .tasks/task/${taskId}. It is up to
you to delete that document.
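
For example, once you no longer need the record you can delete it like any other
document (the task id shown is illustrative):

DELETE .tasks/task/oTUltX4IQMOUUVeiohTt8A:12345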

The value of requests_per_second can be changed on a running delete by query
using the _rethrottle API:

POST _delete_by_query/task_id:1/_rethrottle?requests_per_second=-1

The task_id can be found using the tasks API above.

Just like when setting it on the _delete_by_query API, requests_per_second
can be either -1 to disable throttling or any decimal number
like 1.7 or 12 to throttle to that level. Rethrottling that speeds up the
query takes effect immediately, but rethrottling that slows down the query will
take effect after completing the current batch. This prevents scroll
timeouts.

Adding slices to _delete_by_query just automates the manual process used in
the section above, creating sub-requests (a sketch of such a request follows the
list below), which means it has some quirks:

* You can see these requests in the Tasks APIs. These sub-requests are "child"
tasks of the task for the request with slices.
* Fetching the status of the task for the request with slices only contains
the status of completed slices.
* These sub-requests are individually addressable for things like cancellation
and rethrottling.
* Rethrottling the request with slices will rethrottle the unfinished
sub-requests proportionally.
* Canceling the request with slices will cancel each sub-request.
* Due to the nature of slices, each sub-request won’t get a perfectly even
portion of the documents. All documents will be addressed, but some slices may
be larger than others. Expect larger slices to have a more even distribution.
* Parameters like requests_per_second and size on a request with slices
are distributed proportionally to each sub-request. Combine that with the point
above about distribution being uneven and you should conclude that using
size with slices might not result in exactly size documents being
`_delete_by_query`ed.
* Each sub-request gets a slightly different snapshot of the source index,
though these are all taken at approximately the same time.
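
For illustration, a sliced delete by query might look like the following sketch
(the index name, slice count, and query are illustrative):

POST twitter/_delete_by_query?slices=5
{
  "query": {
    "match_all": {}
  }
}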