action: the action of the request that was just made.
The request attempted to modify node.value via a PUT HTTP request, thus the value of action is set.

node.key: the HTTP path to which the request was made.
We set /message to Hello world, so the key field is /message.
etcd uses a file-system-like structure to represent the key-value pairs, therefore all keys start with /.

node.value: the value of the key after resolving the request.
In this case, a successful request was made that attempted to change the node's value to Hello world.

node.createdIndex: an index is a unique, monotonically-incrementing integer created for each change to etcd.
This specific index reflects the point in the etcd state machine at which a given key was created.
You may notice that in this example the index is 2 even though it is the first request you sent to the server.
This is because there are internal commands that also change the state behind the scenes, like adding and syncing servers.

node.modifiedIndex: like node.createdIndex, this attribute is also an etcd index.
Actions that cause the value to change include set, delete, update, create, compareAndSwap and compareAndDelete.
Since the get and watch commands do not change state in the store, they do not change the value of node.modifiedIndex.
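Putting these fields together, the response to the PUT above might look like the following (the index values here are illustrative; real values depend on the cluster's history):

```json
{
    "action": "set",
    "node": {
        "createdIndex": 2,
        "key": "/message",
        "modifiedIndex": 2,
        "value": "Hello world"
    }
}
```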

Response Headers

etcd includes a few HTTP headers in responses that provide global information about the etcd cluster that serviced a request:

X-Etcd-Index: 35
X-Raft-Index: 5398
X-Raft-Term: 1

X-Etcd-Index is the current etcd index as explained above. When the request is a watch on the key space, X-Etcd-Index is the current etcd index when the watch starts, which means that the watched event may happen after X-Etcd-Index.

X-Raft-Index is similar to the etcd index but is for the underlying raft protocol.

X-Raft-Term is an integer that will increase whenever an etcd master election happens in the cluster. If this number is increasing rapidly, you may need to tune the election timeout. See the tuning section for details.
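To see these headers yourself, ask curl to print them (a sketch; it assumes the local cluster used in the other examples, and the echo is only a fallback for when no etcd is running):

```shell
ETCD='http://127.0.0.1:2379'   # default etcd v2 client endpoint
# -i prints the response headers, including X-Etcd-Index, above the body.
curl -si "$ETCD/v2/keys/message" || echo "no etcd reachable at $ETCD"
```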

Get the value of a key

We can get the value that we just set in /message by issuing a GET request:
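A sketch of that request, assuming the same local cluster as before (the echo is only a fallback for when no etcd is running):

```shell
ETCD='http://127.0.0.1:2379'
# Read back the key we set earlier; a live cluster returns the node fields
# described above, plus prevNode when the key had a previous state.
curl -s "$ETCD/v2/keys/message" || echo "no etcd reachable at $ETCD"
```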

Here we introduce a new field: prevNode. The prevNode field represents what the state of a given node was before resolving the request at hand. The prevNode field follows the same format as the node, and is omitted in the event that there was no previous state for a given node.

However, the watch command can do more than this.
Using the index, we can watch for commands that have happened in the past.
This is useful for ensuring you don't miss events between watch commands.
Typically, we watch again from the modifiedIndex + 1 of the node we got.

Let's try to watch for the set command of index 7 again:

curl 'http://127.0.0.1:2379/v2/keys/foo?wait=true&waitIndex=7'

The watch command returns immediately with the same response as previously.

If we were to restart the watch from index 8 with:

curl 'http://127.0.0.1:2379/v2/keys/foo?wait=true&waitIndex=8'

Then even if etcd is on index 9 or 800, the first event to occur to the /foo
key between 8 and the current index will be returned.

Note: etcd only keeps the responses of the most recent 1000 events across all etcd keys.
It is recommended to send the response to another thread to process immediately
instead of blocking the watch while processing the result.
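In a shell script, the same idea looks roughly like the sketch below; process_event is a hypothetical handler, and the response is simulated so the example is self-contained:

```shell
# Hypothetical handler for one watch response.
process_event() {
    echo "handling event: $1"
}

# Simulated watch response; in practice it would come from
# curl 'http://127.0.0.1:2379/v2/keys/foo?wait=true&waitIndex=...'.
RESPONSE='{"action":"set","node":{"key":"/foo","modifiedIndex":8}}'

process_event "$RESPONSE" &   # handle the event in the background...
wait                          # ...so the next watch could start immediately
```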

Watch from cleared event index

If we have missed more than the 1000 most recent events, we need to recover
the current state of the watched key space through a get and then start
watching from the X-Etcd-Index + 1.

For example, we set /other="bar" 2000 times and then try to wait from index 8.

curl 'http://127.0.0.1:2379/v2/keys/foo?wait=true&waitIndex=8'

We get an "index is outdated" response, since we missed the 1000 events kept in etcd.

{"errorCode":401,"message":"The event in requested index is outdated and cleared","cause":"the requested history has been cleared [1008/8]","index":2007}

Unlike watches, we use the X-Etcd-Index + 1 of the response as a waitIndex
instead of the node's modifiedIndex + 1, for two reasons:

1. The X-Etcd-Index is always greater than or equal to the modifiedIndex when
getting a key, because X-Etcd-Index is the current etcd index while the
modifiedIndex is the index of an event already stored in etcd.

2. None of the events represented by indexes between the modifiedIndex and
X-Etcd-Index will be related to the key being fetched.

Using the modifiedIndex + 1 is functionally equivalent for subsequent
watches, but since it is smaller than the X-Etcd-Index + 1, we may receive a
401 EventIndexCleared error immediately.

So the first watch after the get should be:

curl 'http://127.0.0.1:2379/v2/keys/foo?wait=true&waitIndex=2008'
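The recovery can be scripted end to end. In this sketch the response headers are simulated so the example runs on its own; against a live cluster they would come from curl -si on the recovery get:

```shell
# Extract the X-Etcd-Index value from raw HTTP response headers.
extract_etcd_index() {
    printf '%s\n' "$1" | awk -F': ' 'tolower($1) == "x-etcd-index" { gsub(/\r/, "", $2); print $2 }'
}

# Simulated headers from the recovery get; a live run would use:
#   curl -si 'http://127.0.0.1:2379/v2/keys/foo'
HEADERS='HTTP/1.1 200 OK
X-Etcd-Index: 2007
X-Raft-Term: 1'

# Resume watching from the index just past the state we read.
WAIT_INDEX=$(( $(extract_etcd_index "$HEADERS") + 1 ))
echo "watch again with waitIndex=$WAIT_INDEX"
```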

Connection being closed prematurely

The server may close a long polling connection before emitting any events.
This can happen due to a timeout or the server being shut down.
Since the HTTP headers are sent immediately upon accepting the connection, the response will be seen as empty: 200 OK with an empty body.
Clients should be prepared to deal with this scenario and retry the watch.
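A retry loop might look like the sketch below; watch_once simulates the long-poll so the example runs on its own, returning an empty body on the first attempt, as a timed-out connection would:

```shell
# Simulated long-poll: empty body on the first attempt (the server closed
# the connection early), an event afterwards. A real client would instead run
#   curl -s 'http://127.0.0.1:2379/v2/keys/foo?wait=true&waitIndex=...'
watch_once() {
    if [ "$1" -ge 2 ]; then
        echo '{"action":"set","node":{"key":"/foo"}}'
    fi
}

ATTEMPT=0
while true; do
    ATTEMPT=$((ATTEMPT + 1))
    BODY=$(watch_once "$ATTEMPT")
    if [ -z "$BODY" ]; then
        continue            # empty 200 response: the watch timed out, retry
    fi
    echo "got event: $BODY"
    break
done
```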

Atomically Creating In-Order Keys

Using POST on a directory, you can create keys whose names are generated in-order.
This can be used in a variety of useful patterns, like implementing queues of keys which need to be processed in strict order.
An example use case would be ensuring clients get fair access to a mutex.
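For example, POSTing two values to a queue directory creates two ordered keys (a sketch against the default local endpoint; the echo is only a fallback for when no etcd is running, and the generated key names depend on the cluster's current index):

```shell
ETCD='http://127.0.0.1:2379'
# Each POST creates a new key under /queue whose name is derived from the
# global etcd index, so key names sort in creation order.
curl -s "$ETCD/v2/keys/queue" -XPOST -d value=Job1 || echo "no etcd reachable at $ETCD"
curl -s "$ETCD/v2/keys/queue" -XPOST -d value=Job2 || echo "no etcd reachable at $ETCD"
```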

If you create another entry some time later, it is guaranteed to have a key name that is greater than the previous key.
Also note the key names use the global etcd index, so the next key can be more than previous + 1.

Read Linearization

If you want a read that is fully linearized you can use a quorum=true GET.
The read will take a very similar path to a write and will have a similar
speed. If you are unsure whether you need this feature, feel free to email
etcd-dev for advice.
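A quorum read is an ordinary GET with quorum=true added (a sketch; the echo is only a fallback for when no local etcd is running):

```shell
ETCD='http://127.0.0.1:2379'
# quorum=true routes the read through consensus, like a write would be,
# so the result is fully linearized.
curl -s "$ETCD/v2/keys/foo?quorum=true" || echo "no etcd reachable at $ETCD"
```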

Statistics

An etcd cluster keeps track of a number of statistics including latency, bandwidth and uptime.
These are exposed via the statistics endpoint to understand the internal health of a cluster.

Leader Statistics

The leader has a view of the entire cluster and keeps track of two interesting statistics: latency to each peer in the cluster, and the number of failed and successful Raft RPC requests.
You can grab these statistics from the /v2/stats/leader endpoint:
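A sketch of that request (the echo is only a fallback for when no local etcd is running):

```shell
ETCD='http://127.0.0.1:2379'
# Leader statistics are served by the current leader of the cluster.
curl -s "$ETCD/v2/stats/leader" || echo "no etcd reachable at $ETCD"
```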

Store Statistics

The store statistics include information about the operations that this node has handled.
Note that v2 store statistics are kept in memory; when a member restarts, its store statistics reset.

Operations that modify the store's state like create, delete, set and update are seen by the entire cluster and the number will increase on all nodes.
Operations like get and watch are node local and will only be seen on this node.
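These counters are exposed at the /v2/stats/store endpoint (a sketch; the echo is only a fallback for when no local etcd is running):

```shell
ETCD='http://127.0.0.1:2379'
# Store statistics for the member that serviced this request.
curl -s "$ETCD/v2/stats/store" || echo "no etcd reachable at $ETCD"
```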