Before this change, a storage node did 3 commits per transaction:
- once all data are stored
- when locking the transaction
- when unlocking the transaction
The last one is not important for ACID. In case of a crash, the transaction
is unlocked again (verification phase). By deferring it by 1 second, we
only have 2 commits per transaction during high activity because all pending
changes are merged with the commits caused by other transactions.
This change compensates for the extra commit(s) per transaction that were
introduced in commit 7eb7cf1b
("Minimize the amount of work during tpc_finish").
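The deferred-unlock commit can be pictured with a small sketch. All names here are illustrative (there is no `DeferredCommitter` in NEO); `db` stands for any object with a `commit()` method, and the 1-second delay matches the behaviour described above:

```python
import threading

class DeferredCommitter:
    """Coalesce the non-critical commit done when unlocking a transaction.
    Sketch only, not NEO's actual code."""

    def __init__(self, db, delay=1.0):
        self._db = db            # anything exposing commit() (assumption)
        self._delay = delay
        self._lock = threading.Lock()
        self._timer = None

    def commit_deferred(self):
        # Schedule a commit up to `delay` seconds later; if one is already
        # pending, the new changes simply ride along with it.
        with self._lock:
            if self._timer is None:
                self._timer = threading.Timer(self._delay, self._flush)
                self._timer.start()

    def commit_now(self):
        # A critical commit (e.g. once all data is stored) also flushes any
        # pending deferred commit, so under high activity at most 2 commits
        # per transaction remain.
        with self._lock:
            if self._timer is not None:
                self._timer.cancel()
                self._timer = None
        self._db.commit()

    def _flush(self):
        with self._lock:
            self._timer = None
        self._db.commit()
```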

Since commit d2d77437 ("client: make the cache
tolerant to late invalidations when the entry is in the history queue"),
invalidated items became current again when they were moved to the history
queue, which was wrong for 2 reasons:
- only the last item of each _oid_dict value may have next_tid=None,
- and such items could be wrongly reused when caching the real current data.

This fixes the following scenario:
1. the master sends invalidations to clients,
and unlocks to storages (oid1, tid1)
2. the storage receives/processes the unlock
3. the client asks for data (oid1, tid0)
4. the storage returns tid1 as next tid, whereas it's still None in the cache
(before, it caused an assertion failure)
5. the client processes invalidations
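The tolerant behaviour can be sketched as follows. This is a simplified model, not NEO's actual `ClientCache`: entry layout and method names are assumed, and only the store/invalidate paths relevant to the scenario are shown:

```python
class ClientCache:
    """Sketch of a client cache tolerating a storage answer that reveals
    next_tid before the corresponding invalidation has been processed."""

    def __init__(self):
        # oid -> tid-ordered list of (tid, next_tid, data); only the last
        # item of each list may have next_tid=None.
        self._oid_dict = {}

    def store(self, oid, data, tid, next_tid):
        items = self._oid_dict.setdefault(oid, [])
        for i, (t, n, d) in enumerate(items):
            if t == tid:
                if n is None and next_tid is not None:
                    # The storage answered with a next tid while the cache
                    # still believes the entry is current: update instead
                    # of asserting (the old code raised here).
                    items[i] = (t, next_tid, d)
                return
        items.append((tid, next_tid, data))
        items.sort()

    def invalidate(self, oid, tid):
        # Invalidation for `tid`: mark the previously current entry as
        # ended at `tid`, unless a storage answer already told us.
        items = self._oid_dict.get(oid, ())
        if items:
            t, n, d = items[-1]
            if n is None:
                items[-1] = (t, tid, d)
```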

With the previous commit, the request to truncate the DB was not stored
persistently, which means that this operation was still vulnerable to the case
where the master is restarted after some nodes, but not all, have already
truncated. The master didn't have the information to fix this and the result
was a partially truncated DB.
-> On a Truncate packet, a storage node only stores the tid, to send
it back to the master, which stays in RECOVERING state as long as any node
has a different value than that of the node with the latest partition table.
We also want to make sure that there is no unfinished data, because a user may
truncate at a tid higher than a locked one.
-> Truncation is now effective at the end of the VERIFYING phase, just before
returning the last ids to the master.
Lastly, all nodes should be truncated, to avoid an offline node coming back
with a different history. Currently, this would not be an issue since
replication always restarts from the beginning, but later we'd like nodes to
remember where they stopped replicating.
-> If a truncation is requested, the master waits for all nodes to be pending,
even if the cluster was previously started (the user can still force the
cluster to start with neoctl). Any node lost during verification also causes
the master to go back to recovery.
Obviously, the protocol has been changed to split the LastIDs packet and
introduce a new Recovery packet, since it no longer makes sense to ask for
last ids during recovery.
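The master-side condition for leaving RECOVERING can be sketched like this. Field and function names are assumptions for illustration, not NEO's actual API; each storage node is modelled as reporting its truncation tid and partition table id:

```python
from collections import namedtuple

# Illustrative per-node report: truncation tid and partition table id.
NodeState = namedtuple('NodeState', 'truncate_tid ptid')

def can_leave_recovery(states):
    """The master stays in RECOVERING as long as any node reports a
    truncation tid different from that of the node with the latest
    partition table (highest ptid). Sketch only."""
    if not states:
        return False
    reference = max(states.values(), key=lambda s: s.ptid)
    return all(s.truncate_tid == reference.truncate_tid
               for s in states.values())
```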

Currently, the database may only be truncated when leaving backup mode, but
the issue will be the same when neoctl gets a new command to truncate at an
arbitrary tid: we want to be sure that all nodes are truncated before anything
else.
Therefore, we stop sending Truncate orders before stopping operation because
nodes could fail/exit before actually processing them. Truncation must also
happen before asking nodes their last ids.
With this commit, if a truncation is requested:
- this is always the first thing done when a storage node connects to the
primary master during the RECOVERING phase,
- and the cluster does not start automatically if there are missing nodes,
unless an admin forces it.
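The start condition above can be sketched as a small predicate. Parameter names are assumptions; node sets stand in for NEO's node manager:

```python
def cluster_may_start(truncate_tid, expected_nodes, connected_nodes,
                      forced_by_admin=False):
    """Sketch: if a truncation is pending, the cluster does not start
    automatically while known storage nodes are missing, unless an admin
    forces it. Not NEO's actual code."""
    if truncate_tid is not None and not forced_by_admin:
        # All known storage nodes must be back before starting.
        return expected_nodes <= connected_nodes
    return bool(connected_nodes)
```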
Other changes:
- Connections to storage nodes don't need to be aborted anymore when leaving
backup mode.
- The master always initiates communication when a storage node identifies
itself, which simplifies code and reduces the number of exchanged packets.

At some point, the master asks a storage node for its partition table. If this
node is lost before answering, another node (or the same one if it comes
back) must be asked.
Before this change, the master node had to be restarted.
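The retry logic can be sketched as follows. `send` is a hypothetical callback (not a NEO API) that returns the partition table or raises ConnectionError when the node is lost:

```python
def ask_partition_table(nodes, send):
    """Sketch: ask candidates one by one until an answer arrives, instead
    of requiring a master restart when the first node is lost. A node that
    comes back would simply reappear in `nodes` on a later attempt."""
    for node in nodes:
        try:
            return send(node)
        except ConnectionError:
            continue  # node lost before answering: try the next candidate
    raise RuntimeError('no storage node could provide a partition table')
```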