Inverse-replay or fast-forward attack

So I think there is something like an "inverse replay attack", best
illustrated by an example:

1. A release is made at version 1.
1.1. A timestamp is made at version 1, which signs for release version 1.
2. A new TUF client downloads timestamp, sees a release for version 1.
3. Simultaneously, a new release is made, and its version is incremented
to 2.
3.1. A new timestamp is made, and its version is incremented to 2.
4. The client from (2) downloads release 2 when it expects release 1.
4.1. The client throws a BadHashError, because release 2's hashes do
not match those that timestamp version 1 signed for.
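
To make the failure concrete, here is a minimal Python sketch of the
hash check that trips in step 4.1. The verify_release helper is
hypothetical, not taken from the TUF codebase; BadHashError is the
error the client raises above:

    import hashlib

    class BadHashError(Exception):
        """Downloaded file does not match its expected hash."""

    def verify_release(release_bytes, expected_sha256):
        # expected_sha256 comes from the timestamp the client already
        # downloaded (timestamp version 1 in the example above).
        observed = hashlib.sha256(release_bytes).hexdigest()
        if observed != expected_sha256:
            # Release 2 arrived between the two downloads, so the
            # hash signed by timestamp version 1 no longer matches.
            raise BadHashError("expected %s, got %s"
                               % (expected_sha256, observed))
        return release_bytes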

So this is probably best called a "fast-forward problem". It happens
when metadata is updated so quickly that a "non-atomic read
transaction" fails erroneously: the client suspects an arbitrary
metadata attack when, in fact, the metadata has simply been updated
between its reads.

Would the problem be solved if each metadata file included, besides
the length and hashes, the version numbers of the metadata files it
signs for? This way, a TUF client that sees a properly signed metadata
file, but also sees that its version number has increased by the time
it reads it, would retry the update process instead of suspecting an
arbitrary metadata attack.
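
As a sketch of how such a client could behave, assuming hypothetical
fetch_timestamp and fetch_release callables that stand in for the real
network downloads:

    import hashlib

    class BadHashError(Exception):
        pass

    def run_update(fetch_timestamp, fetch_release, max_retries=3):
        # fetch_timestamp() -> (release_version, release_sha256) as
        # signed by timestamp.txt; fetch_release() -> (version, bytes).
        for _ in range(max_retries):
            signed_version, signed_hash = fetch_timestamp()
            version, body = fetch_release()
            if version > signed_version:
                # The repository moved forward mid-update; retry the
                # whole process instead of suspecting an attack.
                continue
            if hashlib.sha256(body).hexdigest() != signed_hash:
                raise BadHashError("release.txt fails verification")
            return body
        raise RuntimeError("metadata kept changing; giving up")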

The more I think about it, the more I like Justin's idea of
addressing metadata or target files by their hashes.

Here is a simple change to the TUF specification that will
accommodate this idea.

(Assume that file reads and writes are exclusive; i.e. no one will
be able to write to a file that is being read, or read a file that
is being written.) The first step of updating with TUF is to
download timestamp.txt. This will remain unchanged. However, recall
that timestamp.txt contains the hashes of release.txt. It will then be
an option for the client to download release.txt by requesting it
under its hash, along these lines:

https://example.com/<SHA256 hash of release.txt>.release.txt

This is a signal to the TUF repository at example.com to return a
file (release.txt in this case) with that SHA256 hash. TUF will be
agnostic with respect to the choice of key-value store used to
implement this.
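
A toy model of such a hash-addressed store, with a plain Python dict
standing in for whatever backend the repository actually chooses; the
put and get helpers are illustrative:

    import hashlib

    STORE = {}  # hash -> file contents; any key-value store would do

    def put(body):
        # Writing a file registers it under its own SHA256 hash.
        key = hashlib.sha256(body).hexdigest()
        STORE[key] = body
        return key

    def get(expected_sha256):
        # Reading by hash is self-verifying: a given key can only
        # ever name one set of contents, so reads always see a
        # consistent snapshot.
        body = STORE[expected_sha256]
        assert hashlib.sha256(body).hexdigest() == expected_sha256
        return body

A client that learns the hash of release.txt from timestamp.txt can
then fetch it with get() and cannot be confused by a concurrent
release, since a new release is written under a new key.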

Everything else from timestamp onwards should be downloadable this
way. We can then keep consistent, read-only snapshots of the
repository. Eventually, the repository will run out of space to keep
new snapshots. We can use something like a "mark-and-sweep"
algorithm to preserve the contents of the latest release: walk the
latest release, mark all visited objects, delete all unmarked
objects. The last few releases may be preserved in a similar manner.
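
A sketch of that collector over the hash-addressed store above; the
references callable, which maps a file's hash to the hashes of the
files it points at, is an assumption standing in for a real metadata
parser:

    def mark_and_sweep(store, root_hashes, references):
        marked = set()
        stack = list(root_hashes)  # hashes of the releases to keep
        while stack:               # mark: walk everything reachable
            h = stack.pop()
            if h in marked:
                continue
            marked.add(h)
            stack.extend(references(h))
        for h in list(store):      # sweep: delete unmarked objects
            if h not in marked:
                del store[h]

Preserving the last few releases is just a matter of passing more than
one release hash in root_hashes.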