Although Dafny does not directly support the temporal
logic □ and ◊ operators, Dafny's logic is powerful enough to
encode □ and ◊ using universal and existential quantifiers
(∀ and ∃). Section 4 describes our encoding, which is a simple
library written in Dafny that does not require any extensions
to the Dafny language. Thus, we do not need a separate tool
for reasoning about TLA, nor do we modify Dafny; instead,
we use the existing Dafny language to reason about both our
executable implementation and our high-level TLA-style
specifications. Using a single language avoids any semantic
gaps between implementation and specification.
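For instance, over an infinite behavior modeled as a map from step indices to states, □ and ◊ become ∀ and ∃ over suffixes of the behavior. The following is a minimal illustrative sketch of this idea, not IronFleet's actual library (its encoding differs in details such as how behaviors and temporal formulas are represented):

```dafny
// A behavior assigns a state to every step index (illustrative).
predicate IsBehavior<S>(b: imap<nat, S>)
{
  forall i: nat :: i in b
}

// "always P" from step start: P holds at every step i >= start.
predicate Always<S>(b: imap<nat, S>, start: nat, P: S -> bool)
  requires IsBehavior(b)
{
  forall i: nat :: start <= i ==> P(b[i])
}

// "eventually P" from step start: P holds at some step i >= start.
predicate Eventually<S>(b: imap<nat, S>, start: nat, P: S -> bool)
  requires IsBehavior(b)
{
  exists i: nat :: start <= i && P(b[i])
}

// Operators nest as usual: "infinitely often P" is always-eventually P.
predicate InfinitelyOften<S>(b: imap<nat, S>, P: S -> bool)
  requires IsBehavior(b)
{
  forall start: nat :: Eventually(b, start, P)
}
```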

2.4. Assumptions

Our guarantees rely on the following assumptions.

A small amount of our code is assumed, rather than proven,
correct. Thus, to trust the system, a user must read this code.
Specifically, the spec for each system is trusted, as is the
brief main-event loop that runs ImplInit and ImplNext
(see Section 3). We do not assume reliable packet delivery, so
the network may arbitrarily delay, drop, or duplicate packets.
We do assume the network does not tamper with packets, and
that addresses in packet headers are trustworthy. These integrity assumptions can be enforced within, say, a datacenter or
VPN, and could be relaxed by modeling the necessary cryptographic primitives to talk about keys instead of addresses. 6

We assume the correctness of Dafny, the .NET compiler
and runtime, and the underlying Windows OS. Our previous work6 shows how to compile Dafny code into verifiable
assembly code to avoid these dependencies. We also rely on
the correctness of the underlying hardware.

Our liveness properties depend on further assumptions.
For IronRSL, we assume that a quorum of replicas run their main loops
with a minimum frequency and never run out of memory, and
that the network eventually delivers messages among them
synchronously. For IronKV, we assume that
each host's main loop executes infinitely often and that the
network is fair, that is, a message sent infinitely often is eventually delivered.
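Such fairness assumptions fit naturally into a ∀/∃ encoding of temporal operators. A hypothetical sketch of the network-fairness assumption (the predicate names and state shape are illustrative, not IronFleet's code):

```dafny
type Message
type State

// Abstract observations on a state, left bodiless here; each
// system defines them concretely.
predicate SentIn(s: State, m: Message)
predicate DeliveredIn(s: State, m: Message)

// Network fairness for a behavior b: if message m is sent
// infinitely often, it is eventually delivered.
predicate FairNetwork(b: imap<nat, State>, m: Message)
  requires forall i: nat :: i in b
{
  (forall i: nat :: exists j: nat :: i <= j && SentIn(b[j], m))
    ==> exists k: nat :: DeliveredIn(b[k], m)
}
```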

3. VERIFICATION METHODOLOGY

IronFleet organizes a distributed system's implementation and proof into layers (Figure 3), all of which are expressed in Dafny. This layering avoids the intermingling of subtle distributed protocols with implementation complexity. At the top (Section 3.1), we write a simple spec for the system's behavior. We then write an abstract distributed protocol layer (Section 3.2) and use TLA-style techniques to prove that it refines the spec layer (Section 3.3). Then we write an imperative implementation layer to run on each host (Section 3.4) and prove that, despite the complexities introduced when writing real systems code, the implementation correctly refines the protocol layer (Section 3.5). Section 4 extends this methodology to liveness properties.

To avoid complex reasoning about interleaved execution
of low-level operations at multiple hosts, we use a concurrency
containment strategy: the proofs above assume that every
implementation step performs an atomic protocol step. Since
the real implementation’s execution is not atomic, we use a
verified reduction argument to show that a proof assuming
atomicity is equally valid as a proof for the real system. This
argument imposes a mechanically verified property on the
implementation.

3.1. The high-level spec layer

What does it mean for a system to be correct? One can informally enumerate properties and hope they suffice to provide
correctness. A more rigorous way is to define a spec, a succinct description of all allowable behaviors of the system,
and prove that an implementation always generates outputs
consistent with the spec.

With IronFleet, the developer writes the system’s spec as
a state machine expressed in Dafny (Section 2.2): starting
with some initial state, the spec succinctly describes how
that state can be transformed. The spec defines the state
machine via three predicates, that is, functions that return
true or false. SpecInit describes acceptable starting states,
SpecNext describes acceptable ways to move from an old
to a new state, and SpecRelation describes the required
conditions on the relation between an implementation state
and its corresponding spec state. For instance, in Figure 3,
SpecInit constrains H0, SpecNext constrains steps such
as H0→H1 and H1→H2, and SpecRelation constrains
corresponding state pairs such as (I1, H1) and (I3, H2). To
avoid unnecessary constraints on implementations of the
spec, SpecRelation should only talk about the externally
visible behavior of the implementation, for example, the set
of messages it has sent so far.
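Concretely, a spec in this style has the shape below. The three predicate signatures follow the paper's description; the abstract types and the ValidSpecBehavior helper are illustrative additions:

```dafny
type SpecState
type ImplState

// The three predicates that define the spec state machine,
// left bodiless here; each system supplies concrete bodies.
predicate SpecInit(s: SpecState)                          // acceptable starting states
predicate SpecNext(s: SpecState, s': SpecState)           // acceptable transitions
predicate SpecRelation(impl: ImplState, spec: SpecState)  // impl/spec correspondence

// A finite spec behavior H0, H1, ... is valid if it starts in an
// initial state and each step is an allowed transition.
predicate ValidSpecBehavior(b: seq<SpecState>)
{
  |b| > 0 && SpecInit(b[0])
  && forall i :: 0 <= i < |b| - 1 ==> SpecNext(b[i], b[i+1])
}
```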

As a toy example, the Dafny spec in Figure 4 describes a
simple distributed lock service with a single lock that passes
among the hosts. It defines the system’s state as a history: a
sequence of host IDs such that the nth host in the sequence
held the lock in epoch n. Initially, this history contains one
valid host. The system can step from an old state to a new
state by appending a valid host to the history. An implementation is consistent with the spec if all lock messages for
epoch n come from the nth host in the history.
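A self-contained sketch in the spirit of that spec (Figure 4 is not reproduced here; these identifiers are illustrative, not the paper's exact code):

```dafny
type HostId = int

// System state: the fixed set of hosts and the lock-transfer
// history; the nth host in the history held the lock in epoch n.
datatype LockSpecState = LockSpecState(hosts: set<HostId>, history: seq<HostId>)

// Initially, the history contains exactly one valid host.
predicate SpecInit(s: LockSpecState)
{
  |s.history| == 1 && s.history[0] in s.hosts
}

// A step appends a valid host to the history: that host holds
// the lock in the new epoch.
predicate SpecNext(s: LockSpecState, s': LockSpecState)
{
  s'.hosts == s.hosts
  && exists h :: h in s.hosts && s'.history == s.history + [h]
}
```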

By keeping the spec simple, a skeptic can study the spec to
understand the system’s properties. In our example, she can
easily conclude that the lock is never held by more than one
host. Since the spec captures all permitted system behaviors,
she can later establish additional properties of the implementation simply by proving that the spec implies them.

[Figure 3 diagram: high-level spec states H0, H1, H2 (Section 3.1); distributed-protocol states P0…P3 (Section 3.2); implementation states I0…I3 (Section 3.4); refinement from implementation to protocol (Section 3.5) and from protocol to spec (Section 3.3).]

Figure 3. Verification overview. IronFleet divides a distributed
system into carefully chosen layers. We use TLA-style verification
to prove that any behavior of the protocol layer (e.g., P0…P3) refines
some behavior of the high-level spec (e.g., H0…H2). We then use
Floyd–Hoare-style reasoning to prove that any behavior of the implementation
(e.g., I0…I3) refines a behavior of the protocol layer.