Abstract

Measurement-based quantum computation has emerged from the physics
community as a new approach to quantum computation where the notion of
measurement is the main driving force of computation. This is in
contrast with the more traditional circuit model which is based on
unitary operations. Among measurement-based quantum computation methods,
the recently introduced one-way quantum computer [RB01] stands out
as fundamental.

We develop a rigorous mathematical model underlying the one-way quantum
computer and present a concrete syntax and operational semantics for
programs, which we call patterns, and an algebra of these patterns
derived from a denotational semantics. More importantly, we present a
calculus for reasoning locally and compositionally about these patterns.
We present a rewrite theory and prove a general standardization theorem
which allows all patterns to be put in a semantically equivalent standard
form. Standardization has far-reaching consequences: a new physical
architecture based on performing all the entanglement in the beginning,
parallelization by exposing the dependency structure of measurements and
expressiveness theorems.

Furthermore, we formalize several other measurement-based models, e.g. the
Teleportation, Phase and Pauli models, and present compositional
embeddings of them into and from the one-way model. This allows us to
transfer all the theory we develop for the one-way model to these models.
This shows that the framework we have developed has a general impact on
measurement-based computation and is not just particular to the one-way
quantum computer.

The emergence of quantum computation has changed our perspective on many
fundamental aspects of computing: the nature of information and how it
flows, new algorithmic design strategies and complexity classes and the very
structure of computational models [NC00]. New challenges have been
raised in the physical implementation of quantum computers. This paper is
a contribution to a nascent discipline: quantum programming languages.

This is more than a search for convenient notation; it is an investigation
into the structure, scope and limits of quantum computation. The main
issues are questions about how quantum processes are defined, how quantum
algorithms compose, how quantum resources are used and how classical and
quantum information interact.

Quantum computation emerged in the early 1980s with Feynman’s observations
about the difficulty of simulating quantum systems on a classical computer.
This hinted at the possibility of turning around the issue and exploiting
the power of quantum systems to perform computational tasks more
efficiently than was classically possible. In the mid 1980s
Deutsch [Deu87] and later Deutsch and Jozsa [DJ92] showed how
to use superposition – the ability to produce linear combinations of
quantum states – to obtain computational speedup. This led to interest in
algorithm design and the complexity aspects of quantum computation by
computer scientists. The most dramatic results were Shor’s celebrated
polytime factorization algorithm [Sho94] and Grover’s sublinear
search algorithm [Gro98]. Remarkably one of the problematic
aspects of quantum theory, the presence of non-local correlation – an
example of which is called “entanglement” – turned out to be crucial for
these algorithmic developments.

If efficient factorization is indeed possible in practice, then much of
cryptography becomes insecure as it is based on the difficulty of
factorization. However, entanglement makes it possible to design
unconditionally secure key distribution [BB84, Eke91]. Furthermore,
entanglement led to the remarkable – but simple – protocol for transferring
quantum states using only classical communication [BBC+93]; this
is the famous so-called “teleportation” protocol. There continues to be
tremendous activity in quantum cryptography, algorithmic design, complexity
and information theory. Parallel to all this work there has been intense
interest from the physics community to explore possible implementations,
see, for example, [NC00] for a textbook account of some of these
ideas.

On the other hand, only recently has there been significant interest in
quantum programming languages; i.e. the development of formal syntax and
semantics and the use of standard machinery for reasoning about quantum
information processing. The first quantum programming languages were
variations on imperative probabilistic languages and emphasized logic and
program development based on weakest preconditions [SZ00, Ö01].
The first definitive treatment of a quantum programming language was the
flowchart language of Selinger [Sel04b]. It was based on
combining classical control, as traditionally seen in flowcharts, with
quantum data. It also gave a denotational semantics based on completely
positive linear maps. The notion of quantum weakest preconditions was
developed in [DP06]. Later people proposed languages based on
quantum control [AG05]. The search for a sensible notion of
higher-type computation [SV05, vT04] continues, but is
problematic [Sel04c].

A related recent development is the work of Abramsky and
Coecke [AC04, Coe04] where they develop a categorical
axiomatization of quantum mechanics. This can be used to verify the
correctness of quantum communication protocols. It is very interesting
from a foundational point of view and allows one to explore exactly what
mathematical ingredients are required to carry out certain quantum
protocols. This has also led to work on a categorical quantum
logic [AD04].

The study of quantum communication protocols has led to formalizations
based on process algebras [GN05, JL04] and to proposals to use model
checking for verifying quantum protocols. A survey and a complete list of
references on this subject up to 2005 is available [Gay05].

These ideas have proven to be of great utility in the world of classical
computation. The use of logics, type systems, operational semantics,
denotational semantics and semantic-based inference mechanisms have led to
notable advances such as: the use of model checking for verification,
reasoning compositionally about security protocols, refinement-based
programming methodology and flow analysis.

The present paper applies this paradigm to a very recent development:
measurement-based quantum computation. None of the cited research on
quantum programming languages is aimed at measurement-based computation.
On the other hand, the work in the physics literature does not clearly
separate the conceptual layers of the subject from implementation issues.
A formal treatment is necessary to analyze the foundations of
measurement-based computation.

So far the main framework to explore quantum computation has been the
circuit model [Deu89], based on unitary evolution. This is very
useful for algorithmic development and complexity analysis [BV97].
There are other models such as quantum Turing machines [Deu85] and
quantum cellular automata [Wat95, vD96, DS96, SW04]. Although
they are all proved to be equivalent from the point of view of expressive
power, there is no agreement on what is the canonical model for exposing
the key aspects of quantum computation.

Recently physicists have introduced novel ideas based on the use of
measurement and entanglement to perform computation
[GC99, RB01, RBB03, Nie03]. This is very different from the
circuit model where measurement is done only at the end to extract
classical output. In measurement-based computation the main operation to
manipulate information and control computation is measurement. This is
surprising because measurement creates indeterminacy, yet it is used to
express deterministic computation defined by a unitary evolution.

The idea of computing based on measurements emerged from the teleportation
protocol [BBC+93]. The goal of this protocol is for an agent to
transmit an unknown qubit to a remote agent without actually sending the
qubit. This protocol works by having the two parties share a maximally
entangled state called a Bell pair. The parties perform local
operations – measurements and unitaries – and communicate only classical
bits. Remarkably, from this classical information the second party can
reconstruct the unknown quantum state. In fact one can actually use this
to compute via teleportation by choosing an appropriate
measurement [GC99]. This is the key idea of measurement-based
computation.

It turns out that the above method of computing is actually universal.
This was first shown by Gottesman and Chuang [GC99] who used
two-qubit measurements and given Bell pairs. Later
Nielsen [Nie03] showed that one could do this with only 4-qubit
measurements with no prior Bell pairs, however this works only
probabilistically. Leung [Leu04] improved this to two qubits, but her
method also works only probabilistically. Later Perdrix and
Jorrand [Per03, PJ04] gave a minimal set of measurements needed to perform
universal quantum computing – but still in the probabilistic setting –
and introduced the state-transfer and measurement-based quantum Turing
machine. Finally the one-way computer was invented by Raussendorf and
Briegel [RB01, RB02] which used only single-qubit measurements with a
particular multi-party entangled state, the cluster state.

More precisely, a computation consists of a phase in which a collection of
qubits are set up in a standard entangled state. Then measurements are
applied to individual qubits and the outcomes of the measurements may be
used to determine further measurements. Finally – again depending on
measurement outcomes – local unitary operators, called corrections, are
applied to some qubits; this allows the elimination of the indeterminacy
introduced by measurements. The phrase “one-way” is used to emphasize
that the computation is driven by irreversible measurements.

There are at least two reasons to take measurement-based models seriously:
one conceptual and one pragmatic. The main pragmatic reason is that the
one-way model is believed by physicists to lend itself to easier
implementations [Nie04, CAJ05, BR05, TPKV04, TPKV06, WkJRR+05, KPA06, BES05, CCWD06, BBFM06].
Physicists have investigated various properties of the cluster state and
have accrued evidence that the physical implementation is scalable and
robust against decoherence
[Sch03, HEB04, DAB03, dNDM04b, dNDM04a, MP04, GHW05, HDB05, DHN06].
Conceptually the measurement-based model highlights the role of
entanglement and separates the quantum and classical aspects of
computation; thus it clarifies, in particular, the interplay between
classical control and the quantum evolution process.

Our approach to understanding the structural features of measurement-based
computation is to develop a formal calculus. One can think of this as an
“assembly language” for measurement-based computation. Ours is the first
programming framework specifically based on the one-way model. We first
develop a notation for such classically correlated sequences of
entanglements, measurements, and local corrections. Computations are
organized in patterns, and we give a
careful treatment of the composition and tensor product (parallel
composition) of patterns. We show next that such pattern combinations
reflect the corresponding combinations of unitary operators. An easy proof
of universality follows.

So far, this is primarily a clarification of what was already known from
the series of papers introducing and investigating the properties of the
one-way model [RB01, RB02, RBB03]. However, we work here with an
extended notion of pattern, where inputs and outputs may overlap in any way
one wants them to, and this results in more efficient – in the sense of
using fewer qubits – implementations of unitaries. Specifically, our
universal set consists of patterns using only 2 qubits. From it
we obtain a 3 qubit realization of the Rz rotations and
a 14 qubit realization for the controlled-U family: a significant
reduction over the hitherto known implementations.

The main point of this paper is to introduce a calculus of local equations
over patterns that exploits some special algebraic properties of the
entanglement, measurement and correction operators. More precisely, we use
the fact that 1-qubit XY measurements are closed under conjugation
by Pauli operators and the entanglement command belongs to the normalizer
of the Pauli group; these terms are explained in the appendix. We show
that this calculus is sound in that it preserves the interpretation of
patterns. Most importantly, we derive from it a simple algorithm by which
any general pattern can be put into a standard form where entanglement is
done first, then measurements, then corrections. We call this
standardization.

The consequences of the existence of such a procedure are far-reaching.
Since entangling comes first, one can prepare the entire entangled state
needed during the computation right at the start: one never has to do “on
the fly” entanglements. Furthermore, the rewriting of a pattern to
standard form reveals parallelism in the pattern computation. In a general
pattern, one is forced to compute sequentially and to strictly obey the
command sequence, whereas, after standardization, the dependency structure
is relaxed, resulting in lower computational depth complexity. Last, the
existence of a standard form for any pattern also has interesting
corollaries beyond implementation and complexity matters, as it follows
from it that patterns using no dependencies, or using only the restricted
class of Pauli measurements, can only realize a unitary belonging to the
Clifford group, and hence can be efficiently simulated by a classical
computer [Got97].

As we have noted before, there are other methods for measurement-based
quantum computing: the teleportation technique based on two-qubit
measurements and the state-transfer approach based on single qubit
measurements and incomplete two-qubit measurements. We will analyze the
teleportation model and its relation to the one-way model. We will show
how our calculus can be smoothly extended to cover this case as well as new
models that we introduce in this paper. We get several benefits from our
treatment. We get a workable syntax for handling the dependencies of
operators on previous measurement outcomes just by mimicking the one
obtained in the one-way model. This has never been done before for the
teleportation model. Furthermore, we can use this embedding to obtain a
standardization procedure for the models. Finally these extended calculi
can be compositionally embedded back in the original one-way model. This
clarifies the relation between different measurement-based models and shows
that the one-way model of Raussendorf and Briegel is the canonical one.

This paper develops the one-way model ab initio, but certain concepts
that the reader may be unfamiliar with – qubits, unitaries, measurements,
Pauli operators and the Clifford group – are reviewed in an appendix. These are also
readily accessible through the very thorough book of Nielsen and
Chuang [NC00].

In the next section we define the basic model, followed by its operational
and denotational semantics. For completeness, a simple proof of universality
is given in section 4; this has appeared earlier in the physics
literature [DKP05]. In section 5 we develop the rewrite
theory and prove the fundamental standardization theorem.
section 6 we develop several examples that illustrate the use
of our calculus in designing efficient patterns. In section 7 we
prove some theorems about the expressive power of the calculus in the
absence of adaptive measurements. In section 8 we discuss other
measurement-based models and their compositional embedding to and from the
one-way model. In section 9 we discuss further directions and
some more related work. In the appendix we review basic notions of quantum
mechanics and quantum computation.

We first develop a notation for 1-qubit measurement based computations. The
basic commands one can use in a pattern are:

1-qubit auxiliary preparation \(N_i\)

2-qubit entanglement operators \(E_{ij}\)

1-qubit measurements \(M_i^\alpha\)

and 1-qubit Pauli corrections \(X_i\) and \(Z_i\)

The indices \(i\), \(j\) represent the qubits on which each of these operations
applies, and \(\alpha\) is a parameter in \([0,2\pi]\). Expressions involving angles
are always evaluated modulo \(2\pi\). These types of command will be referred
to as N, E, M and C. Sequences of such commands, together with two
distinguished – possibly overlapping – sets of qubits corresponding to
inputs and outputs, will be called measurement patterns, or simply
patterns. These patterns can be combined by composition and tensor product.
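As an illustration, the command syntax can be rendered as plain data. The Python class names below (`N`, `E`, `M`, `X`, `Z`) and the `s_domain`/`t_domain` fields are our own illustrative encoding, not notation from the paper:

```python
from dataclasses import dataclass

# Illustrative encoding of the five command types; the signal domains are
# the sets of qubits whose measurement outcomes a dependent command reads.
@dataclass(frozen=True)
class N:                       # 1-qubit auxiliary preparation N_i
    i: int

@dataclass(frozen=True)
class E:                       # 2-qubit entanglement E_ij (controlled-Z)
    i: int
    j: int

@dataclass(frozen=True)
class M:                       # dependent measurement t[M_i^alpha]s
    i: int
    alpha: float
    s_domain: frozenset = frozenset()   # X-action signal
    t_domain: frozenset = frozenset()   # Z-action signal

@dataclass(frozen=True)
class X:                       # dependent Pauli correction X_i^s
    i: int
    s_domain: frozenset = frozenset()

@dataclass(frozen=True)
class Z:                       # dependent Pauli correction Z_i^s
    i: int
    s_domain: frozenset = frozenset()
```

A pattern command sequence is then just a list of such values, e.g. `[X(2, frozenset({1})), M(1, 0.0), E(1, 2), N(2)]` for the Hadamard pattern discussed later.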

Importantly, corrections and measurements are allowed to depend on previous
measurement outcomes. We shall prove later that patterns without these
classical dependencies can only realize unitaries that are in the Clifford
group. Thus, dependencies are crucial if one wants to define a universal
computing model; that is to say, a model where all unitaries over
⊗nC2 can be realized. It is also crucial to develop a
notation that will handle these dependencies. This is what we do now.

2.1 Commands

Preparation \(N_i\) prepares qubit \(i\) in state \(|+\rangle_i\). The
entanglement commands are defined as \(E_{ij} := \wedge Z_{ij}\)
(controlled-\(Z\)), while the correction commands are the Pauli operators
\(X_i\) and \(Z_i\).

Measurement \(M_i^\alpha\) is defined by orthogonal projections on

\[
|+_\alpha\rangle := \tfrac{1}{\sqrt{2}}\big(|0\rangle + e^{i\alpha}|1\rangle\big),
\qquad
|-_\alpha\rangle := \tfrac{1}{\sqrt{2}}\big(|0\rangle - e^{i\alpha}|1\rangle\big),
\]

followed by a trace-out operator. The parameter \(\alpha \in [0,2\pi]\) is
called the angle of the measurement. For \(\alpha = 0\) and \(\alpha = \frac{\pi}{2}\)
one obtains the \(X\) and \(Y\) Pauli measurements. Operationally, measurements
will be understood as destructive measurements, consuming their qubit. The
outcome of a measurement done at qubit \(i\) will be denoted by
\(s_i \in \mathbb{Z}_2\). Since one only deals here with patterns where qubits are
measured at most once (see condition (D1) below), this is unambiguous. We
take the specific convention that \(s_i = 0\) if under the corresponding
measurement the state collapses to \(|+_\alpha\rangle\), and \(s_i = 1\) if to \(|-_\alpha\rangle\).

Outcomes can be summed together resulting in expressions of the form
s=∑i∈Isi which we call signals, and where the summation
is understood as being done in Z2. We define the domain of a
signal as the set of qubits on which it depends.

As we have said before, both corrections and measurements may depend on
signals. Dependent corrections will be written \(X_i^s\) and \(Z_i^s\), and
dependent measurements will be written \({}^t[M_i^\alpha]^s\), where
\(s, t \in \mathbb{Z}_2\) and \(\alpha \in [0,2\pi]\). The meaning of dependencies
for corrections is straightforward: \(X_i^0 = Z_i^0 = I\), no correction is
applied, while \(X_i^1 = X_i\) and \(Z_i^1 = Z_i\). In the case of dependent
measurements, the measurement angle will depend on \(s\), \(t\) and \(\alpha\)
as follows:

\[
{}^t[M_i^\alpha]^s := M_i^{(-1)^s \alpha + t\pi} \tag{1}
\]

so that, depending on the parities of \(s\) and \(t\), one may have to modify
\(\alpha\) to one of \(-\alpha\), \(\alpha+\pi\) and \(-\alpha+\pi\). These
modifications correspond to conjugations of measurements under \(X\) and \(Z\):

\[
X_i M_i^\alpha X_i = M_i^{-\alpha} \tag{2}
\]
\[
Z_i M_i^\alpha Z_i = M_i^{\alpha+\pi} \tag{3}
\]

Accordingly, we will refer to them as the \(X\)- and \(Z\)-actions. Note that
these two actions commute, since \(-\alpha+\pi = -\alpha-\pi\) up to \(2\pi\),
and hence the order in which one applies them does not matter.
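Concretely, equation (1) can be evaluated as a one-line function; the helper name `adapted_angle` is ours:

```python
import math

def adapted_angle(alpha: float, s: int, t: int) -> float:
    """Angle actually measured by t[M^alpha]s, per equation (1):
    (-1)^s * alpha + t*pi, reduced modulo 2*pi as all angles are."""
    return ((-1) ** s * alpha + t * math.pi) % (2 * math.pi)

a = 0.7
# s = 1 flips the sign (X-action), t = 1 adds pi (Z-action): together -a + pi
assert math.isclose(adapted_angle(a, 1, 1), (math.pi - a) % (2 * math.pi))
# the two actions commute, as noted above: -a + pi = -(a + pi) mod 2*pi
assert math.isclose(adapted_angle(a, 1, 1), (-(a + math.pi)) % (2 * math.pi))
```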

As we will see later, relations (2) and (3) are key to the
propagation of dependent corrections, and to obtaining patterns in the
standard entanglement, measurement and correction form. Since the measurements
considered here are destructive, the above equations actually simplify
to

\[
M_i^\alpha X_i = M_i^{-\alpha} \tag{4}
\]
\[
M_i^\alpha Z_i = M_i^{\alpha-\pi} \tag{5}
\]

Another point worth noticing is that the domain of the signals of a
dependent command, be it a measurement or a correction, represents the set
of measurements which one has to do before one can determine the actual
value of the command.

We have completed our catalog of basic commands, including dependent ones,
and we turn now to the definition of measurement patterns. For convenient
reference, the language syntax is summarized in Figure 1.

2.2 Patterns

Definition 1

A pattern consists of three finite sets \(V\), \(I\), \(O\), together with two
injective maps \(\iota : I \to V\) and \(o : O \to V\),
and a finite sequence of commands \(A_n \ldots A_1\), read from right to left,
applying to qubits in \(V\) in that order, i.e. \(A_1\) first and \(A_n\) last,
such that:

(D0)

no command depends on an outcome not yet measured;

(D1)

no command acts on a qubit already measured;

(D2)

no command acts on a qubit not yet prepared, unless it is an
input qubit;

(D3)

a qubit i is measured if and only if i is not an output.

The set V is called the pattern computation space, and we write
HV for the associated quantum state space \(\bigotimes_{i \in V} \mathbb{C}^2\). To
ease notation, we will omit the maps ι and o,
and write simply I, O instead of ι(I) and o(O). Note, however,
that these maps are useful to define classical manipulations of the quantum
states, such as permutations of the qubits. The sets I, O are
called respectively the pattern inputs and outputs, and we
write HI, and HO for the associated quantum state spaces.
The sequence An…A1 is called the pattern command
sequence, while the triple (V,I,O) is called the pattern
type.

To run a pattern, one prepares the input qubits in some input state
ψ∈HI, while the non-input qubits are all set to the |+⟩
state, then the commands are executed in sequence, and finally the result
of the pattern computation is read back from outputs as some ϕ∈HO. Clearly, for this procedure to succeed, we had to impose the (D0),
(D1), (D2) and (D3) conditions. Indeed if (D0) fails, then at some point
of the computation, one will want to execute a command which depends on
outcomes that are not known yet. Likewise, if (D1) fails, one will try to
apply a command on a qubit that has been consumed by a measurement (recall
that we use destructive measurements). Similarly, if (D2) fails, one will
try to apply a command on a non-existent qubit. Condition (D3) is there to
make sure that the final state belongs to the output space HO, i.e.,
that all non-output qubits, and only non-output qubits, will have been
consumed by a measurement when the computation ends.

We write (D) for the conjunction of our definiteness conditions (D0),
(D1), (D2) and (D3). Whether a given pattern satisfies (D) or not is
statically verifiable on the pattern command sequence. We could have
imposed a simple type system to enforce these constraints but, in the
interests of notational simplicity, we chose not to do so.

Here is a concrete example:

\(H := (\{1,2\}, \{1\}, \{2\}, X_2^{s_1} M_1^0 E_{12} N_2)\)

with computation space {1,2}, inputs {1}, and outputs
{2}. To run H, one first prepares the first qubit in some
input state ψ, and the second qubit in state |+⟩, then these are
entangled to obtain \(\wedge Z_{12}(\psi_1 \otimes |+\rangle_2)\). Once this is
done, the first qubit is measured in the \(|+\rangle, |-\rangle\) basis. Finally an
X correction is applied on the output qubit, if the measurement outcome
was s1=1. We will do this calculation in detail later, and prove that
this pattern implements the Hadamard operator H.
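This computation can also be checked mechanically. The following toy two-qubit simulation (our own sketch, not code from the paper) runs both unnormalized branches of H and verifies that each, after the dependent correction, is proportional to the Hadamard image of the input:

```python
import math

def run_H(a, b):
    """Run H = X_2^{s1} M_1^0 E_12 N_2 on input a|0> + b|1>, returning the
    two unnormalized output branches (each a length-2 amplitude list)."""
    inv = 1 / math.sqrt(2)
    # N_2: qubit 2 prepared in |+>, tensored with the input on qubit 1
    psi = {(q1, q2): (a if q1 == 0 else b) * inv
           for q1 in (0, 1) for q2 in (0, 1)}
    psi[(1, 1)] *= -1                      # E_12: controlled-Z flips |11>
    branches = []
    for s1 in (0, 1):
        sign = 1 if s1 == 0 else -1        # M_1^0: project on |+> or |->
        out = [inv * (psi[(0, q2)] + sign * psi[(1, q2)]) for q2 in (0, 1)]
        if s1 == 1:                        # X_2^{s1}: dependent correction
            out = [out[1], out[0]]
        branches.append(out)
    return branches

a, b = 0.6, 0.8j
target = [(a + b) / math.sqrt(2), (a - b) / math.sqrt(2)]   # H(a|0> + b|1>)
for out in run_H(a, b):
    # each branch, weighted by 1/sqrt(2), equals the Hadamard image
    assert all(abs(out[k] * math.sqrt(2) - target[k]) < 1e-12 for k in range(2))
```

Both branch maps coincide, so this pattern is in fact strongly deterministic in the sense defined below.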

In general, a given pattern may use auxiliary qubits that are neither
input nor output qubits. Usually one tries to use as few such qubits as
possible, since these contribute to the space complexity of the
computation.

A last thing to note is that one does not require inputs and outputs to be
disjoint subsets of V. This, seemingly innocuous, additional flexibility
is actually quite useful to give parsimonious implementations of
unitaries [DKP05]. While the restriction to disjoint inputs and
outputs is unnecessary, it has been discussed whether imposing it results
in patterns that are easier to realize physically. Recent
work [HEB04, BR05, CAJ05], however, seems to indicate that this is not the
case.

2.3 Pattern combination

We are interested in how one can combine patterns in order to obtain bigger
ones.

The first way to combine patterns is by composing them.
Two patterns \(P_1\) and \(P_2\) may be composed if
\(V_1 \cap V_2 = O_1 = I_2\). Provided that \(P_1\) has as many outputs
as \(P_2\) has inputs, one can always rename the pattern qubits to
make them composable.

Definition 2

The composite pattern \(P_2 P_1\) has computation space
\(V := V_1 \cup V_2\), inputs \(I := I_1\), outputs \(O := O_2\), and a command
sequence obtained by concatenating the command sequences of \(P_2\) and
\(P_1\), the commands of \(P_1\) being executed first.

The other way of combining patterns is to tensor them.
Two patterns \(P_1\) and \(P_2\)
may be tensored if \(V_1 \cap V_2 = \varnothing\).
Again one can always meet this condition by renaming qubits
so that these sets are made disjoint.

Definition 3

The tensor pattern \(P_1 \otimes P_2\) has computation space
\(V := V_1 \cup V_2\), inputs \(I := I_1 \cup I_2\), outputs
\(O := O_1 \cup O_2\), and a command sequence obtained by concatenating the
two command sequences.

In contrast to the composition case, all the unions involved here are
disjoint. Therefore commands from distinct patterns freely commute, since
they apply to disjoint qubits, and when we say that commands have to be
concatenated, this is only for definiteness.
It is routine to verify that the definiteness conditions (D) are preserved
under composition and tensor product.
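On an illustrative encoding of patterns as Python tuples `(V, I, O, commands)` with commands listed in execution order (this representation, and the helper names, are ours), the two combinators look like this:

```python
# Patterns as (V, I, O, commands) tuples: V, I, O are sets of qubit names,
# commands a list in execution order (left to right).
def composable(p1, p2):
    V1, _, O1, _ = p1
    V2, I2, _, _ = p2
    return V1 & V2 == O1 == I2          # side condition V1 ∩ V2 = O1 = I2

def compose(p1, p2):
    """P2 after P1: type (V1 ∪ V2, I1, O2); P1's commands run first."""
    assert composable(p1, p2)
    V1, I1, _, A1 = p1
    V2, _, O2, A2 = p2
    return (V1 | V2, I1, O2, A1 + A2)

def tensor(p1, p2):
    """P1 ⊗ P2 on disjoint spaces; the concatenation order is immaterial
    since commands on disjoint qubits commute."""
    V1, I1, O1, A1 = p1
    V2, I2, O2, A2 = p2
    assert not (V1 & V2)
    return (V1 | V2, I1 | I2, O1 | O2, A1 + A2)
```

For instance, a pattern with outputs {2} composes with one whose inputs are {2}, the glued qubit being shared.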

Before turning to this matter, we need a clean definition
of what it means for a pattern to implement or to realize a unitary operator,
together with a proof that the way one can combine patterns is reflected in
their interpretations. This is key to our proof of universality.

In this section we give a formal operational semantics for the pattern
language as a probabilistic labeled transition system. We define
deterministic patterns and thereafter concentrate on them. We show that
deterministic patterns compose. We give a denotational semantics of
deterministic patterns; from the construction it will be clear that these
two semantics are equivalent.

Besides quantum states, which are non-zero vectors in some Hilbert space
HV, one needs a classical state recording the outcomes of the
successive measurements one does in a pattern. If we let V stand for the
finite set of qubits that are still active (i.e. not yet measured) and W
stand for the set of qubits that have been measured (i.e. they are now
just classical bits recording the measurement outcomes), it is natural to
define the computation state space as:

\[ \mathcal{S} := \sum_{V,W} \mathcal{H}_V \times \mathbb{Z}_2^W. \]

In other words the computation states form a V, W-indexed family of
pairs (q, Γ), where q is a quantum state from HV and Γ is a map from
some W to the outcome space \(\mathbb{Z}_2\). We call this classical component
Γ an outcome map, and denote by ∅ the empty outcome map in
\(\mathbb{Z}_2^\varnothing\). We will treat these states as pairs unless it becomes
important to show how V and W are altered during a computation, as
happens during a measurement.

3.1 Operational semantics

We need some preliminary notation. For any signal \(s\) and classical state
\(\Gamma \in \mathbb{Z}_2^W\) such that the domain of \(s\) is included in \(W\),
we take \(s_\Gamma\) to be the value of \(s\) given by the outcome map
\(\Gamma\). That is to say, if \(s = \sum_I s_i\), then
\(s_\Gamma := \sum_I \Gamma(i)\), where the sum is taken in \(\mathbb{Z}_2\).
Also, if \(\Gamma \in \mathbb{Z}_2^W\) and \(x \in \mathbb{Z}_2\), we define:

\[ \Gamma[x/i](i) = x, \qquad \Gamma[x/i](j) = \Gamma(j) \ \text{for}\ j \neq i, \]

which is a map in \(\mathbb{Z}_2^{W \cup \{i\}}\).
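Since outcome maps are just finite functions from W to Z2, the update Γ[x/i] and signal evaluation are one-liners; the helper names below are ours:

```python
def update(gamma, i, x):
    """Γ[x/i]: extend or overwrite the outcome map at qubit i."""
    g = dict(gamma)
    g[i] = x
    return g

def signal_value(domain, gamma):
    """s_Γ: sum the recorded outcomes over the signal's domain, in Z_2."""
    return sum(gamma[i] for i in domain) % 2

g = update({1: 1}, 2, 1)                 # a map in Z_2^{{1,2}}
assert g == {1: 1, 2: 1}
assert signal_value({1, 2}, g) == 0      # 1 + 1 = 0 in Z_2
```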

We may now view each of our commands as acting on the state space
\(\mathcal{S}\); we have suppressed \(V\) and \(W\) in the first four commands:

\[
\begin{array}{rcl}
q, \Gamma & \xrightarrow{N_i} & q \otimes |+\rangle_i, \Gamma \\
q, \Gamma & \xrightarrow{E_{ij}} & \wedge Z_{ij}\, q, \Gamma \\
q, \Gamma & \xrightarrow{X_i^s} & X_i^{s_\Gamma} q, \Gamma \\
q, \Gamma & \xrightarrow{Z_i^s} & Z_i^{s_\Gamma} q, \Gamma \\
V \cup \{i\}, W, q, \Gamma & \xrightarrow{{}^t[M_i^\alpha]^s} &
  V, W \cup \{i\}, \langle +_{\alpha_\Gamma}|_i\, q, \Gamma[0/i] \\
V \cup \{i\}, W, q, \Gamma & \xrightarrow{{}^t[M_i^\alpha]^s} &
  V, W \cup \{i\}, \langle -_{\alpha_\Gamma}|_i\, q, \Gamma[1/i]
\end{array}
\]

where \(\alpha_\Gamma = (-1)^{s_\Gamma}\alpha + t_\Gamma\pi\), following
equation (1). Note how the measurement moves an index from \(V\) to \(W\);
a qubit, once measured, cannot be measured again.
Suppose \(q \in \mathcal{H}_V\); for the above relations to be defined, one
needs the indices \(i\), \(j\) on which the various commands apply to be in
\(V\). One also
needs Γ to contain the domains of s and t, so that sΓ and
tΓ are well-defined. This will always be the case during the run of a
pattern because of condition (D).

All commands except measurements are deterministic and only modify the
quantum part of the state. The measurement actions on S are not
deterministic, so that these are actually binary relations on S, and
modify both the quantum and classical parts of the state. The usual
convention has it that when one does a measurement the resulting state is
renormalized and the probabilities are associated with the
transition. We do not adhere to this convention here, instead we leave the
states unnormalized. The reason for this choice of convention is that this
way, the probability of reaching a given state can be read off its norm,
and the overall treatment is simpler. As we will show later, all the
patterns implementing unitary operators will have the same probability for
all the branches and hence we will not need to carry these probabilities
explicitly.
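The unnormalized convention is easy to see on a single measured qubit: the squared norms of the two branches are exactly their probabilities. A sketch, with the other qubits elided and the helper name ours:

```python
import math, cmath

def measure(psi, alpha):
    """Destructive M^alpha on one qubit, unnormalized branches (other
    qubits elided); psi maps basis bit -> amplitude. Returns the pair
    (<+_a|psi>, <-_a|psi>), whose squared norms are the branch weights."""
    inv = 1 / math.sqrt(2)
    plus = inv * (psi[0] + cmath.exp(-1j * alpha) * psi[1])
    minus = inv * (psi[0] - cmath.exp(-1j * alpha) * psi[1])
    return plus, minus

# measuring |+> at angle 0 gives outcome 0 with certainty
p, m = measure({0: 1 / math.sqrt(2), 1: 1 / math.sqrt(2)}, 0.0)
assert math.isclose(abs(p) ** 2, 1.0) and abs(m) < 1e-12
```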

We introduce an additional command called signal shifting:

\[ q, \Gamma \;\xrightarrow{S_i^s}\; q, \Gamma[\Gamma(i) + s_\Gamma/i] \]

It consists in shifting the measurement outcome at \(i\) by the amount
\(s_\Gamma\). Note that the \(Z\)-action leaves measurements globally
invariant, in the sense that \(|+_{\alpha+\pi}\rangle = |-_\alpha\rangle\) and
\(|-_{\alpha+\pi}\rangle = |+_\alpha\rangle\).
Thus changing \(\alpha\) to \(\alpha+\pi\) amounts to swapping the outcomes of
the measurement, and one has:

\[
{}^t[M_i^\alpha]^s = S_i^t\; {}^0[M_i^\alpha]^s \tag{6}
\]

and signal shifting allows one to dispose of the \(Z\)-action of a
measurement, sometimes resulting in convenient optimizations of standard
forms.
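The signal-shift action on outcome maps, sketched in Python (helper name ours):

```python
def signal_shift(gamma, i, domain):
    """S_i^s: replace Γ(i) by Γ(i) + s_Γ, computed in Z_2."""
    g = dict(gamma)
    g[i] = (g[i] + sum(g[j] for j in domain)) % 2
    return g

g = signal_shift({1: 1, 2: 0, 3: 0}, 3, {1})
assert g == {1: 1, 2: 0, 3: 1}   # the outcome at qubit 3 is shifted by Γ(1)
```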

3.2 Denotational semantics

Let P be a pattern with computation space V, inputs
I, outputs O and command sequence An…A1.
To execute a
pattern, one starts with some input state q in HI,
together with the empty outcome map ∅.
The input state q is then tensored with as many |+⟩s as there are
non-inputs in V (the N commands), so as to obtain a state in the full
space HV.
Then E, M and C commands in P are applied in sequence from
right to left. We can
summarize the
situation as follows:

\[
q \in \mathcal{H}_I \;\longmapsto\; q \otimes |+\cdots+\rangle \in \mathcal{H}_V
\;\longmapsto\; A_n \cdots A_1\,(q \otimes |+\cdots+\rangle)
\]

If \(m\) is the number of measurements, which is also the number of
non-outputs, then the run may follow \(2^m\) different branches. Each branch
is associated with a unique binary string \(s\) of length \(m\),
representing the classical outcomes of the measurements along that branch,
and a unique branch map \(A_s\) representing the linear
transformation from \(\mathcal{H}_I\) to \(\mathcal{H}_O\) along that branch.
This map is obtained from the operational semantics via the sequence
\((q_i, \Gamma_i)\) with \(1 \le i \le n+1\), such that:

\[
q_1, \Gamma_1 = q \otimes |+\cdots+\rangle, \varnothing, \qquad
q_{n+1} = q' \neq 0,
\]
and, for all \(i \le n\):
\(q_i, \Gamma_i \xrightarrow{A_i} q_{i+1}, \Gamma_{i+1}\).

Definition 4

A pattern \(P\) realizes the map on density matrices given by
\(\rho \mapsto \sum_s A_s\, \rho\, A_s^\dagger\).
We write \([\![P]\!]\) for the map realized by \(P\).

Proposition 5

Each pattern realizes a completely positive trace preserving map.

Proof. Later on we will show that every pattern can be put in a
semantically equivalent form where all the preparations and entanglements
appear first, followed by a sequence of measurements and finally local
Pauli corrections. Hence branch maps decompose as
\(A_s = C_s \Pi_s U\), where \(C_s\) is a unitary map over
\(\mathcal{H}_O\) collecting all corrections on outputs, \(\Pi_s\) is a
projection from \(\mathcal{H}_V\) to \(\mathcal{H}_O\) representing the
particular measurements performed along the branch, and \(U\) is a unitary
embedding from \(\mathcal{H}_I\) to \(\mathcal{H}_V\) collecting the branch
preparations and entanglements. Note that \(U\) is the same on all branches.
Therefore,

\[
\sum_s A_s^\dagger A_s
= \sum_s U^\dagger \Pi_s^\dagger C_s^\dagger C_s \Pi_s U
= \sum_s U^\dagger \Pi_s^\dagger \Pi_s U
= U^\dagger \Big(\sum_s \Pi_s\Big) U
= U^\dagger U = I,
\]

where we have used the facts that \(C_s\) is unitary, \(\Pi_s\) is a
projection and \(U\) is independent of the branches and is also unitary.
Therefore the map \(T(\rho) := \sum_s A_s\, \rho\, A_s^\dagger\) is a
trace-preserving completely-positive map (cptp-map), explicitly given as a
Kraus decomposition. □

Hence the denotational semantics of a pattern is a cptp-map.
In our denotational semantics we view the pattern as defining a map from
the input qubits to the output qubits. We do not explicitly represent
the result of measuring the final qubits; these may be of interest in some
cases. Techniques for dealing with classical output explicitly are given
by Selinger [Sel04b] and Unruh [Unr05].
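As a sanity check of the Kraus condition, both branch maps of the Hadamard pattern are (1/√2)·H (as computed in the short examples of Section 3.3), and the sum of the two terms A_s†A_s is indeed the identity. The inline matrix helpers below are our own:

```python
# Verify sum_s A_s†A_s = I for the two equal branch maps (1/sqrt 2)*H of
# the Hadamard pattern; each term contributes I/2.
def dag(Mx):
    return [[Mx[j][i].conjugate() for j in range(2)] for i in range(2)]

def mul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

half_H = [[0.5 + 0j, 0.5 + 0j], [0.5 + 0j, -0.5 + 0j]]      # branch map A_s
S = mul(dag(half_H), half_H)                                 # A_s†A_s = I/2
total = [[2 * S[i][j] for j in range(2)] for i in range(2)]  # two branches
assert all(abs(total[i][j] - (1 if i == j else 0)) < 1e-12
           for i in range(2) for j in range(2))
```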

Definition 6

A pattern is said to be deterministic if it realizes a
cptp-map that sends pure states to pure states. A pattern is said to be
strongly deterministic when branch maps are equal.

This is equivalent to saying that for a deterministic pattern the branch
maps are proportional, that is to say, for all \(q \in \mathcal{H}_I\) and
all \(s_1, s_2 \in \mathbb{Z}_2^n\), \(A_{s_1}(q)\) and \(A_{s_2}(q)\) differ
only up to a scalar. For a strongly deterministic pattern we have, for all
\(s_1, s_2 \in \mathbb{Z}_2^n\), \(A_{s_1} = A_{s_2}\).

Proposition 7

If a pattern is strongly deterministic, then it realizes a unitary
embedding.

Proof.
Let \(T\) be the map realized by the pattern; since \(T\) is trace
preserving, \(\sum_s A_s^\dagger A_s = I\). Since the pattern is strongly
deterministic, all the branch maps are equal; call their common value
\(A_s\). Define \(A\) to be \(2^{n/2} A_s\); then \(A\) must be a unitary
embedding, because \(A^\dagger A = I\). □

3.3 Short examples

For the rest of the paper we assume that all the non-input qubits are
prepared in the state \(|+\rangle\), and hence for simplicity we omit the
preparation commands \(N_{I^c}\).

First we give a quick example of a deterministic pattern that has branches
with different probabilities. Its type is \(V = \{1,2\}\), \(I = O = \{1\}\),
and its command sequence is \(M_2^\alpha\).
Therefore, starting with input \(q\), one gets two branches:

\[
q \otimes |+\rangle, \varnothing \;\xrightarrow{M_2^\alpha}\;
\begin{cases}
\tfrac{1}{2}\big(1 + e^{-i\alpha}\big)\, q, \; \varnothing[0/2] \\[2pt]
\tfrac{1}{2}\big(1 - e^{-i\alpha}\big)\, q, \; \varnothing[1/2]
\end{cases}
\]

Thus this pattern is indeed deterministic, and implements the identity up
to a global phase, and yet the two branches have respective probabilities
(1+cosα)/2 and (1−cosα)/2, which are not equal in general and
hence this pattern is not strongly deterministic.
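The branch computation above can be replayed numerically. The following pure-Python sketch (the helper name `branch_scalars` is ours) recovers the branch scalars ½(1±e^{−iα}) directly from the measurement basis |±_α⟩ = (|0⟩ ± e^{iα}|1⟩)/√2, and with them the probabilities (1±cos α)/2:

```python
# The M^alpha_2 example: measuring the auxiliary |+> in the basis
# |+_alpha>, |-_alpha> multiplies the input q by a scalar, so both
# branch maps are proportional to the identity (deterministic pattern).
import cmath
import math

s2 = 2 ** -0.5

def branch_scalars(alpha):
    plus = [s2, s2]                           # auxiliary qubit |+>
    b0 = [s2, s2 * cmath.exp(1j * alpha)]     # |+_alpha>
    b1 = [s2, -s2 * cmath.exp(1j * alpha)]    # |-_alpha>
    lam0 = sum(x.conjugate() * y for x, y in zip(b0, plus))
    lam1 = sum(x.conjugate() * y for x, y in zip(b1, plus))
    return lam0, lam1    # (1 + e^{-i alpha})/2 and (1 - e^{-i alpha})/2

alpha = 0.7
l0, l1 = branch_scalars(alpha)
p0, p1 = abs(l0) ** 2, abs(l1) ** 2   # branch probabilities
```

For α = 0 the second branch has probability 0, while for α = π/2 both branches are equally likely; in all cases the probabilities sum to 1 but are not equal in general.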

There is an interesting variation on this first example. The pattern of
interest, call it T, has the same type as above,
with command sequence X^{s_2}_1 M^0_2 E_12. Again,
T is deterministic, but not strongly deterministic: the branches have
different probabilities, as in
the preceding example. Now, however, these probabilities may depend
on the input. The associated transformation is a
cptp-map, T(ρ) := AρA† + BρB† with:

A := ( 1 0 )      B := ( 0 1 )
     ( 0 0 )           ( 0 0 )

One has A†A+B†B=I, so T is indeed a completely positive and
trace-preserving linear map and
T(|ψ⟩⟨ψ|)=⟨ψ,ψ⟩|0⟩⟨0| and clearly for no
unitary U does one have T(ρ):=UρU†.
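This Kraus pair can be checked mechanically. The sketch below (matrix helpers are ours) verifies completeness, A†A + B†B = I, and shows that two distinct pure inputs are sent to the same output, so no unitary conjugation can represent T:

```python
# The Kraus pair of the pattern T: A = |0><0|, B = |0><1|.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def dagger(A):
    return [[A[j][i].conjugate() for j in range(len(A))]
            for i in range(len(A[0]))]

def madd(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[1, 0], [0, 0]]
B = [[0, 1], [0, 0]]

def T(rho):
    return madd(matmul(matmul(A, rho), dagger(A)),
                matmul(matmul(B, rho), dagger(B)))

# Completeness: A†A + B†B = I, so T is trace preserving.
complete = madd(matmul(dagger(A), A), matmul(dagger(B), B))

rho0 = [[1, 0], [0, 0]]   # |0><0|
rho1 = [[0, 0], [0, 1]]   # |1><1|
```

Since T(|0⟩⟨0|) = T(|1⟩⟨1|) = |0⟩⟨0|, two different inputs have the same image, which is impossible for ρ ↦ UρU† with U unitary.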

For our final example, we return to the pattern H, already defined
above. Consider the pattern with the same qubit space
{1,2}, and the same inputs and outputs I={1}, O={2}, as H,
but with a shorter
command sequence, namely M^0_1 E_12.
Starting with input q=(a|0⟩+b|1⟩)|+⟩, one has two computation
branches, branching at M^0_1:

(a|0⟩+b|1⟩)⊗|+⟩, ∅  —E_12→  a|0⟩⊗|+⟩ + b|1⟩⊗|−⟩, ∅
                    —M^0_1→  { ½((a+b)|0⟩+(a−b)|1⟩), ∅[0/1]
                             { ½((a−b)|0⟩+(a+b)|1⟩), ∅[1/1]

and since ∥a+b∥² + ∥a−b∥² = 2(∥a∥²+∥b∥²),
both transitions happen with equal probabilities ½.
Both branches end up with non-proportional outputs, so the pattern is not
deterministic. However, if one applies the local correction X2 on
either of the branches’ ends, both outputs will be made to coincide. If we
choose to let the correction apply to the second branch, we obtain the pattern
H, already defined. We have just proved [[H]]=H,
that is to say, the pattern H realizes the Hadamard operator.
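The whole example can be replayed numerically. The sketch below (our own encoding of U, the projections Π_s, and the correction as matrices) exhibits the two non-proportional branch maps, checks that the dependent X_2 correction makes them coincide, and confirms Proposition 7: √2 times the common branch map is the Hadamard unitary.

```python
# Branch maps of the pattern H = X2^{s1} M1^0 E12, computed as
# A_s = C_s . Pi_s . U, with the matrices written in the basis
# |00>, |01>, |10>, |11> (qubit 1 first).
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def dagger(A):
    return [[A[j][i].conjugate() for j in range(len(A))]
            for i in range(len(A[0]))]

def madd(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

s2 = 2 ** -0.5
# U: q |-> E12(q (x) |+>), a 2 -> 4 isometry (columns are the images
# of |0> and |1>).
U = [[s2, 0], [s2, 0], [0, s2], [0, -s2]]
# Pi_0 = <+| on qubit 1, Pi_1 = <-| on qubit 1, identity on qubit 2.
P0 = [[s2, 0, s2, 0], [0, s2, 0, s2]]
P1 = [[s2, 0, -s2, 0], [0, s2, 0, -s2]]
X = [[0, 1], [1, 0]]

B0 = matmul(P0, U)            # branch 0: no correction needed
B1 = matmul(P1, U)            # branch 1: before the X2 correction
A0, A1 = B0, matmul(X, B1)    # branch maps of H

H = [[s2, s2], [s2, -s2]]
total = madd(matmul(dagger(A0), A0), matmul(dagger(A1), A1))
```

B0 and B1 disagree (the shorter pattern is not deterministic), while A0 = A1 and A0†A0 + A1†A1 = I, so H is strongly deterministic and √2·A0 = H.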

3.4 Compositionality of the Denotational Semantics

With our definitions in place, we will show that the denotational semantics
is compositional.

Theorem 1

For two patterns P1 and P2 we have
[[P2P1]]=[[P2]][[P1]] and
[[P1⊗P2]]=[[P1]]⊗[[P2]].

Proof.
Recall that two patterns P1, P2 may be combined by
composition provided P1 has as many outputs as P2 has
inputs. Suppose
this is the case, and suppose further that P1 and P2
respectively realize some cptp-maps T1 and T2. We need to show that
the composite pattern P2P1 realizes T2T1.

Indeed, the branch diagrams for P1 and P2, each factoring a branch map as
a preparation-and-entanglement embedding U followed by a projection Πs and
a unitary correction Cs (as in Section 3.2),
can be pasted together, since O1=I2, and HO1=HI2.
But then, it is enough to notice 1) that preparation steps p2 in P2
commute with all actions in P1 since they apply on disjoint sets
of qubits, and 2) that no action taken in P2 depends on
the measurements outcomes in P1. It follows that
the pasted diagram describes the same branches as does
the one associated to the composite P2P1.

A similar argument applies to the case of a tensor combination,
and one has that P2⊗P1 realizes T2⊗T1.
□
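At the level of Kraus decompositions, composition is just pairwise multiplication of branch maps: if T1 has Kraus operators {A_s} and T2 has {B_t}, then T2T1 has {B_t A_s}. A quick sketch, reusing the branch maps of the patterns H and T from the examples above (helper names ours):

```python
# Composing cptp-maps via Kraus decompositions: the composite's Kraus
# operators are all pairwise products B_t A_s.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def dagger(A):
    return [[A[j][i].conjugate() for j in range(len(A))]
            for i in range(len(A[0]))]

def madd(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def apply_kraus(kraus, rho):
    out = [[0, 0], [0, 0]]
    for K in kraus:
        out = madd(out, matmul(matmul(K, rho), dagger(K)))
    return out

# Branch maps of the Hadamard pattern H (both equal to H / sqrt(2)).
A0 = [[0.5, 0.5], [0.5, -0.5]]
K1 = [A0, A0]
# Branch maps of the pattern T: |0><0| and |0><1|.
K2 = [[[1, 0], [0, 0]], [[0, 1], [0, 0]]]
# Composite branch maps: pairwise products.
K21 = [matmul(L, K) for L in K2 for K in K1]

rho = [[0.36, 0.48], [0.48, 0.64]]    # a pure test state
lhs = apply_kraus(K21, rho)           # composite pattern in one go
rhs = apply_kraus(K2, apply_kraus(K1, rho))
```

Both computations agree, and the composite is still trace preserving.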

If one wanted to give a categorical treatment3 one can define a
category where the objects are finite sets representing the input and
output qubits and the morphisms are the patterns. This is clearly a
monoidal category with our tensor operation as the monoidal structure. One
can show that the denotational semantics gives a monoidal functor into the
category of superoperators or into any suitably enriched strongly compact
closed category [AC04] or dagger category [Sel05a]. It
would be very interesting to explore exactly what additional categorical
structures are required to interpret the measurement calculus presented
below. Ross Duncan [Dun05] has sketched a polycategorical
presentation of our measurement calculus.

4 Universality

Proposition 8

The patterns J(α) and ∧Z_ij generate all unitary operators over ⊗nC2.

Proof.
First, we have already seen in our example that the pattern H implements
the Hadamard operator H=J(0), so we already know this in the particular
case where α=0. The general case follows by the same kind of computation.4
The case of ∧Z is obvious.
Second, we know that these unitaries form a universal set for
⊗nC2[DKP05]. Therefore, from the preceding
section, we infer that combining the corresponding patterns
will generate patterns realizing any unitary in ⊗nC2.
□

These patterns are indeed among the simplest possible. As a consequence, in
the section devoted to examples, we will find that our implementations
often have lower space complexity than the traditional implementations.

Remarkably, in our set of generators, one finds a single measurement and a
single dependency, which occurs in the correction phase of J(α).
Clearly one needs at least one measurement, since patterns without
measurements can only implement unitaries in the Clifford group. It is also
true that dependencies are needed for universality, but we have to wait for
the development of the measurement calculus in the next section to give a
proof of this fact.

5 The measurement calculus

We turn to the next important matter of the paper, namely standardization.
The idea is quite simple. It is enough to provide local pattern-rewrite
rules pushing Es to the beginning of the pattern and Cs to the end.
The crucial point is to justify using the equations as rewrite rules.

5.1 The equations

The expressions appearing as commands are all linear operators on Hilbert
space. At first glance, the appropriate equality between commands is
equality as operators. For the deterministic commands, the equality that
we consider is indeed equality as operators. This equality implies
equality in the denotational semantics. However, for measurement commands
one needs a stricter definition for equality in order to be able to apply
them as rewriting rules. Essentially we have to take into the account the
effect of different branches that might result from the measurement
process. The precise definition is below.

Definition 9

Consider two patterns P and P′; we define P=P′ if
and only if, for every branch s, we have A^P_s = A^{P′}_s, where
A^P_s and A^{P′}_s are the branch maps A_s defined in
Section 3.2.

The first set of equations gives the means to propagate local Pauli
corrections through the entangling operator Eij.

E_ij X^s_i = X^s_i Z^s_j E_ij    (9)

E_ij X^s_j = X^s_j Z^s_i E_ij    (10)

E_ij Z^s_i = Z^s_i E_ij    (11)

E_ij Z^s_j = Z^s_j E_ij    (12)

These equations are easy to verify and are natural since Eij belongs
to the Clifford group, and therefore maps under conjugation the Pauli group
to itself. Note that, despite the symmetry of the Eij operator
qua operator, we have to consider all the cases, since the rewrite
system defined below does not allow one to rewrite Eij to Eji.
If we did allow this, the rewrite process could loop forever.
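For the non-trivial signal value s=1, equations (9)–(12) are ordinary 4×4 matrix identities, which can be checked mechanically (E_12 is the controlled-Z matrix; the helpers below are ours):

```python
# Operator-level check of equations (9)-(12) with s = 1:
#   E12 X1 = X1 Z2 E12,  E12 X2 = X2 Z1 E12,  and E12 commutes with Z1, Z2.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def kron(A, B):
    # Kronecker product of two 2x2 matrices, qubit 1 as the left factor.
    return [[A[i // 2][j // 2] * B[i % 2][j % 2] for j in range(4)]
            for i in range(4)]

I = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
Z = [[1, 0], [0, -1]]
# E12 = controlled-Z = diag(1, 1, 1, -1)
E = [[1 if i == j else 0 for j in range(4)] for i in range(4)]
E[3][3] = -1

X1, X2 = kron(X, I), kron(I, X)
Z1, Z2 = kron(Z, I), kron(I, Z)

eq9 = matmul(E, X1) == matmul(matmul(X1, Z2), E)
eq10 = matmul(E, X2) == matmul(matmul(X2, Z1), E)
eq11 = matmul(E, Z1) == matmul(Z1, E)
eq12 = matmul(E, Z2) == matmul(Z2, E)
```

All four identities hold exactly over the integers; the s=0 cases are trivial since all operators are then the identity on the correction side.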

A second set of equations allows one to push corrections through
measurements acting on the same qubit. Again there are two cases:

^t[M^α_i]^s X^r_i = ^t[M^α_i]^{s+r}    (13)

^t[M^α_i]^s Z^r_i = ^{t+r}[M^α_i]^s    (14)

These equations follow easily from equations (4) and
(5).
They express the fact that the
measurements Mαi are closed under conjugation by the Pauli group,
very much like equations (9),(10),(11)
and (12) express the fact that the Pauli group is closed under
conjugation by the entanglements Eij.

Define the following convenient abbreviations:

[M^α_i]^s := ^0[M^α_i]^s,   ^t[M^α_i] := ^t[M^α_i]^0,   M^α_i := ^0[M^α_i]^0,   M^x_i := M^0_i,   M^y_i := M^{π/2}_i

Particular cases of the equations above are:

M^x_i X^s_i = M^x_i,   M^y_i X^s_i = [M^y_i]^s = ^s[M^y_i] = M^y_i Z^s_i

The first equation follows from the fact that −0 = 0, so the X action
on M^x_i is trivial; the second holds because
−π/2 is equal to π/2 + π modulo 2π,
and therefore the X and Z actions coincide on M^y_i.
So we obtain the following:

^t[M^x_i]^s = ^t[M^x_i]    (15)

^t[M^y_i]^s = ^{s+t}[M^y_i]    (16)

which we will use later to prove that patterns with measurements
of the form Mx and My may only realize unitaries in the
Clifford group.

5.2 The rewrite rules

We now define a set of rewrite rules, obtained by orienting the equations
above from left to right5:

to which we need to add the free commutation rules,
obtained when commands operate on disjoint sets of qubits:

E_ij A_k⃗ ⇒ A_k⃗ E_ij    where A is not an entanglement
A_k⃗ X^s_i ⇒ X^s_i A_k⃗    where A is not a correction
A_k⃗ Z^s_i ⇒ Z^s_i A_k⃗    where A is not a correction

where k⃗ represents the qubits acted upon by command A, which are
supposed to be distinct from i and j. Clearly these rules
could be reversed, since they hold as equations, but we are orienting them
this way in order to obtain termination.

Condition (D) is easily seen to be preserved under rewriting.

Under rewriting, the computation space, inputs and outputs remain the same,
and so do the entanglement commands. Measurements might be modified, but
there is still the same number of them, and they still act on the same
qubits. The only induced modifications concern local corrections and
dependencies. If there was no dependency at the start, none will be
created in the rewriting process.

In order to obtain rewrite rules, it was essential that the entangling
command (∧Z) belongs to the normalizer of the Pauli group. The point
is that the Pauli operators are the correction operators and they can be
dependent, thus we can commute the entangling commands to the beginning
without inheriting any dependency. Therefore the entanglement resource can
indeed be prepared at the outset of the computation.

5.3 Standardization

Write P⇒P′, respectively P⇒⋆P′, if both
patterns have the same type, and one obtains the command sequence of P′
from the command sequence of P by applying one, respectively any
number, of the rewrite rules of the previous section. We say that P is
standard if there is no P′ with P⇒P′; the procedure
of rewriting a pattern to standard form is called standardization6.

One of the most important results about the rewrite system is that it has
the desirable properties of determinacy (confluence) and termination
(standardization). In other words, we will show that for all P,
there exists a unique standard P′, such that P⇒⋆P′.
It is, of course, crucial that the standardization process leaves the
semantics of patterns invariant. This is the subject of the next simple,
but important, proposition.

Proposition 10

Whenever P⇒⋆P′, [[P]]=[[P′]].

Proof.
It is enough to prove it when P⇒P′. The
first group of rewrites has been proved to be sound in the preceding
subsections, while the free commutation rules are obviously sound. □

We now begin the main proof of this section. First, we prove termination.

Theorem 2 (Termination)

All rewriting sequences beginning with a
pattern P terminate after finitely many steps. For our rewrite
system, this implies that for
all P there exist finitely many P′ such that P⇒⋆P′ where the P′ are standard.

Proof.
Suppose P has command sequence An…A1; so the number of
commands is n. Let e≤n be the number of E commands in P.
As we have noted earlier, this number is invariant under ⇒. Moreover
E commands in P can be ordered by increasing depth, read from
right to left, and this order, written <E, is also invariant, since EE
commutations are forbidden explicitly in the free commutation rules.

Define the following depth function d on E and C commands in P:

d(A_i) = { i      if A_i = E_jk
         { n − i  if A_i = C_j

Define further the sequence dE(P) of length e, where dE(P)(i) is the
depth of the E-command of rank i according to <E. By
construction this sequence is strictly increasing. Finally, we define the
measure m(P):=(dE(P),dC(P)) with:

dC(P)=∑C∈Pd(C)

We claim the measure we just defined
decreases lexicographically under rewriting, in other words P⇒P′ implies m(P)>m(P′), where > is the lexicographic ordering
on N^{e+1}.

To clarify these definitions, consider the following example. Suppose
P’s command sequence is of the form EXZE, then e=2, dE(P)=(1,4), and m(P)=(1,4,3). For the command sequence EEX we get
that e=2, dE(P)=(2,3) and m(P)=(2,3,2). Now, if one
considers the rewrite EEX⇒EXZE, the measure of the left hand side is
(2,3,2), while the measure of the right hand side, as said, is (1,4,3),
and indeed (2,3,2)>(1,4,3). Intuitively the reason is clear: the Cs
are being pushed to the left, thus decreasing the depths of Es, and
concomitantly, the value of dE.
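The measure is easy to implement for command sequences over E and corrections. The sketch below uses our own encoding (a string read left to right, with the rightmost command applied first, so the position index i is 1 at the right) and reproduces the two example values and the lexicographic decrease for EEX ⇒ EXZE:

```python
# The termination measure m(P) = (dE(P), dC(P)) on command sequences.
# Commands: 'E' (entanglement), 'X'/'Z' (corrections); measurements
# contribute nothing to the depth function d.
def measure(seq):
    n = len(seq)
    depths_E, dC = [], 0
    for pos, cmd in enumerate(seq):
        i = n - pos            # position index, 1 at the right
        if cmd == 'E':
            depths_E.append(i)     # d(E at position i) = i
        elif cmd in 'XZ':
            dC += n - i            # d(C at position i) = n - i
    # dE lists E-depths by increasing rank in the order <E.
    return tuple(sorted(depths_E)) + (dC,)

m_before = measure("EEX")    # (2, 3, 2)
m_after = measure("EXZE")    # (1, 4, 3)
```

Python's tuple comparison is lexicographic, so `m_before > m_after` directly witnesses the decrease claimed in the proof.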

Let us now consider all cases starting with an EC rewrite. Suppose the
E command under rewrite has depth d and rank i in the order <E.
Then all Es of smaller rank have the same depth in the right hand side, while
E has now depth d−1 and still rank i. So the right hand side has a
strictly smaller measure. Note that when C=X, because of the creation of
a Z (see the example above), the last element of m(P) may
increase, and for the same reason all elements of index j>i in dE(P) may increase. This is why we are working with a lexicographical
ordering.

Suppose now one does an MC rewrite, then dC(P) strictly
decreases, since one correction is absorbed, while all E commands have
equal or smaller depths. Again the measure strictly decreases.

Next, suppose one does an EA rewrite, and the E command under rewrite
has depth d and rank i. Then it has depth d−1 in the right hand side,
and all other E commands have invariant depths, since we forbade the case
when A is itself an E. It follows that the measure strictly decreases.

Finally, upon an AC rewrite, all E commands have invariant depth,
except possibly one which has smaller depth in the case A=E, and
dC(P) decreases strictly because we forbade the case where A=C.
Again the claim follows.

So all rewrites decrease our ordinal measure, and therefore all sequences
of rewrites are finite, and since the system is finitely branching (there
are no more than n possible single step rewrites on a given sequence of
length n), we get the statement of the theorem.

The final statement of the theorem follows from the fact that we have
finitely many rules so the system is finitely branching. In any
finitely branching rewrite system with the property that every rewrite
sequence terminates, it is clearly true that there can be only finitely many
standard forms.
□

The next theorem establishes the important determinacy property and
furthermore shows that the standard patterns have a certain canonical
form which we call the NEMC form. The precise definition is:

Definition 11

A pattern has a NEMC form if its commands occur in the order of Ns
first, then Es , then Ms, and finally Cs.

We will usually just say “EMC” form: since we can assume that all the
auxiliary qubits are prepared in the |+⟩ state, we usually elide
the N commands.

Theorem 3 (Confluence)

For all P, there exists a unique standard P′, such that P⇒⋆P′, and P′ is in EMC form.

Proof. Since the rewriting system is terminating, confluence
follows from local
confluence7 by
Newman’s lemma, see, for example, [Bar84]. The uniqueness of
the standard form is an immediate consequence.

We look for critical pairs, that is, occurrences of three successive
commands where two rules can be applied simultaneously. One finds that
there are only five types of critical pairs; of these, three involve the
N command, and are of the form NMC, NEC and NEM; the
remaining two are EijMkCk with i, j and k all distinct, and
EijMkCl with k and l distinct.
In all cases local confluence is easily verified.

Suppose now P′ does not satisfy the EMC form conditions. Then,
either there is a pattern EA with A not of type E, or there is a
pattern AC with A not of type C. In the former case, E and A
must operate on overlapping qubits, else one may apply a free commutation
rule, and A may not be a C since in this case one may apply an EC
rewrite. The only remaining case is when A is of type M, overlapping
E’s qubits, but this is what condition (D1) forbids, and since (D1) is
preserved under rewriting, this contradicts the assumption. The latter
case is even simpler. □

We have shown that under rewriting any pattern can be put in EMC form,
which is what we wanted. We actually proved more, namely that the standard
form obtained is unique. However, one has to be a bit careful about the
significance of this additional piece of information. Note first that
uniqueness is obtained because we dropped the CC and EE free
commutations, thus having a rigid notion of command sequence. One cannot
put them back as rewrite rules, since they obviously ruin termination and
uniqueness of standard forms.

A reasonable thing to do, would be to take this set of equations as
generating an equivalence relation on command sequences, call it ≡,
and hope to strengthen the results obtained so far, by proving that all
reachable standard forms are equivalent.

But this is too naive a strategy, since
E_12 X^s_1 X^t_2 ≡ E_12 X^t_2 X^s_1, and:

E_12 X^s_1 X^t_2 ⇒⋆ X^s_1 Z^s_2 X^t_2 Z^t_1 E_12 ≡ X^s_1 Z^t_1 Z^s_2 X^t_2 E_12

obtaining an expression which is not symmetric in 1 and 2. To conclude,
one has to extend ≡ to include the additional equivalence
Xs1Zt1≡Zt1Xs1, which fortunately is sound since these two
operators are equal up to a global phase. Thus, these are all equivalent in
our semantics of patterns. We summarize this discussion as follows.

Definition 12

We define an equivalence relation ≡ on patterns by taking all the
rewrite rules as equations and adding the equation
Xs1Zt1≡Zt1Xs1 and generating the smallest equivalence
relation.

With this definition we can state the following proposition.

Proposition 13

All patterns that are equivalent by ≡ are equal in the denotational semantics.

This ≡ relation preserves both the type (the (V,I,O) triple) and the underlying entanglement graph. So clearly semantic equality does not entail equality up to ≡. In fact, by composing teleportation patterns one obtains infinitely many patterns for the identity which are all different up to ≡. One may wonder whether two patterns with same semantics, type and underlying entanglement graph are necessarily equal up to ≡. This is not true either. One has J(α)J(0)J(β)=J(α+β)=J(β)J(0)J(α) (where J(α) is defined in Section 4), and this readily gives a counter-example.
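The identity J(α)J(0)J(β) = J(α+β) = J(β)J(0)J(α) used in this counter-example is a direct matrix computation, assuming the usual form J(α) = 2^{−1/2} [[1, e^{iα}], [1, −e^{iα}]] of these unitaries (so J(0) is the Hadamard):

```python
# Check J(alpha) J(0) J(beta) = J(alpha + beta) = J(beta) J(0) J(alpha).
import cmath

def J(a):
    s2 = 2 ** -0.5
    return [[s2, s2 * cmath.exp(1j * a)],
            [s2, -s2 * cmath.exp(1j * a)]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

alpha, beta = 0.3, 1.1
lhs = matmul(J(alpha), matmul(J(0), J(beta)))
sym = matmul(J(beta), matmul(J(0), J(alpha)))
rhs = J(alpha + beta)
```

The computation works because J(0)J(β) collapses to the phase map diag(1, e^{iβ}), which then merges its phase into J(α); the same happens with the roles of α and β exchanged.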

Algorithm 1

1. Commute all the preparation commands (the N commands) to the right side.

2. Commute all the correction commands to the left side using the EC and
MC rewriting rules.

3. Commute all the entanglement commands to the right side, after the
preparation commands.

Note that since each qubit can be entangled with at most N−1 other
qubits, and can be measured or corrected only once, we have O(N2)
entanglement commands and O(N) measurement commands. According to the
definiteness condition, no command acts on a qubit not yet prepared, hence
the first step of the above algorithm is based on trivial commuting rules;
the same is true for the last step as no entanglement command can act on a
qubit that has been measured. Both steps can be done in O(N2). The
real complexity of the algorithm comes from the second step and the EX
commuting rule. In the worst case scenario, commuting an X correction to
the left might create O(N2) other Z corrections, each of which has to
be commuted to the left themselves. Thus one can have at most O(N3) new
corrections, each of which has to be commuted past O(N2) measurement or
entanglement commands. Therefore the second step, and hence the algorithm,
has a worst case complexity of O(N5).
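The rewriting procedure can be sketched as a toy term rewriter on abstract command sequences. The encoding below is ours and deliberately simplified: no preparation commands, measurement angles as opaque labels, and signals as frozensets of qubit indices combined mod 2 by symmetric difference. It standardizes a two-step composite X^{s2}_3 M^β_2 E_23 X^{s1}_2 M^α_1 E_12 into EMC form:

```python
# Toy standardizer: commands in left-to-right text order, rightmost
# applied first.  ('E', i, j) entanglement; ('M', q, angle, s, t)
# measurement with signal sets s, t; ('X', q, s) / ('Z', q, s) corrections.
def qubits(c):
    return {c[1], c[2]} if c[0] == 'E' else {c[1]}

def step(seq):
    """Apply one rewrite rule to the first applicable adjacent pair."""
    for k in range(len(seq) - 1):
        L, R = seq[k], seq[k + 1]
        if L[0] == 'E':
            i, j = L[1], L[2]
            if R[0] == 'X' and R[1] in (i, j):          # EX rule (9)/(10)
                other = j if R[1] == i else i
                return seq[:k] + [R, ('Z', other, R[2]), L] + seq[k + 2:]
            if R[0] == 'Z' and R[1] in (i, j):          # EZ rules (11)/(12)
                return seq[:k] + [R, L] + seq[k + 2:]
            if R[0] != 'E' and not (qubits(L) & qubits(R)):
                return seq[:k] + [R, L] + seq[k + 2:]   # free commutation
        if L[0] == 'M':
            q, a, s, t = L[1], L[2], L[3], L[4]
            if R[0] == 'X' and R[1] == q:               # MX rule (13)
                return seq[:k] + [('M', q, a, s ^ R[2], t)] + seq[k + 2:]
            if R[0] == 'Z' and R[1] == q:               # MZ rule (14)
                return seq[:k] + [('M', q, a, s, t ^ R[2])] + seq[k + 2:]
        if L[0] not in 'XZ' and R[0] in 'XZ' and not (qubits(L) & qubits(R)):
            return seq[:k] + [R, L] + seq[k + 2:]       # free commutation
    return None

def standardize(seq):
    while True:
        nxt = step(seq)
        if nxt is None:
            return seq
        seq = nxt

start = [('X', 3, frozenset({2})),
         ('M', 2, 'beta', frozenset(), frozenset()),
         ('E', 2, 3),
         ('X', 2, frozenset({1})),
         ('M', 1, 'alpha', frozenset(), frozenset()),
         ('E', 1, 2)]
out = standardize(start)
```

The result is X^{s2}_3 Z^{s1}_3 [M^β_2]^{s1} M^α_1 E_23 E_12: corrections first in text order, then measurements, then entanglements, exactly the CME text order of the EMC form, with the X correction on qubit 2 absorbed into the measurement's dependency.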

We conclude this subsection by emphasizing the importance of the EMC form.
Since the entanglement can always be done first, we can always derive the
entanglement resource needed for the whole computation right at the
beginning. After that only local operations will be performed. This will
separate the analysis of entanglement resource requirements from the
classical control. Furthermore, this makes it possible to extract the
maximal parallelism for the execution of the pattern, since the necessary
dependencies are explicitly expressed; see the example in
section 6 for further discussion. Finally, the EMC form
provides us with tools to prove general theorems about patterns, such as
the fact that they always compute cptp-maps and the expressiveness theorems
of section 7.

5.4 Signal shifting

One can extend the calculus to include the signal shifting command S^t_i.
This allows one to dispose of dependencies induced by the Z-action, and
sometimes to obtain standard patterns with a smaller computational depth
complexity, as we will see in the next section, which is devoted to examples.

The additional rewrite rules are:

^t[M^α_i]^s ⇒ S^t_i [M^α_i]^s
X^s_j S^t_i ⇒ S^t_i X^{s[(t+s_i)/s_i]}_j
Z^s_j S^t_i ⇒ S^t_i Z^{s[(t+s_i)/s_i]}_j
^t[M^α_j]^s S^r_i ⇒ S^r_i ^{t[(r+s_i)/s_i]}[M^α_j]^{s[(r+s_i)/s_i]}

where s[t/si] denotes the substitution of si with
t in s, s, t being signals. Note that when we write a t
explicitly on the upper left of an M, we mean that t≠0.
The first additional rewrite rule was already introduced
as equation (6), while the other ones merely propagate
the signal shift. Clearly one can dispose of S^t_i when it hits
the end of the pattern command sequence. We will refer to this new set of rules as ⇒S. Note that we always apply the standardization rules first and then signal shifting, hence we do not need any commutation rule for E and S commands.

It is important to note that both Theorems 2 and 3
still hold for this extended rewriting system. In order to prove
termination one can start with the EMC form and then adapt the proof of
Theorem 2 by defining a depth function for a signal shift
similar to the depth of a correction command. As with the correction,
signal shifts can also be commuted to the left hand side of a command
sequence. Now our measure can be modified to account for the new signal
shifting terms and shown to be decreasing under each step of signal
shifting. Confluence can be also proved from local confluence using again
Newman’s Lemma [Bar84]. One typical critical pair is t[Mαj]Ssi where i appears in the domain of signal t and hence the
signal shifting command Ssi will have an effect on the measurement.
Now there are two possible ways to rewrite this pair, first, commute the
signal shifting command and then replace the left signal of the measurement
with its own signal shifting command:

^t[M^α_j] S^s_i ⇒ S^s_i ^{t+s}[M^α_j] ⇒ S^s_i S^{s+t}_j M^α_j

The other way is to first replace
the left signal of the measurement and then commute the signal shifting
command:

^t[M^α_j] S^s_i ⇒ S^t_j M^α_j S^s_i ⇒ S^t_j S^s_i M^α_j

Now one more step of rewriting on
the last equation will give us the same result for both choices.

6 Examples

In this section we develop some examples illustrating pattern
composition, pattern standardization, and signal shifting. We compare our
implementations with the implementations given in the reference
paper [RBB03]. To combine patterns
one needs to rename their qubits as we already noted. We use the
following concrete notation: if P is a pattern over
{1,…,n},
and f is an injection, we write P(f(1),…,f(n))
for the same pattern with qubits renamed according to f. We also
write P2∘P1 for pattern composition, in order to make it
more readable.
Finally we define the computational depth complexity to be the
number of measurement rounds plus one final correction round. More details
on depth complexity, especially on the preparation depth, i.e. the depth of the
entanglement commands, can be found in [BK06].

Teleportation.

Consider the composite pattern J(β)(2,3)∘J(α)(1,2) with
computation space {1,2,3}, inputs {1}, and outputs {3}.
We run our standardization procedure so as to obtain an equivalent standard
pattern: