International Workshop on Quantum LDPC Codes

The subject of this workshop is at the leading edge of current research in Quantum Information, specifically in the field of quantum error correction. Quantum LDPC codes are the only known class of codes in which a finite rate and a finite fault-tolerant error-correction threshold coexist.

This workshop will bring together researchers working on quantum LDPC codes, including the surface codes.

Austin Fowler, University of California, Santa Barbara
Measuring the overhead of a quantum error correcting code

Bob Room

12:30 – 2:30pm

Lunch

Bistro – 1st Floor

2:30 – 3:30pm

Jean-Pierre Tillich, INRIA
Spatially coupled quantum LDPC codes

Bob Room

3:30 – 4:30pm

Closing Discussion and Wrap Up

Bob Room

Hector Bombin, Perimeter Institute

Gauge color codes

I will describe a new class of topological quantum error correcting codes with surprising features. The construction is based on color codes: it preserves their unusual transversality properties but removes important drawbacks. In 3D, the new codes allow the effectively transversal implementation of a universal set of gates by gauge fixing, while error-detecting measurements involve only 4 or 6 qubits. Furthermore, they do not require multiple rounds of error detection to achieve fault-tolerance.

Sergey Bravyi, IBM Watson Research Center

Homological product codes

All examples of quantum LDPC codes known to date suffer from a poor distance scaling, limited by the square root of the code length. This is in sharp contrast with the classical case, where good LDPC codes are known that combine constant encoding rate and linear distance. In this talk I will describe the first family of good quantum "almost LDPC" codes. The new codes have a constant encoding rate, linear distance, and stabilizers acting on at most square root of n qubits, where n is the code length. For comparison, all previously known families of good quantum codes have stabilizers of linear weight. The proof combines two techniques: randomized constructions of good quantum codes and the homological product operation from algebraic topology. We conjecture that similar methods can produce good quantum codes with stabilizer weight n^a for any a>0. Finally, we apply the homological product to construct new small codes with low-weight stabilizers.

Only a few constructions of quantum LDPC codes are known to have an unbounded minimum distance. Most of them are inspired by Kitaev's toric codes, constructed from a tiling of the torus: color codes, which are based on 3-colored tilings of surfaces; hyperbolic codes, which are defined from hyperbolic tilings; and codes based on higher-dimensional manifolds. These constructions are based on tilings of surfaces or manifolds, and their parameters depend on the homology of the tiling.

In the first part of this talk, we recall homological bounds on the parameters of these quantum LDPC codes. In particular, the injectivity radius of the tiling provides a general lower bound on the minimum distance of these quantum LDPC codes.

Then, we extend the injectivity radius method to bound the minimum distance of a family of quantum LDPC codes based on Cayley graphs. Finally, we improve these results by studying a notion of expansion of these Cayley graphs.

This talk is based on joint work with Alain Couvreur and Gilles Zémor, and on joint work with Zhentao Li and Stéphan Thomassé.

Steve Flammia, University of Sydney

Quantum Error Correction for Ising Anyon Systems

We consider two-dimensional lattice models that support Ising anyonic excitations and are coupled to a thermal bath, and we propose a phenomenological model to describe the resulting short-time dynamics, including pair-creation, hopping, braiding, and fusion of anyons. By explicitly constructing topological quantum error-correcting codes for this class of systems, we use our thermalization model to estimate the lifetime of quantum information stored in the code space. To decode and correct errors in these codes, we adapt several existing topological decoders to the non-Abelian setting: one based on Edmonds's perfect matching algorithm and one based on the renormalization group. These decoders provably run in polynomial time, and one of them has a provable threshold against a simple i.i.d. noise model. Using numerical simulations, we find that the error correction thresholds for these codes/decoders are comparable to similar values for the toric code (an Abelian sub-model consisting of a restricted set of allowed anyons). To our knowledge, these are the first threshold results for quantum codes without explicit Pauli algebraic structure. Joint work with Courtney Brell, Simon Burton, Guillaume Dauphinais, and David Poulin, arXiv:1311.0019

Austin Fowler, University of California, Santa Barbara

Measuring the overhead of a quantum error correcting code

If one's goal is large-scale quantum computation, ultimately one wishes to minimize the amount of time, number of qubits, and qubit connectivity required to outperform a classical system, all while assuming some physically reasonable gate error rate. We present two examples of such an overhead study, focusing on the surface code with and without long-range interactions.

Daniel Gottesman, Perimeter Institute

Fault-Tolerant Quantum Computation with Constant Overhead

The threshold theorem for fault tolerance tells us that it is possible to build arbitrarily large reliable quantum computers provided the error rate per physical gate or time step is below some threshold value. Most research on the threshold theorem so far has gone into optimizing the tolerable error rate under various assumptions, with other considerations being secondary. However, for the foreseeable future, the number of qubits may be an even greater restriction than error rates. The overhead, the ratio of physical qubits to logical qubits, determines how expensive (in qubits) a fault-tolerant computation is. Earlier results on fault tolerance used a large overhead which grows (albeit slowly) with the size of the computation. I show that using quantum LDPC codes, it is possible in principle to do fault-tolerant quantum computation with low overhead, and with the overhead constant in the size of the computation.

Alexey Kovalev, University of Nebraska

Spin glass reflection of the decoding transition for space-time codes

We introduce a space-time quantum code construction based on repeating the layers of an arbitrary quantum error correcting code. The error threshold of such a space-time construction is shown to be related to the fault-tolerant error threshold of the original quantum error correcting code in the presence of errors in syndrome measurements. The decoding transition for space-time codes can be further mapped to random-bond Wegner spin models.

Families of quantum low density parity-check (LDPC) codes with a finite decoding threshold lead both to known models (e.g., the random-bond Ising and random-plaquette Z2 gauge models) and to previously unexplored, generally non-local disordered spin models with non-trivial phase diagrams that include a spin glass phase.

We apply this construction to the simplest examples of the recently discovered hypergraph-product codes and, employing Monte Carlo simulations, numerically find a fault-tolerant threshold in excess of 5%.
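
The hypergraph-product construction of Tillich and Zémor, on which these codes are based, turns any pair of classical parity-check matrices into a CSS quantum code. The sketch below is a minimal illustration (the function name and the ordering of the two qubit blocks are choices of this illustration, not taken from the talk); it builds the two check matrices and verifies that they commute:

```python
import numpy as np

def hypergraph_product(H1, H2):
    """Hypergraph product of two classical parity-check matrices over GF(2).

    Returns CSS check matrices (HX, HZ) satisfying HX @ HZ.T = 0 (mod 2).
    Qubits live on two blocks of sizes n1*n2 and m1*m2.
    """
    m1, n1 = H1.shape
    m2, n2 = H2.shape
    HX = np.hstack([np.kron(H1, np.eye(n2, dtype=int)),
                    np.kron(np.eye(m1, dtype=int), H2.T)]) % 2
    HZ = np.hstack([np.kron(np.eye(n1, dtype=int), H2),
                    np.kron(H1.T, np.eye(m2, dtype=int))]) % 2
    return HX, HZ

# Example: the [3,1,3] repetition code as both inputs yields the
# 13-qubit surface code.
H = np.array([[1, 1, 0],
              [0, 1, 1]])
HX, HZ = hypergraph_product(H, H)
assert not (HX @ HZ.T % 2).any()  # CSS condition: X and Z checks commute
print(HX.shape, HZ.shape)          # (6, 13) (6, 13)
```

The CSS condition holds by construction: HX·HZᵀ = H1⊗H2ᵀ + H1⊗H2ᵀ = 0 (mod 2), and the checks are sparse whenever the classical inputs are.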

Andrew Landahl, Sandia National Lab

Quantum computing by color-code lattice surgery

In this talk, I will explain how to use lattice surgery to enact a universal set of fault-tolerant quantum operations with color codes. Along the way, I will also show how to improve existing surface-code lattice-surgery methods. Lattice-surgery methods use fewer qubits and the same time or less than associated defect-braiding methods. Per code distance, color-code lattice surgery uses approximately half the qubits and the same time or less than surface-code lattice surgery. Color-code lattice surgery can also implement the Hadamard and phase gates in a single transversal step, much faster than surface-code lattice surgery can. I will show that against uncorrelated circuit-level depolarizing noise, color-code lattice surgery uses fewer qubits to achieve the same degree of fault-tolerant error suppression as surface-code lattice surgery when the noise rate is low enough and the error suppression demand is high enough.

Hidetoshi Nishimori, Tokyo Institute of Technology

Overview of the theory of spin glasses and its applications to quantum codes

I will review the theory of spin glasses with an emphasis on gauge symmetry. A number of exact results will be derived, some of which are useful for discussing the properties of quantum LDPC codes. I will also explain how the combination of gauge symmetry, the replica method, and a duality argument predicts the precise location of a multicritical point, which is equivalent to the error-tolerance limit of the toric code.

David Poulin, University of Sherbrooke

An introduction to quantum LDPC codes

In this talk, I will cover some basic notions of quantum LDPC codes, focusing on the similarities and distinctions with their classical cousins. Topics will include definitions of stabilizer quantum LDPC codes (CSS and general), iterative decoding algorithms, dual spin models, and obstructions caused by error degeneracy. The talk will be informal and a good occasion to ask questions.

Leonid Pryadko, University of California, Riverside

Maximum likelihood decoding threshold as a phase transition

In maximum likelihood (ML) decoding, we try to find the most likely error given the measured syndrome. While this is hardly ever practical, such a decoder is expected to have the highest threshold. I will discuss the mapping between the ML threshold for an infinite family of stabilizer codes and a phase transition in an associated family of Ising models with bond disorder [1]. This is a generalization of the map between the toric codes and the square-lattice Ising model. Quantum LDPC codes generally produce non-local spin models with few-body interactions. A relatively simple Monte Carlo simulation of such a model can give an upper bound on the decoding threshold for the original code family. This can be used to compare code families irrespective of decoders, and to establish an absolute measure of decoder performance.
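
As a toy illustration of the ML decoding problem described above, the following sketch brute-forces the most likely error consistent with a syndrome under i.i.d. bit-flip noise, for a tiny classical code (for quantum codes one would additionally sum probabilities over degenerate error classes; the `ml_decode` helper is hypothetical, not from the talk):

```python
import itertools

import numpy as np

def ml_decode(H, syndrome, p=0.1):
    """Brute-force maximum-likelihood decoder for a small binary code.

    Enumerates every error pattern consistent with the syndrome and
    returns the most probable one under i.i.d. bit-flip noise of rate p.
    Exponential in the block length -- purely illustrative.
    """
    n = H.shape[1]
    best, best_prob = None, -1.0
    for bits in itertools.product([0, 1], repeat=n):
        e = np.array(bits)
        if ((H @ e) % 2 == syndrome).all():
            w = int(e.sum())                 # error weight
            prob = p**w * (1 - p) ** (n - w)
            if prob > best_prob:
                best, best_prob = e, prob
    return best

H = np.array([[1, 1, 0],
              [0, 1, 1]])          # checks of the 3-bit repetition code
s = (H @ np.array([0, 1, 0])) % 2  # syndrome produced by flipping bit 1
print(ml_decode(H, s))             # [0 1 0]: the most likely consistent error
```

Here two error patterns match the syndrome, (0,1,0) and (1,0,1); for p < 1/2 the lower-weight one is more probable, which is what the decoder returns.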

This talk is divided into two parts. In the first part, I discuss a scheme of fault-tolerant quantum computation for a web-like physical architecture of a quantum computer. Small logical units of a few qubits (realized in ion traps, for example) are linked via a photonic interconnect which provides probabilistic heralded Bell pairs [1]. Two time scales compete in this system, namely the characteristic decoherence time T_D and the typical time T_E it takes to provide a Bell pair. We show that, perhaps unexpectedly, this system can be used for fault-tolerant quantum computation for all values of the ratio T_D/T_E.

The second part of my talk is about something entirely different, namely the role of contextuality in quantum computation by magic state distillation. Recently, Howard et al. [2] have shown that contextuality is a necessary resource for such computation on qudits of odd prime dimension. Here we provide an analogous result for 2-level systems. However, we require them to be rebits. [joint work with Jake Bian, Philippe Guerin and Nicolas Delfosse]

Is scalable quantum error correction realistic? Some projects, thoughts and open questions.

Jean-Pierre Tillich, INRIA

Spatially coupled quantum LDPC codes

Spatially coupled LDPC codes were introduced by Felström and Zigangirov in 1999. They can be viewed in the following way: take several instances of a certain LDPC code family, arrange them in a row, and then mix the edges of the codes randomly among neighboring layers. Moreover, fix the bits of the first and last layers to zero. It was soon found that iterative decoding behaves much better for these codes than for the original LDPC code. A breakthrough occurred when Kudekar, Richardson, and Urbanke proved that these codes attain the capacity of all binary-input memoryless output-symmetric channels.
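
The edge-mixing step described above can be sketched in a few lines. This is a toy illustration only (the `couple` helper and its random keep-or-shift rule are simplifications invented for this sketch, not the construction from the talk): each edge of the base parity-check matrix either stays in its layer or is pushed to the next one, smearing checks across neighboring layers.

```python
import numpy as np

def couple(H, L, rng):
    """Toy spatial coupling of a base parity-check matrix H over L positions.

    Each edge (1-entry) of H placed at layer t is randomly kept there or
    moved to layer t+1, so checks straddle neighboring layers. An extra
    check layer at the end terminates the chain.
    """
    m, n = H.shape
    Hc = np.zeros(((L + 1) * m, L * n), dtype=int)
    for t in range(L):
        for i, j in zip(*np.nonzero(H)):
            shift = rng.integers(0, 2)  # keep edge at layer t, or push to t+1
            Hc[(t + shift) * m + i, t * n + j] = 1
    return Hc

H = np.array([[1, 1, 1, 0],
              [0, 1, 1, 1]])  # small base matrix
rng = np.random.default_rng(0)
Hc = couple(H, L=6, rng=rng)
print(Hc.shape)  # (14, 24): 6 coupled copies plus one terminating check layer
```

Every column of the coupled matrix keeps the column weight of the base matrix; only the rows (checks) get spread over adjacent layers, which is the structural feature behind the improved iterative-decoding behavior.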

All these nice features of classical spatially coupled LDPC codes suggest studying whether they have a quantum analogue. The fact that spatially coupled LDPC codes can afford large degrees and still perform well under iterative decoding would be quite interesting in the quantum setting: by the very nature of the stabilizer construction, the rows of the parity-check matrix of the quantum code have to belong to the code that is decoded by the iterative decoder. This implies that we need rather large row weights to avoid severe error-floor phenomena and/or oscillatory behavior of iterative decoding, which significantly degrades its performance.

With Andriyanova and Maurice, I showed last year that it is possible to come up with coupled versions of quantum LDPC codes that perform excellently under iterative decoding. For instance, we constructed a spatially coupled LDPC code family of rate $\approx \frac{1}{4}$ which performs well under iterative decoding even for noise values close to the hashing bound $p \approx 0.127$. This represents a tremendous improvement over all previously known families of quantum LDPC codes of the same rate. I will discuss in this talk what can be expected from this approach when these spatially coupled LDPC codes are used for performing fault-tolerant computation.

Perimeter Institute, Waterloo, Ontario, July 14 to July 16, 2014

The purpose of the workshop is to bring together researchers working on quantum LDPC codes, including the surface codes. All aspects related to such codes are of interest, including code constructions, syndrome decoding algorithms, bounds on the parameters, fault-tolerant processing, thresholds with different noise models, possible implementations, etc.

What are the quantum LDPC codes and why are they interesting?

Quantum low density parity-check codes are also known as quantum sparse-graph codes. Technically, these are just stabilizer codes, but with stabilizer generators that each involve only a few qubits, a number small compared to the total number of qubits used in the code. Such codes are most often degenerate: some errors have a trivial effect and do not require any correction. Compared to general quantum codes, with a quantum LDPC code each quantum measurement involves fewer qubits, measurements can be done in parallel, and the classical processing could potentially be enormously simplified.

The most famous family of quantum sparse-graph codes is Kitaev's toric construction: it has a relatively high threshold for scalable quantum computation, around 1% total error probability per quantum gate or qubit measurement, and only local gates are needed. One disadvantage is that all toric and related surface codes encode very few qubits (in technical terms, they have asymptotically zero rate); thus they require many redundant physical qubits.
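
The zero-rate claim follows directly from the standard toric-code parameters [[n, k, d]] = [[2L^2, 2, L]] on an L x L torus; a minimal sketch:

```python
# Toric code on an L x L torus: n = 2*L**2 physical qubits, k = 2 logical
# qubits, distance d = L. The rate k/n vanishes as L grows, even though
# the distance (error-correcting power) keeps increasing.
rates = []
for L in [5, 10, 50, 100]:
    n, k, d = 2 * L**2, 2, L
    rates.append(k / n)
    print(f"L={L:3d}: n={n:5d} physical qubits, k={k} logical qubits, "
          f"d={d}, rate={k/n:.5f}")
```

This is the overhead problem the finite-rate codes discussed next are meant to address.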

Finite-rate quantum LDPC codes are also possible. Several large families of such codes are known. With these, fewer redundant qubits may be necessary to build a useful quantum computer.

Quantum sparse-graph codes are commonly called quantum LDPC codes by analogy with the classical low density parity-check codes. These latter codes have fast and efficient (capacity-approaching) decoders. Over the last ten years classical LDPC codes have become a significant component of industrial standards for satellite communications, Wi-Fi, and Gigabit Ethernet, to name a few. The success of classical LDPC codes is the reason for some of the interest in quantum LDPC codes.

Why is this workshop needed now?

Over the last five years, several families of finite-rate quantum LDPC codes with explicitly known parameters have been constructed, and the existence of a finite error-correction threshold established. These are the key milestones indicating that such codes could be a viable option for coherence protection in quantum computers of the future.

However, there is still a lot to learn about quantum LDPC codes. Here are just a few of the questions: Are there general decoding algorithms nearly as good as the belief propagation algorithm which works so well for classical LDPC codes? Are there specific bounds on parameters of quantum LDPC codes? What are the most efficient protocols for fault-tolerant logical operations acting on the encoded qubits?
