As prototypes of quantum processing units (QPUs) mature, it becomes increasingly pressing to design approaches that maximize the performance of noisy devices running implementations of algorithms that can be benchmarked in the near term. Minimizing the runtime of quantum circuits is particularly critical for early QPUs, which are subject to decoherence and lack the resources for error correction. We show that the circuit compilation problem naturally maps to a planning problem similar to those encountered when automating the operations of multiple agents that must cooperate to achieve a goal, and we demonstrate that state-of-the-art Planning and Constraint Programming can effectively address it. We apply our general compilation methods to circuits arising from the Quantum Alternating Operator Ansatz (QAOA), a prominent quantum metaheuristic, for simply structured optimization problems such as MaxCut. Formulating practical discrete optimization problems within the QAOA framework yields circuits that are logically composed of a large number of commuting multi-qubit gates whose execution can be scheduled in combinatorially many ways. Real-world QPUs add architectural constraints: the available elementary gates are laid out on a planar, nearest-neighbor, irregular graph, and each qubit is individually calibrated to operate with a different duration and fidelity. We exhibit efficient low-level compilation of QAOA circuits in this inhomogeneous, under-constrained setting, exemplified by the Rigetti, Google, and IBM chips. We also discuss the general problem of quantum circuit compilation, taking into account additional constraints such as crosstalk and additional algorithmic primitives such as measurement, while optimizing the insertion of swap operations and accounting for the different durations of synthesized logical gates.
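One core subproblem, packing the commuting ZZ gates of a QAOA MaxCut layer into parallel time slots on a hardware coupling graph with per-pair calibrated durations, can be sketched with a simple greedy heuristic. This is a hypothetical illustration, not the planner or constraint models used in this work; swap insertion for non-adjacent pairs is omitted for brevity.

```python
def schedule_commuting_gates(gates, coupling, durations):
    """Greedily pack commuting two-qubit gates into parallel time slots.

    gates:     set of frozensets {q1, q2} (e.g. ZZ gates of a QAOA MaxCut layer)
    coupling:  set of frozensets of hardware-adjacent qubit pairs
    durations: dict mapping each pair to its calibrated gate duration
    Returns (slots, makespan).  Pairs missing from `coupling` would need
    swap insertion and are simply skipped in this sketch.
    """
    remaining = {g for g in gates if g in coupling}
    slots, makespan = [], 0.0
    while remaining:
        busy, slot = set(), []
        # longest-duration-first packing of qubit-disjoint gates
        for g in sorted(remaining, key=lambda g: -durations[g]):
            if not (g & busy):       # gate touches no qubit already in use
                slot.append(g)
                busy |= g
        remaining -= set(slot)
        slots.append(slot)
        makespan += max(durations[g] for g in slot)  # slot ends with its slowest gate
    return slots, makespan
```

On a 4-qubit ring with all four ZZ gates requested, the greedy packer finds two fully parallel slots, and the makespan is the sum of each slot's slowest gate.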

We demonstrate how near-term quantum processing units (QPUs) can be integrated into high-performance computing using applications from quantum chemistry, machine learning, and combinatorial optimization. The eXtreme-scale ACCelerator programming model (XACC) is an open-source framework that supports quantum acceleration of scientific workflows across many different vendor QPUs. We develop the XACC programming model as a coprocessor model akin to the design of OpenCL or CUDA for GPUs, in which the framework offloads computational work by defining quantum kernels for execution on an attached QPU accelerator. We demonstrate an extensible quantum compilation mechanism with general quantum circuit optimization and transformation capabilities. We show how this approach is agnostic to quantum programming language and QPU hardware, and we demonstrate how XACC enables hybrid computing programs to be ported to multiple processors for benchmarking, verification, and validation. Finally, we measure the utility of this programming model by demonstrating a distributed-memory implementation of the variational quantum eigensolver.
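The coprocessor pattern described here can be sketched schematically (hypothetical names, not XACC's actual API): the host defines a quantum kernel as source text, a compiler lowers it to instructions, and an attached accelerator backend executes the result.

```python
# Schematic of the coprocessor offload pattern. All names here are
# illustrative stand-ins, not XACC's real classes or interfaces.

class Accelerator:
    """Stand-in for a vendor QPU backend; here a trivial stub."""
    def execute(self, compiled):
        # A real backend would submit `compiled` to hardware; we just
        # report its size to keep the sketch self-contained.
        return {"shots": 1024, "n_instructions": len(compiled)}

def compile_kernel(kernel_src):
    # A real compiler would parse, optimize, and map to native gates;
    # here we split the source into one instruction per line.
    return [line.strip() for line in kernel_src.strip().splitlines()]

def offload(kernel_src, accelerator):
    """Host-side entry point: lower the kernel, then hand it to the QPU."""
    return accelerator.execute(compile_kernel(kernel_src))

result = offload("H 0\nCNOT 0 1\nMEASURE 0 1", Accelerator())
```

The key design point mirrored here is the separation of concerns: the kernel is hardware-agnostic, and swapping `Accelerator` for a different backend requires no change to the host program.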

Taking advantage of the exponential speedups offered by quantum computers will require new tools to design and optimize quantum algorithms. Here, we describe a framework to develop such tools via an automated approach. Our approach requires minimal input: (i) the task that the quantum algorithm is supposed to perform and (ii) the available resources (e.g., the number of qubits and the maximal depth of the circuit, as well as any circuit constraints that exist in the target quantum hardware). Given the above, our method returns a quantum algorithm that fulfills all the requirements, or indicates that the resources are not sufficient to achieve the specified task. In this talk we will present automatically generated algorithms for (among others) computing entanglement and simulating real-time evolution of quantum many-body systems.
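A minimal toy instance of this input/output contract, assuming the "task" is a target single-qubit unitary and the "resources" are a depth bound over a fixed gate set. Brute-force enumeration stands in for the actual search method, which the abstract does not specify.

```python
import itertools
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
T = np.diag([1, np.exp(1j * np.pi / 4)])
GATES = {"H": H, "T": T}

def find_circuit(target, max_depth):
    """Given a task (a target single-qubit unitary) and a resource bound
    (maximal circuit depth over a fixed gate set), return a gate sequence
    realizing the target up to global phase, or None if the resources
    are insufficient."""
    for depth in range(max_depth + 1):
        for names in itertools.product(GATES, repeat=depth):
            u = np.eye(2)
            for n in names:
                u = GATES[n] @ u          # apply gates left to right
            # equality up to global phase <=> |Tr(U^dag target)| = dim
            if abs(abs(np.trace(u.conj().T @ target)) - 2) < 1e-9:
                return list(names)
    return None
```

For example, the S gate `diag(1, i)` is found as two T gates at depth 2, while a Pauli X is correctly reported as unreachable within depth 1 of this gate set.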

Quantum computing is at an inflection point, where 50-qubit (quantum bit) machines have been built, 100-qubit machines are just around the corner, and even 1000-qubit machines are perhaps only a few years away. These machines have the potential to fundamentally change our concept of what is computable and demonstrate practical applications in areas such as quantum chemistry, optimization, and quantum simulation.

Yet a significant resource gap remains between practical quantum algorithms and near-term machines. Programming, compilation, and control will play a key role in increasing the efficiency of algorithms and machines to close this gap.

I will outline the grand research challenges in closing this gap, including programming language design, software and hardware verification, defining and perforating abstraction boundaries, cross-layer optimization, managing parallelism and communication, mapping and scheduling computations, reducing control complexity, machine-specific optimizations, and many more. I will also describe the resources and infrastructure available for tackling these challenges.

Quantum information processors are terrifically complicated systems. A quantum processor's controllable behavior depends quite sensitively on a large number of externally controllable parameters, and the processor will only function as intended if these parameters are (1) carefully calibrated, and (2) stabilized to avoid drift over time. As these processors' precision and size (number of qubits) grow, calibration and drift control will need to be optimized and automated. In this talk, we introduce fast, parallelizable feedback protocols for tuning up quantum processors and controlling their drift. These protocols rely only on resources that are already present in all modern quantum processors: the ability to run quantum circuits and make measurements. They are suitable for both offline tuning and online drift control.
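As a toy illustration of such a measurement-driven feedback loop (not the protocols of this talk), consider calibrating a pulse amplitude against an unknown hardware bias, assuming deterministic (infinite-shot) measurements and a simple proportional update:

```python
import math

def measure_p1(theta):
    """Simulated device: probability of outcome 1 after one X(theta)
    pulse on |0>, with no shot noise for clarity."""
    return math.sin(theta / 2) ** 2

def calibrate(target_p1=0.5, bias=0.15, gain=1.0, tol=1e-6, max_iters=100):
    """Feedback loop: the controller requests `control`, but the hardware
    actually applies control + bias (an unknown miscalibration).  The loop
    nudges `control` until the measured population hits the target."""
    control = math.pi / 2            # nominal setting for a pi/2 pulse
    for _ in range(max_iters):
        p1 = measure_p1(control + bias)
        if abs(p1 - target_p1) < tol:
            break
        control -= gain * (p1 - target_p1)   # proportional feedback step
    return control, p1
```

Near the operating point the error contracts by roughly half per iteration, so the loop converges well within the iteration budget; a real protocol would additionally contend with shot noise and drift between measurements.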

A central challenge for the scaling of quantum computing systems is the need to control all qubits in the system without a large overhead. A solution to this problem in classical computing comes in the form of so-called crossbar architectures. Recently, we made a proposal for a large-scale quantum processor to be implemented in silicon quantum dots. This system features a crossbar control architecture that limits parallel single-qubit control, but allows the scheme to overcome the control scaling issues that form a major hurdle for large-scale quantum computing systems. In this work, we develop a language that makes it possible to easily map quantum circuits to crossbar systems, taking into account their architecture and control limitations. Using this language, we show how to map well-known quantum error correction codes, such as the planar surface and color codes, in this limited control setting with only a small overhead in time. We analyze the logical error behavior of the surface code mapping for estimated experimental parameters of the crossbar system and conclude that logical error suppression to a level useful for real quantum computation is feasible.

As quantum information processors (QIPs) grow from 2, to 5, to 16 or more qubits, characterizing their behavior rapidly becomes challenging. Techniques commonly used today, such as tomography and randomized benchmarking, are unlikely to scale easily to many qubits while providing useful debugging information. QIP development will require fast, scalable, and accurate techniques that extract useful information about noise affecting QIPs and the errors they are likely to suffer in use. Machine learning tools are a promising alternative to the brute force and/or ad-hoc statistical methods that underlie most existing techniques. Here, we demonstrate a machine learning classifier that distinguishes whether the noise on a single-qubit QIP is stochastic or coherent. The classifier uses data from certain structured circuits, specifically those used for gate set tomography, but does not rely on any of the standard statistical tools for analyzing such data, and can in principle be applied to arbitrary data that contains information about the property of interest.
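A toy version of the distinction being classified, using simulated survival curves and a hand-written rule in place of a trained machine learning model: a coherent over-rotation produces an oscillatory survival curve (it eventually turns back upward), while purely stochastic errors decay monotonically toward 1/2.

```python
import math

def survival_curve(depths, noise):
    """Survival probability of |0> after L repetitions of a noisy gate.

    noise = ("coherent", eps): systematic over-rotation by eps per gate,
            giving P(L) = cos^2(L * eps / 2) -- oscillatory.
    noise = ("stochastic", p): bit flip with probability p per gate,
            giving P(L) = (1 + (1 - 2p)^L) / 2 -- monotone decay to 1/2.
    """
    kind, x = noise
    if kind == "coherent":
        return [math.cos(L * x / 2) ** 2 for L in depths]
    return [0.5 * (1 + (1 - 2 * x) ** L) for L in depths]

def classify(curve):
    """Label the noise by whether the curve ever rises again."""
    rising = any(b > a + 1e-9 for a, b in zip(curve, curve[1:]))
    return "coherent" if rising else "stochastic"
```

This rule only works if the circuits are deep enough for the coherent oscillation to complete a turn, which is one reason the structured, error-amplifying circuits of gate set tomography are useful inputs for such a classifier.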

Quantum information processors have grown rapidly in both size and fidelity. Currently available processors comprise 5, 8, or even 16 qubits, with 1- and 2-qubit gate infidelities below 1%. One of the looming obstacles to successfully running small algorithms or quantum error correction is crosstalk: each qubit may be influenced by the state of its neighbors, or by the operations performed on those neighbors. Crosstalk could ruin any desired computation if not eliminated or mitigated. We have been developing and testing methods to detect, quantify, and characterize crosstalk so that it can be eliminated by device engineering, or mitigated through modeling and adaptation. In this talk we provide a comprehensive taxonomy of crosstalk, and present hardware-agnostic protocols to diagnose and characterize it. Finally, we demonstrate these techniques by applying them to experimental data from superconducting qubit systems, and show that we can characterize signatures of various distinct crosstalk processes.

Quantum error correction (QEC), fundamental to enabling large-scale quantum computing, relies on the assumption that errors throughout a quantum circuit are stochastic and uncorrelated. Correlated errors between sequential logic gates violate this requirement, but are a realistic element of laboratory environments. To facilitate QEC, it is necessary to identify and suppress such errors at both the physical and virtual layers of the quantum processor architecture. We provide an analytic framework to identify correlated errors in randomly composed quantum circuits using only projective measurements at their conclusion. Using a single trapped 171Yb+ ion, we identify signatures of error correlations in the presence of engineered noise with tunable correlation length. To reduce error correlations before the application of QEC, we work at a higher abstraction layer than the physical gates, replacing primitive qubit operations with logically equivalent dynamically corrected gates (DCGs) to form a virtual layer. We demonstrate that even in the presence of strongly correlated noise, the signatures of error correlations at the virtual layer appear similar to those of standard gates exposed to uncorrelated noise, quantitatively extracting a >100x reduction in the correlated error component.

The efficient simulation of correlated quantum systems is the most promising near-term application of quantum computers. Here, we present the calculation of the second Renyi entropy of the ground state of the two-site Fermi-Hubbard model on a five-qubit programmable quantum computer based on trapped ions. Our work illustrates efficient mapping of the electronic system to the qubit Hilbert space, circuit compilation and implementation on a physical quantum computer, optimized use of finite quantum gate depth, extraction of a non-linear characteristic of a quantum state using the controlled-swap gate, and effective reduction of experimental errors by over 40% using a symmetry-based post-selection scheme. We thus demonstrate the first scalable measurement of entanglement on a digital quantum computer, which on larger systems will provide insights into many-body quantum systems that are impossible to simulate on classical computers.

The performance of the surface code (SC) is usually estimated under Pauli errors, as these can be efficiently simulated using the stabilizer formalism. However, Pauli errors do not reflect the actual decoherence mechanisms that physical qubits undergo, which may negatively impact predictions about the code's performance in actual experiments. Here we present a tensor network simulator of the SC subject to arbitrary local physical noise [1], such as relaxation, dephasing, and always-present ZZ interactions. Our simulation includes not only noisy data qubits, but also noisy syndrome qubits, resulting in imperfect parity measurements. The simulator is exact for small surface code distances, and relies on approximate contraction techniques to extend the results to larger patches. We derive logical error rates for the SC implemented in circuit-QED (cQED) architectures.