@Article{Jennings:2015,
Title = {No return to classical reality},
Author = {Jennings, David and Leifer, Matthew},
Journal = {Contemporary Physics},
Eprint = {1501.03202},
Year = {2015},
Pages = {60--82},
Volume = {57},
Number = {1},
Abstract = {At a fundamental level, the classical picture of the world is dead, and has been dead now for almost a century. Pinning down exactly which quantum phenomena are responsible for this has proved to be a tricky and controversial question, but a lot of progress has been made in the past few decades. We now have a range of precise statements showing that whatever the ultimate laws of Nature are, they cannot be classical. In this article, we review results on the fundamental phenomena of quantum theory that cannot be understood in classical terms. We proceed by first granting quite a broad notion of classicality, describe a range of quantum phenomena (such as randomness, discreteness, the indistinguishability of states, measurement-uncertainty, measurement-disturbance, complementarity, noncommutativity, interference, the no-cloning theorem, and the collapse of the wave-packet) that do fall under its liberal scope, and then finally describe some aspects of quantum physics that can never admit a classical understanding -- the intrinsically quantum mechanical aspects of Nature. The most famous of these is Bell's theorem, but we also review two more recent results in this area. Firstly, Hardy's theorem shows that even a finite dimensional quantum system must contain an infinite amount of information, and secondly, the Pusey--Barrett--Rudolph theorem shows that the wave-function must be an objective property of an individual quantum system. Besides being of foundational interest, results of this sort now find surprising practical applications in areas such as quantum information science and the simulation of quantum systems.},
Doi = {10.1080/00107514.2015.1063233},
Url = {http://www.tandfonline.com/doi/full/10.1080/00107514.2015.1063233}
}

@Article{Leifer:2014b,
Title = {Is the quantum state real? An extended review of \(\psi\)-ontology theorems},
Author = {Leifer, M. S.},
Journal = {Quanta},
Eprint = {1409.1570},
Year = {2014},
Pages = {67--155},
Volume = {3},
Number = {1},
Abstract = {Towards the end of 2011, Pusey, Barrett and Rudolph derived a theorem that aimed to show that the quantum state must be ontic (a state of reality) in a broad class of realist approaches to quantum theory. This result attracted a lot of attention and controversy. The aim of this review article is to review the background to the Pusey--Barrett--Rudolph Theorem, to provide a clear presentation of the theorem itself, and to review related work that has appeared since the publication of the Pusey--Barrett--Rudolph paper. In particular, this review: Explains what it means for the quantum state to be ontic or epistemic (a state of knowledge); Reviews arguments for and against an ontic interpretation of the quantum state as they existed prior to the Pusey--Barrett--Rudolph Theorem; Explains why proving the reality of the quantum state is a very strong constraint on realist theories in that it would imply many of the known no-go theorems, such as Bell's Theorem and the need for an exponentially large ontic state space; Provides a comprehensive presentation of the Pusey--Barrett--Rudolph Theorem itself, along with subsequent improvements and criticisms of its assumptions; Reviews two other arguments for the reality of the quantum state: the first due to Hardy and the second due to Colbeck and Renner, and explains why their assumptions are less compelling than those of the Pusey--Barrett--Rudolph Theorem; Reviews subsequent work aimed at ruling out stronger notions of what it means for the quantum state to be epistemic and points out open questions in this area. The overall aim is not only to provide the background needed for the novice in this area to understand the current status, but also to discuss often overlooked subtleties that should be of interest to the experts.},
Doi = {10.12743/quanta.v3i1.22},
Url = {http://quanta.ws/ojs/index.php/quanta/article/view/22}
}

@Article{Leifer:2014,
Title = {\(\psi\)-epistemic models are exponentially bad at explaining the distinguishability of quantum states},
Author = {Leifer, M. S.},
Journal = {Phys. Rev. Lett.},
Eprint = {1401.7996},
Year = {2014},
Pages = {160404},
Volume = {112},
Abstract = {The status of the quantum state is perhaps the most controversial issue in the foundations of quantum theory. Is it an epistemic state (state of knowledge) or an ontic state (state of reality)? In realist models of quantum theory, the epistemic view asserts that nonorthogonal quantum states correspond to overlapping probability measures over the true ontic states. This naturally accounts for a large number of otherwise puzzling quantum phenomena. For example, the indistinguishability of nonorthogonal states is explained by the fact that the ontic state sometimes lies in the overlap region, in which case there is nothing in reality that could distinguish the two states. For this to work, the amount of overlap of the probability measures should be comparable to the indistinguishability of the quantum states. In this letter, I exhibit a family of states for which the ratio of these two quantities must be \(\leq 2d\exp(-cd)\) in Hilbert spaces of dimension \(d\) that are divisible by \(4\). This implies that, for large Hilbert space dimension, the epistemic explanation of indistinguishability becomes implausible at an exponential rate as the Hilbert space dimension increases.},
Doi = {10.1103/PhysRevLett.112.160404},
Url = {http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.112.160404}
}

@Article{Leifer:2014a,
Title = {A Bayesian approach to compatibility, improvement, and pooling of quantum states},
Author = {Leifer, M. S. and Spekkens, R. W.},
Journal = {J. Phys. A},
Eprint = {1110.1085},
Year = {2014},
Pages = {275301},
Volume = {47},
Abstract = {In approaches to quantum theory in which the quantum state is regarded as a representation of knowledge, information, or belief, two agents can assign different states to the same quantum system. This raises two questions: when are such state assignments compatible? And how should the state assignments of different agents be reconciled? In this paper, we address these questions from the perspective of the recently developed conditional states formalism for quantum theory (Leifer M S and Spekkens R W 2013 Phys. Rev. A 88 052310). Specifically, we derive a compatibility criterion proposed by Brun, Finkelstein and Mermin from the requirement that, upon acquiring data, agents should update their states using a quantum generalization of Bayesian conditioning. We provide two alternative arguments for this criterion, based on the objective and subjective Bayesian interpretations of probability theory. We then apply the same methodology to the problem of quantum state improvement, i.e. how to update your state when you learn someone else's state assignment, and to quantum state pooling, i.e. how to combine the state assignments of several agents into a single assignment that accurately represents the views of the group. In particular, we derive a pooling rule previously proposed by Spekkens and Wiseman under much weaker assumptions than those made in the original derivation. All of our results apply to a much broader class of experimental scenarios than have been considered previously in this context.},
Doi = {10.1088/1751-8113/47/27/275301},
Url = {http://iopscience.iop.org/1751-8121/47/27/275301/},
Note = {Selected by the editors for inclusion in the "Highlights of 2014" collection.}
}

@Article{Leifer:2013,
Title = {Towards a Formulation of Quantum Theory as a Causally Neutral Theory of Bayesian Inference},
Author = {Leifer, M. S. and Spekkens, R. W.},
Journal = {Phys. Rev. A},
Year = {2013},
Eprint = {1107.5849},
Number = {5},
Pages = {052130},
Volume = {88},
Abstract = {Quantum theory can be viewed as a generalization of classical probability theory, but the analogy as it has been developed so far is not complete. Whereas the manner in which inferences are made in classical probability theory is independent of the causal relation that holds between the conditioned variable and the conditioning variable, in the conventional quantum formalism, there is a significant difference between how one treats experiments involving two systems at a single time and those involving a single system at two times. In this article, we develop the formalism of quantum conditional states, which provides a unified description of these two sorts of experiment. In addition, concepts that are distinct in the conventional formalism become unified: Channels, sets of states, and positive operator valued measures are all seen to be instances of conditional states; the action of a channel on a state, ensemble averaging, the Born rule, the composition of channels, and nonselective state-update rules are all seen to be instances of belief propagation. Using a quantum generalization of Bayes’ theorem and the associated notion of Bayesian conditioning, we also show that the remote steering of quantum states can be described within our formalism as a mere updating of beliefs about one system given new information about another, and retrodictive inferences can be expressed using the same belief propagation rule as is used for predictive inferences. Finally, we show that previous arguments for interpreting the projection postulate as a quantum generalization of Bayesian conditioning are based on a misleading analogy and that it is best understood as a combination of belief propagation (corresponding to the nonselective state-update map) and conditioning on the measurement outcome.},
Doi = {10.1103/PhysRevA.88.052130},
Url = {http://link.aps.org/doi/10.1103/PhysRevA.88.052130}
}

@Article{Leifer:2012,
Title = {Maximally epistemic interpretations of the quantum state and contextuality},
Author = {Leifer, M. S. and Maroney, O. J. E.},
Journal = {Phys. Rev. Lett.},
Year = {2013},
Eprint = {1208.5132},
Number = {12},
Pages = {120401},
Volume = {110},
Abstract = {We examine the relationship between quantum contextuality (in both the standard Kochen-Specker sense and in the generalised sense proposed by Spekkens) and models of quantum theory in which the quantum state is maximally epistemic. We find that preparation noncontextual models must be maximally epistemic, and these in turn must be Kochen-Specker noncontextual. This implies that the Kochen-Specker theorem is sufficient to establish both the impossibility of maximally epistemic models and the impossibility of preparation noncontextual models. The implication from preparation noncontextual to maximally epistemic then also yields a proof of Bell's theorem from an EPR-like argument.},
Doi = {10.1103/PhysRevLett.110.120401},
Url = {http://link.aps.org/doi/10.1103/PhysRevLett.110.120401}
}

@Article{Barnum:2012,
Title = {Entropy and information causality in general probabilistic theories (addendum)},
Author = {Barnum, H. and Barrett, J. and Clark, L. and Leifer, M. and Spekkens, R. and Stepanik, N. and Wilce, A. and Wilke, R.},
Journal = {New J. Phys.},
Year = {2012},
Pages = {129401},
Volume = {14},
Abstract = {In this addendum to our paper (2010 New J. Phys. 12 033024), we point out that an elementary consequence of the strong subadditivity inequality allows us to strengthen one of the main conclusions of that paper.},
Doi = {10.1088/1367-2630/14/12/129401},
Url = {http://iopscience.iop.org/1367-2630/14/12/129401}
}

@Article{Barnum:2010,
Title = {Entropy and information causality in general probabilistic theories},
Author = {Barnum, H. and Barrett, J. and Clark, L. and Leifer, M. and Spekkens, R. and Stepanik, N. and Wilce, A. and Wilke, R.},
Journal = {New J. Phys.},
Year = {2010},
Eprint = {0909.5075},
Pages = {033024},
Volume = {12},
Abstract = {We investigate the concept of entropy in probabilistic theories more general than quantum mechanics, with particular reference to the notion of information causality (IC) recently proposed by Pawlowski et al (2009 arXiv:0905.2292). We consider two entropic quantities, which we term measurement and mixing entropy. In the context of classical and quantum theory, these coincide, being given by the Shannon and von Neumann entropies, respectively; in general, however, they are very different. In particular, while measurement entropy is easily seen to be concave, mixing entropy need not be. In fact, as we show, mixing entropy is not concave whenever the state space is a non-simplicial polytope. Thus, the condition that measurement and mixing entropies coincide is a strong constraint on possible theories. We call theories with this property monoentropic.
Measurement entropy is subadditive, but not in general strongly subadditive. Equivalently, if we define the mutual information between two systems A and B by the usual formula I(A: B)=H(A)+H(B)-H(AB), where H denotes the measurement entropy and AB is a non-signaling composite of A and B, then it can happen that I(A:BC)<I(A:B). This is relevant to IC in the sense of Pawlowski et al: we show that any monoentropic non-signaling theory in which measurement entropy is strongly subadditive, and also satisfies a version of the Holevo bound, is informationally causal, and on the other hand we observe that Popescu--Rohrlich boxes, which violate IC, also violate strong subadditivity. We also explore the interplay between measurement and mixing entropy and various natural conditions on theories that arise in quantum axiomatics.},
Doi = {10.1088/1367-2630/12/3/033024},
Url = {http://iopscience.iop.org/1367-2630/12/3/033024}
}
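As an editorial aside on the entry above: the equivalence the abstract asserts between failures of strong subadditivity and the possibility of \(I(A{:}BC) < I(A{:}B)\) can be made explicit in one line, using only the definitions quoted in the abstract itself.

```latex
% From the definitions in the abstract:
\begin{align}
  I(A{:}B)  &= H(A) + H(B)  - H(AB), \\
  I(A{:}BC) &= H(A) + H(BC) - H(ABC).
\end{align}
% Subtracting, I(A:BC) >= I(A:B) holds if and only if
\begin{equation}
  H(AB) + H(BC) \;\geq\; H(ABC) + H(B),
\end{equation}
% which is precisely strong subadditivity of H. Hence any violation of
% strong subadditivity permits I(A:BC) < I(A:B), as the abstract
% observes for Popescu--Rohrlich boxes.
```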

@Article{Barrett:2009,
Title = {The de Finetti theorem for test spaces},
Author = {Barrett, Jonathan and Leifer, Matthew},
Journal = {New J. Phys.},
Year = {2009},
Eprint = {0712.2265},
Pages = {033024},
Volume = {11},
Abstract = {We prove a de Finetti theorem for exchangeable sequences of states on test spaces, where a test space is a generalization of the sample space of classical probability theory and the Hilbert space of quantum theory. The standard classical and quantum de Finetti theorems are obtained as special cases. By working in a test space framework, the common features that are responsible for the existence of these theorems are elucidated. In addition, the test space framework is general enough to imply a de Finetti theorem for classical processes. We conclude by discussing the ways in which our assumptions may fail, leading to probabilistic models that do not have a de Finetti theorem.},
Doi = {10.1088/1367-2630/11/3/033024},
Url = {http://www.iop.org/EJ/abstract/1367-2630/11/3/033024}
}

@Article{Leifer:2008,
Title = {Quantum Graphical Models and Belief Propagation},
Author = {Leifer, Matthew and Poulin, David},
Journal = {Ann. Phys.},
Year = {2008},
Eprint = {0708.1337},
Pages = {1899},
Volume = {323},
Abstract = {Belief Propagation algorithms acting on Graphical Models of classical probability distributions, such as Markov Networks, Factor Graphs and Bayesian Networks, are amongst the most powerful known methods for deriving probabilistic inferences amongst large numbers of random variables. This paper presents a generalization of these concepts and methods to the quantum case, based on the idea that quantum theory can be thought of as a noncommutative, operator-valued, generalization of classical probability theory. Some novel characterizations of quantum conditional independence are derived, and definitions of Quantum n-Bifactor Networks, Markov Networks, Factor Graphs and Bayesian Networks are proposed. The structure of Quantum Markov Networks is investigated and some partial characterization results are obtained, along the lines of the Hammersley--Clifford theorem. A Quantum Belief Propagation algorithm is presented and is shown to converge on 1-Bifactor Networks and Markov Networks when the underlying graph is a tree. The use of Quantum Belief Propagation as a heuristic algorithm in cases where it is not known to converge is discussed. Applications to decoding quantum error correcting codes and to the simulation of many-body quantum systems are described.},
Doi = {10.1016/j.aop.2007.10.001},
Url = {http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6WB1-4PWF0SJ-2&_user=10&_rdoc=1&_fmt=&_orig=search&_sort=d&view=c&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=44280eaaf4e3ff64289c16ab390bb43f}
}
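For orientation, the classical sum-product update that the Quantum Belief Propagation algorithm of the entry above generalizes can be written as follows; this is the standard textbook form for a pairwise Markov network, not a formula taken from the paper itself.

```latex
% Classical sum-product message update on a pairwise Markov network,
% with node potentials \psi_i and edge potentials \psi_{ij}:
\begin{equation}
  m_{i \to j}(x_j) \;=\; \sum_{x_i} \psi_i(x_i)\,\psi_{ij}(x_i, x_j)
      \prod_{k \in N(i) \setminus \{j\}} m_{k \to i}(x_i),
\end{equation}
% with marginals estimated from the incoming messages as
\begin{equation}
  b_i(x_i) \;\propto\; \psi_i(x_i) \prod_{k \in N(i)} m_{k \to i}(x_i).
\end{equation}
% On a tree these beliefs are exact, mirroring the convergence
% condition stated in the abstract for the quantum algorithm.
```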

We prove a generalized version of the no-broadcasting theorem, applicable to essentially any nonclassical finite-dimensional probabilistic model satisfying a no-signaling criterion, including ones with ``superquantum'' correlations. A strengthened version of the quantum no-broadcasting theorem follows, and its proof is significantly simpler than existing proofs of the no-broadcasting theorem.

@Article{Leifer:2006,
Title = {Quantum Dynamics as an analog of Conditional Probability},
Author = {Leifer, M. S.},
Journal = {Phys. Rev. A},
Year = {2006},
Eprint = {quant-ph/0606022},
Pages = {042310},
Volume = {74},
Abstract = {Quantum theory can be regarded as a noncommutative generalization of classical probability. From this point of view, one expects quantum dynamics to be analogous to classical conditional probabilities. In this paper, a variant of the well-known isomorphism between completely positive maps and bipartite density operators is derived, which makes this connection much more explicit. This isomorphism is given an operational interpretation in terms of statistical correlations between ensemble preparation procedures and outcomes of measurements. Finally, the isomorphism is applied to elucidate the connection between no-cloning and no-broadcasting theorems and the monogamy of entanglement, and a simplified proof of the no-broadcasting theorem is obtained as a by-product.},
Doi = {10.1103/PhysRevA.74.042310},
Url = {http://link.aps.org/abstract/PRA/v74/e042310}
}
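The "well-known isomorphism" the abstract above refers to is the Choi–Jamiołkowski isomorphism between completely positive maps and bipartite operators; the paper derives a variant of it, but the standard textbook form is the following.

```latex
% Standard Choi--Jamiolkowski isomorphism: a completely positive map
% \mathcal{E} on a d-dimensional system corresponds to the bipartite
% operator obtained by acting with \mathcal{E} on half of a maximally
% entangled state (the paper derives a variant of this form):
\begin{equation}
  \rho_{AB} \;=\; (\mathrm{id}_A \otimes \mathcal{E})
      \bigl(\,|\Phi^+\rangle\langle\Phi^+|\,\bigr),
  \qquad
  |\Phi^+\rangle \;=\; \frac{1}{\sqrt{d}} \sum_{i=1}^{d} |i\rangle|i\rangle.
\end{equation}
% Complete positivity of \mathcal{E} corresponds to positivity of
% \rho_{AB}, and \mathcal{E} is trace preserving iff the reduced
% state satisfies \rho_A = \mathbb{I}/d.
```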

@Article{Barrett:2005,
Title = {Bell's Jump Process in Discrete Time},
Author = {Barrett, Jonathan and Leifer, Matthew and Tumulka, Roderich},
Journal = {Europhys. Lett.},
Year = {2005},
Eprint = {quant-ph/0506066},
Number = {5},
Pages = {685--690},
Volume = {72},
Abstract = {The jump process introduced by J. S. Bell in 1986, for defining a quantum field theory without observers, presupposes that space is discrete whereas time is continuous. In this letter, our interest is to find an analogous process in discrete time. We argue that a genuine analog does not exist, but provide examples of processes in discrete time that could be used as a replacement.},
Doi = {10.1209/epl/i2005-10297-5},
Url = {http://www.iop.org/EJ/abstract/0295-5075/72/5/685/}
}

Many seemingly paradoxical effects are known in the predictions for outcomes of intermediate measurements made on pre- and post-selected quantum systems. Despite appearances, these effects do not demonstrate the impossibility of a noncontextual hidden variable theory, since an explanation in terms of measurement disturbance is possible. Nonetheless, we show that for every paradoxical effect wherein all the pre- and post-selected probabilities are 0 or 1 and the pre- and post-selected states are nonorthogonal, there is an associated proof of the impossibility of a noncontextual hidden variable theory. This proof is obtained by considering all the measurements involved in the paradoxical effect---the preselection, the post-selection, and the alternative possible intermediate measurements---as alternative possible measurements at a single time.

@Article{Leifer:2005,
Title = {Pre- and Post-Selection paradoxes and contextuality in quantum mechanics},
Author = {Leifer, M. S. and Spekkens, R. W.},
Journal = {Phys. Rev. Lett.},
Year = {2005},
Eprint = {quant-ph/0412178},
Pages = {200405},
Volume = {95},
Abstract = {Many seemingly paradoxical effects are known in the predictions for outcomes of intermediate measurements made on pre- and post-selected quantum systems. Despite appearances, these effects do not demonstrate the impossibility of a noncontextual hidden variable theory, since an explanation in terms of measurement disturbance is possible. Nonetheless, we show that for every paradoxical effect wherein all the pre- and post-selected probabilities are 0 or 1 and the pre- and post-selected states are nonorthogonal, there is an associated proof of the impossibility of a noncontextual hidden variable theory. This proof is obtained by considering all the measurements involved in the paradoxical effect---the preselection, the post-selection, and the alternative possible intermediate measurements---as alternative possible measurements at a single time.},
Doi = {10.1103/PhysRevLett.95.200405},
Url = {http://link.aps.org/abstract/PRL/v95/e200405}
}
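For context, the pre- and post-selected probabilities referred to here are given by the standard Aharonov-Bergmann-Lebowitz (ABL) rule; for pre-selection $|\psi\rangle$, post-selection $|\phi\rangle$, and an intermediate projective measurement $\{P_k\}$ it reads (textbook form, not quoted from the paper itself):

```latex
\Pr(k \mid \psi, \phi) = \frac{\left|\langle \phi | P_k | \psi \rangle\right|^{2}}{\sum_{j} \left|\langle \phi | P_j | \psi \rangle\right|^{2}}
```

Roughly, a logical pre- and post-selection paradox arises when every such probability is 0 or 1 across incompatible choices of intermediate measurement, yet the resulting certainties cannot be jointly satisfied.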

We present networks for directly estimating the polynomial invariants of multiparty quantum states under local transformations. The structure of these networks is closely related to the structure of the invariants themselves and this lends a physical interpretation to these otherwise abstract mathematical quantities. Specifically, our networks estimate the invariants under local unitary (LU) transformations and under stochastic local operations and classical communication (SLOCC). Our networks can estimate the LU invariants for multiparty states, where each party can have a Hilbert space of arbitrary dimension and the SLOCC invariants for multiqubit states. We analyze the statistical efficiency of our networks compared to methods based on estimating the state coefficients and calculating the invariants.

@Article{Leifer:2004,
Title = {Measuring Polynomial Invariants of Multi-Party Quantum States},
Author = {Leifer, M. S. and Linden, N. and Winter, A.},
Journal = {Phys. Rev. A},
Year = {2004},
Eprint = {quant-ph/0308008},
Pages = {052304},
Volume = {69},
Abstract = {We present networks for directly estimating the polynomial invariants of multiparty quantum states under local transformations. The structure of these networks is closely related to the structure of the invariants themselves and this lends a physical interpretation to these otherwise abstract mathematical quantities. Specifically, our networks estimate the invariants under local unitary (LU) transformations and under stochastic local operations and classical communication (SLOCC). Our networks can estimate the LU invariants for multiparty states, where each party can have a Hilbert space of arbitrary dimension and the SLOCC invariants for multiqubit states. We analyze the statistical efficiency of our networks compared to methods based on estimating the state coefficients and calculating the invariants.},
Doi = {10.1103/PhysRevA.69.052304},
Url = {http://link.aps.org/abstract/PRA/v69/e052304}
}

We consider how much entanglement can be produced by a nonlocal two-qubit unitary operation, $U_{AB}$---the entangling capacity of $U_{AB}$. For a single application of $U_{AB}$, with no ancillas, we find the entangling capacity and show that it generally helps to act with $U_{AB}$ on an entangled state. Allowing ancillas, we present numerical results from which we can conclude, quite generally, that allowing initial entanglement typically increases the optimal capacity in this case as well. Next, we show that allowing collective processing does not increase the entangling capacity if initial entanglement is allowed.

@Article{Leifer:2003,
Title = {Optimal Entanglement Generation from Quantum Operations},
Author = {Leifer, M. S. and Henderson, L. and Linden, N.},
Journal = {Phys. Rev. A},
Year = {2003},
Eprint = {quant-ph/0205055},
Pages = {012306},
Volume = {67},
Abstract = {We consider how much entanglement can be produced by a nonlocal two-qubit unitary operation, $U_{AB}$---the entangling capacity of $U_{AB}$. For a single application of $U_{AB}$, with no ancillas, we find the entangling capacity and show that it generally helps to act with $U_{AB}$ on an entangled state. Allowing ancillas, we present numerical results from which we can conclude, quite generally, that allowing initial entanglement typically increases the optimal capacity in this case as well. Next, we show that allowing collective processing does not increase the entangling capacity if initial entanglement is allowed.},
Doi = {10.1103/PhysRevA.67.012306},
Url = {http://link.aps.org/abstract/PRA/v67/e012306}
}
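For reference, the entangling capacity discussed in this abstract is, schematically, the largest single-use entanglement gain (my notation; the paper treats several variants, with and without ancillas and initial entanglement):

```latex
E_{C}(U_{AB}) = \max_{|\psi\rangle} \Big[ E\big(U_{AB}\,|\psi\rangle\big) - E\big(|\psi\rangle\big) \Big]
```

where $E$ is the entropy of entanglement and the maximization runs over the allowed class of input states.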

We consider the simulation of the dynamics of one nonlocal Hamiltonian by another, allowing arbitrary local resources but no entanglement or classical communication. We characterize notions of simulation, and proceed to focus on deterministic simulation involving one copy of the system. More specifically, two otherwise isolated systems A and B interact by a nonlocal Hamiltonian $H \neq H_A + H_B$. We consider the achievable space of Hamiltonians $H'$ such that the evolution $e^{-iH't}$ can be simulated by the interaction $H$ interspersed with local operations. For any dimensions of A and B, and any nonlocal Hamiltonians $H$ and $H'$, there exists a scale factor $s$ such that for all times $t$ the evolution $e^{-iH'st}$ can be simulated by $H$ acting for time $t$ interspersed with local operations. For two-qubit Hamiltonians $H$ and $H'$, we calculate the optimal $s$ and give protocols achieving it. The optimal protocols do not require local ancillas, and can be understood geometrically in terms of a polyhedron defined by a partial order on the set of two-qubit Hamiltonians.

@Article{Bennett:2002,
Title = {Optimal simulation of two-qubit Hamiltonians using general local operations},
Author = {Bennett, C. H. and Cirac, J. I. and Leifer, M. S. and Leung, D. W. and Linden, N. and Popescu, S. and Vidal, G.},
Journal = {Phys. Rev. A},
Year = {2002},
Eprint = {quant-ph/0107035},
Pages = {012305},
Volume = {66},
Abstract = {We consider the simulation of the dynamics of one nonlocal Hamiltonian by another, allowing arbitrary local resources but no entanglement or classical communication. We characterize notions of simulation, and proceed to focus on deterministic simulation involving one copy of the system. More specifically, two otherwise isolated systems A and B interact by a nonlocal Hamiltonian $H \neq H_A + H_B$. We consider the achievable space of Hamiltonians $H'$ such that the evolution $e^{-iH't}$ can be simulated by the interaction $H$ interspersed with local operations. For any dimensions of A and B, and any nonlocal Hamiltonians $H$ and $H'$, there exists a scale factor $s$ such that for all times $t$ the evolution $e^{-iH'st}$ can be simulated by $H$ acting for time $t$ interspersed with local operations. For two-qubit Hamiltonians $H$ and $H'$, we calculate the optimal $s$ and give protocols achieving it. The optimal protocols do not require local ancillas, and can be understood geometrically in terms of a polyhedron defined by a partial order on the set of two-qubit Hamiltonians.},
Doi = {10.1103/PhysRevA.66.012305},
Url = {http://link.aps.org/abstract/PRA/v66/e012305}
}
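The simulation protocol described in this abstract has the schematic form of local unitaries interspersed with slices of the given interaction (a sketch of the general idea, not the paper's exact protocol):

```latex
e^{-iH's t} = \lim_{n \to \infty} \prod_{j=1}^{n} \left( u_j \otimes v_j \right) e^{-iH\,t/n} \left( u_j \otimes v_j \right)^{\dagger}
```

where the $u_j$, $v_j$ act locally on A and B respectively, and $s$ is the scale factor whose optimal value the paper computes for two-qubit Hamiltonians.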

If a quantum system is prepared and later post-selected in certain states, "paradoxical" predictions for intermediate measurements can be obtained. This is the case both when the intermediate measurement is strong, i.e. a projective measurement with the Lüders-von Neumann update rule, and when it is weak, in which case the paradoxes show up in anomalous weak values. Leifer and Spekkens [quant-ph/0412178] identified a striking class of such paradoxes, known as logical pre- and post-selection paradoxes, and showed that they are indirectly connected with contextuality. By analysing the measurement-disturbance required in models of these phenomena, we find that the strong measurement version of logical pre- and post-selection paradoxes actually constitutes a direct manifestation of quantum contextuality. The proof hinges on under-appreciated features of the paradoxes. In particular, we show by example that it is not possible to prove contextuality without Lüders-von Neumann updates for the intermediate measurements, nonorthogonal pre- and post-selection, and 0/1 probabilities for the intermediate measurements. Since one of us has recently shown that anomalous weak values are also a direct manifestation of contextuality [arXiv:1409.1535], we now know that this is true for both realizations of logical pre- and post-selection paradoxes.

@InProceedings{Pusey:2015,
Title = {Logical pre- and post-selection paradoxes are proofs of contextuality},
Author = {Pusey, Matthew F. and Leifer, Matthew S.},
Booktitle = {Proceedings 12th International Workshop on Quantum Physics and Logic},
Year = {2015},
Editor = {Heunen, C. and Selinger, P. and Vicary, J.},
Eprint = {1506.07850},
Pages = {295--306},
Series = {Electronic Proceedings in Theoretical Computer Science},
Volume = {195},
Abstract = {If a quantum system is prepared and later post-selected in certain states, "paradoxical" predictions for intermediate measurements can be obtained. This is the case both when the intermediate measurement is strong, i.e. a projective measurement with the L{\"u}ders-von Neumann update rule, and when it is weak, in which case the paradoxes show up in anomalous weak values. Leifer and Spekkens [quant-ph/0412178] identified a striking class of such paradoxes, known as logical pre- and post-selection paradoxes, and showed that they are indirectly connected with contextuality. By analysing the measurement-disturbance required in models of these phenomena, we find that the strong measurement version of logical pre- and post-selection paradoxes actually constitutes a direct manifestation of quantum contextuality. The proof hinges on under-appreciated features of the paradoxes. In particular, we show by example that it is not possible to prove contextuality without L{\"u}ders-von Neumann updates for the intermediate measurements, nonorthogonal pre- and post-selection, and 0/1 probabilities for the intermediate measurements. Since one of us has recently shown that anomalous weak values are also a direct manifestation of contextuality [arXiv:1409.1535], we now know that this is true for both realizations of logical pre- and post-selection paradoxes.},
Url = {http://arxiv.org/html/1511.01181v1}
}

In a previous paper, we showed that many important quantum information-theoretic phenomena, including the no-cloning and no-broadcasting theorems, are in fact generic in all non-classical probabilistic theories. An exception is teleportation, which most such theories do not support. In this paper, we investigate which probabilistic theories, and more particularly, which composite systems, do support a teleportation protocol. We isolate a natural class of composite systems that we term regular, and establish necessary and sufficient conditions for a regular tripartite system to support a conclusive, or post-selected, teleportation protocol. We also establish a sufficient condition for deterministic teleportation that yields a large supply of theories, neither classical nor quantum, that support such a protocol.

@InProceedings{Barnum:2012a,
Title = {Teleportation in General Probabilistic Theories},
Author = {Barnum, Howard and Barrett, Jonathan and Leifer, Matthew and Wilce, Alexander},
Booktitle = {Mathematical Foundations of Information Flow (Proceedings of the Clifford Lectures 2008)},
Year = {2012},
Editor = {Abramsky, S. and Mislove, M.},
Eprint = {0805.3553},
Pages = {25--47},
Publisher = {American Mathematical Society},
Series = {Proceedings of Symposia in Applied Mathematics},
Volume = {71},
Abstract = {In a previous paper, we showed that many important quantum information-theoretic phenomena, including the no-cloning and no-broadcasting theorems, are in fact generic in all non-classical probabilistic theories. An exception is teleportation, which most such theories do not support. In this paper, we investigate which probabilistic theories, and more particularly, which composite systems, {\em do} support a teleportation protocol. We isolate a natural class of composite systems that we term {\em regular}, and establish necessary and sufficient conditions for a regular tripartite system to support a conclusive, or post-selected, teleportation protocol. We also establish a sufficient condition for deterministic teleportation that yields a large supply of theories, neither classical nor quantum, that support such a protocol.},
Url = {http://www.ams.org/bookstore?fn=20&arg1=psapmseries&ikey=PSAPM-71}
}

We investigate the existence of secure bit commitment protocols in the convex framework for probabilistic theories. The theory makes only minimal assumptions, and can be used to formalize quantum theory, classical probability theory, and a host of other possibilities. We prove that in all such theories that are locally non-classical but do not have entanglement, there exists a bit commitment protocol that is exponentially secure in the number of systems used.

@InProceedings{Barnum:2008,
Title = {Nonclassicality without entanglement enables bit commitment},
Author = {Barnum, Howard and Dahlsten, Oscar C. O. and Leifer, Matthew and Toner, Ben},
Booktitle = {Proceedings of IEEE Information Theory Workshop, 2008},
Year = {2008},
Eprint = {0803.1264},
Pages = {386--390},
Abstract = {We investigate the existence of secure bit commitment protocols in the convex framework for probabilistic theories. The theory makes only minimal assumptions, and can be used to formalize quantum theory, classical probability theory, and a host of other possibilities. We prove that in all such theories that are locally non-classical but do not have entanglement, there exists a bit commitment protocol that is exponentially secure in the number of systems used.},
Doi = {10.1109/ITW.2008.4578692},
Url = {http://ieeexplore.ieee.org/search/wrapper.jsp?arnumber=4578692}
}

Assuming that quantum states, including pure states, represent subjective degrees of belief rather than objective properties of systems, the question of what other elements of the quantum formalism must also be taken as subjective is addressed. In particular, we ask this of the dynamical aspects of the formalism, such as Hamiltonians and unitary operators. Whilst some operations, such as the update maps corresponding to a complete projective measurement, must be subjective, the situation is not so clear in other cases. Here, it is argued that all trace preserving completely positive maps, including unitary operators, should be regarded as subjective, in the same sense as a classical conditional probability distribution. The argument is based on a reworking of the Choi-Jamiolkowski isomorphism in terms of "conditional" density operators and trace preserving completely positive maps, which mimics the relationship between conditional probabilities and stochastic maps in classical probability.

@InProceedings{Leifer:2007,
Title = {Conditional Density Operators and the Subjectivity of Quantum Operations},
Author = {Leifer, M. S.},
Booktitle = {Foundations of Probability and Physics-4},
Year = {2007},
Editor = {Adenier, G. and Fuchs, C. A. and Khrennikov, A. Yu.},
Eprint = {quant-ph/0611233},
Pages = {172--186},
Publisher = {AIP},
Series = {AIP Conference Proceedings},
Volume = {889},
Abstract = {Assuming that quantum states, including pure states, represent subjective degrees of belief rather than objective properties of systems, the question of what other elements of the quantum formalism must also be taken as subjective is addressed. In particular, we ask this of the dynamical aspects of the formalism, such as Hamiltonians and unitary operators. Whilst some operations, such as the update maps corresponding to a complete projective measurement, must be subjective, the situation is not so clear in other cases. Here, it is argued that all trace preserving completely positive maps, including unitary operators, should be regarded as subjective, in the same sense as a classical conditional probability distribution. The argument is based on a reworking of the Choi-Jamiolkowski isomorphism in terms of ``conditional'' density operators and trace preserving completely positive maps, which mimics the relationship between conditional probabilities and stochastic maps in classical probability.},
Doi = {10.1063/1.2713456},
Url = {http://link.aip.org/link/?APCPCS/889/172/1}
}
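The conditional density operator at the heart of this construction can be written, in analogy with the classical relation $P(B|A) = P(A,B)/P(A)$, as (one common form; the paper's conventions may differ slightly):

```latex
\rho_{B|A} = \left( \rho_A^{-1/2} \otimes I_B \right) \rho_{AB} \left( \rho_A^{-1/2} \otimes I_B \right)
```

where $\rho_A$ is the reduced state of system A and the inverse is taken on its support.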

Preprints

Huw Price has proposed an argument that suggests a time-symmetric ontology for quantum theory must necessarily be retrocausal, i.e. it must involve influences that travel backwards in time. One of Price's assumptions is that the quantum state is a state of reality. However, one of the reasons for exploring retrocausality is that it offers the potential for evading the consequences of no-go theorems, including recent proofs of the reality of the quantum state. Here, we show that this assumption can be replaced by a different assumption, called lambda-mediation, that plausibly holds independently of the status of the quantum state. We also reformulate the other assumptions behind the argument to place them in a more general framework and pin down the notion of time symmetry involved more precisely. We show that our assumptions imply a timelike analogue of Bell's local causality criterion and, in doing so, give a new interpretation of timelike violations of Bell inequalities. Namely, they show the impossibility of a (non-retrocausal) time-symmetric ontology.

@Unpublished{Leifer:2016,
Title = {Is a time symmetric interpretation of quantum theory possible without retrocausality?},
Author = {Leifer, Matthew S. and Pusey, Matthew F.},
Eprint = {1607.07871},
Year = {2016},
Abstract = {Huw Price has proposed an argument that suggests a time-symmetric ontology for quantum theory must necessarily be retrocausal, i.e. it must involve influences that travel backwards in time. One of Price's assumptions is that the quantum state is a state of reality. However, one of the reasons for exploring retrocausality is that it offers the potential for evading the consequences of no-go theorems, including recent proofs of the reality of the quantum state. Here, we show that this assumption can be replaced by a different assumption, called lambda-mediation, that plausibly holds independently of the status of the quantum state. We also reformulate the other assumptions behind the argument to place them in a more general framework and pin down the notion of time symmetry involved more precisely. We show that our assumptions imply a timelike analogue of Bell's local causality criterion and, in doing so, give a new interpretation of timelike violations of Bell inequalities. Namely, they show the impossibility of a (non-retrocausal) time-symmetric ontology.}
}

"Protective measurement" refers to two related schemes for finding the expectation value of an observable without disturbing the state of a quantum system, given a single copy of the system that is subject to a "protecting" operation. There have been several claims that these schemes support interpreting the quantum state as an objective property of a single quantum system. Here we provide three counter-arguments, each of which we present in two versions tailored to the two different schemes. Our first argument shows that the same resources used in protective measurement can be used to reconstruct the quantum state in a different way via process tomography. Our second argument is based on exact analyses of special cases of protective measurement, and our final argument is to construct explicit "psi-epistemic" toy models for protective measurement, which strongly suggest that protective measurement does not imply the reality of the quantum state. The common theme of the three arguments is that almost all of the information comes from the "protection" operation rather than the quantum state of the system, and hence the schemes have no implications for the reality of the quantum state.

@Unpublished{Combes:2015,
Title = {Why protective measurement does not establish the reality of the quantum state},
Author = {Combes, Joshua and Ferrie, Christopher and Leifer, Matthew S. and Pusey, Matthew F.},
Eprint = {1509.08893},
Year = {2015},
Abstract = {"Protective measurement" refers to two related schemes for finding the expectation value of an observable without disturbing the state of a quantum system, given a single copy of the system that is subject to a "protecting" operation. There have been several claims that these schemes support interpreting the quantum state as an objective property of a single quantum system. Here we provide three counter-arguments, each of which we present in two versions tailored to the two different schemes. Our first argument shows that the same resources used in protective measurement can be used to reconstruct the quantum state in a different way via process tomography. Our second argument is based on exact analyses of special cases of protective measurement, and our final argument is to construct explicit "psi-epistemic" toy models for protective measurement, which strongly suggest that protective measurement does not imply the reality of the quantum state. The common theme of the three arguments is that almost all of the information comes from the "protection" operation rather than the quantum state of the system, and hence the schemes have no implications for the reality of the quantum state.}
}

Plausibility measures are structures for reasoning in the face of uncertainty that generalize probabilities, unifying them with weaker structures like possibility measures and comparative probability relations. So far, the theory of plausibility measures has only been developed for classical sample spaces. In this paper, we generalize the theory to test spaces, so that they can be applied to general operational theories, and to quantum theory in particular. Our main results are two theorems on when a plausibility measure agrees with a probability measure, i.e. when its comparative relations coincide with those of a probability measure. For strictly finite test spaces we obtain a precise analogue of the classical result that the Archimedean condition is necessary and sufficient for agreement between a plausibility and a probability measure. In the locally finite case, we prove a slightly weaker result that the Archimedean condition implies almost agreement.

@Unpublished{Fritz:2015,
Title = {Plausibility measures on test spaces},
Author = {Fritz, Tobias and Leifer, Matthew},
Eprint = {1505.01151},
Year = {2015},
Abstract = {Plausibility measures are structures for reasoning in the face of uncertainty that generalize probabilities, unifying them with weaker structures like possibility measures and comparative probability relations. So far, the theory of plausibility measures has only been developed for classical sample spaces. In this paper, we generalize the theory to test spaces, so that they can be applied to general operational theories, and to quantum theory in particular. Our main results are two theorems on when a plausibility measure agrees with a probability measure, i.e. when its comparative relations coincide with those of a probability measure. For strictly finite test spaces we obtain a precise analogue of the classical result that the Archimedean condition is necessary and sufficient for agreement between a plausibility and a probability measure. In the locally finite case, we prove a slightly weaker result that the Archimedean condition implies almost agreement.}
}
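For concreteness, "agreement" here means that the comparative orderings coincide: a plausibility measure $\mathrm{Pl}$ agrees with a probability measure $\mu$ when (paraphrasing the standard definition):

```latex
\mathrm{Pl}(A) \le \mathrm{Pl}(B) \iff \mu(A) \le \mu(B) \quad \text{for all events } A, B
```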

We prove generic versions of the no-cloning and no-broadcasting theorems, applicable to essentially any non-classical finite-dimensional probabilistic model that satisfies a no-signaling criterion. This includes quantum theory as well as models supporting "super-quantum" correlations that violate the Bell inequalities to a larger extent than quantum theory. The proof of our no-broadcasting theorem is significantly more natural and more self-contained than others we have seen: we show that a set of states is broadcastable if, and only if, it is contained in a simplex whose vertices are cloneable, and therefore distinguishable by a single measurement. This necessary and sufficient condition generalizes the quantum requirement that a broadcastable set of states commute.

@Unpublished{Barnum:2006,
Title = {Cloning and Broadcasting in Generic Probabilistic Theories},
Author = {Barnum, Howard and Barrett, Jonathan and Leifer, Matthew and Wilce, Alexander},
Eprint = {quant-ph/0611295},
Year = {2006},
Abstract = {We prove generic versions of the no-cloning and no-broadcasting theorems, applicable to essentially {\em any} non-classical finite-dimensional probabilistic model that satisfies a no-signaling criterion. This includes quantum theory as well as models supporting ``super-quantum'' correlations that violate the Bell inequalities to a larger extent than quantum theory. The proof of our no-broadcasting theorem is significantly more natural and more self-contained than others we have seen: we show that a set of states is broadcastable if, and only if, it is contained in a simplex whose vertices are cloneable, and therefore distinguishable by a single measurement. This necessary and sufficient condition generalizes the quantum requirement that a broadcastable set of states commute.}
}
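The necessary and sufficient condition stated in this abstract can be summarized schematically as:

```latex
S \text{ is broadcastable} \iff S \subseteq \mathrm{conv}\{\omega_1, \ldots, \omega_n\}
```

where the vertex states $\omega_i$ are cloneable, and hence jointly distinguishable by a single measurement; in quantum theory this recovers the requirement that the states in $S$ pairwise commute.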

In the past few years it has been shown that universal quantum computation can be obtained by projective measurements alone, with no need for unitary gates. This suggests that the underlying logic of quantum computing may be an algebra of sequences of quantum measurements rather than an algebra of products of unitary operators. Such a Sequential Quantum Logic (SQL) was developed in the late 1970s and has more recently been applied to the consistent histories framework of quantum mechanics as a possible route to the theory of quantum gravity. In this letter, I give a method for deciding the truth of a proposition in SQL with nonzero probability of success on a quantum computer.

@Unpublished{Leifer:2005b,
Title = {Nondeterministic testing of Sequential Quantum Logic propositions on a quantum computer},
Author = {Leifer, M. S.},
Eprint = {quant-ph/0509193},
Year = {2005},
Abstract = {In the past few years it has been shown that universal quantum computation can be obtained by projective measurements alone, with no need for unitary gates. This suggests that the underlying logic of quantum computing may be an algebra of sequences of quantum measurements rather than an algebra of products of unitary operators. Such a Sequential Quantum Logic (SQL) was developed in the late 1970s and has more recently been applied to the consistent histories framework of quantum mechanics as a possible route to the theory of quantum gravity. In this letter, I give a method for deciding the truth of a proposition in SQL with nonzero probability of success on a quantum computer.}
}