To quantify the inherent uncertainty of quantum states, Wehrl ('79) suggested a definition of their classical entropy based on the coherent state transform. He conjectured that this classical entropy is minimized by states that also minimize the Heisenberg uncertainty inequality, i.e., Gaussian coherent states. Lieb ('78) proved this conjecture and conjectured that the same holds when Euclidean Glauber coherent states are replaced by SU(2) Bloch coherent states. This generalized Wehrl conjecture remained open for almost 35 years, until it was settled last year in joint work with Elliott Lieb. Recently we simplified the proof and generalized the result to SU(N) for general N. I will present this here.

The Kronecker coefficients are of fundamental importance in representation theory and quantum physics. Their complexity turns out to play a central role in the Geometric Complexity Theory (GCT) program towards the P vs. NP and related problems. In this talk we will describe some results related to the complexity of Kronecker coefficients.

Consider N qubits in a generic pure state on $(C^2)^{\otimes N}$. Give a fraction $p \in (0,1/2)$ of them to Alice, another fraction $p$ to Bob, and assume the remaining qubits disappear into the environment. Do Alice and Bob share some entanglement?

When N is large, this problem features a threshold phenomenon around the critical value $p=1/5$. In this talk I give an elementary proof of the "easy" half of this result: for $p>1/5$, entanglement is generic. The general question, and a discussion of the threshold phenomenon, will be addressed in the talk by S. Szarek.
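A small numerical sketch (NumPy assumed; the dimensions, qubit counts, and seed below are illustrative choices, not taken from the talk) makes the setup concrete: draw a Haar-random pure state, trace out the environment, and apply the partial-transpose test to the AB-marginal. A negative eigenvalue of the partial transpose certifies entanglement (NPT implies entangled).

```python
import numpy as np

def random_pure_state(dim, rng):
    """Haar-random pure state: normalized complex Gaussian vector."""
    v = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def min_pt_eigenvalue(psi, da, db, de):
    """Trace out the environment (dimension de), partially transpose
    Bob's subsystem, and return the smallest eigenvalue.  A negative
    value certifies entanglement of the AB-marginal."""
    t = psi.reshape(da, db, de)
    # rho[a, b, a', b'] = sum_e t[a, b, e] * conj(t[a', b', e])
    rho = np.einsum('abe,cde->abcd', t, t.conj())
    rho_pt = rho.transpose(0, 3, 2, 1)          # swap b <-> b'
    mat = rho_pt.reshape(da * db, da * db)
    return np.linalg.eigvalsh(mat).min()

# Sanity check: a Bell pair with trivial environment is NPT, with
# minimal partial-transpose eigenvalue exactly -1/2.
bell = np.zeros(4, dtype=complex)
bell[0] = bell[3] = 1 / np.sqrt(2)
print(min_pt_eigenvalue(bell, 2, 2, 1))         # ~ -0.5

# N = 10 qubits; Alice and Bob each get k = 3, the rest is environment.
rng = np.random.default_rng(0)
N, k = 10, 3                                    # p = 0.3 > 1/5
psi = random_pure_state(2**N, rng)
print(min_pt_eigenvalue(psi, 2**k, 2**k, 2**(N - 2 * k)))
```

For small N the threshold is of course blurred; the talk's statement concerns the asymptotic regime.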

I will introduce background from high-dimensional convex geometry and prove some key estimates on the size (specifically the mean width) of the set of separable states.
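To illustrate the notion of mean width used here, one can estimate $w(K) = 2\,\mathbb{E}\,h_K(\theta)$ by Monte Carlo over random directions on the sphere; the bodies below (Euclidean ball, cross-polytope) are standard textbook examples, not the set of separable states itself.

```python
import numpy as np

def mean_width(support_fn, n, samples, rng):
    """Monte Carlo estimate of the mean width w(K) = 2 E h_K(theta),
    with theta uniform on the unit sphere S^{n-1}."""
    g = rng.standard_normal((samples, n))
    theta = g / np.linalg.norm(g, axis=1, keepdims=True)
    return 2 * np.mean([support_fn(t) for t in theta])

rng = np.random.default_rng(1)
n = 100
# Euclidean unit ball: h(theta) = 1, so w = 2 exactly.
print(mean_width(lambda t: 1.0, n, 1000, rng))
# Cross-polytope B_1^n: h(theta) = ||theta||_inf, and
# w ~ 2 sqrt(2 log n / n) -> 0: a "small" body in high dimension.
print(mean_width(lambda t: np.abs(t).max(), n, 1000, rng))
```

The set of separable states plays the role of a similarly "small" convex body inside the set of all states; the talk's estimates quantify this.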

The talk is a variation on arxiv:1106.2264 (joint with S. Szarek and D. Ye).

The notions of channel and capacity are central to the classical Shannon theory. "Quantum Shannon theory" denotes a subfield of quantum information science which uses operator analysis, convexity, matrix inequalities, and asymptotic techniques such as large deviations and measure concentration to study mathematical models of communication channels and their information-processing performance. From the mathematical point of view quantum channels are normalized completely positive maps of operator algebras, the analog of Markov maps in noncommutative probability theory, while the capacities are related to certain norm-like quantities. In applications noisy quantum channels arise from irreversible evolutions of open quantum systems interacting with an environment, a physical counterpart of a mathematical dilation theorem.
It turns out that in the quantum case the notion of channel capacity splits into a whole spectrum of numerical information-processing characteristics, depending on the kind of data transmitted (classical or quantum) as well as on the additional communication resources. An outstanding role here is played by quantum correlations (entanglement) inherent in the tensor-product structure of composite quantum systems. This talk presents a survey of basic coding theorems providing analytical expressions for the capacities of quantum channels in terms of entropic quantities. We also touch upon some open mathematical problems, such as additivity and Gaussian optimizers, concerning the entropic characteristics of both theoretically and practically important Bosonic Gaussian channels.
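One of the entropic quantities behind these coding theorems is the Holevo quantity $\chi = S(\sum_x p_x \rho_x) - \sum_x p_x S(\rho_x)$, which bounds the classical information extractable from an ensemble of quantum states. A minimal sketch (NumPy assumed; the example ensembles are illustrative):

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr rho log2 rho, in bits."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def holevo(probs, states):
    """Holevo quantity chi = S(sum_x p_x rho_x) - sum_x p_x S(rho_x)."""
    avg = sum(p * r for p, r in zip(probs, states))
    return von_neumann_entropy(avg) - sum(
        p * von_neumann_entropy(r) for p, r in zip(probs, states))

def pure(v):
    v = np.asarray(v, dtype=complex)
    v = v / np.linalg.norm(v)
    return np.outer(v, v.conj())

# Two nonorthogonal pure signal states: chi strictly between 0 and 1 bit.
theta = np.pi / 8
ens = [pure([1, 0]), pure([np.cos(theta), np.sin(theta)])]
print(holevo([0.5, 0.5], ens))

# Orthogonal pure states with equal priors: chi = 1 bit exactly.
print(holevo([0.5, 0.5], [pure([1, 0]), pure([0, 1])]))
```

The coding theorems surveyed in the talk express capacities as (regularized) optimizations of such entropic functionals over input ensembles.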

The fields of quantum non-locality in physics, and causal discovery in machine learning, both face the problem of deciding whether observed data is compatible with a presumed causal relationship between the variables (for example a local hidden variable model). Traditionally, Bell inequalities have been used to describe the restrictions imposed by causal structures on marginal distributions. However, some structures give rise to non-convex constraints on the accessible data, and it has recently been noted that linear inequalities on the observable entropies capture these situations more naturally. In this talk, I will introduce the machine learning background, advertise the program of investigating entropic marginals, and present some recent results.
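The linear inequalities in question live on top of the basic Shannon-type constraints (monotonicity and submodularity of entropy), which hold for every joint distribution; causal structures then carve out smaller regions. A quick numerical check of the basic constraints on random three-bit distributions (illustrative code, standard library only):

```python
import itertools, random, math

def entropy(pmf):
    """Shannon entropy (bits) of a pmf given as {outcome: prob}."""
    return -sum(p * math.log2(p) for p in pmf.values() if p > 0)

def marginal(pmf, keep):
    """Marginal distribution on the coordinates listed in `keep`."""
    out = {}
    for outcome, p in pmf.items():
        key = tuple(outcome[i] for i in keep)
        out[key] = out.get(key, 0.0) + p
    return out

random.seed(0)
for _ in range(100):
    # random joint distribution of three bits A, B, C
    w = [random.random() for _ in range(8)]
    z = sum(w)
    pmf = {o: wi / z for o, wi in
           zip(itertools.product((0, 1), repeat=3), w)}
    h = lambda keep: entropy(marginal(pmf, keep))
    # monotonicity: H(ABC) >= H(AB)
    assert h((0, 1, 2)) >= h((0, 1)) - 1e-9
    # submodularity: H(AB) + H(BC) >= H(ABC) + H(B)
    assert h((0, 1)) + h((1, 2)) >= h((0, 1, 2)) + h((1,)) - 1e-9
print("all Shannon-type inequalities satisfied")
```

The entropic-marginal program asks which additional linear inequalities on the observable entropies are forced by a given causal structure.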

The method of types---classifying strings according to letter frequencies---is a fundamental tool of classical information theory. I will discuss a natural quantum generalisation, known as the Schur basis, in which quantum states are decomposed into irreps of the unitary and symmetric groups. First, I'll explain the analogy to the classical method of types and will review past work that applies the Schur basis to quantum information theory. Then I'll discuss applications to spectrum estimation and channel coding problems, and mention some open problems.
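The decomposition underlying the Schur basis is Schur-Weyl duality, $(\mathbb{C}^d)^{\otimes n} \cong \bigoplus_{\lambda} S_\lambda \otimes V_\lambda$, which can be checked on dimensions with the hook length and hook content formulas. A small sketch (standard combinatorial formulas; the specific $d, n$ are illustrative):

```python
from math import factorial, prod

def partitions(n, max_part=None):
    """All partitions of n as weakly decreasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def hooks(lam):
    """Hook lengths of the Young diagram of lam (0-indexed cells)."""
    conj = [sum(1 for r in lam if r > c) for c in range(lam[0])] if lam else []
    return [lam[i] - j + conj[j] - i - 1
            for i in range(len(lam)) for j in range(lam[i])]

def dim_sym(lam, n):
    """Dimension of the S_n irrep: hook length formula."""
    return factorial(n) // prod(hooks(lam))

def dim_gl(lam, d):
    """Dimension of the GL(d) irrep: hook content formula."""
    num = prod(d + j - i for i in range(len(lam)) for j in range(lam[i]))
    return num // prod(hooks(lam))

# Schur-Weyl duality on dimensions:
# d^n = sum over partitions of n with at most d rows of
#       dim(S_n irrep) * dim(GL(d) irrep).
d, n = 2, 5
total = sum(dim_sym(l, n) * dim_gl(l, d)
            for l in partitions(n) if len(l) <= d)
print(total, d**n)   # both 32
```

The multiplicity space $S_\lambda$ plays the role of the "type class" and $V_\lambda$ carries the state within a class, mirroring the classical count of strings per type.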

Let $g_{\lambda\mu\nu}$ denote the Kronecker coefficient, a multiplicity in the decomposition of the tensor product of two irreducible representations of the symmetric group. The set of triples $(\lambda,\mu,\nu)$ of bounded length such that $g_{\lambda\mu\nu}$ is nonzero generates a closed convex polyhedral cone. In this talk, we will give a description of these cones and, more generally, of branching cones.
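For small $n$ the coefficient can be computed directly from the character formula $g_{\lambda\mu\nu} = \sum_{\rho \vdash n} \chi_\lambda(\rho)\chi_\mu(\rho)\chi_\nu(\rho)/z_\rho$, with characters from the Murnaghan-Nakayama rule. A self-contained sketch (standard formulas; the beta-number encoding of rim-hook removal is one common implementation choice):

```python
from math import factorial, prod
from functools import lru_cache
from fractions import Fraction

def partitions(n, max_part=None):
    """All partitions of n as weakly decreasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

@lru_cache(maxsize=None)
def chi(lam, rho):
    """Character chi_lam(rho) of S_n via the Murnaghan-Nakayama rule,
    using beta-numbers (first-column hook lengths) to remove rim hooks."""
    if not rho:
        return 1 if not lam else 0
    k, rest = rho[0], rho[1:]
    m = len(lam)
    beta = frozenset(lam[i] + m - 1 - i for i in range(m))
    total = 0
    for b in beta:
        if b - k >= 0 and b - k not in beta:
            height = sum(1 for c in beta if b - k < c < b)
            nb = sorted(beta - {b} | {b - k}, reverse=True)
            newlam = tuple(x for x in
                           (nb[i] - (len(nb) - 1 - i) for i in range(len(nb)))
                           if x > 0)
            total += (-1) ** height * chi(newlam, rest)
    return total

def z(rho):
    """Order of the centralizer of a permutation of cycle type rho."""
    mult = {p: rho.count(p) for p in set(rho)}
    return prod(p ** m * factorial(m) for p, m in mult.items())

def kronecker(lam, mu, nu):
    """g_{lam,mu,nu} as a sum over cycle types rho of
       chi_lam(rho) chi_mu(rho) chi_nu(rho) / z_rho."""
    n = sum(lam)
    assert sum(mu) == n and sum(nu) == n
    g = sum(Fraction(chi(lam, r) * chi(mu, r) * chi(nu, r), z(r))
            for r in partitions(n))
    assert g.denominator == 1
    return int(g)

# Standard representation of S_3: [2,1] x [2,1] = [3] + [2,1] + [1,1,1]
print([kronecker((2, 1), (2, 1), nu)
       for nu in [(3,), (2, 1), (1, 1, 1)]])   # [1, 1, 1]
```

The cone described in the talk records only the support (where $g_{\lambda\mu\nu} \neq 0$) for triples of bounded length, not the values themselves.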

Some branching cones have an interpretation in terms of eigenvalues of Hermitian matrices, known as the additive Horn problem. We will also give an answer to the so-called multiplicative Horn problem.

Random matrix theory is the study of probabilistic quantities associated to random matrix models, in the large dimension limit. The eigenvalues counting measure and the eigenvalues spacing are amongst the most studied and best understood quantities. The purpose of this talk is to focus on a quantity that was less understood until recently, namely the operator norm of random matrices. I will state recent results in this direction, and mention three applications to quantum information theory: (a) convergence of the collection of images of pure states under typical quantum channels (joint with Fukuda and Nechita); (b) thresholds for random states to have the absolute PPT property (joint with Nechita and Ye); (c) new examples of k-positive maps (ongoing, joint with Hayden and Nechita).
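The prototypical statement about operator norms is that a normalized Wigner matrix has norm converging to 2 (Bai-Yin). A quick numerical illustration (NumPy assumed; the GOE normalization and sizes are illustrative):

```python
import numpy as np

def goe_norm(n, rng):
    """Operator norm of a normalized n x n GOE-type matrix.
    By Wigner / Bai-Yin asymptotics this converges to 2 as n grows."""
    a = rng.standard_normal((n, n))
    h = (a + a.T) / np.sqrt(2)          # real symmetric, O(1) entries
    return np.linalg.norm(h, 2) / np.sqrt(n)

rng = np.random.default_rng(0)
for n in [50, 200, 800]:
    print(n, goe_norm(n, rng))          # approaches 2
```

The applications mentioned in the talk rest on norm bounds of this flavor for structured random matrices arising from random channels and random states.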

Consider a quantum system consisting of N identical particles and assume that it is in a random pure state (i.e., uniformly distributed over the sphere of the corresponding Hilbert space). Next, let A and B be two subsystems consisting of k particles each. Are A and B likely to share entanglement? Is the AB-marginal typically PPT?

As it turns out, for many natural properties there is a sharp "phase transition" at some threshold depending on the property in question. For example, there is a threshold K, asymptotically equivalent to N/5, such that if k > K then A and B typically share entanglement, while if k < K they typically do not.
The first statement was (essentially) shown in the talk by G. Aubrun. Here we present a general scheme for handling such questions and sketch the analysis specific to entanglement.

The talk is based on arxiv:1106.2264v3; a less-technical overview is in arxiv:1112.4582v2.

Using a theorem that allows one to assert the presence of nonlocal correlations between parties that are never measured in the same run of an experiment (using marginals), we study the following question. Experimental violations of Bell inequalities using space-like separated measurements preclude the explanation of quantum correlations through causal influences propagating at subluminal speed. Yet "everything looks as if the two parties somehow communicate behind the scene". We investigate the assumption that they do so at a speed faster than light, though finite. Such an assumption does not respect the spirit of Einstein's relativity. However, it is not crystal clear that such "communication behind the scene" would contradict relativity. Indeed, one could imagine that this communication remains forever hidden to humans, i.e., that it could not be controlled by humans, and that only Nature exploits it to produce correlations that cannot be explained by usual common causes. Defining faster-than-light hidden communication requires a universal privileged reference frame in which this speed is defined. Again, such a universal privileged frame is not in the spirit of relativity, but it is also clearly not in contradiction with it: for example, the reference frame in which the cosmic microwave background radiation is isotropic defines such a privileged frame. Hence, a priori, a hidden-communication explanation is not more surprising than nonlocality. We prove that for any finite speed, such models predict correlations that can be exploited for faster-than-light communication. This superluminal communication does not require access to any hidden physical quantities, but only the manipulation of measurement devices at the level of our present-day description of quantum experiments.
Consequently, all possible explanations of quantum correlations that satisfy the principle of continuity, which states that everything propagates gradually and continuously through space and time (in other words, all combinations of local common causes and direct causes that reproduce quantum correlations), lead to faster-than-light communication that can be exploited by humans, at least in principle. Accordingly, either there is superluminal communication, or the conclusion that Nature is nonlocal (i.e., discontinuous) is unavoidable.

I will give an informal discussion of two recent papers. One, joint work with Damian Pitalua-Garcia, arXiv:1307.6839, describes a new approach to Bell inequalities for continuous sets of measurement choices, and new Bell inequalities.

The other, arXiv:1308.5009, reexamines Popescu and Rohrlich's discussion of the possibility of nonlocal correlations stronger than those allowed by quantum theory. It shows that no set of correlations is strictly more nonlocal than quantum correlations, in the sense of violating all CHSH inequalities by at least as much as quantum theory does, and some by strictly more.
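For orientation: quantum correlations violate the CHSH inequality up to the Tsirelson bound $2\sqrt{2}$, strictly between the local bound 2 and the algebraic maximum 4 attained by a PR box. A short check (the correlation $E(x,y)=\cos(x-y)$ is attainable with a maximally entangled qubit pair under suitable measurement conventions; angles below are the standard optimal choice):

```python
import numpy as np

def chsh(a, a2, b, b2):
    """CHSH value for a correlation E(x, y) = cos(x - y), with
    measurement angles a, a2 for Alice and b, b2 for Bob."""
    E = lambda x, y: np.cos(x - y)
    return E(a, b) + E(a, b2) + E(a2, b) - E(a2, b2)

# Optimal quantum angles reach the Tsirelson bound 2*sqrt(2) > 2,
# the bound satisfied by all local hidden variable models.
s = chsh(0, np.pi / 2, np.pi / 4, -np.pi / 4)
print(s, 2 * np.sqrt(2))
```

The paper's point concerns the structure of the region between $2\sqrt{2}$ and 4: no single set of correlations dominates the quantum violations across all CHSH inequalities simultaneously.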

Algebraic geometry and representation theory have been used to prove lower bounds for the complexity of matrix multiplication, the complexity of linear circuits (matrix rigidity), and Geometric Complexity Theory (questions related to the conjecture that P is distinct from NP). Remarkably, these questions in computer science are related to classical questions in algebraic geometry regarding objects such as dual varieties, secant varieties, Darboux hypersurfaces, and classical intersection theory, as well as questions in representation theory such as the Foulkes-Howe conjecture and the asymptotic study of Kronecker coefficients. I will give an overview of my joint work with G. Ottaviani (matrix multiplication), L. Manivel and N. Ressayre (GCT) and F. Gesmundo, J. Hauenstein, and C. Ikenmeyer (linear circuits).

It is a remarkable fact that two prominent problems of algebraic complexity theory, the permanent versus determinant problem and the tensor rank problem, can be restated as explicit orbit closure problems. This offers the potential for proving lower complexity bounds by relying on methods from algebraic geometry and representation theory. This basic idea for the tensor rank problem goes back to work by Volker Strassen from the mid-eighties. It leads to challenging problems regarding the irreducible representations of symmetric groups over the complex numbers (tensor products and plethysms).

In the first part of the talk, we will present the general framework, explain some negative results, and state some open problems. Then we will move on to outline some recent progress for proving lower bounds on the border rank of the matrix multiplication tensor. This is achieved by the explicit construction of highest weight vectors vanishing on the (higher secant) varieties of tensors of border rank at most r.
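For context, the upper bound that these lower-bound methods are measured against is classical: Strassen's identity shows the 2x2 matrix multiplication tensor has rank at most 7. It can be verified numerically (NumPy assumed; this is the standard 1969 scheme, not the talk's construction):

```python
import numpy as np

def strassen_2x2(A, B):
    """Multiply 2x2 matrices with 7 multiplications (Strassen 1969),
    witnessing that the 2x2 matrix multiplication tensor has rank <= 7."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    return np.array([[m1 + m4 - m5 + m7, m3 + m5],
                     [m2 + m4,           m1 - m2 + m3 + m6]])

rng = np.random.default_rng(0)
A, B = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))
print(np.allclose(strassen_2x2(A, B), A @ B))   # True
```

The highest weight vectors constructed in the talk certify, dually, that the tensor does not lie in the secant variety of tensors of border rank at most r.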

Let V be a representation space for a compact connected Lie group G, decomposing as a sum of irreducible representations $\pi$ of G with finite multiplicities $m(\pi,V)$.

When V is constructed as the geometric quantization of a symplectic manifold with proper moment map, the multiplicity function $\pi \mapsto m(\pi,V)$ is piecewise quasi-polynomial on the cone of dominant weights. In particular, the function $t \mapsto m(t\pi,V)$ is a quasi-polynomial along the ray $t\pi$, as t runs over the nonnegative integers. We will explain how to effectively compute this quasi-polynomial (or the Duistermaat-Heckman measure) in some examples, including the function $t \mapsto c(t\lambda,t\mu,t\nu)$ for Clebsch-Gordan coefficients (in low rank) and the function $t \mapsto k(t\alpha,t\beta,t\gamma)$ for Kronecker coefficients (with number of rows at most 3). Our method is based on a multidimensional residue theorem (Jeffrey-Kirwan residues).

The problem of determining and describing the family of 1-particle reduced density operators (1-RDO) arising from N-fermion pure states (via partial trace) is known as the fermionic quantum marginal problem. We present its solution, a multitude of constraints on the eigenvalues of the 1-RDO, generalizing the Pauli exclusion principle. To explore the relevance of these constraints we study an analytically solvable model of N fermions in a harmonic potential and determine the spectral 'trajectory' corresponding to the ground state as a function of the fermion-fermion interaction strength. Intriguingly, we find that the occupation numbers are almost, but not exactly, pinned to the boundary of the allowed region. Our findings suggest a generalization of the Hartree-Fock approximation.
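The smallest nontrivial instance is the Borland-Dennis setting of 3 fermions in 6 orbitals, where the constraints on the decreasingly ordered occupation numbers are $n_1+n_6 = n_2+n_5 = n_3+n_4 = 1$ and $n_4 \le n_5+n_6$. A numerical sketch checking them on a random pure state in $\wedge^3 \mathbb{C}^6$ (NumPy assumed; seed illustrative):

```python
import numpy as np
from itertools import permutations

def antisymmetrize(t):
    """Project a 3-tensor onto the antisymmetric (wedge) subspace."""
    def sgn(p):   # parity of a permutation via inversion count
        inv = sum(1 for i in range(3) for j in range(i + 1, 3) if p[i] > p[j])
        return (-1) ** inv
    return sum(sgn(p) * t.transpose(p) for p in permutations(range(3))) / 6

rng = np.random.default_rng(0)
# Random pure state of 3 fermions in 6 orbitals (wedge^3 C^6)
t = rng.standard_normal((6, 6, 6)) + 1j * rng.standard_normal((6, 6, 6))
psi = antisymmetrize(t)
psi /= np.linalg.norm(psi)

# 1-particle reduced density operator, normalized to trace N = 3
rho1 = 3 * np.einsum('iab,jab->ij', psi, psi.conj())
occ = np.sort(np.linalg.eigvalsh(rho1))[::-1]   # occupation numbers
print(occ)

# Pauli principle: 0 <= n_i <= 1
assert np.all(occ > -1e-9) and np.all(occ < 1 + 1e-9)
# Borland-Dennis constraints for wedge^3 C^6:
# n1+n6 = n2+n5 = n3+n4 = 1  and  n4 <= n5+n6
assert np.allclose(occ[:3] + occ[5:2:-1], 1)
assert occ[3] <= occ[4] + occ[5] + 1e-9
```

The talk's 'pinning' phenomenon concerns how close physical ground states come to saturating such constraints.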

Variational determination of the ground state of a two-particle Hamiltonian using the second-order reduced density matrix is a very promising approach to quantum chemistry and condensed matter physics (J. Chem. Phys. 114, 8282-8292 (2001)). The quantum marginal problem in this context is known as the N-representability problem. If we employ some non-negativity conditions, which are necessary conditions for N-representability, the ground state energies are calculated quite accurately. In this talk, we show some results and state some open questions to the community.

The performance of a channel is usually measured in terms of its capacity C, defined as the largest rate achievable by block codes with probability of error vanishing in the block-length. For rates R < C, the probability of error for optimal codes decreases exponentially fast with the block-length, and a more detailed measure of the performance of the channel is the so-called reliability function E(R), the first-order exponent of this error. Determining E(R) exactly is an unsolved problem in general; it includes as a sub-problem, for example, the determination of the zero-error capacity (also called the Shannon capacity of a graph). In this talk, we discuss bounds on E(R) for classical and classical-quantum channels and present some connections between those bounds and the Lovász theta function.
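The best-known lower bound on E(R) is Gallager's random coding exponent $E_r(R) = \max_{0 \le \rho \le 1} [E_0(\rho) - \rho R]$. A sketch for the binary symmetric channel with uniform input (NumPy assumed; the crossover probability and grid size are illustrative):

```python
import numpy as np

def E0(rho, eps):
    """Gallager's E_0 function for a BSC(eps) with uniform input, in bits."""
    s = 1.0 / (1.0 + rho)
    inner = 0.5 * (1 - eps) ** s + 0.5 * eps ** s   # same for y = 0 and y = 1
    return -np.log2(2 * inner ** (1 + rho))

def random_coding_exponent(R, eps, grid=1000):
    """Lower bound on the reliability function:
       E_r(R) = max over 0 <= rho <= 1 of [E_0(rho) - rho R]."""
    rhos = np.linspace(0, 1, grid + 1)
    return max(E0(r, eps) - r * R for r in rhos)

eps = 0.1
C = 1 + eps * np.log2(eps) + (1 - eps) * np.log2(1 - eps)   # capacity ~ 0.531
print(random_coding_exponent(0.1, eps))   # positive below capacity
print(random_coding_exponent(C, eps))     # tends to 0 at capacity
```

Above such lower bounds sit sphere-packing-type upper bounds; the gap between them at low rates is where the zero-error and Lovász-theta connections discussed in the talk enter.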