Recent empirical research indicates that many convex optimization problems with random constraints exhibit a phase transition as the number of constraints increases. For example, this phenomenon emerges in the l1 minimization method for identifying a sparse vector from random linear samples: the method succeeds with high probability when the number of samples exceeds a threshold that depends on the sparsity level, and otherwise it fails with high probability.
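A minimal sketch of the success side of this transition, assuming standard Gaussian measurements and using an off-the-shelf linear-programming solver (the sizes and the solver choice are illustrative, not from the talk): l1 minimization, min ||x||_1 subject to Ax = b, is posed as an LP in split variables x = u - v with u, v >= 0.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, s = 60, 3   # ambient dimension and sparsity (illustrative sizes)
m = 30         # number of random samples, comfortably above the threshold

# Ground-truth sparse vector with s random nonzero entries.
x0 = np.zeros(n)
x0[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)

# Random Gaussian measurements.
A = rng.standard_normal((m, n))
b = A @ x0

# Basis pursuit: min ||x||_1  s.t.  Ax = b,
# written as an LP over u, v >= 0 with x = u - v.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
x_hat = res.x[:n] - res.x[n:]

# At these sizes, recovery is exact with overwhelming probability.
print(np.linalg.norm(x_hat - x0))
```

Shrinking m toward the sparsity-dependent threshold (or below it) makes the recovery error jump from essentially zero to order one, which is the phase transition described above.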

We present the first rigorous analysis that explains why phase transitions are ubiquitous in random convex optimization problems. We also describe tools for making reliable predictions about the quantitative aspects of the transition, including the location and the width of the transition region. These techniques apply to regularized linear inverse problems with random measurements, to demixing problems under a random incoherence model, and also to cone programs with random affine constraints.

These applications depend on foundational research in conic geometry. A new summary parameter, called the statistical dimension, canonically extends the dimension of a linear subspace to the class of convex cones. The main result demonstrates that the sequence of conic intrinsic volumes of a convex cone concentrates sharply near the statistical dimension. This fact leads to an approximate version of the conic kinematic formula that gives bounds on the probability that a randomly oriented cone shares a ray with a fixed cone.
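The statistical dimension admits a simple Monte Carlo interpretation: for a closed convex cone C, it equals the expected squared norm of the projection of a standard Gaussian vector onto C. A hedged sketch of that estimator, checked against two cones whose statistical dimensions are known in closed form (the function name and cone choices are illustrative):

```python
import numpy as np

def statistical_dimension_mc(project, n, trials=20000, seed=0):
    """Monte Carlo estimate of delta(C) = E ||Proj_C(g)||^2, g ~ N(0, I_n)."""
    rng = np.random.default_rng(seed)
    g = rng.standard_normal((trials, n))
    return np.mean(np.sum(project(g) ** 2, axis=1))

n = 10

# Nonnegative orthant: projection clips negative entries; delta = n / 2.
orthant = lambda g: np.maximum(g, 0.0)

# k-dimensional coordinate subspace: projection keeps the first k
# coordinates; delta = k, matching the dimension of the subspace.
k = 4
subspace = lambda g: np.hstack([g[:, :k], np.zeros((g.shape[0], n - k))])

print(statistical_dimension_mc(orthant, n))    # near n/2 = 5
print(statistical_dimension_mc(subspace, n))   # near k = 4
```

The subspace case shows the sense in which the statistical dimension canonically extends linear dimension to cones.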

This is joint work with D. Amelunxen, M. McCoy and J. Tropp.
