A student blog of MIT CSAIL Theory of Computation Group

Menu

Category Archives: algorithms

In this post, I describe an algorithm of Edith Cohen, which estimates the size of the transitive closure of a given directed graph in near-linear time. This simple but extremely clever algorithm uses ideas somewhat similar to the Flajolet–Martin algorithm for estimating the number of distinct elements in a stream, and to the MinHash sketch of Broder.[1]

Suppose we have a large directed graph with n vertices and m directed edges. For a vertex v, let R(v) denote the set of vertices that are reachable from v. There are two known ways to compute the sizes |R(v)| (all at the same time):

Perform Depth-First Search (DFS) from each vertex. This takes time O(n(n + m)), which is the best known bound for sparse graphs;

Compute the transitive closure via fast matrix multiplication, which takes time O(n^ω) with ω < 2.38, the best known bound for dense graphs.

Can we do better? It turns out we can, if we are OK with merely approximating the size of every R(v). Namely, the following theorem was proved back in 1994:

Theorem 1. For every ε > 0, there exists a randomized algorithm that computes a (1 ± ε)-multiplicative approximation of every |R(v)| with running time Õ((n + m) / ε²), where Õ hides polylogarithmic factors.

Instead of spelling out the full proof, I will present it as a sequence of problems: each of them should be doable by a mathematically mature reader. Going through the problems should be fun, and besides, it will save me some typing.

Problem 1. Let r be a function that assigns independent, uniform random reals between 0 and 1 to every vertex. Let us define m(v) = min over u in R(v) of r(u). Show how to compute the values m(v) for all vertices at once in time O(m + n log n).

Problem 2. For a positive integer s, denote by D_s the distribution of the minimum of s independent, uniform reals between 0 and 1. Suppose we receive several independent samples from D_s with an unknown value of s. Show that we can obtain a (1 ± ε)-multiplicative approximation of s with probability 1 − δ using as few as O(log(1/δ) / ε²) samples.

Problem 3. Combine the solutions of the two previous problems and prove Theorem 1.
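For concreteness, here is a minimal Python sketch of the whole scheme, restricted to DAGs (the general case first contracts strongly connected components). The function names, the simple estimator k/sum − 1, and the default number of rounds are my illustrative choices, not Cohen's exact parameters.

```python
import random
from collections import deque

def toposort(n, adj):
    """Kahn's algorithm; assumes the graph is a DAG."""
    indeg = [0] * n
    for v in range(n):
        for u in adj[v]:
            indeg[u] += 1
    q = deque(v for v in range(n) if indeg[v] == 0)
    order = []
    while q:
        v = q.popleft()
        order.append(v)
        for u in adj[v]:
            indeg[u] -= 1
            if indeg[u] == 0:
                q.append(u)
    return order

def estimate_reach_sizes(n, adj, k=400, seed=0):
    """Estimate |R(v)| for every vertex v (v included), in O(k(n + m)) time.

    Each round assigns fresh Uniform(0,1) ranks and computes
    m(v) = min rank over R(v) by dynamic programming in reverse
    topological order. Since the minimum of s uniforms has mean
    1/(s + 1), averaging k rounds and inverting recovers s.
    """
    rng = random.Random(seed)
    order = toposort(n, adj)
    sums = [0.0] * n
    for _ in range(k):
        rank = [rng.random() for _ in range(n)]
        m = [0.0] * n
        for v in reversed(order):          # out-neighbours already processed
            m[v] = min([rank[v]] + [m[u] for u in adj[v]])
        for v in range(n):
            sums[v] += m[v]
    return [k / s - 1 for s in sums]       # E[sum] = k / (|R(v)| + 1)
```

On a path graph 0 → 1 → … → 7, the estimate for vertex 0 concentrates around 8 and for vertex 7 around 1, with relative error shrinking like 1/√k.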

Footnotes

[1] These similarities explain my extreme enthusiasm towards the algorithm. Sketching-based techniques are useful for a problem covered in 6.006, yay!

Summary

In this post I will show that any normed space that allows good sketches is necessarily embeddable into an ℓ_p space with p close to 1. This provides a partial converse to a result of Piotr Indyk, who showed how to sketch metrics that embed into ℓ_p for 0 < p ≤ 2. A cool bonus of this result is that it gives a new technique for obtaining sketching lower bounds.

Sketching

One of the exciting relatively recent paradigms in algorithms is that of sketching. The high-level idea is as follows: if we are interested in working with a massive object x, let us start with compressing it to a short sketch sketch(x) that preserves the properties of x we care about. One great example of sketching is the Johnson–Lindenstrauss lemma: if we work with n high-dimensional vectors and are interested in the Euclidean distances between them, we can project the vectors on a random O(log n / ε²)-dimensional subspace, and this will preserve with high probability all the pairwise distances up to a factor of 1 + ε.
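A quick numerical illustration of the Johnson–Lindenstrauss sketch, assuming one standard instantiation (a plain Gaussian projection matrix; the function name is mine):

```python
import numpy as np

def jl_sketch(X, k, seed=0):
    """Map each row of X (n points in R^d) to R^k using a random
    Gaussian matrix scaled by 1/sqrt(k); pairwise Euclidean distances
    are preserved up to a 1 +- eps factor with high probability once
    k = O(log n / eps^2)."""
    rng = np.random.default_rng(seed)
    G = rng.standard_normal((X.shape[1], k)) / np.sqrt(k)
    return X @ G
```

For, say, 20 points in 1000 dimensions projected down to 500, every pairwise distance ratio lands comfortably within a small multiplicative window.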

It would be great to understand for which computational problems sketching is possible, and how efficient it can be made. There are quite a few nice results (both upper and lower bounds) along these lines (see, e.g., graph sketching or a recent book about sketching for numerical linear algebra), but the general understanding has yet to emerge.

Sketching for metrics

One of the main motivations to study sketching is fast computation and indexing of similarity measures between two objects x and y. Oftentimes similarity between objects is modeled by some metric d(x, y) (but not always! think KL divergence): for instance, the above example of the Euclidean distance falls into this category. Thus, instantiating the above general question, one can ask: for which metric spaces do there exist good sketches? That is, when is it possible to compute a short sketch sketch(x) of a point x such that, given two sketches sketch(x) and sketch(y), one is able to estimate the distance d(x, y)?

The following communication game captures the question of sketching metrics. Alice and Bob each have a point from a metric space X (say, x and y, respectively). Suppose, in addition, that either d(x, y) ≤ r or d(x, y) > D·r (where r and D are parameters known from the beginning). Both Alice and Bob send s-bit messages sketch(x) and sketch(y) to Charlie, who is supposed to distinguish the two cases (whether d(x, y) is small or large) with probability at least 0.99. We assume that all three parties are allowed to use shared randomness. Our main goal is to understand the trade-off between D (approximation) and s (sketch size).

Arguably, the most important metric spaces are ℓ_p spaces. Formally, for 1 ≤ p ≤ ∞ we define ℓ_p^d to be a d-dimensional space equipped with the distance

‖x − y‖_p = (Σ_i |x_i − y_i|^p)^{1/p}

(when p = ∞ this expression should be understood as max_i |x_i − y_i|). One can similarly define ℓ_p spaces for 0 < p < 1; even if the triangle inequality does not hold in this case, it is nevertheless a meaningful notion of distance.

It turns out that ℓ_p spaces exhibit very interesting behavior when it comes to sketching. Indyk showed that for 0 < p ≤ 2 one can achieve approximation D = 1 + ε and sketch size s = O(1/ε²) for every ε > 0 (for the Hamming distance this was established before by Kushilevitz, Ostrovsky and Rabani). It is quite remarkable that these bounds do not depend on the dimension of the space. On the other hand, for ℓ_p with p > 2 the dependence on the dimension is necessary. It turns out that for constant approximation the optimal sketch size is d^{1 − 2/p}, up to polylogarithmic factors.

Are there any other examples of metrics that admit efficient sketches (say, with D = O(1) and s = O(1))? One simple observation is that if a metric embeds well into ℓ_p for 0 < p ≤ 2, then one can sketch this metric well. Formally, we say that a map f : X → Y between metric spaces is an embedding with distortion C if

d_X(x₁, x₂) / A ≤ d_Y(f(x₁), f(x₂)) ≤ C · d_X(x₁, x₂) / A

for every x₁, x₂ ∈ X and for some A > 0. It is immediate to see that if a metric space X embeds into ℓ_p for 0 < p ≤ 2 with distortion C, then one can sketch X with D = O(C) and s = O(1). Thus, we know that any metric that embeds well into ℓ_p with 0 < p ≤ 2 is efficiently sketchable. Are there any other examples? The amazing answer is that we don’t know!

Our results

Our result shows that for a very important class of metrics—normed spaces—embedding into ℓ_p is the only possible way to obtain good sketches. Formally, if a normed space X allows sketches of size s for approximation D, then for every ε > 0 the space X embeds into ℓ_{1−ε} with distortion that depends only on s, D and ε. This result, together with the above upper bound by Indyk, provides a complete characterization of normed spaces that admit good sketches.

Taking the above result in the contrapositive, we see that non-embeddability implies lower bounds for sketches. This is great, since it potentially allows us to employ many sophisticated non-embeddability results proved by geometers and functional analysts. Specifically, we prove two new lower bounds for sketches: for the planar Earth Mover’s Distance (building on a non-embeddability theorem by Naor and Schechtman) and for the trace norm (whose non-embeddability was proved by Pisier). In addition, we are able to unify certain known results: for instance, to classify ℓ_p spaces and the cascaded norms ℓ_p(ℓ_q) in terms of “sketchability”.

Overview of the proof

Let me outline the main steps of the proof of the implication “good sketches imply good embeddings”. The following definition is central to the proof. For s₂ > s₁ > 0, let us call a map f : X → Y between two metric spaces an (s₁, s₂)-threshold map if for every x₁, x₂ ∈ X:

d_X(x₁, x₂) ≤ s₁ implies d_Y(f(x₁), f(x₂)) ≤ 1,

d_X(x₁, x₂) ≥ s₂ implies d_Y(f(x₁), f(x₂)) ≥ 2.

One should think of threshold maps as very weak embeddings that merely
preserve certain distance scales.

The proof can be divided into two parts. First, we prove that for a normed space X that allows sketches of size s and approximation D there exists a threshold map to a Hilbert space. Then, we prove that the existence of such a map implies the existence of an embedding into ℓ_{1−ε} with bounded distortion.

The first half goes roughly as follows. Assume that there is no threshold map from X to a Hilbert space. Then, by convex duality, this implies certain Poincaré-type inequalities on X. This, in turn, implies sketching lower bounds for the direct sum of several copies of X (where the norm is defined as the maximum of the norms of the components) by a result of Andoni, Jayram and Pătrașcu (which is based on the very important notion of information complexity). Then, crucially using the fact that X is a normed space, we conclude that X itself does not have good sketches (this step follows from standard facts about the type and cotype of normed spaces).

The second half uses tools from nonlinear functional analysis. First, building on an argument of Johnson and Randrianarivony, we show that for normed spaces a threshold map into a Hilbert space implies a uniform embedding into a Hilbert space—that is, a map f : X → H, where H is a Hilbert space, such that

ρ₁(d_X(x₁, x₂)) ≤ ‖f(x₁) − f(x₂)‖_H ≤ ρ₂(d_X(x₁, x₂)),

where ρ₁ and ρ₂ are non-decreasing functions such that ρ₁(t) > 0 for every t > 0 and ρ₂(t) → 0 as t → 0. Both ρ₁ and ρ₂ are allowed to depend only on s and D. This step uses a certain Lipschitz extension-type theorem and averaging via bounded invariant means. Finally, we conclude the proof by applying theorems of Aharoni–Maurey–Mityagin and Nikishin and obtain a desired (linear) embedding of X into ℓ_{1−ε}.

Open problems

Let me finally state several open problems.

The first obvious open problem is to extend our result to as large a class of general metric spaces as possible. Two notable examples one should keep in mind are the Khot–Vishnoi space and the Heisenberg group. In both cases, the space admits good sketches (since both spaces embed into ℓ₂-squared), but neither of them embeds into ℓ₁. I do not know if these spaces are embeddable into ℓ_{1−ε}, but I am inclined to suspect so.

The second open problem deals with linear sketches. For a normed space, one can require that a sketch be of the form sketch(x) = Ax, where A is a random matrix generated using shared randomness. Our result can then be interpreted as follows: any normed space that allows sketches of size s and approximation D allows a linear sketch with one linear measurement and bounded approximation (this follows from the fact that for ℓ_{1−ε} there are good linear sketches). But can we always construct a linear sketch of size f(s) and approximation g(D), where f and g are some (ideally, not too quickly growing) functions?

Finally, the third open problem is about spaces that allow essentially no non-trivial sketches. Can one characterize the d-dimensional normed spaces where any sketch for constant approximation must have size Ω(d)? The only example I can think of is a space that contains a subspace close to ℓ_∞^d. Is this the only case?

In the next post in our series of STOC 2014 recaps, Adrian Vladu tells us about some of the latest and greatest in Laplacian and SDD linear system solvers. There’s been a flurry of exciting results in this line of work, so we hope this gets you up to speed.

The Monday morning session was dominated by a nowadays popular topic, symmetric diagonally dominant (SDD) linear system solvers. Richard Peng started by presenting his work with Dan Spielman, the first parallel solver with near linear work and poly-logarithmic depth! This is exciting, since parallel algorithms are used for large scale problems in scientific computing, so this is a result with great practical applications.

The second talk was given by Jakub Pachocki and Shen Chen Xu from CMU, and covered the result of merging two papers. The first result is a new type of tree that can be used as a preconditioner. The second is a more efficient solver, which together with these trees shaved one more factor off the running time in the race for the fastest solver.

Before getting into more specific details, it might be a good idea to provide a bit of background on the vast literature of Laplacian solvers.

Typically, a linear system Ax = b is easier to solve whenever A has some structure on it. A particular class we care about is that of positive semidefinite (PSD) matrices. They work nicely because the solution is the minimizer of the quadratic form f(x) = (1/2)·x^T A x − b^T x, which happens to be a convex function due to the PSD-ness of A. Hence we can use various versions of gradient descent, whose convergence usually depends on the condition number of A.
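The observation above can be sketched in a few lines of numpy: solve Ax = b by descending on the quadratic form. This is a toy dense version with my own naming and step-size choice (1/λ_max), not any particular solver from the talks.

```python
import numpy as np

def solve_psd_gd(A, b, steps=300):
    """Solve A x = b for positive definite A by gradient descent on
    f(x) = 0.5 x^T A x - b^T x, whose gradient is A x - b.
    Step size 1/lambda_max guarantees progress; the number of steps
    needed grows with the condition number of A."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    lr = 1.0 / np.linalg.eigvalsh(A)[-1]   # 1 / largest eigenvalue
    x = np.zeros_like(b)
    for _ in range(steps):
        x = x - lr * (A @ x - b)           # gradient step
    return x
```

On a well-conditioned 2×2 example this converges to machine precision within a few hundred iterations.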

A subset of PSD matrices are Laplacian matrices, which are nothing else but graph Laplacians; using an easy reduction, one can show that any SDD system can be reduced to a Laplacian system. Laplacians are great because they carry a lot of combinatorial structure. Instead of having to suffer through a lot of scary algebra, this is the place where we finally get to solve some fun problems on graphs. The algorithms we aim for have running time close to linear in the sparsity of the matrix.

One reason why graphs are nice is that we know how to approximate them with other, simpler graphs. More specifically, when given a graph G, we care about finding a sparser graph H such that L_H ≼ L_G ≼ α·L_H (see footnote a), for some small α (the smaller, the better). The point is that whenever you do gradient descent in order to minimize the quadratic form of L_G, you can take large steps by solving a system in the sparser L_H. Of course, this requires another linear system solve, only that this one needs to be done on a sparser graph. Applying this idea recursively eventually yields efficient solvers. A lot of combinatorial work is spent on understanding how to compute these sparser graphs.

In their seminal work, Spielman and Teng used ultrasparsifiers (see footnote b) as their underlying combinatorial structure, and after many pages of work they obtained a near-linear algorithm with a large polylogarithmic factor in the running time. Eventually, Koutis, Miller and Peng came up with a much cleaner construction, and showed how to construct a chain of sparser and sparser graphs, which yielded a solver that was actually practical. Subsequently, people spent a lot of time trying to shave log factors from the running time, see [9], [6], [7], [12] (the last of which was presented at this STOC), and the list will probably continue.

After this digression, we can get back to the conference program and discuss the results.

How do we solve a system Lx = b? We need to find a way to efficiently apply the operator L⁺ to b. Even Laplacians are not easy to invert, and what’s worse, their pseudoinverses might not even be sparse. However, we can still represent L⁺ as a product of sparse matrices which are easy to compute.

We can gain some inspiration from trying to numerically approximate the inverse of 1 − x for some small real x. Taking the Taylor expansion we get 1/(1 − x) = 1 + x + x² + x³ + ⋯ = (1 + x)(1 + x²)(1 + x⁴)⋯. Notice that in order to get ε precision, we only need to take the product of the first O(log log(1/ε)) factors. It would be great if we could approximate matrix inverses the same way. Actually, we can, since for matrices of norm less than 1 we have the identity (I − A)⁻¹ = (I + A)(I + A²)(I + A⁴)⋯. At this point we’d be tempted to think that we’re almost done, since we can write a normalized Laplacian as I − A and try to invert it this way. However, we would still need to compute matrix powers, and those matrices might again not even be sparse, so this approach needs more work.
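A tiny numpy sketch of the same idea in its sum form, assuming ‖A‖ < 1 (the truncation length and names are mine). Note that each term needs only one matrix–vector product, so powers of A are never formed explicitly:

```python
import numpy as np

def neumann_apply(A, b, terms=80):
    """Approximate (I - A)^{-1} b by the truncated Neumann series
    b + A b + A^2 b + ...  Valid when ||A|| < 1; every term costs one
    matrix-vector product, so A stays sparse if it started sparse."""
    x = np.zeros_like(b, dtype=float)
    p = np.array(b, dtype=float)
    for _ in range(terms):
        x += p        # add current term A^k b
        p = A @ p     # advance to A^{k+1} b
    return x
```

With a matrix of norm bounded away from 1, the truncation error decays geometrically, so a modest number of terms already matches a direct solve.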

Richard presented a variation of this idea that is more amenable to SDD matrices. Writing the system matrix as D − B, where D is its diagonal part, he uses the identity

(D − B)⁻¹ = (1/2) · ( D⁻¹ + (I + D⁻¹B)(D − B·D⁻¹·B)⁻¹(I + B·D⁻¹) ).

The only hard part of applying this inverse operator to a vector consists of left multiplying by (D − B·D⁻¹·B)⁻¹. How to do this? One crucial ingredient is the fact that D − B·D⁻¹·B is also SDD! Therefore we can recurse, and solve a linear system in D − B·D⁻¹·B. You might say that we won’t be able to do it efficiently, since D − B·D⁻¹·B is not sparse. But with a little bit of work and the help of existing spectral sparsification algorithms, it can be approximated with a sparse matrix.

Notice what happens to the spectrum at each level of recursion. A quick calculation shows that if the condition number of D − B is κ, then the condition number of D − B·D⁻¹·B is roughly κ/2. This means that after O(log κ) levels, the eigenvalues of the remaining matrix are all close to each other, so we can just approximate its inverse with D⁻¹ without paying too much for the error.

There are a few details left out. Sparsifying this matrix requires a bit of understanding of its underlying structure. Also, in order to do this in parallel, the authors originally employed the spectral sparsification algorithm of Spielman and Teng, combined with a local clustering algorithm of Orecchia, Sachdeva and Vishnoi. Blackboxing these two sophisticated results might call into question the practicality of the algorithm. Fortunately, Koutis recently produced a simple self-contained spectral sparsification algorithm, which parallelizes and can replace all the heavy machinery in Richard and Dan’s paper.

Jakub Pachocki and Shen Chen Xu talked about two results, which together yield the fastest SDD system solver to date. The race is still on!

Let me go through a bit more background. Earlier on I mentioned that graph preconditioners are used to take long steps while doing gradient descent. A dual view of gradient descent on the quadratic function is the Richardson iteration. This is yet another iterative method, which refines a coarse approximation to the solution of a linear system. Let L_G be the Laplacian of our given graph, and L_H be the Laplacian of its preconditioner. Let us assume that we have access to the inverse of L_H. The Richardson iteration computes a sequence x₀, x₁, x₂, …, which converges to the solution of the system L_G·x = b. It starts with a weak estimate for the solution, and iteratively attempts to decrease the norm of the residue b − L_G·x_k by updating the current solution with a coarse approximation to the solution of the residual system. That coarse approximation is computed using the inverse of L_H. Therefore the steps are given by

x_{k+1} = x_k + α · L_H⁻¹ · (b − L_G·x_k),

where α is a parameter that adjusts the length of the step. The better L_H approximates L_G, the fewer steps we need to make.
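The iteration above can be sketched in a few lines of numpy. This is a toy dense version (applying the preconditioner's inverse with a direct solve, and using a diagonal preconditioner in the example); in a real Laplacian solver that inner solve would itself be recursive on the sparser graph.

```python
import numpy as np

def richardson(A, B, b, alpha=1.0, steps=200):
    """Preconditioned Richardson iteration for A x = b:
        x <- x + alpha * B^{-1} (b - A x).
    Converges when the spectrum of alpha * B^{-1} A lies in (0, 2);
    the closer B is to A, the faster the residue shrinks."""
    x = np.zeros(len(b), dtype=float)
    for _ in range(steps):
        x += alpha * np.linalg.solve(B, b - A @ x)
    return x
```

With a diagonally dominant A and its diagonal as the preconditioner (the Jacobi choice), the residue contracts by a constant factor per step.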

The problem that Jakub and Shen Chen talked about was finding these good preconditioners. The way they do it is by looking more closely at the Richardson iteration and weakening the requirements. Instead of having the preconditioner approximate L_G spectrally, they only impose certain moment bounds on it. I will not describe them here, but feel free to read the paper. Proving that these moment bounds can be satisfied using a sparser preconditioner than those used so far in the literature constitutes the technical core of the paper.

Just like in the past literature, these preconditioners are obtained by starting with a good tree and sampling extra edges. Traditionally, people used low-stretch spanning trees. The issue with them is that the number of edges in the preconditioner is determined by the average stretch of the tree, and we can easily check that for the square grid this is Ω(log n). Unfortunately, in general we don’t know how to achieve this bound yet; the best known result is off by a log log n factor. It turns out that we can still get preconditioners by looking at a different quantity, the ℓ_p-stretch for p < 1, which can be brought down much further. This essentially eliminates the need for computing optimal low-stretch spanning trees. Furthermore, these trees can be computed really fast, in nearly linear time in the RAM model, and the algorithm parallelizes.

This result consists of a careful combination of existing algorithms on low stretch embeddings of graphs into tree metrics and low stretch spanning trees. I will talk more about these embedding results in a future post.

a. ≼ is also known as the Löwner partial order. A ≼ B is equivalent to saying that B − A is PSD.↩

b. A k-ultrasparsifier of G is a graph H with n − 1 + Õ(m/k) edges such that L_H ≼ L_G ≼ k·L_H. It turns out that one is able to efficiently construct ultrasparsifiers. So by adding a few edges to a spanning tree, you can drastically reduce the relative condition number with the initial graph.↩

Ah, summertime. For many NotSoGITCSers, it means no classes, warm weather, ample research time… and a trip to STOC. This year the conference was held in New York City, only four hours away from Cambridge via sketchy Chinatown tour bus. Continuing in the great tradition of conference blogging by TCS blogs (see here, and here), NotSoGITCS will share some of our favorite papers from the conference. I’ll start this off with my recap on Rothvoss’s paper on extension complexity. Over the next few posts, we’ll hear from Clément Canonne and Gautam Kamath on Efficient Distribution Estimation, Pritish Kamath on the latest results in algebraic circuit complexity, Justin Holmgren on indistinguishability obfuscation, and Sunoo Park on coin flipping!

The Best Paper award for this year’s STOC went to this paper, which is another milestone result in the recent flurry of activity in lower bounds for extension complexity of combinatorial problems. This area represents a fruitful “meeting in the middle” between complexity theory and algorithms. Although a proof of P ≠ NP — a conjecture about the limitation of all polynomial time algorithms — seems astronomically distant, we can instead shoot for a closer goal: show that polynomial time algorithms we know of now cannot solve NP-complete problems. The area of extension complexity is about understanding the limitations of some of the finest weapons in our polynomial-time toolbox: LP and SDP solvers. As many of you know, these algorithms are tremendously powerful and versatile, in both theory and practice alike.

A couple years ago, STOC 2012 (also held in New York!) saw the breakthrough result of Fiorini et al. titled Linear vs. Semidefinite Extended Formulations: Exponential Separation and Strong Lower Bounds. This paper showed that while powerful, LP solvers cannot naturally solve NP-complete problems in polynomial time. What do I mean by naturally solve? Let’s take the Traveling Salesman Problem for a moment (which was the subject of the aforementioned paper). One way to try to solve TSP on an n-city instance using linear programming is to try to optimize over the polytope PTSP where each vertex corresponds to a tour of the complete graph on n nodes. Turns out, this polytope has exponentially many facets, and so is a rather unwieldy object to deal with — simply writing down a system of linear inequalities defining PTSP takes exponential time! Instead, people would like to optimize over another polytope QTSP in higher dimension that projects down to PTSP (such a QTSP would be called a linear extension of PTSP), except now QTSP has a polynomial number of facets. Does such a succinct linear extension exist? If so, that would imply P = NP! The Fiorini et al. paper showed that one can’t solve the Traveling Salesman Problem this way: the TSP polytope has exponential extension complexity — meaning that any linear extension QTSP of PTSP necessarily has exponentially many facets [1].

In a sense, one shouldn’t be too surprised by this result: after all, we all know that P ≠ NP, so of course the TSP polytope has exponential extension complexity. But we’ve learned more than just that LP solvers can’t solve NP-complete problems: we’ve gained some insight into the why. However, after the Fiorini et al. result, the natural question remained: is this exponential complexity of polytopes about NP-hardness, or is it something else? In particular, what is the extension complexity of the perfect matching polytope PPM (the convex hull of the indicator vectors of all perfect matchings in the complete graph)? This is an interesting question for a number of reasons: this polytope, like PTSP, has an exponential number of facets. However, unlike the TSP polytope, optimizing linear objective functions over PPM is actually polynomial-time solvable! This is due to the seminal work of Jack Edmonds, who, in the 1960s, effectively brought the notion of “polynomial time computability” into the (computer science) public consciousness by demonstrating that the maximum matching problem admits an efficient solution.

As you might have guessed from the title, Thomas Rothvoss showed that actually the matching polytope cannot be described as the projection of a polynomially-faceted polytope! So, while we have LP-based algorithms for optimizing over PPM, these algorithms must nontrivially rely on the structure of the matching problem, and cannot be expressed as generically optimizing over some succinct linear extension. I’ll say a few words about how he shows this. To lower bound the extension complexity of a polytope, one starts by leveraging Yannakakis’s Factorization Theorem, proved in his 1991 paper that started the whole area of extension complexity. His theorem says that, for a polytope P, xc(P) = rank₊(S_P), where the left-hand side denotes the extension complexity of P [2], and the right-hand side denotes the non-negative rank of the slack matrix of P [3]. Now, instead of worrying about every possible linear extension of P, we “only” have to focus our attention on the non-negative rank of S_P. It turns out that, fortuitously, we have techniques for lower bounding the non-negative rank of matrices from communication complexity (a connection that was also pointed out by Yannakakis). Roughly speaking, communication complexity tells us that the non-negative rank of a matrix is high if the matrix cannot be “described” as a small collection of combinatorial rectangles.
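To make the slack matrix concrete, here is a small numpy illustration (my own toy example on the unit square, not from the paper): each entry records how slack the j-th vertex leaves in the i-th inequality.

```python
import numpy as np

def slack_matrix(A, b, V):
    """Slack matrix of the polytope {x : A x <= b} with vertices given
    as the rows of V: S[i, j] = b[i] - <A[i], V[j]>.  It is entrywise
    non-negative because every vertex satisfies every inequality."""
    return b[:, None] - A @ V.T

# Unit square [0,1]^2: inequalities -x1 <= 0, x1 <= 1, -x2 <= 0, x2 <= 1.
A = np.array([[-1.0, 0.0], [1.0, 0.0], [0.0, -1.0], [0.0, 1.0]])
b = np.array([0.0, 1.0, 0.0, 1.0])
V = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
S = slack_matrix(A, b, V)
```

An entry of S is zero exactly when the corresponding vertex lies on the corresponding facet, which is the combinatorial structure the rectangle arguments exploit.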

This idea is captured in what Rothvoss calls the “hyperplane separation lower bound”: let M be an arbitrary non-negative matrix and W be an arbitrary matrix of the same dimensions (not necessarily non-negative). Then rank₊(M) ≥ ⟨W, M⟩ / (‖M‖_∞ · α), where ⟨W, M⟩ is the Frobenius inner product between W and M, ‖M‖_∞ is the largest-magnitude entry of M, and α is the maximum of ⟨W, R⟩ over all combinatorial rectangles R. Intuitively, in order to show that the non-negative rank of M is large, you want to exhibit a matrix W (a “hyperplane”) that has large correlation with M, but small correlation with any rectangle R. Rothvoss presents such a hyperplane that proves the matching polytope has exponential extension complexity; the hard part is showing that ⟨W, R⟩ is small for all rectangles R. To do so, Rothvoss follows a strategy similar to Razborov’s proof of the lower bound on the randomized communication complexity of DISJOINTNESS, with substantial modifications to fit the matching problem.

There are a number of reasons why I like these papers in extension complexity: they’re concrete, solid steps towards the dream of proving P ≠ NP. They’re short and elegantly written: the Fiorini, et al. paper is 24 pages; the Rothvoss paper is less than 20. They also demonstrate the unity of theoretical computer science: drawing upon ideas from algorithms, combinatorics, communication complexity, and even quantum information.

-Henry Yuen

[1] Well, now we know that these guys couldn’t have used a succinct extended formulation.

[2] xc(P) is defined to be the minimum number of facets of any polytope that linearly projects to P.

[3] Let M be a non-negative matrix of size m × n. Then rank₊(M) is the minimum r such that there exists a factorization M = AB where A and B are non-negative matrices of dimensions m × r and r × n, respectively. Let P be a polytope defined by the m inequalities a_i·x ≤ b_i, with vertices v_1, …, v_n. Then the (i, j)-th entry of the slack matrix S_P of P is defined to be b_i − a_i·v_j, with a_i the i-th row of the constraint matrix.

One problem that one encounters a lot in machine learning, databases and other areas is the near neighbor search problem (NN). Given a set P of n points in a d-dimensional space and a threshold r > 0, the goal is to build a data structure that, given a query q, reports any point from P within distance at most r from q.

Unfortunately, all known data structures for NN suffer from the so-called “curse of dimensionality”: if the query time is sublinear in n (hereinafter we denote by n the number of points in our dataset), then either space or query time must be exponential in d.

To overcome this obstacle one can consider the approximate near neighbor search problem (ANN). Now, in addition to P and r, we are also given an approximation parameter c > 1. The goal is, given a query q, to report a point from P within distance c·r from q, provided that the neighborhood of radius r is not empty.

It turns out that one can overcome the curse of dimensionality for ANN (see, for example, this paper and its references). If one insists on having near-linear (in n) memory and query time subexponential in the dimension, then the only known technique for ANN is locality-sensitive hashing (LSH). Let us give some definitions. Say a hash family H on a metric space M is (r₁, r₂, p₁, p₂)-sensitive if for every two points x, y:

if d(x, y) ≤ r₁, then Pr over h from H of [h(x) = h(y)] ≥ p₁;

if d(x, y) ≥ r₂, then Pr over h from H of [h(x) = h(y)] ≤ p₂.

Of course, for this to be meaningful, we should have r₁ < r₂ and p₁ > p₂. Informally speaking, the closer two points are, the larger the probability of their collision is.

Let us construct a simple LSH family for the hypercube {0, 1}^d, equipped with the Hamming distance. We set H = {h₁, h₂, …, h_d}, where h_i(x) = x_i. It is easy to check that this family is (r₁, r₂, 1 − r₁/d, 1 − r₂/d)-sensitive.
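One can check the sensitivity claim empirically. A minimal Python sketch of this bit-sampling family (the helper names are mine):

```python
import random

def bit_hash(d, rng):
    """One function from the bit-sampling family for {0,1}^d:
    h(x) = x[i] for a uniformly random coordinate i, so
    Pr[h(x) = h(y)] = 1 - dist(x, y) / d."""
    i = rng.randrange(d)
    return lambda x: x[i]

def collision_rate(x, y, trials=3000, seed=0):
    """Empirical collision probability over fresh draws from the family."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        h = bit_hash(len(x), rng)
        if h(x) == h(y):
            hits += 1
    return hits / trials
```

For d = 10, points at Hamming distance 2 collide about 80% of the time, while points at distance 8 collide about 20% of the time, matching 1 − dist/d.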

In 1998 Piotr Indyk and Rajeev Motwani proved the following theorem. Suppose we have a (r, c·r, p₁, p₂)-sensitive hash family H for the metric we want to solve ANN for. Moreover, assume that we can sample and evaluate a function from H relatively quickly, store it efficiently, and that p₁ and p₂ are bounded away from 0 and 1. Then one can solve ANN for this metric with space roughly O(n^{1 + ρ}) and query time O(n^ρ), where ρ = ln(1/p₁) / ln(1/p₂). Plugging in the family from the previous paragraph, we are able to solve ANN for the Hamming distance in space around O(n^{1 + 1/c}) and query time O(n^{1/c}). More generally, in the same paper it was proved that one can achieve ρ = 1/c for the case of ℓ_p norms for 1 ≤ p ≤ 2 (via an embedding by William Johnson and Gideon Schechtman). In 2006 Alexandr Andoni and Piotr Indyk proved that one can achieve ρ = 1/c² for the ℓ₂ norm.

Thus, the natural question arises: how optimal are the abovementioned bounds on ρ (provided that r is not too tiny)? This question was resolved in 2011 by Ryan O’Donnell, Yi Wu and Yuan Zhou: they showed lower bounds of 1/c for ℓ₁ and 1/c² for ℓ₂, matching the upper bounds. Thus, the above simple LSH family for the hypercube is, in fact, optimal!

Is it the end of the story? Not quite. The catch is that the definition of LSH families is actually too strong. The real property that is used in the ANN data structure is the following: for every point x from the dataset P and every query y we have

if d(x, y) ≤ r, then Pr over h from H of [h(x) = h(y)] ≥ p₁;

if d(x, y) ≥ c·r, then Pr over h from H of [h(x) = h(y)] ≤ p₂.

The difference from the definition of an (r₁, r₂, p₁, p₂)-sensitive family is that we now restrict one of the points to be in a prescribed set P. And it turns out that one can indeed exploit this dependency on data to get a slightly improved LSH family. Namely, we are able to achieve ρ = 7/(8c²) + O(1/c³) for ℓ₂, which by a simple embedding of ℓ₁ into ℓ₂-squared gives ρ = 7/(8c) + O(1/c^{3/2}) for ℓ₁ (in particular, for the Hamming distance over the hypercube). This is nice for two reasons. First, we are able to overcome the natural LSH barrier. Second, this result shows that what “practitioners” have been doing for some time (namely, data-dependent space partitioning) can give an advantage in theory, too.

In the remaining text let me briefly sketch the main ideas of the result. From now on, assume that our metric is ℓ₂. The first ingredient is an LSH family that simplifies and improves upon previous constructions for the case when all data points and queries lie in a ball of radius O(c·r). This scheme has strong parallels with an SDP rounding scheme of David Karger, Rajeev Motwani and Madhu Sudan.

The second (and the main) ingredient is a two-level hashing scheme that leverages the abovementioned better LSH family. First, let us recall how the standard LSH data structure works. We start from a (r, c·r, p₁, p₂)-sensitive family H and then consider the following simple “tensoring” operation: we sample k functions h₁, …, h_k from H independently and then we hash a point x into the tuple (h₁(x), …, h_k(x)). It is easy to see that the new family is (r, c·r, p₁^k, p₂^k)-sensitive. Let us denote this family by H^k. Now we choose k to have the following collision probabilities:

n^{−ρ} at distance r;

n^{−1} at distance c·r

(actually, we cannot set k to achieve these probabilities exactly, since k must be an integer; that’s exactly why we need the condition on p₁ and p₂ above). Now we hash all the points from the dataset using a random function from H^k, and to answer a query q we hash q and enumerate all the points in the corresponding bucket, until we find anything within distance c·r. To analyze this simple data structure, we observe that the average number of “outliers” (points at distance more than c·r) we encounter is at most one due to the choice of k. On the other hand, any near neighbor (within distance at most r) is found with probability at least n^{−ρ}, so, to boost it to constant, we build n^ρ independent hash tables. As a result, we get a data structure with space O(n^{1 + ρ}) and query time O(n^ρ).
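The standard one-level data structure just described can be sketched in a toy Python form for the Hamming cube. The class name and the fixed small parameters t and L are my illustrative choices; the analysis above would set t so the far-collision probability is about 1/n and L about n^ρ.

```python
import random

class HammingLSH:
    """Toy one-level LSH index for {0,1}^d: L tables, each keyed by a
    tuple of t sampled coordinates (the "tensored" bit-sampling hash).
    Query: scan the query's bucket in every table and return the index
    of the first point within Hamming distance r, or None."""

    def __init__(self, points, t=8, L=12, seed=0):
        rng = random.Random(seed)
        self.points = points
        d = len(points[0])
        self.coords = [[rng.randrange(d) for _ in range(t)]
                       for _ in range(L)]
        self.tables = []
        for cs in self.coords:
            table = {}
            for idx, p in enumerate(points):
                table.setdefault(tuple(p[i] for i in cs), []).append(idx)
            self.tables.append(table)

    def query(self, q, r):
        for cs, table in zip(self.coords, self.tables):
            for idx in table.get(tuple(q[i] for i in cs), []):
                if sum(a != b for a, b in zip(self.points[idx], q)) <= r:
                    return idx
        return None
```

With a planted near neighbor one bit away from the query, a handful of tables already finds a valid answer with overwhelming probability.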

Now let us show how to build a similar two-level data structure which achieves somewhat better parameters. First, we apply the LSH family for ℓ₂ with ρ ≈ 1/c², but only partially. Namely, we choose a constant parameter and a number of sampled functions such that the collision probabilities are as follows:

at distance ;

at distance ;

at distance .

Now we hash all the data points with this partial family and argue that with high probability every bucket has small diameter. But now, given this “bounded buckets” condition, we can utilize the better family designed above! Namely, we hash every bucket using our new family to achieve the following probabilities:

at distance ;

at distance .

Overall, the data structure consists of an outer hash table that uses the LSH family of Andoni and Indyk, and then every bucket is hashed using the new family. Due to independence, the collision probabilities multiply, and we get

at distance ;

at distance .

Then we argue as before and conclude that we can achieve a value of ρ strictly smaller than 1/c².

After carefully optimizing all the parameters, we achieve, in fact, a constant-factor improvement. Then we go further and consider a multi-level scheme with several distance scales. Choosing these scales carefully, we achieve ρ = 7/(8c²) + O(1/c³).