Tag: SVD

A typical scenario: given a subset $A$ of a metric space $X$ and a point $x\in X$, we look for a point $a\in A$ that is nearest to $x$: that is, $d(x,a)=\operatorname{dist}(x,A):=\inf_{b\in A}d(x,b)$. Such a point is generally not unique: for example, if $A$ is the graph of the cosine function and $x$ lies on the $y$-axis far enough below the graph, then two symmetric points $(\pm t_0,\cos t_0)$ both qualify as nearest to $x$. This makes the nearest-point projection onto $A$ discontinuous: moving $x$ slightly to the left or to the right will make its projection onto $A$ jump from one point to the other. Not good.

Discontinuous nearest-point projection
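To see the jump numerically, here is a small Python sketch (mine, not from the post); the probe point $(\pm 0.1,-3)$ and the brute-force grid are arbitrary illustrative choices.

```python
import math

def nearest_on_cosine(x, y, t_min=-8.0, t_max=8.0, steps=200001):
    """Approximate the nearest point (t, cos t) to (x, y) by grid search."""
    best_t, best_d2 = t_min, float("inf")
    for i in range(steps):
        t = t_min + (t_max - t_min) * i / (steps - 1)
        d2 = (t - x) ** 2 + (math.cos(t) - y) ** 2
        if d2 < best_d2:
            best_t, best_d2 = t, d2
    return (best_t, math.cos(best_t))

# A small horizontal nudge flips the nearest point to the other side:
left = nearest_on_cosine(-0.1, -3.0)   # nearest point has t < 0
right = nearest_on_cosine(0.1, -3.0)   # nearest point has t > 0
print(left, right)
```

The two projections are far apart even though the two probe points are close, which is exactly the discontinuity described above.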

Even when the nearest point projection is well-defined and continuous, it may not be the kind of projection we want. For example, in a finite-dimensional normed space with strictly convex norm we have a continuous nearest-point projection onto any linear subspace, but it is in general a nonlinear map.
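A quick illustration of that nonlinearity (my construction, not the post's): in $\mathbb{R}^3$ with the strictly convex $\ell^4$ norm, project onto the line spanned by $(2,1,1)$. The nearest point is $t\,(2,1,1)$, where $t$ is the unique root of the derivative of $t\mapsto (x-2t)^4+(y-t)^4+(z-t)^4$, and the map fails additivity.

```python
def proj_coeff(x, y, z, lo=-10.0, hi=10.0):
    """Coefficient t of the l4-nearest point t*(2,1,1) to (x,y,z).

    The optimality condition g(t) = 0 is monotone decreasing in t,
    so bisection finds the unique root.
    """
    def g(t):
        return 2 * (x - 2 * t) ** 3 + (y - t) ** 3 + (z - t) ** 3
    for _ in range(200):
        mid = (lo + hi) / 2
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

t1 = proj_coeff(1, 0, 0)
t2 = proj_coeff(0, 1, 0)
t12 = proj_coeff(1, 1, 0)
print(t1, t2, t12)   # t12 differs from t1 + t2: the projection is not additive
```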

Let’s say that $f\colon X\to A$ is a quasi-projection if $d(x,f(x))\le C\,\operatorname{dist}(x,A)$ for some constant $C$ independent of $x$. Such maps are much easier to construct: indeed, every Lipschitz continuous map $f\colon X\to A$ such that $f(a)=a$ for $a\in A$ is a quasi-projection. (If $a$ is a nearest point of $A$ to $x$, then $d(x,f(x))\le d(x,a)+d(f(a),f(x))\le (1+L)\,d(x,a)$, where $L$ is the Lipschitz constant of $f$.) For example, one quasi-projection onto the graph of cosine is the map shown below.

Continuous quasi-projection
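A sketch of one natural choice, the vertical map $(x,y)\mapsto (x,\cos x)$; this is my guess at an example, and the post's figure may show a different map. The map is Lipschitz and fixes the graph, so the general argument gives the constant $1+\sqrt 2$, checked here on random samples.

```python
import math, random

def quasi_proj(x, y):
    """Drop the point vertically onto the graph of cosine; fixes the graph."""
    return (x, math.cos(x))

def dist_to_graph(x, y, steps=8001):
    """Approximate dist((x,y), graph of cos) by a grid search over t."""
    lo, hi = x - 6.0, x + 6.0
    return min(math.hypot(t - x, math.cos(t) - y)
               for t in (lo + (hi - lo) * i / (steps - 1) for i in range(steps)))

# The map is Lipschitz with constant at most sqrt(2), so it is a
# quasi-projection with constant C = 1 + sqrt(2).
random.seed(0)
C = 1 + math.sqrt(2)
for _ in range(50):
    x, y = random.uniform(-5.0, 5.0), random.uniform(-5.0, 5.0)
    moved = abs(y - math.cos(x))   # distance from (x, y) to its image
    assert moved <= C * dist_to_graph(x, y) + 1e-9
print("quasi-projection constant respected on all samples")
```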

If $X$ is a Banach space and $A$ is a closed subspace, then any bounded idempotent operator $P$ with range $A$ is a quasi-projection onto $A$: indeed, $\|x-Px\|=\|(I-P)(x-a)\|\le \|I-P\|\,\|x-a\|$ for every $a\in A$. Not every subspace admits such an operator, but many do (these are the complemented subspaces; they include all subspaces of finite dimension or finite codimension). By replacing “nearest” with “close enough” we gain linearity. And even some subspaces that are not linearly complemented admit a continuous quasi-projection.
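As a toy finite-dimensional illustration (my own example), the oblique projection $P(x,y)=(x+y,0)$ of the plane onto the $x$-axis is idempotent and linear, and it moves each point by at most $\|I-P\|=\sqrt2$ times its distance to the $x$-axis.

```python
import math, random

def P(x, y):
    """Oblique (non-orthogonal) idempotent projection onto the x-axis."""
    return (x + y, 0.0)

random.seed(1)
C = math.sqrt(2)   # operator norm of I - P for this particular P
for _ in range(1000):
    x, y = random.uniform(-10, 10), random.uniform(-10, 10)
    assert P(*P(x, y)) == P(x, y)             # idempotent: P(P(v)) = P(v)
    moved = math.hypot(x - (x + y), y - 0.0)  # |v - Pv| = sqrt(2) * |y|
    assert moved <= C * abs(y) + 1e-9         # dist(v, x-axis) = |y|
print("P is a sqrt(2)-quasi-projection onto the x-axis")
```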

Here is a neat fact: if $U$ and $V$ are subspaces of a Euclidean space and $\dim U=\dim V$, then there exists an isometric quasi-projection of $U$ onto $V$ with constant $\sqrt2$. This constant is best possible: for example, an isometry from the $x$-axis onto the $y$-axis has to send $(1,0)$ to one of $(0,\pm1)$, thus moving it by distance $\sqrt2$, although its distance to the $y$-axis is only $1$.

An isometry must incur a √2 distance cost

Proof. Let $d$ be the common dimension of $U$ and $V$. Fix some orthonormal bases in $U$ and $V$. In these bases, the orthogonal (nearest-point) projection from $U$ to $V$ is represented by some $d\times d$ matrix $B$ of norm at most $1$. We need an orthogonal matrix $A$ such that the map $U\to V$ that it defines is a $\sqrt2$-quasi-projection. What exactly does this condition mean for $A$?

Let’s say $u\in U$, $v$ is the orthogonal projection of $u$ onto $V$, and $w\in V$ is where we want to send $u$ by an isometry. Our goal is $|u-w|\le \sqrt2\,|u-v|$, in addition to $|w|=|u|$. Squaring and expanding inner products yields $|u|^2-2\langle u,w\rangle+|w|^2\le 2\bigl(|u|^2-2\langle u,v\rangle+|v|^2\bigr)$; since $|w|=|u|$ and $\langle u,v\rangle=|v|^2$, this reduces to $\langle u,w\rangle\ge |v|^2$. Since both $v$ and $w$ are in $V$, we can replace $u$ on the left by its projection $v$. So, the goal simplifies to $\langle v,w\rangle\ge |v|^2$. Geometrically, this means placing $w$ so that its projection onto the line through $0$ and $v$ lies on the continuation of this line beyond $v$.

So far so good, but the disappearance of $u$ from the inequality is disturbing. Let’s bring it back by observing that $\langle v,w\rangle\ge |v|^2$ is equivalent to $|w-v|^2\le |w|^2-|v|^2$, which is simply $|w-v|^2\le |u|^2-|v|^2=|u-v|^2$. So that’s what we want to do: map $u$ so that the distance from its image to $v$ does not exceed $|u-v|$. In terms of the matrices and their operator norm, this means $\|A-B\|\le 1$.

It remains to show that every square matrix of norm at most $1$ (such as $B$ here) is within distance $1$ of some orthogonal matrix. Let $B=P\Sigma Q^T$ be the singular value decomposition, with $P,Q$ orthogonal and $\Sigma$ a diagonal matrix with the singular values of $B$ on the diagonal. Since the singular values of $B$ are between $0$ and $1$, it follows that $\|\Sigma-I\|\le 1$. Hence $\|B-PQ^T\|=\|P(\Sigma-I)Q^T\|\le 1$, and taking $A=PQ^T$ concludes the proof.
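The construction can be checked numerically. Below is a NumPy sketch (mine; the ambient dimension, subspace dimension, and random seed are arbitrary): build orthonormal bases of two random subspaces, form the projection matrix $B$, replace its singular value factor by the identity, and verify both the isometry and the $\sqrt2$ bound.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 7, 3

# Orthonormal bases of two random d-dimensional subspaces U, V of R^n
Ub, _ = np.linalg.qr(rng.standard_normal((n, d)))
Vb, _ = np.linalg.qr(rng.standard_normal((n, d)))

B = Vb.T @ Ub                  # orthogonal projection U -> V in these bases
P, s, Qt = np.linalg.svd(B)    # B = P diag(s) Qt, singular values in [0, 1]
A = P @ Qt                     # orthogonal matrix with ||A - B|| <= 1

for _ in range(1000):
    c = rng.standard_normal(d)
    u = Ub @ c                                    # a point of U
    w = Vb @ (A @ c)                              # its image in V
    dist = np.linalg.norm(u - Vb @ (Vb.T @ u))    # dist(u, V)
    assert abs(np.linalg.norm(w) - np.linalg.norm(u)) < 1e-9   # isometry
    assert np.linalg.norm(u - w) <= np.sqrt(2) * dist + 1e-9   # sqrt(2) bound
print("isometric sqrt(2)-quasi-projection verified")
```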

Given a bunch of words, specifically the names of divisions of plants and bacteria, I’m going to use a truncated Singular Value Decomposition to separate the bacteria from the plants. This isn’t a novel or challenging task, but I like the small size of the example. A similar type of example is classifying a bunch of text fragments by keywords, but that requires a lot more setup.

Not so obvious anymore, is it? Recalling the -phyta ending, we may want to focus on the presence of the letter y, which is not so common otherwise. Indeed, the count of y’s is a decent predictor: on the following plot, green asterisks are plants, red ones are bacteria, and the vertical axis is the count of the letter y in each word.

Count of Y in each word
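The count itself is a one-liner. The word samples below are illustrative picks of mine, not necessarily the post's 33 words; note that a name like Cyanobacteria already shows why one y is ambiguous.

```python
# Count occurrences of the letter y in each word (case-insensitive).
# Word lists are illustrative samples, not the exact list from the post.
plants = ["Bryophyta", "Pinophyta", "Magnoliophyta", "Chlorophyta", "Cycadophyta"]
bacteria = ["Proteobacteria", "Firmicutes", "Spirochaetes", "Cyanobacteria", "Bacteroidetes"]

def y_count(word):
    return word.lower().count("y")

for w in plants + bacteria:
    print(f"{w:15s} {y_count(w)}")
```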

However, the simple count fails to classify several words: having one letter y may or may not mean a plant. Instead, let’s consider the entire matrix $M$ of letter counts (here it is in a spreadsheet: 33 rows, one for each word; 26 columns, one for each letter). So far, we looked at its 25th column in isolation from the rest of the matrix. Truncated SVD uncovers the relations between columns that are not obvious but express patterns, such as the presence of the letters p, h, t, a along with y. Specifically, write $M=U\Sigma V^T$ with $U,V$ orthogonal and $\Sigma$ diagonal. Replace all entries of $\Sigma$, except the four largest ones, by zeros. The result is a diagonal matrix $\Sigma_4$ of rank 4. The product $M_4=U\Sigma_4 V^T$ is a rank-4 matrix, which keeps some of the essential patterns in $M$ but de-emphasizes the accidental ones.

The entries of $M_4$ are no longer integers. Here is a color-coded plot of its 25th column, which still somehow corresponds to the letter y but takes into account the other letters with which it appears.

The same column of the letter-count matrix, after truncation
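The truncation step can be sketched in a few lines of NumPy. The word list here is an illustrative sample of mine (the post uses 33 division names), so the resulting numbers differ from the post's plot; the mechanics are the same.

```python
import numpy as np

# Illustrative word list; the post's actual matrix has 33 rows.
words = ["bryophyta", "pinophyta", "magnoliophyta", "chlorophyta", "cycadophyta",
         "ginkgophyta", "gnetophyta", "charophyta",
         "proteobacteria", "firmicutes", "spirochaetes", "cyanobacteria",
         "bacteroidetes", "chloroflexi", "fusobacteria", "tenericutes"]

# Letter-count matrix: one row per word, one column per letter a..z
M = np.array([[w.count(ch) for ch in "abcdefghijklmnopqrstuvwxyz"] for w in words],
             dtype=float)

# Truncated SVD: keep the 4 largest singular values, zero out the rest
U, s, Vt = np.linalg.svd(M, full_matrices=False)
s4 = np.where(np.arange(len(s)) < 4, s, 0.0)   # s is sorted in descending order
M4 = U @ np.diag(s4) @ Vt                      # best rank-4 approximation of M

y_col = M4[:, ord("y") - ord("a")]             # 25th column (letter y), non-integer now
print(np.round(y_col, 2))
```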

Plants are now cleanly separated from bacteria. The plots were made in MATLAB as follows: