Archive

A wide variety of norms of matrices and operators are useful in many different contexts. Some matrix norms, such as the Schatten norms and Ky Fan norms, are easy to compute thanks to the singular value decomposition. However, the computation of many other norms, such as the induced p-norms (when p ≠ 1, 2, ∞), is NP-hard. In this post, we will look at a general method for getting quite good estimates of almost any matrix norm.

The basic idea is that every norm can be written as a maximization of a convex function over a convex set (in particular, every norm can be written as a maximization over the unit ball of the dual norm). However, this maximization is often difficult to deal with or solve analytically, so instead it can help to write the norm as a maximization over two or more simpler sets, each of which can be solved individually. To illustrate how this works, let’s start with the induced matrix norms.

Induced matrix norms

The induced p → q norm of a matrix $B$ is defined as follows:

$$\|B\|_{p\to q} := \max\{ \|Bx\|_q : \|x\|_p \leq 1 \},$$

where

$$\|x\|_p := \Big(\sum_i |x_i|^p\Big)^{1/p}$$

is the vector p-norm. There are three special cases of these norms that are easy to compute:

When p = q = 2, this is the usual operator norm of B (i.e., its largest singular value).

When p = q = 1, this is the maximum absolute column sum: $\|B\|_{1\to 1} = \max_j \sum_i |b_{ij}|$.

When p = q = ∞, this is the maximum absolute row sum: $\|B\|_{\infty\to\infty} = \max_i \sum_j |b_{ij}|$.
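These three easy cases can be sketched in a few lines of NumPy (the function name `induced_norm_easy` is ours, not part of any library):

```python
import numpy as np

def induced_norm_easy(B, p):
    """Induced p->p norm of B in the three easy cases p = 1, 2, inf."""
    if p == 1:
        return np.abs(B).sum(axis=0).max()            # maximum absolute column sum
    if p == 2:
        return np.linalg.svd(B, compute_uv=False)[0]  # largest singular value
    if p == np.inf:
        return np.abs(B).sum(axis=1).max()            # maximum absolute row sum
    raise ValueError("only p = 1, 2, inf are easy to compute")

B = np.array([[1.0, -2.0], [3.0, 4.0]])
# absolute column sums are 1+3 = 4 and 2+4 = 6; absolute row sums are 1+2 = 3 and 3+4 = 7
```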

However, outside of these three special cases (and some other special cases, such as when B only has real entries that are non-negative [1]), this norm is much messier. In general, its computation is NP-hard [2], so how can we get a good idea of its value? Well, we rewrite the norm as the following double maximization:

$$\|B\|_{p\to q} = \max\{ |y^*Bx| : \|x\|_p \leq 1, \|y\|_{q'} \leq 1 \},$$

where $q'$ is the positive real number such that $1/q + 1/q' = 1$ (and we take $q' = \infty$ if $q = 1$, and vice-versa). The idea is then to maximize over $x$ and $y$ one at a time, alternately.

1. Start by setting $k = 1$ and fixing a randomly-chosen vector $x_1$, scaled so that $\|x_1\|_p = 1$.

2. Compute

$$\max\{ |y^*Bx_k| : \|y\|_{q'} \leq 1 \},$$

keeping $x_k$ fixed, and let $y_k$ be the vector attaining this maximum. By Hölder’s inequality, we know that this maximum value is exactly equal to $\|Bx_k\|_q$. Furthermore, the equality condition of Hölder’s inequality tells us that the vector $y_k$ attaining this maximum is the one with complex phases that are the same as those of $Bx_k$, and whose magnitudes are such that $|y_k|^{q'}$ is a multiple of $|Bx_k|^q$ (here the notation $|\cdot|^q$ means we take the absolute value and the q-th power of every entry of the vector).

3. Compute

$$\max\{ |y_k^*Bx| : \|x\|_p \leq 1 \},$$

keeping $y_k$ fixed, and let $x_{k+1}$ be the vector attaining this maximum. By an argument almost identical to that of step 2, this maximum is equal to $\|B^*y_k\|_{p'}$, where $p'$ is the positive real number such that $1/p + 1/p' = 1$. Furthermore, the vector $x_{k+1}$ attaining this maximum is the one with complex phases that are the same as those of $B^*y_k$, and whose magnitudes are such that $|x_{k+1}|^p$ is a multiple of $|B^*y_k|^{p'}$.

4. Increment $k$ by 1 and return to step 2. Repeat until negligible gains are made after each iteration.

This algorithm is extremely quick to run, since Hölder’s inequality tells us exactly how to solve each of the two maximizations separately, so we’re left only performing simple vector calculations at each step. The downside of this algorithm is that, even though it will always converge to some local maximum, it might converge to a value that is smaller than the true induced p → q norm. However, in practice this algorithm is fast enough that it can be run several thousand times with different (randomly-chosen) starting vectors to get an extremely good idea of the value of $\|B\|_{p\to q}$.
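The alternating steps above can be sketched as follows (a sketch under the assumption $1 < p, q < \infty$, so the Hölder maximizers below are well-defined; QETLAB's InducedMatrixNorm is the real implementation, and all names here are ours):

```python
import numpy as np

def _holder_maximizer(w, s):
    """Unit s-norm vector v maximizing <v, w>; by Hölder's equality condition,
    v has the same phases as w and magnitudes proportional to |w|^(s'-1),
    where 1/s + 1/s' = 1, and the maximum value achieved is ||w||_{s'}."""
    sp = s / (s - 1.0)                                   # conjugate exponent s'
    mag = np.abs(w)
    phase = np.where(mag > 0, w / np.where(mag > 0, mag, 1.0), 0)
    v = phase * mag ** (sp - 1.0)
    return v / np.linalg.norm(v, s)

def induced_pq_norm_estimate(B, p, q, iters=100, seed=0):
    """Alternating (lower-bound) estimate of ||B||_{p->q}, for 1 < p, q < inf."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(B.shape[1]) + 1j * rng.standard_normal(B.shape[1])
    x /= np.linalg.norm(x, p)                            # step 1: random unit start
    for _ in range(iters):
        y = _holder_maximizer(B @ x, q / (q - 1.0))      # step 2: value ||B x||_q
        x = _holder_maximizer(B.conj().T @ y, p)         # step 3: value ||B* y||_{p'}
    return np.linalg.norm(B @ x, q)

B = np.array([[3.0, 1.0, 0.0], [0.0, 2.0, 1.0], [1.0, 0.0, 1.0]])
est = induced_pq_norm_estimate(B, 2, 2)
```

When p = q = 2 the two Hölder steps collapse into the power method, which makes a handy sanity check: the estimate should match the largest singular value of $B$.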

It is worth noting that this algorithm is essentially the same as the one presented in [3], and reduces to the power method for finding the largest singular value when p = q = 2. This algorithm has been implemented in the QETLAB package for MATLAB as the InducedMatrixNorm function.

Induced Schatten superoperator norms

There is a natural family of induced norms on superoperators (i.e., linear maps $\Phi : M_m \to M_n$) as well. First, for a matrix $X$, we define its Schatten p-norm to be the p-norm of its vector of singular values:

$$\|X\|_p := \Big(\sum_i \sigma_i(X)^p\Big)^{1/p}.$$

Three special cases of the Schatten p-norms include:

p = 1, which is often called the “trace norm” or “nuclear norm”,

p = 2, which is often called the “Frobenius norm” or “Hilbert–Schmidt norm”, and

p = ∞, which is the usual operator norm.

The Schatten norms themselves are easy to compute (since singular values are easy to compute), but their induced counterparts are not.
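Indeed, computing a Schatten p-norm is just an SVD followed by a vector p-norm; a minimal sketch (the function name is ours):

```python
import numpy as np

def schatten_norm(X, p):
    """Schatten p-norm of X: the vector p-norm of X's singular values."""
    s = np.linalg.svd(X, compute_uv=False)
    if p == np.inf:
        return s[0]                          # operator norm: largest singular value
    return float((s ** p).sum() ** (1.0 / p))

X = np.array([[1.0, 2.0], [3.0, 4.0]])
# p = 1 gives the trace norm, p = 2 the Frobenius norm, p = inf the operator norm
```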

Given a superoperator $\Phi$, its induced Schatten p → q norm is defined as follows:

$$\|\Phi\|_{p\to q} := \max\{ \|\Phi(X)\|_q : \|X\|_p \leq 1 \}.$$

These induced Schatten norms were studied in some depth in [4], and crop up fairly frequently in quantum information theory (especially when p = q = 1) and operator theory (especially when p = q = ∞). The fact that they are NP-hard to compute in general is not surprising, since they reduce to the induced matrix norms (discussed earlier) in the case when $\Phi$ only acts on the diagonal entries of its input and just zeros out the off-diagonal entries. However, it seems likely that this norm’s computation is also difficult even in the special cases p = q = 1 and p = q = ∞ (however, it is straightforward to compute when p = q = 2).

Nevertheless, we can obtain good estimates of this norm’s value numerically using essentially the same method as discussed in the previous section. We start by rewriting the norm as a double maximization, where each maximization individually is easy to deal with:

$$\|\Phi\|_{p\to q} = \max\{ |\langle Y, \Phi(X)\rangle| : \|X\|_p \leq 1, \|Y\|_{q'} \leq 1 \},$$

where $q'$ is again the positive real number (or infinity) satisfying $1/q + 1/q' = 1$. We now maximize over $X$ and $Y$, one at a time, alternately, just as before:

1. Start by setting $k = 1$ and fixing a randomly-chosen matrix $X_1$, scaled so that $\|X_1\|_p = 1$.

2. Compute

$$\max\{ |\langle Y, \Phi(X_k)\rangle| : \|Y\|_{q'} \leq 1 \},$$

keeping $X_k$ fixed, and let $Y_k$ be the matrix attaining this maximum. By the Hölder inequality for Schatten norms, we know that this maximum value is exactly equal to $\|\Phi(X_k)\|_q$. Furthermore, the matrix $Y_k$ attaining this maximum is the one with the same left and right singular vectors as $\Phi(X_k)$, and whose singular values are such that there is a constant $c \geq 0$ so that $\sigma_i(Y_k)^{q'} = c\,\sigma_i(\Phi(X_k))^q$ for all $i$ (i.e., the vector of singular values of $Y_k$, raised to the $q'$ power, is a multiple of the vector of singular values of $\Phi(X_k)$, raised to the $q$ power).

3. Compute

$$\max\{ |\langle Y_k, \Phi(X)\rangle| : \|X\|_p \leq 1 \},$$

keeping $Y_k$ fixed, and let $X_{k+1}$ be the matrix attaining this maximum. By an argument essentially the same as in step 2, we know that this maximum value is exactly equal to $\|\Phi^\dagger(Y_k)\|_{p'}$, where $p'$ satisfies $1/p + 1/p' = 1$ and $\Phi^\dagger$ is the map that is dual to $\Phi$ in the Hilbert–Schmidt inner product. Furthermore, the matrix $X_{k+1}$ attaining this maximum is the one with the same left and right singular vectors as $\Phi^\dagger(Y_k)$, and whose singular values are such that there is a constant $c \geq 0$ so that $\sigma_i(X_{k+1})^{p} = c\,\sigma_i(\Phi^\dagger(Y_k))^{p'}$ for all $i$.

4. Increment $k$ by 1 and return to step 2. Repeat until negligible gains are made after each iteration.

The above algorithm is almost identical to the algorithm presented for induced matrix norms, but with the absolute values and complex phases of the vectors $x$ and $y$ replaced by the singular values and singular vectors of the matrices $X$ and $Y$, respectively. The entire algorithm is still extremely quick to run, since each step just involves computing one singular value decomposition.
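Here is a minimal sketch of this iteration, assuming $1 < p, q < \infty$ and representing $\Phi$ by a matrix acting on vectorized inputs (this representation and all function names are ours; QETLAB's InducedSchattenNorm is the real implementation):

```python
import numpy as np

def schatten_maximizer(W, s):
    """Unit Schatten s-norm matrix Y maximizing <Y, W>; the maximum equals the
    Schatten s'-norm of W (1/s + 1/s' = 1), and Y shares W's singular vectors,
    with sigma(Y)^s proportional to sigma(W)^{s'}."""
    sp = s / (s - 1.0)                       # conjugate exponent s'
    U, sig, Vh = np.linalg.svd(W)
    new = sig ** (sp - 1.0)
    new /= np.linalg.norm(new, s)            # normalize so ||Y||_s = 1
    return U @ np.diag(new) @ Vh

def induced_schatten_estimate(phi_mat, m, n, p, q, iters=200, seed=0):
    """Lower-bound estimate of the induced Schatten p->q norm of the map
    Phi(X) = reshape(phi_mat @ vec(X)), for 1 < p, q < inf."""
    rng = np.random.default_rng(seed)
    phi     = lambda X: (phi_mat @ X.reshape(-1)).reshape(n, n)
    phi_adj = lambda Y: (phi_mat.conj().T @ Y.reshape(-1)).reshape(m, m)  # HS dual map
    X = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
    X = X / np.linalg.norm(np.linalg.svd(X, compute_uv=False), p)
    for _ in range(iters):
        Y = schatten_maximizer(phi(X), q / (q - 1.0))    # step 2: value ||Phi(X)||_q
        X = schatten_maximizer(phi_adj(Y), p)            # step 3: value ||Phi'(Y)||_{p'}
    return np.linalg.norm(np.linalg.svd(phi(X), compute_uv=False), q)

# sanity check: for p = q = 2 the induced norm is the largest singular value of phi_mat
phi_mat = np.diag([3.0, 1.0, 1.0, 0.5]).astype(complex)
est = induced_schatten_estimate(phi_mat, 2, 2, 2, 2)
```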

The downside of this algorithm, as with the induced matrix norm algorithm, is that we have no guarantee that this method will actually converge to the induced Schatten p → q norm; only that it will converge to some lower bound of it. However, the algorithm works pretty well in practice, and is fast enough that we can simply run it a few thousand times to get a very good idea of what the norm actually is. If you’re interested in making use of this algorithm, it has been implemented in QETLAB as the InducedSchattenNorm function.

Entanglement Norms

The central idea used for the previous two families of norms can also be used to get lower bounds on the following norm on $M_m \otimes M_n$ that comes up from time to time when dealing with quantum entanglement:

$$\|X\|_{S(1)} := \sup\{ |(a \otimes b)^* X (c \otimes d)| : \|a\| = \|b\| = \|c\| = \|d\| = 1 \}.$$

(As a side note: this norm, and some other ones like it, were the central focus of my thesis.) This norm is already written for us as a double maximization, so the idea presented in the previous two sections is somewhat clearer from the start: we fix randomly-generated unit vectors $b$ and $d$ and then maximize over all unit vectors $a$ and $c$, which can be done simply by computing the left and right singular vectors associated with the maximum singular value of the operator

$$(I \otimes b)^* X (I \otimes d).$$

We then fix $a$ and $c$ as those singular vectors and then maximize over all unit vectors $b$ and $d$ (which is again a singular value problem, this time for the operator $(a \otimes I)^* X (c \otimes I)$), and we iterate back and forth until we converge to some value.
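The iteration can be sketched as follows (a sketch of the idea behind sk_iterate, not QETLAB's actual implementation; names are ours):

```python
import numpy as np

def sk1_estimate(X, m, n, iters=50, seed=0):
    """Alternating lower-bound estimate of
    sup |(a (x) b)^* X (c (x) d)| over unit vectors a, c in C^m and b, d in C^n."""
    rng = np.random.default_rng(seed)
    T = X.reshape(m, n, m, n)          # T[i,j,k,l] = <e_i (x) e_j| X |e_k (x) e_l>
    b = rng.standard_normal(n) + 1j * rng.standard_normal(n); b /= np.linalg.norm(b)
    d = rng.standard_normal(n) + 1j * rng.standard_normal(n); d /= np.linalg.norm(d)
    val = 0.0
    for _ in range(iters):
        M = np.einsum('j,ijkl,l->ik', b.conj(), T, d)    # fix b, d; optimize a, c
        U, s, Vh = np.linalg.svd(M)
        a, c, val = U[:, 0], Vh[0].conj(), s[0]          # top singular vectors/value
        N = np.einsum('i,ijkl,k->jl', a.conj(), T, c)    # fix a, c; optimize b, d
        U, s, Vh = np.linalg.svd(N)
        b, d, val = U[:, 0], Vh[0].conj(), s[0]
    return val

# check on the maximally entangled projector on C^2 (x) C^2: the best product-vector
# overlap with |phi+> is 1/sqrt(2) on each side, so the norm equals 1/2
phi_plus = np.zeros(4, dtype=complex); phi_plus[[0, 3]] = 1 / np.sqrt(2)
est = sk1_estimate(np.outer(phi_plus, phi_plus.conj()), 2, 2)
```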

As with the previously-discussed norms, this algorithm always converges, and it converges to a lower bound of $\|X\|_{S(1)}$, but perhaps not its exact value. If you want to take this algorithm out for a spin, it has been implemented in QETLAB as the sk_iterate function.

It’s also worth mentioning that this algorithm generalizes straightforwardly in several different directions. For example, it can be used to find lower bounds on the norms $\|\cdot\|_{S(k)}$, where we maximize on the left and right over pure states with Schmidt rank no larger than k rather than over separable pure states, and it can be used to find lower bounds on the geometric measure of entanglement [5].

Let $\mathcal{H}$ be a finite-dimensional Hilbert space over $\mathbb{R}$ or $\mathbb{C}$ (the fields of real and complex numbers, respectively). If we let $\|\cdot\|$ be a norm on $\mathcal{H}$ (not necessarily the norm induced by the inner product), then the dual norm of $\|\cdot\|$ is defined by

$$\|v\|^\circ := \sup\{ |\langle v, w\rangle| : \|w\| \leq 1 \}.$$

The double-dual of a norm is equal to itself (i.e., $\|\cdot\|^{\circ\circ} = \|\cdot\|$) and the norm induced by the inner product is the unique norm that is its own dual. Similarly, if $\|\cdot\|_p$ is the vector p-norm, then $\|\cdot\|_p^\circ = \|\cdot\|_q$, where $q$ satisfies $1/p + 1/q = 1$.
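The p-norm duality claim is easy to check numerically: Hölder's inequality bounds the supremum defining the dual norm by the q-norm, and the Hölder equality vector attains that bound. A quick sketch:

```python
import numpy as np

p = 3.0
q = p / (p - 1.0)                 # conjugate exponent: 1/p + 1/q = 1
rng = np.random.default_rng(0)
v = rng.standard_normal(5)

# Hölder: |<v, w>| <= ||v||_q for every w with ||w||_p <= 1 ...
for _ in range(1000):
    w = rng.standard_normal(5)
    w /= np.linalg.norm(w, p)
    assert abs(v @ w) <= np.linalg.norm(v, q) + 1e-12

# ... and the Hölder equality vector attains the bound, so the dual norm is ||v||_q:
w_star = np.sign(v) * np.abs(v) ** (q - 1.0)
w_star /= np.linalg.norm(w_star, p)
assert abs(v @ w_star - np.linalg.norm(v, q)) < 1e-10
```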

In this post, we will demonstrate that $\|\cdot\|^\circ$ has an equivalent characterization as an infimum, and we use this characterization to provide a simple derivation of several known (but perhaps not well-known) formulas for norms, such as the operator norm of matrices.

For certain norms (such as the “separability norms” presented at the end of this post), this ability to write a norm as both an infimum and a supremum is useful because computation of the norm may be difficult. In such cases, having these two different characterizations of the norm allows us to bound it both from above and from below.

The Dual Norm as an Infimum

Theorem 1. Let $C \subseteq \mathcal{H}$ be a bounded set satisfying $\mathrm{span}(C) = \mathcal{H}$ and define a norm $\|\cdot\|_C$ by

$$\|v\|_C := \sup\{ |\langle v, w\rangle| : w \in C \}.$$

Then the dual norm $\|\cdot\|_C^\circ$ is given by

$$\|v\|_C^\circ = \inf\Big\{ \sum_i |c_i| : v = \sum_i c_i w_i, \ w_i \in C \text{ for all } i \Big\},$$

where the infimum is taken over all such decompositions of $v$.

Before proving the result, we make two observations. Firstly, the quantity $\|\cdot\|_C$ described by Theorem 1 really is a norm: boundedness of $C$ ensures that the supremum is finite, and $\mathrm{span}(C) = \mathcal{H}$ ensures that $\|v\|_C = 0$ only when $v = 0$. Secondly, every norm on $\mathcal{H}$ can be written in this way: we can always choose $C$ to be the unit ball of the dual norm $\|\cdot\|^\circ$. However, there are times when other choices of $C$ are more useful or enlightening (as we will see in the examples).

Proof of Theorem 1. Begin by noting that if $\|v\|_C \leq 1$ and $w \in C$ then $|\langle v, w\rangle| \leq 1$. It follows that $\|w\|_C^\circ \leq 1$ whenever $w \in C$. In fact, we now show that $\|\cdot\|_C^\circ$ is the largest norm on $\mathcal{H}$ with this property. To this end, let $\|\cdot\|'$ be another norm satisfying $\|w\|' \leq 1$ whenever $w \in C$. Then

$$\|v\|'^\circ = \sup\{ |\langle v, w\rangle| : \|w\|' \leq 1 \} \geq \sup\{ |\langle v, w\rangle| : w \in C \} = \|v\|_C.$$

Thus $\|\cdot\|'^\circ \geq \|\cdot\|_C$, so by taking duals we see that $\|\cdot\|' \leq \|\cdot\|_C^\circ$, as desired.

For the remainder of the proof, we denote the infimum in the statement of the theorem by $\|v\|_{\inf}$. Our goal now is to show that: (1) $\|\cdot\|_{\inf}$ is a norm, (2) $\|\cdot\|_{\inf}$ satisfies $\|w\|_{\inf} \leq 1$ whenever $w \in C$, and (3) $\|\cdot\|_{\inf}$ is the largest norm satisfying property (2). The fact that $\|\cdot\|_{\inf} = \|\cdot\|_C^\circ$ will then follow from the first paragraph of this proof.

To see (1) (i.e., to prove that $\|\cdot\|_{\inf}$ is a norm), we only prove the triangle inequality, since positive homogeneity and the fact that $\|v\|_{\inf} = 0$ if and only if $v = 0$ are both straightforward (try them yourself!). Fix $\varepsilon > 0$ and let $v = \sum_i c_i w_i$, $x = \sum_i d_i y_i$ be decompositions of $v$ and $x$ with $w_i, y_i \in C$ for all i, satisfying $\sum_i |c_i| \leq \|v\|_{\inf} + \varepsilon$ and $\sum_i |d_i| \leq \|x\|_{\inf} + \varepsilon$. Then

$$\|v + x\|_{\inf} \leq \sum_i |c_i| + \sum_i |d_i| \leq \|v\|_{\inf} + \|x\|_{\inf} + 2\varepsilon.$$

Since $\varepsilon > 0$ was arbitrary, the triangle inequality follows, so $\|\cdot\|_{\inf}$ is a norm.

To see (2) (i.e., to prove that $\|w\|_{\inf} \leq 1$ whenever $w \in C$), we simply write $w$ in its trivial decomposition $w = 1 \cdot w$, which gives the single coefficient $c_1 = 1$, so $\|w\|_{\inf} \leq 1$.

To see (3) (i.e., to prove that $\|\cdot\|_{\inf}$ is the largest norm on $\mathcal{H}$ satisfying condition (2)), begin by letting $\|\cdot\|'$ be any norm on $\mathcal{H}$ with the property that $\|w\|' \leq 1$ for all $w \in C$. Then using the triangle inequality for $\|\cdot\|'$ shows that if $v = \sum_i c_i w_i$ is any decomposition of $v$ with $w_i \in C$ for all i, then

$$\|v\|' \leq \sum_i |c_i| \|w_i\|' \leq \sum_i |c_i|.$$

Taking the infimum over all such decompositions of $v$ shows that $\|v\|' \leq \|v\|_{\inf}$, which completes the proof.

The remainder of this post is devoted to investigating what Theorem 1 says about certain specific norms.

Operator and Trace Norm of Matrices

From now on, let $M_n$ denote the space of n × n complex matrices, equipped with the Hilbert–Schmidt inner product $\langle X, Y\rangle := \mathrm{Tr}(X^*Y)$, and recall that the operator norm of $X \in M_n$ is

$$\|X\| := \max\{ \|Xv\| : \|v\| \leq 1 \},$$

where $\|\cdot\|$ is the Euclidean norm on $\mathbb{C}^n$. If we let $C$ be the set of unitary matrices in $M_n$, then $\|\cdot\|_C$ is the trace norm (the dual of the operator norm), so Theorem 1 provides the following alternate characterization of the operator norm:

Corollary 1. Let $X \in M_n$. Then

$$\|X\| = \inf\Big\{ \sum_i |c_i| : X = \sum_i c_i U_i, \text{ where each } U_i \in M_n \text{ is unitary} \Big\}.$$

As an application of Corollary 1, we are able to provide the following characterization of unitarily-invariant norms (i.e., norms $\|\cdot\|'$ with the property that $\|UXV\|' = \|X\|'$ for all $X \in M_n$ and all unitary matrices $U, V \in M_n$):

Corollary 2. Let $\|\cdot\|'$ be a norm on $M_n$. Then $\|\cdot\|'$ is unitarily-invariant if and only if

$$\|XYZ\|' \leq \|X\| \cdot \|Y\|' \cdot \|Z\|$$

for all $X, Y, Z \in M_n$.

Proof of Corollary 2. The “if” direction is straightforward: if we let $X = U$ and $Z = V$ be unitary, then

$$\|UYV\|' \leq \|U\| \cdot \|Y\|' \cdot \|V\| = \|Y\|',$$

where we used the fact that $\|U\| = \|V\| = 1$. Applying the same reasoning to $U^*(UYV)V^*$ gives the reverse inequality, so it follows that $\|UYV\|' = \|Y\|'$, and hence $\|\cdot\|'$ is unitarily-invariant.

To see the “only if” direction, write $X = \sum_i c_i U_i$ and $Z = \sum_j d_j V_j$ with each $U_i$ and $V_j$ unitary. Then

$$\|XYZ\|' \leq \sum_{i,j} |c_i||d_j| \, \|U_i Y V_j\|' = \Big(\sum_i |c_i|\Big)\Big(\sum_j |d_j|\Big) \|Y\|'.$$

By taking the infimum over all decompositions of $X$ and $Z$ of the given form and using Corollary 1, the result follows.

An alternate proof of Corollary 2, making use of some results on singular values, can be found in [2, Proposition IV.2.4].
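Corollary 1 can also be checked numerically: combining the polar decomposition with a unitary "dilation" of the positive factor writes any $X$ as an average of two unitaries scaled by $\|X\|$, so the infimum in Corollary 1 is attained with $\sum_i |c_i| = \|X\|$ (the reverse inequality is just the triangle inequality). A sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

t = np.linalg.norm(X, 2)                    # operator norm: largest singular value
U, s, Wh = np.linalg.svd(X / t)             # X/t has singular values s in [0, 1]
V = U @ Wh                                  # unitary polar factor: X/t = V P
P = Wh.conj().T @ np.diag(s) @ Wh           # positive factor, eigenvalues in [0, 1]
R = Wh.conj().T @ np.diag(np.sqrt(np.clip(1 - s**2, 0, None))) @ Wh
U1 = V @ (P + 1j * R)                       # unitary, since P^2 + R^2 = I and PR = RP
U2 = V @ (P - 1j * R)                       # unitary as well

assert np.allclose(U1 @ U1.conj().T, np.eye(4))
assert np.allclose((t / 2) * (U1 + U2), X)  # X = c1 U1 + c2 U2 with |c1| + |c2| = ||X||
```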

Separability Norms

As our final (and least well-known) example, let $\mathcal{H} = M_m \otimes M_n$, again with the usual Hilbert–Schmidt inner product. If we let

$$C = \{ (a \otimes b)(c \otimes d)^* : \|a\| = \|b\| = \|c\| = \|d\| = 1 \},$$

where $\|\cdot\|$ is the Euclidean norm on $\mathbb{C}^m$ or $\mathbb{C}^n$ (as appropriate), then Theorem 1 tells us that the following two norms are dual to each other:

$$\|X\|_C = \sup\{ |(a \otimes b)^* X (c \otimes d)| : \|a\| = \|b\| = \|c\| = \|d\| = 1 \},$$

$$\|X\|_C^\circ = \inf\Big\{ \sum_i |\lambda_i| : X = \sum_i \lambda_i (a_i \otimes b_i)(c_i \otimes d_i)^* \Big\}.$$

There’s actually a little bit of work to be done to show that has the given form, but it’s only a couple lines – consider it an exercise for the interested reader.

Both of these norms come up frequently when dealing with quantum entanglement. The dual norm $\|\cdot\|_C^\circ$ was the subject of [3], where it was shown that a quantum state $\rho$ is entangled if and only if $\|\rho\|_C^\circ > 1$ (I use the above duality relationship to provide an alternate proof of this fact in [4, Theorem 6.1.5]). On the other hand, the norm $\|\cdot\|_C$ characterizes positive linear maps of matrices and was the subject of [5, 6].

Update [March 3, 2015]: I have released a MATLAB package called QETLAB that can compute the completely bounded and diamond norms (and much more) much faster than the code contained in this blog post. I recommend using QETLAB instead of the code below.

In operator theory, the completely bounded norm of a linear map $\Phi$ on complex matrices is defined by

$$\|\Phi\|_{cb} := \sup_{k \geq 1} \|\mathrm{id}_k \otimes \Phi\|,$$

where $\|\cdot\|$ is the usual norm on linear maps, defined by $\|\Phi\| := \sup\{ \|\Phi(X)\| : \|X\| \leq 1 \}$, and $\|X\|$ is the operator norm of $X$ [1]. The completely bounded norm is particularly useful when thinking of the domain and range of $\Phi$ as operator spaces.

The dual of the completely bounded norm is called the diamond norm, which plays an important role in quantum information theory, as it can be used to measure the distance between quantum channels. The diamond norm of $\Phi$ is typically denoted $\|\Phi\|_\diamond$. For properties of the completely bounded and diamond norms, see [1,2,3].

A method for efficiently computing the completely bounded and diamond norms via semidefinite programming was recently presented in [4]. The purpose of this post is to provide MATLAB scripts that implement this algorithm and demonstrate its usage.

Download and Install

In order to make use of these scripts to compute the completely bounded or diamond norm, you must download and install two things: the SeDuMi semidefinite program solver and the MATLAB scripts themselves.

SeDuMi – Please follow the instructions on the SeDuMi website to download and install it. If possible, you should install SeDuMi 1.1R3, not SeDuMi 1.21 or SeDuMi 1.3, since there is a bug with the newer versions when dealing with complex matrices.

Once the scripts are installed, type “help CBNorm” or “help DiamondNorm” at the MATLAB prompt to learn how to use the CBNorm and DiamondNorm functions. Several usage examples are provided below.

Usage Examples

The representation of the linear map $\Phi$ that the CBNorm and DiamondNorm functions take as input is a pair of arrays of its left and right generalized Choi-Kraus operators. That is, arrays of operators $\{A_i\}$ and $\{B_i\}$ such that $\Phi(X) = \sum_i A_i X B_i^*$ for all $X$.

Basic Examples

If we want to compute the completely bounded and diamond norms of a map given by its generalized Choi-Kraus operators, we simply pass the two arrays of operators to the CBNorm and DiamondNorm functions. For the example map considered here, we see that its completely bounded norm is 7.2684 and its diamond norm is 7.4124.

If we instead want to compute the completely bounded or diamond norm of a completely positive map, we only need to provide its Kraus operators – i.e., operators $\{A_i\}$ such that $\Phi(X) = \sum_i A_i X A_i^*$ for all $X$. Furthermore, in this case semidefinite programming isn’t used at all, since [1, Proposition 3.6] tells us that $\|\Phi\|_{cb} = \|\Phi(I)\|$ and $\|\Phi\|_\diamond = \|\Phi^\dagger(I)\|$, and computing these quantities is trivial. The following example demonstrates the usage of these scripts in this case, via a completely positive map with four (essentially random) Kraus operators:
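For completely positive maps, both quantities reduce to the operator norm of a small matrix. A sketch in Python (rather than MATLAB) assuming the two identities just quoted; the function name is ours:

```python
import numpy as np

def cp_cb_and_diamond(kraus):
    """CB and diamond norms of the completely positive map
    Phi(X) = sum_i A_i X A_i^*, computed via the identities
    ||Phi||_cb = ||Phi(I)|| and ||Phi||_diamond = ||Phi^dagger(I)||."""
    phi_of_I = sum(A @ A.conj().T for A in kraus)        # Phi(I)
    phi_adj_of_I = sum(A.conj().T @ A for A in kraus)    # Phi^dagger(I)
    return np.linalg.norm(phi_of_I, 2), np.linalg.norm(phi_adj_of_I, 2)

# sanity check: a unitary conjugation is a quantum channel, so both norms equal 1
U = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
cb, dia = cp_cb_and_diamond([U])
```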

Suppose we want to compute the completely bounded or diamond norm of the transpose map on $M_n$. A generalized Choi-Kraus representation is given by defining $A_{ij} = B_{ij} = e_i e_j^*$, where $\{e_i\}$ is the standard basis of $\mathbb{C}^n$ (i.e., $A_{ij}$ and $B_{ij}$ are the operators with matrix representation in the standard basis with a one in the $(i,j)$-entry and zeroes elsewhere). It is known that the completely bounded and diamond norms of the n-dimensional transpose map are both equal to n, which can be verified in small dimensions as follows:

Now consider the map $\Phi$ defined by $\Phi(X) = X - UXU^*$, where $U$ is the following unitary matrix:

We know from [2, Theorem 12] that the CB norm and diamond norm of $\Phi$ are both equal to the diameter of the smallest closed disc containing all of the eigenvalues of $U$. Computing that diameter from the eigenvalues of $U$ gives the value of $\|\Phi\|_{cb} = \|\Phi\|_\diamond$, and this result can be verified as follows:

Recall that in linear algebra, the vector p-norm of a vector x ∈ Cn (or x ∈ Rn) is defined to be

$$\|x\|_p := \Big( \sum_{i=1}^n |x_i|^p \Big)^{1/p},$$

where $x_i$ is the i-th element of x and 1 ≤ p ≤ ∞ (the p = ∞ case is understood to mean the limit as p approaches ∞, which gives the maximum norm $\|x\|_\infty = \max_i |x_i|$). By far the most well-known of these norms is the Euclidean norm, which arises when p = 2. Another well-known norm arises when p = 1, which gives the “taxicab” norm.

The problem that will be investigated in this post is to characterize what operators preserve the p-norms – i.e., what their isometries are. In the p = 2 case of the Euclidean norm, the answer is well-known: the isometries of the real Euclidean norm are exactly the orthogonal matrices, and the isometries of the complex Euclidean norm are exactly the unitary matrices. It turns out that if p ≠ 2 then the isometry group looks much different. Indeed, Exercise IV.1.3 of [1] asks the reader to show that the isometries of the p = 1 and p = ∞ norms are what are known as complex permutation matrices (to be defined). We will investigate those cases as well as a situation when p ≠ 1, 2, ∞.

p = 1: The “Taxicab” Norm

Recall that a permutation matrix is a matrix with exactly one “1” in each of its rows and columns, and a “0” in every other position. A signed permutation matrix (sometimes called a generalized permutation matrix) is similar – every row and column has exactly one non-zero entry, which is either 1 or -1. Similarly, a complex permutation matrix is a matrix for which every row and column has exactly one non-zero entry, and every non-zero entry is a complex number with modulus 1.

It is not difficult to show that if x ∈ Rn then the taxicab norm of x is preserved by signed permutation matrices, and if x ∈ Cn then the taxicab norm of x is preserved by complex permutation matrices. We will now show that the converse holds:

Theorem 1. Let P ∈ Mn be an n × n complex (or real) matrix. Then

$$\|Px\|_1 = \|x\|_1 \text{ for all } x$$

if and only if P is a complex permutation matrix (or a signed permutation matrix, respectively).

Proof. We only prove the “only if” implication, because the “if” implication is trivial (an exercise left for the reader?). So let’s suppose that P is an isometry of the p = 1 vector norm. Let $e_i$ denote the i-th standard basis vector, let $p_i$ denote the i-th column of P, and let $p_{ij}$ denote the (j,i)-entry of P (i.e., the j-th entry of $p_i$). Then $Pe_i = p_i$ for all i, so

$$\|p_i\|_1 = \|e_i\|_1 = 1 \text{ for all } i.$$

Similarly, $P(e_i + e_k) = p_i + p_k$ for all i,k, so

$$\|p_i + p_k\|_1 = \|e_i + e_k\|_1 = 2 = \|p_i\|_1 + \|p_k\|_1.$$

However, by the triangle inequality for the absolute value we know that the above equality can only hold if there exist non-negative real constants $c_{ijk} \geq 0$ such that $p_{ij} = c_{ijk}\, p_{kj}$. However, it is similarly the case that $P(e_i - e_k) = p_i - p_k$ for all i,k, so

$$\|p_i - p_k\|_1 = \|e_i - e_k\|_1 = 2 = \|p_i\|_1 + \|p_k\|_1.$$

Using the equality condition for the complex absolute value again, we then know that there exist non-negative real constants $d_{ijk} \geq 0$ such that $p_{ij} = -d_{ijk}\, p_{kj}$. Using the fact that each $c_{ijk}$ and each $d_{ijk}$ is non-negative, at most one of $p_{ij}$ and $p_{kj}$ can be non-zero for each j, so it follows that each row contains at most one non-zero entry (and each row must indeed contain at least one non-zero entry, since the isometries of any norm must be nonsingular).

Thus every row has exactly one non-zero entry. By using (again) the fact that isometries must be nonsingular, it follows that each of the non-zero entries must occur in a distinct column (otherwise there would be a zero column). The fact that each non-zero entry has modulus 1 follows from simply noting that P must preserve the p = 1 norm of each ei.

p = ∞: The Maximum Norm

As with the p = 1 case, it is not difficult to show that if x ∈ Rn then the maximum norm of x is preserved by signed permutation matrices, and if x ∈ Cn then the maximum norm of x is preserved by complex permutation matrices. We will now show that the converse holds in this case as well:

Theorem 2. Let P ∈ Mn be an n × n complex (or real) matrix. Then

$$\|Px\|_\infty = \|x\|_\infty \text{ for all } x$$

if and only if P is a complex permutation matrix (or a signed permutation matrix, respectively).

Proof. Again, we only prove the “only if” implication, since the “if” implication is trivial. So suppose that P is an isometry of the p = ∞ vector norm. As before, let $e_i$ denote the i-th standard basis vector, let $p_i$ denote the i-th column of P, and let $p_{ij}$ denote the (j,i)-entry of P (i.e., the j-th entry of $p_i$). Then $Pe_i = p_i$ for all i, so

$$\|p_i\|_\infty = \|e_i\|_\infty = 1 \text{ for all } i.$$

In other words, each entry of P has modulus at most 1, and each column has at least one element with modulus equal to 1. Also, $P(e_i \pm e_k) = p_i \pm p_k$ for all i,k, so

$$\|p_i \pm p_k\|_\infty = \|e_i \pm e_k\|_\infty = 1 \text{ for all } i \neq k.$$

It follows that if $|p_{ij}| = 1$, then $p_{kj} = 0$ for all $k \neq i$. Each of the n columns has an entry with modulus 1, and no two columns can have such entries in the same row, so these modulus-1 entries account for all n rows, and each such row is zero outside of its modulus-1 entry. Thus each row and column has exactly one non-zero entry, which has modulus 1, so P is a signed or complex permutation matrix.
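The “if” directions of Theorems 1 and 2 are easy to check numerically: a random complex permutation matrix preserves every p-norm, while a generic unitary (here, a rotation by 45 degrees) already fails to preserve the 1-norm. A sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
perm = rng.permutation(n)
phases = np.exp(2j * np.pi * rng.random(n))
P = np.zeros((n, n), dtype=complex)
P[perm, np.arange(n)] = phases              # a random complex permutation matrix

for _ in range(100):                        # complex permutations preserve every p-norm
    x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    for p in (1, 2, np.inf):
        assert abs(np.linalg.norm(P @ x, p) - np.linalg.norm(x, p)) < 1e-10

# but a generic unitary is not a 1-norm isometry: a 45-degree rotation sends
# e1 (with ||e1||_1 = 1) to a vector of 1-norm sqrt(2)
R = np.array([[1.0, -1.0], [1.0, 1.0]]) / np.sqrt(2)
e1 = np.array([1.0, 0.0])
assert abs(np.linalg.norm(R @ e1, 1) - np.sqrt(2)) < 1e-12
```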

Any p ≠ 2

When p = 2, the isometries are orthogonal/unitary matrices. When p = 1 or p = ∞, the isometries are signed/complex permutation matrices, which are a very small subset of the orthogonal/unitary matrices. One might naively expect that the isometries for other values of p somehow interpolate between those two extremes. Alternatively, one might expect that the signed/complex permutation matrices are the only isometries for all other values of p as well. It turns out that the latter conjecture is correct [2,3].