One of the standard applications of orthogonal projections to function spaces comes in the form of Fourier series, where we use $\{1, \cos(x), \sin(x), \cos(2x), \sin(2x), \ldots\}$ as a mutually orthogonal set of functions with respect to the usual inner product $\langle f,g\rangle = \int_{-\pi}^{\pi} f(x)g(x)\,dx$.

In particular, projecting a function $f$ onto the span of the first $2n+1$ functions in the set gives its order-$n$ Fourier approximation:

$$f_n(x) = \frac{a_0}{2} + \sum_{k=1}^{n}\big(a_k\cos(kx) + b_k\sin(kx)\big),$$

where the coefficients $a_k$ and $b_k$ can be computed via inner products (i.e., integrals) in this space (see many other places on the web for more details).

A natural follow-up question is then whether or not we similarly get Taylor polynomials when we project a function down onto the span of the set of functions $\{1, x, x^2, x^3, \ldots\}$. More specifically, if we define $\mathcal{C}[-1,1]$ to be the inner product space consisting of continuous functions on the interval $[-1,1]$ with the standard inner product

$$\langle f,g\rangle = \int_{-1}^{1} f(x)g(x)\,dx,$$

and consider the orthogonal projection $P$ with range equal to $\mathcal{P}_n$, the subspace of degree-$n$ polynomials, is it the case that $P(f)$ is the degree-$n$ Taylor polynomial of $f$?

It does not take long to work through an example to see that, no, we do not get Taylor polynomials when we do this. For example, if we choose $n = 2$ then projecting a function onto $\mathcal{P}_2$ (the space of degree-2 polynomials) results in a function that is not its degree-2 Taylor polynomial. These various functions are plotted below ($f$ is displayed in orange, and the two different approximating polynomials are displayed in blue):

These plots illustrate why the orthogonal projection of does not equal its Taylor polynomial: the orthogonal projection is designed to approximate as well as possible on the whole interval [-1,1], whereas the Taylor polynomial is designed to approximate it as well as possible at x = 0 (while sacrificing precision near the endpoints of the interval, if necessary).

However, something interesting happens if we change the interval that the orthogonal projection acts on. In particular, if we let $c > 0$ be a scalar and instead consider the orthogonal projection onto degree-2 polynomials in the space of continuous functions on $[-c,c]$, then a straightforward (but hideous) calculation shows that the best degree-2 polynomial approximation of $f$ on the interval $[-c,c]$ is

While this by itself is an ugly mess, something interesting happens if we take the limit as c approaches 0:

which is exactly the degree-2 Taylor polynomial of $f$. Intuitively, this makes sense and meshes with our earlier observations about Taylor polynomials approximating $f$ at $x = 0$ as well as possible and orthogonal projections approximating $f$ as well as possible on the whole interval $[-c,c]$. However, I am not aware of any proof that this happens in general (i.e., no matter what the degree of the polynomial is and no matter what (sufficiently nice) function is used in place of $f$), and I would love for a kind-hearted commenter to point me to a reference. [Update: user “jam11249” provided a sketch proof of this fact on reddit here.]
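This limiting behaviour is easy to probe numerically. The sketch below is mine, not from the post: it uses $f(x) = e^x$ purely as a stand-in example function, and approximates the continuous $L^2$ projection by a dense least-squares fit.

```python
import numpy as np

# Sketch: the best degree-2 least-squares polynomial approximation of f on
# [-c, c] approaches the degree-2 Taylor polynomial of f at 0 as c -> 0.
# f(x) = exp(x) is used purely as a stand-in example; its degree-2 Taylor
# polynomial is 1 + x + x^2/2.
def l2_poly_approx(f, c, degree=2, samples=20001):
    # A dense least-squares fit approximates the continuous L2 projection.
    x = np.linspace(-c, c, samples)
    return np.polyfit(x, f(x), degree)  # coefficients, highest degree first

for c in [1.0, 0.1, 0.01]:
    print(c, l2_poly_approx(np.exp, c))
# As c shrinks, the coefficients approach [0.5, 1, 1], i.e. 1 + x + x^2/2.
```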

There are a wide variety of different norms of matrices and operators that are useful in many different contexts. Some matrix norms, such as the Schatten norms and Ky Fan norms, are easy to compute thanks to the singular value decomposition. However, the computation of many other norms, such as the induced p-norms (when p ≠ 1, 2, ∞), is NP-hard. In this post, we will look at a general method for getting quite good estimates of almost any matrix norm.

The basic idea is that every norm can be written as a maximization of a convex function over a convex set (in particular, every norm can be written as a maximization over the unit ball of the dual norm). However, this maximization is often difficult to deal with or solve analytically, so instead it can help to write the norm as a maximization over two or more simpler sets, each of which can be solved individually. To illustrate how this works, let’s start with the induced matrix norms.

Induced matrix norms

The induced p → q norm of a matrix $B$ is defined as follows:

$$\|B\|_{p \rightarrow q} := \max\big\{ \|B\mathbf{x}\|_q : \|\mathbf{x}\|_p = 1 \big\},$$

where

$$\|\mathbf{x}\|_p := \left(\sum_i |x_i|^p\right)^{1/p}$$

is the vector p-norm. There are three special cases of these norms that are easy to compute:

When p = q = 2, this is the usual operator norm of B (i.e., its largest singular value).

When p = q = 1, this is the maximum absolute column sum: $\|B\|_{1 \rightarrow 1} = \max_j \sum_i |b_{ij}|$.

When p = q = ∞, this is the maximum absolute row sum: $\|B\|_{\infty \rightarrow \infty} = \max_i \sum_j |b_{ij}|$.

However, outside of these three special cases (and some other special cases, such as when B only has real entries that are non-negative [1]), this norm is much messier. In general, its computation is NP-hard [2], so how can we get a good idea of its value? Well, we rewrite the norm as the following double maximization:

$$\|B\|_{p \rightarrow q} = \max\big\{ |\mathbf{y}^{*} B \mathbf{x}| : \|\mathbf{x}\|_p = 1, \|\mathbf{y}\|_{q'} = 1 \big\},$$

where $q'$ is the positive real number such that $1/q + 1/q' = 1$ (and we take $q' = \infty$ if $q = 1$, and vice-versa). The idea is then to maximize over $\mathbf{x}$ and $\mathbf{y}$ one at a time, alternately.

1. Start by setting $k = 1$ and fixing a randomly-chosen vector $\mathbf{x}_1$, scaled so that $\|\mathbf{x}_1\|_p = 1$.

2. Compute

$$\max\big\{ |\mathbf{y}^{*} B \mathbf{x}_k| : \|\mathbf{y}\|_{q'} = 1 \big\},$$

keeping $\mathbf{x}_k$ fixed, and let $\mathbf{y}_k$ be the vector attaining this maximum. By Hölder’s inequality, we know that this maximum value is exactly equal to $\|B\mathbf{x}_k\|_q$. Furthermore, the equality condition of Hölder’s inequality tells us that the vector attaining this maximum is the one with complex phases that are the same as those of $B\mathbf{x}_k$, and whose magnitudes are such that $|\mathbf{y}_k|^{q'}$ is a multiple of $|B\mathbf{x}_k|^{q}$ (here the notation $|\mathbf{v}|^{r}$ means we take the absolute value and then the $r$-th power of every entry of the vector $\mathbf{v}$).

3. Compute

$$\max\big\{ |\mathbf{y}_k^{*} B \mathbf{x}| : \|\mathbf{x}\|_p = 1 \big\},$$

keeping $\mathbf{y}_k$ fixed, and let $\mathbf{x}_{k+1}$ be the vector attaining this maximum. By an argument almost identical to that of step 2, this maximum is equal to $\|B^{*}\mathbf{y}_k\|_{p'}$, where $p'$ is the positive real number such that $1/p + 1/p' = 1$. Furthermore, the vector attaining this maximum is the one with complex phases that are the same as those of $B^{*}\mathbf{y}_k$, and whose magnitudes are such that $|\mathbf{x}_{k+1}|^{p}$ is a multiple of $|B^{*}\mathbf{y}_k|^{p'}$.

4. Increment $k$ by 1 and return to step 2. Repeat until negligible gains are made after each iteration.

This algorithm is extremely quick to run, since Hölder’s inequality tells us exactly how to solve each of the two maximizations separately, so we’re left performing only simple vector calculations at each step. The downside of this algorithm is that, even though it will always converge to some local maximum, it might converge to a value that is smaller than the true induced p → q norm. However, in practice this algorithm is fast enough that it can be run several thousand times with different (randomly-chosen) starting vectors to get an extremely good idea of the value of $\|B\|_{p \rightarrow q}$.
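For concreteness, here is a minimal NumPy sketch of the alternating algorithm above, restricted to real matrices for simplicity (the function names are mine; QETLAB's InducedMatrixNorm is the full-featured version):

```python
import numpy as np

def _dual(p):
    # Hölder conjugate exponent: 1/p + 1/p* = 1.
    if p == 1: return np.inf
    if p == np.inf: return 1.0
    return p / (p - 1.0)

def _maximizer(w, s):
    # Unit vector u (in the s-norm) maximizing <u, w>.  By the equality
    # condition in Hölder's inequality, |u|^s is proportional to |w|^{s*}.
    if s == np.inf:
        u = np.sign(w)
        u[u == 0] = 1.0
    elif s == 1:
        u = np.zeros_like(w)
        i = np.argmax(np.abs(w))
        u[i] = 1.0 if w[i] >= 0 else -1.0
    else:
        u = np.sign(w) * np.abs(w) ** (_dual(s) - 1)
        u /= np.linalg.norm(u, s)
    return u

def induced_norm_lower_bound(B, p, q, iters=200, seed=0):
    # Alternating maximization of y^T B x over ||x||_p = ||y||_{q*} = 1.
    # Converges to an (often tight) lower bound on ||B||_{p->q}.
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(B.shape[1])
    x /= np.linalg.norm(x, p)
    val = 0.0
    for _ in range(iters):
        y = _maximizer(B @ x, _dual(q))   # attains ||B x||_q
        x = _maximizer(B.T @ y, p)        # attains ||B^T y||_{p*}
        val = y @ B @ x
    return val
```

With p = q = 2 this is literally the power method (so it converges to the largest singular value); for other p, q it only guarantees a lower bound, which is why several random restarts are used in practice.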

It is worth noting that this algorithm is essentially the same as the one presented in [3], and reduces to the power method for finding the largest singular value when p = q = 2. This algorithm has been implemented in the QETLAB package for MATLAB as the InducedMatrixNorm function.

Induced Schatten superoperator norms

There is a natural family of induced norms on superoperators (i.e., linear maps $\Phi : M_n \rightarrow M_m$) as well. First, for a matrix $X \in M_n$, we define its Schatten p-norm to be the p-norm of its vector of singular values:

$$\|X\|_{S_p} := \left(\sum_{i=1}^{n} \sigma_i(X)^p\right)^{1/p}.$$
Three special cases of the Schatten p-norms include:

p = 1, which is often called the “trace norm” or “nuclear norm”,

p = 2, which is often called the “Frobenius norm” or “Hilbert–Schmidt norm”, and

p = ∞, which is the usual operator norm.

The Schatten norms themselves are easy to compute (since singular values are easy to compute), but their induced counterparts are not.

Given a superoperator $\Phi$, its induced Schatten p → q norm is defined as follows:

$$\|\Phi\|_{S_p \rightarrow S_q} := \max\big\{ \|\Phi(X)\|_{S_q} : \|X\|_{S_p} = 1 \big\}.$$
These induced Schatten norms were studied in some depth in [4], and crop up fairly frequently in quantum information theory (especially when p = q = 1) and operator theory (especially when p = q = ∞). The fact that they are NP-hard to compute in general is not surprising, since they reduce to the induced matrix norms (discussed earlier) in the case when $\Phi$ only acts on the diagonal entries of its input and just zeros out the off-diagonal entries. However, it seems likely that this norm’s computation is also difficult even in the special cases p = q = 1 and p = q = ∞ (however, it is straightforward to compute when p = q = 2).

Nevertheless, we can obtain good estimates of this norm’s value numerically using essentially the same method as discussed in the previous section. We start by rewriting the norm as a double maximization, where each maximization individually is easy to deal with:

$$\|\Phi\|_{S_p \rightarrow S_q} = \max\big\{ \big|\mathrm{tr}\big(Y^{\dagger}\Phi(X)\big)\big| : \|X\|_{S_p} = 1, \|Y\|_{S_{q'}} = 1 \big\},$$

where $q'$ is again the positive real number (or infinity) satisfying $1/q + 1/q' = 1$. We now maximize over $X$ and $Y$, one at a time, alternately, just as before:

1. Start by setting $k = 1$ and fixing a randomly-chosen matrix $X_1$, scaled so that $\|X_1\|_{S_p} = 1$.

2. Compute

$$\max\big\{ \big|\mathrm{tr}\big(Y^{\dagger}\Phi(X_k)\big)\big| : \|Y\|_{S_{q'}} = 1 \big\},$$

keeping $X_k$ fixed, and let $Y_k$ be the matrix attaining this maximum. By the Hölder inequality for Schatten norms, we know that this maximum value is exactly equal to $\|\Phi(X_k)\|_{S_q}$. Furthermore, the matrix attaining this maximum is the one with the same left and right singular vectors as $\Phi(X_k)$, and whose singular values are such that there is a constant $c$ so that $\sigma_i(Y_k)^{q'} = c \cdot \sigma_i(\Phi(X_k))^{q}$ for all $i$ (i.e., the vector of singular values of $Y_k$, raised to the $q'$ power, is a multiple of the vector of singular values of $\Phi(X_k)$, raised to the $q$ power).

3. Compute

$$\max\big\{ \big|\mathrm{tr}\big(Y_k^{\dagger}\Phi(X)\big)\big| : \|X\|_{S_p} = 1 \big\},$$

keeping $Y_k$ fixed, and let $X_{k+1}$ be the matrix attaining this maximum. By an argument essentially the same as that of step 2, we know that this maximum value is exactly equal to $\|\Phi^{\dagger}(Y_k)\|_{S_{p'}}$, where $p'$ satisfies $1/p + 1/p' = 1$ and $\Phi^{\dagger}$ is the map that is dual to $\Phi$ in the Hilbert–Schmidt inner product. Furthermore, the matrix attaining this maximum is the one with the same left and right singular vectors as $\Phi^{\dagger}(Y_k)$, and whose singular values are such that there is a constant $c$ so that $\sigma_i(X_{k+1})^{p} = c \cdot \sigma_i(\Phi^{\dagger}(Y_k))^{p'}$ for all $i$.

4. Increment $k$ by 1 and return to step 2. Repeat until negligible gains are made after each iteration.

The above algorithm is almost identical to the algorithm presented for induced matrix norms, but with absolute values and complex phases of the vectors $\mathbf{x}$ and $\mathbf{y}$ replaced by the singular values and singular vectors of the matrices $X$ and $Y$, respectively. The entire algorithm is still extremely quick to run, since each step just involves computing one singular value decomposition.

The downside of this algorithm, as with the induced matrix norm algorithm, is that we have no guarantee that this method will actually converge to the induced Schatten p → q norm; only that it will converge to some lower bound of it. However, the algorithm works pretty well in practice, and is fast enough that we can simply run it a few thousand times to get a very good idea of what the norm actually is. If you’re interested in making use of this algorithm, it has been implemented in QETLAB as the InducedSchattenNorm function.
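The Hölder step at the heart of steps 2 and 3 above really is just one singular value decomposition. A small NumPy sketch (function names mine, real matrices for simplicity):

```python
import numpy as np

def schatten_dual(s):
    # Hölder conjugate exponent: 1/s + 1/s* = 1.
    if s == 1: return np.inf
    if s == np.inf: return 1.0
    return s / (s - 1.0)

def schatten_maximizer(W, s):
    # Matrix Y with Schatten s-norm 1 maximizing tr(Y^T W): same singular
    # vectors as W, with singular values chosen so that sigma(Y)^s is
    # proportional to sigma(W)^{s*} (the Schatten-Hölder equality condition).
    U, sig, Vh = np.linalg.svd(W, full_matrices=False)
    if s == np.inf:
        d = np.ones_like(sig)
    elif s == 1:
        d = np.zeros_like(sig)
        d[0] = 1.0
    else:
        d = sig ** (schatten_dual(s) - 1)
        d /= np.linalg.norm(d, s)
    return (U * d) @ Vh

W = np.array([[3.0, 0.0], [1.0, 2.0]])
Y = schatten_maximizer(W, 2)   # s = 2: Y = W / ||W||_F
value = np.trace(Y.T @ W)      # equals the Schatten 2-norm of W here
```

By the Hölder inequality, the attained value tr(Y^T W) equals the Schatten s*-norm of W, which is exactly what each half-step of the iteration needs.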

Entanglement Norms

The central idea used for the previous two families of norms can also be used to get lower bounds on the following norm on $M_m \otimes M_n$ that comes up from time to time when dealing with quantum entanglement:

$$\|M\|_{S(1)} := \max\big\{ |(\mathbf{a} \otimes \mathbf{b})^{*} M (\mathbf{c} \otimes \mathbf{d})| : \|\mathbf{a}\| = \|\mathbf{b}\| = \|\mathbf{c}\| = \|\mathbf{d}\| = 1 \big\}.$$

(As a side note: this norm, and some other ones like it, were the central focus of my thesis.) This norm is already written for us as a double maximization, so the idea presented in the previous two sections is somewhat clearer from the start: we fix randomly-generated vectors $\mathbf{c}$ and $\mathbf{d}$ and then maximize over all vectors $\mathbf{a}$ and $\mathbf{b}$, which can be done simply by computing the left and right singular vectors associated with the maximum singular value of the $m \times n$ matrix obtained by reshaping the vector $M(\mathbf{c} \otimes \mathbf{d})$.

We then fix $\mathbf{a}$ and $\mathbf{b}$ as those singular vectors and then maximize over all vectors $\mathbf{c}$ and $\mathbf{d}$ (which is again a singular value problem), and we iterate back and forth until we converge to some value.

As with the previously-discussed norms, this algorithm always converges, and it converges to a lower bound of $\|M\|_{S(1)}$, but perhaps not its exact value. If you want to take this algorithm out for a spin, it has been implemented in QETLAB as the sk_iterate function.
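A bare-bones real-valued NumPy sketch of this iteration (function name mine; sk_iterate is the proper implementation):

```python
import numpy as np

def s1_norm_lower_bound(M, dA, dB, iters=100, seed=0):
    # Alternating estimate of max |<a (x) b| M |c (x) d>| over unit vectors
    # (real case).  Each half-step is a largest-singular-value problem on
    # the reshape of M(c (x) d), respectively M^T(a (x) b), into dA x dB.
    rng = np.random.default_rng(seed)
    c = rng.standard_normal(dA); c /= np.linalg.norm(c)
    d = rng.standard_normal(dB); d /= np.linalg.norm(d)
    val = 0.0
    for _ in range(iters):
        U, s, Vh = np.linalg.svd((M @ np.kron(c, d)).reshape(dA, dB))
        a, b = U[:, 0], Vh[0]              # best a, b for the current c, d
        U, s, Vh = np.linalg.svd((M.T @ np.kron(a, b)).reshape(dA, dB))
        c, d, val = U[:, 0], Vh[0], s[0]   # best c, d for the current a, b
    return val

# Example: for the maximally entangled state phi = (|00> + |11>)/sqrt(2),
# the operator M = |phi><phi| has S(1)-norm 1/2.
phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
M = np.outer(phi, phi)
```

The value produced is monotonically non-decreasing over the iterations, so it is always a valid lower bound on $\|M\|_{S(1)}$.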

It’s also worth mentioning that this algorithm generalizes straightforwardly in several different directions. For example, it can be used to find lower bounds on the norms $\|\cdot\|_{S(k)}$, where we maximize on the left and right over pure states with Schmidt rank not larger than $k$ rather than separable pure states, and it can be used to find lower bounds on the geometric measure of entanglement [5].

After over two and a half years in various stages of development, I am happy to somewhat “officially” announce a MATLAB package that I have been developing: QETLAB (Quantum Entanglement Theory LABoratory). This announcement is completely arbitrary, since people started finding QETLAB via Google about a year ago, and a handful of papers have made use of it already, but I figured that I should at least acknowledge its existence myself at some point. I’ll no doubt be writing some posts in the near future that highlight some of its more advanced features, but I will give a brief run-down of what it’s about here.

Mixed State Separability

The “motivating problem” for QETLAB is the separability problem, which asks us to (efficiently / operationally / practically) determine whether a given mixed quantum state is separable or entangled. The (by far) most well-known tool for this job is the positive partial transpose (PPT) criterion, which says that every separable state remains positive semidefinite when the partial transpose map is applied to it. However, this is just a quick-and-dirty one-way test, and going beyond it is much more difficult.

The QETLAB function that tries to solve this problem is the IsSeparable function, which goes through several separability criteria in an attempt to prove the given state separable or entangled, and provides a journal reference to the paper that contains the separability criterion that works (if one was found).

As an example, consider the “tiles” state, introduced in [1], which is an example of a quantum state that is entangled, but is not detected by the simple PPT test for entanglement. We can construct this state using QETLAB’s UPB function, which lets the user easily construct a wide variety of unextendible product bases, and then verify its entanglement as follows:
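QETLAB itself is MATLAB; as a rough Python stand-in, the following sketch builds the tiles state by hand from the tiles UPB of Bennett et al. and confirms that it passes the PPT test, which is exactly why the PPT criterion alone cannot certify its entanglement:

```python
import numpy as np

# The "tiles" UPB in C^3 (x) C^3; rho is the associated bound entangled
# state (I - sum of the UPB projectors)/4.
e = np.eye(3)
tiles = [
    np.kron(e[0], (e[0] - e[1]) / np.sqrt(2)),
    np.kron(e[2], (e[1] - e[2]) / np.sqrt(2)),
    np.kron((e[0] - e[1]) / np.sqrt(2), e[2]),
    np.kron((e[1] - e[2]) / np.sqrt(2), e[0]),
    np.kron(e.sum(0), e.sum(0)) / 3,
]
rho = (np.eye(9) - sum(np.outer(v, v) for v in tiles)) / 4

# Partial transpose on the second subsystem.
pt = rho.reshape(3, 3, 3, 3).transpose(0, 3, 2, 1).reshape(9, 9)
print(np.trace(rho), np.linalg.eigvalsh(pt).min())  # trace 1, min eig ~ 0 (PPT)
```

Since the UPB vectors are real product vectors, the partial transpose leaves ρ unchanged, so ρ is PPT despite being entangled; that is what stronger tests like the ones IsSeparable runs are for.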

And of course more advanced tests for entanglement, such as those based on symmetric extensions, are also checked. Generally, quick and easy tests are done first, and slow but powerful tests are only performed if the script has difficulty finding an answer.

Symmetry of Subsystems

One problem that I’ve come across repeatedly in my work is the need for robust functions relating to permuting quantum systems that have been tensored together, and dealing with the symmetric and antisymmetric subspaces (and indeed, this type of thing is quite common in quantum information theory). Some very basic functionality of this type has been provided in other MATLAB packages, but it has never been as comprehensive as I would have liked. For example, QUBIT4MATLAB has a function that is capable of computing the symmetric projection on two systems, or on an arbitrary number of 2- or 3-dimensional systems, but not on an arbitrary number of systems of any dimension. QETLAB’s SymmetricProjection function fills this gap.

Nonlocality and Bell Inequalities

QETLAB also has a set of functions for dealing with quantum non-locality and Bell inequalities. For example, consider the CHSH inequality, which says that if $a_1, a_2$ and $b_1, b_2$ are $\{-1,+1\}$-valued measurement settings, then the following inequality holds in classical physics (where $\langle \cdot \rangle$ denotes expectation):

$$\langle a_1 b_1 \rangle + \langle a_1 b_2 \rangle + \langle a_2 b_1 \rangle - \langle a_2 b_2 \rangle \leq 2.$$
However, in quantum-mechanical settings, this inequality can be violated, and the quantity on the left can take on a value as large as $2\sqrt{2}$ (this is Tsirelson’s bound). Finally, in no-signalling theories, the quantity on the left can take on a value as large as $4$.
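Both the classical value and Tsirelson's bound can be checked directly; the sketch below (Python rather than QETLAB's MATLAB) brute-forces the classical value over deterministic strategies and evaluates the standard optimal quantum strategy:

```python
import numpy as np
from itertools import product

# Classical value of CHSH by brute force over deterministic strategies.
classical = max(a1*b1 + a1*b2 + a2*b1 - a2*b2
                for a1, a2, b1, b2 in product([-1, 1], repeat=4))

# Quantum value: attained by a maximally entangled state with the
# measurements Z, X on one side and (Z +/- X)/sqrt(2) on the other.
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)    # (|00> + |11>)/sqrt(2)
A = [Z, X]
B = [(Z + X) / np.sqrt(2), (Z - X) / np.sqrt(2)]
E = lambda a, b: phi @ np.kron(a, b) @ phi           # expectation <A (x) B>
quantum = E(A[0], B[0]) + E(A[0], B[1]) + E(A[1], B[0]) - E(A[1], B[1])
print(classical, quantum)  # 2 and ~2.8284 (i.e., 2*sqrt(2))
```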

All three of these quantities can be easily computed in QETLAB via the BellInequalityMax function:

The classical value of the Bell inequality is computed simply by brute force, and the no-signalling value is computed via a linear program. However, no reasonably efficient method is known for computing the quantum value of a Bell inequality, so this quantity is estimated using the NPA hierarchy [2]. Advanced users who want more control can specify which level of the NPA hierarchy to use, or even call the NPAHierarchy function directly themselves. There is also a closely-related function for computing the classical, quantum, or no-signalling value of a nonlocal game (in case you’re a computer scientist instead of a physicist).

Download and Documentation

QETLAB v0.8 is currently available at qetlab.com (where you will also find its documentation) and also on github. If you have any suggestions/comments/requests/anything, or if you have used QETLAB in your work, please let me know!

One of my favourite examples of an “obvious” mathematical statement that is actually false is the “fact” that if $V_1, V_2, V_3$ are vector spaces then

$$\dim(V_1 + V_2 + V_3) = \sum_{i} \dim(V_i) - \dim(V_1 \cap V_2) - \dim(V_1 \cap V_3) - \dim(V_2 \cap V_3) + \dim(V_1 \cap V_2 \cap V_3).$$

The reason that the above statement seems so obvious is that the similar fact $\dim(V_1 + V_2) = \dim(V_1) + \dim(V_2) - \dim(V_1 \cap V_2)$ does hold, so it’s very tempting to think “inclusion-exclusion, yadda yadda, it’s simple enough to prove that it’s not worth writing down or working through the details”. However, it’s not true: a counterexample is provided by 3 distinct lines through the origin in $\mathbb{R}^2$.

There is another problem that I’ve been thinking about for quite some time that is also “obvious”: the minimal superpermutation conjecture. This conjecture was so obvious, in fact, that it appeared as a question in a national programming contest in 1998. Well, last night Robin Houston posted a note on the arXiv showing that, despite being obvious, the conjecture is false [1].

Superpermutations

What is the shortest string that contains each permutation of “123” as a contiguous substring? It is straightforward to check that “123121321” contains each of “123”, “132”, “213”, “231”, “312”, and “321” as substrings (i.e., it is a superpermutation of 3 symbols), and it’s not difficult to argue (or use a computer search to show) that it is the shortest string with this property.
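This is easy to verify by brute force; a quick Python sketch:

```python
from itertools import permutations, product

def is_superperm(s, n):
    # Does s contain every permutation of "12...n" as a contiguous substring?
    return all("".join(p) in s for p in permutations("123456789"[:n]))

assert is_superperm("123121321", 3)

# Exhaustive check that no shorter string works for n = 3: the first length
# at which some string over {1,2,3} is a superpermutation is 9.
shortest = next(L for L in range(1, 10)
                if any(is_superperm("".join(t), 3)
                       for t in product("123", repeat=L)))
print(shortest)  # 9
```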

Well, we can repeat this question for any number of symbols. I won’t repeat all of the details (because I already wrote about the problem here), but there is a natural recursive construction that takes an (n-1)-symbol superpermutation of length L and spits out an n-symbol superpermutation of length L+n!. This immediately gives us an n-symbol superpermutation of length 1! + 2! + 3! + … + n! for all n. Importantly, it seemed like this construction was the best we could do: computer search verifies that these superpermutations are the smallest possible, and are even unique, for n ≤ 4.

Furthermore, it is not difficult to come up with some lower bounds on the length of superpermutations that seem to suggest that we have found the right answer. A trivial argument shows that an n-symbol superpermutation must have length at least (n-1) + n!, since we need n characters for the first permutation, and 1 additional character for each of the remaining n!-1 permutations. This argument can be refined to show that a superpermutation must actually have length at least (n-2) + (n-1)! + n!, since there is no way to pack the permutations tightly enough so that each one only uses 1 additional character (spend a few minutes trying to construct superpermutations by hand and you’ll see this for yourself). In fact, we can even refine this argument further (see a not-so-pretty proof sketch here) to show that n-symbol superpermutations must have length at least (n-3) + (n-2)! + (n-1)! + n!.

A-ha! A pattern has emerged – surely we can just keep refining this argument over and over again to eventually get a lower bound of 1! + 2! + 3! + … + n!, which shows that the superpermutations we already have are indeed minimal, right? Some variant of this line of thought seemed to be where almost everyone’s mind went when introduced to this problem, and it seemed fairly convincing: this argument is more or less contained within the answers when this question was posted on Mathematics Stack Exchange and on Stack Overflow (although the authors are usually careful to state that their method only appears to be optimal), and this problem was presented as a programming question in the 1998 Turkish Informatics Olympiad (see the resulting thread here). Furthermore, even on pages where this was acknowledged to be a difficult open problem, it was sometimes claimed that it had been proved for n ≤ 11 (example).

For the above reasons, it was a long time before I was even convinced that this problem was indeed unsolved – it seemed like people had solved this problem but just found it not worth the effort of writing up a full proof, or that people had found a simple way to tackle the problem for moderately large values of n like 10 or 11 that I couldn’t even dream of handling.

The Conjecture is False

It turns out that the minimal superpermutation conjecture is false for all n ≥ 6. That is, there exists a superpermutation of length strictly less than 1! + 2! + 3! + … + n! in all of these cases [1]. In particular, Robin Houston found a 6-symbol superpermutation of length 872, which is smaller than the conjectured minimum length of 1! + 2! + … + 6! = 873.

So not only are congratulations due to Robin for settling the conjecture, but a big “thank you” is due to him as well for (hopefully) convincing everyone that this problem is not as easy as it appears upon first glance.

Recall from an earlier blog post that the minimal superpermutation problem asks for the shortest string on the symbols “1”, “2”, …, “n” that contains every permutation of those symbols as a contiguous substring. For example, “121” is a minimal superpermutation on the symbols “1” and “2”, since it contains both “12” and “21” as substrings, and there is no shorter string with this property.

Until now, the length of minimal superpermutations has only been known when n ≤ 4: they have length 1, 3, 9, and 33 in these cases, respectively. It has been conjectured that minimal superpermutations have length $1! + 2! + \cdots + n!$ for all n, and I am happy to announce that Ben Chaffin has proved this conjecture when n = 5. More specifically, he showed that minimal superpermutations in the n = 5 case have length 153, and there are exactly 8 such superpermutations (previously, it was only known that minimal superpermutations have either length 152 or 153 in this case, and that there are at least 2 superpermutations of length 153).

The Eight Minimal Superpermutations

The eight superpermutations that Ben found are available here (they’re too large to include in the body of this post). Notice that the first superpermutation is the well-known “easy-to-construct” superpermutation described here, and the second superpermutation is the one that was found in [1]. The other six superpermutations are new.

One really interesting thing about the six new superpermutations is that they are the first known minimal superpermutations to break the “gap pattern” that previously-known constructions have. To explain what I mean by this, consider the minimal superpermutation “123121321” on three symbols. We can think about generating this superpermutation greedily: we start with “123”, then we append the character “1” to add the permutation “231” to the string, and then we append the character “2” to add the permutation “312” to the string. But now we are stuck: we have “12312”, and there is no way to append just one character to this string in such a way as to add another permutation to it: we have to append the two characters “13” to get the new permutation “213”.

This phenomenon seemed to be fairly general: in all known small superpermutations on n symbols, there was always a point (approximately halfway through the superpermutation) where n-2 consecutive characters were “wasted”: they did not add any new permutations themselves, but only “prepared” the next symbol to add a new permutation.

However, none of the six new minimal superpermutations have this property: they never have more than 2 consecutive “wasted” characters, whereas the two previously-known superpermutations each have a run of n-2 = 3 consecutive “wasted” characters. Thus these six new superpermutations are really quite different from any superpermutations that we currently know and love.

How They Were Found

The idea of Ben’s search is to do a depth-first search on the placement of the “wasted” characters (recall that “wasted” characters were defined and discussed in the previous section). Since the shortest known superpermutation on 5 symbols has length 153, and there are 120 permutations of 5 symbols, and the first n-1 = 4 characters of the superpermutation must be wasted, we are left with the problem of trying to place 153 – 120 – 4 = 29 wasted characters. If we can find a superpermutation with only 28 wasted characters (other than the initial 4), then we’ve found a superpermutation of length 152; if we really need all 29 wasted characters, then minimal superpermutations have length 153.
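To make the bookkeeping concrete, here is a small Python sketch (mine, not Ben's C code) that counts the wasted characters of a given superpermutation:

```python
from itertools import permutations

# A character is "wasted" when the length-n window ending at it is not a
# new permutation.  Length = (n-1) + (# permutations) + (# wasted).
def wasted_characters(s, n):
    perms = {"".join(p) for p in permutations("123456789"[:n])}
    seen = set()
    wasted = 0
    for i in range(n - 1, len(s)):
        window = s[i - n + 1 : i + 1]
        if window in perms and window not in seen:
            seen.add(window)
        else:
            wasted += 1
    return wasted  # excludes the initial n-1 characters

print(wasted_characters("123121321", 3))  # 1  (length 9 = 2 + 6 + 1)
```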

So now we do the depth-first search:

- Find (via brute-force) the maximum number of permutations that we can fit in a string if we are allowed only 1 wasted character: the answer is 10 permutations (for example, the string “123451234152341” does the job).

- Now find the maximum number of permutations that we can fit in a string if we are allowed 2 wasted characters. To speed up the search, once we have found a string that contains some number (call it p) of permutations, we can ignore all other strings that use a wasted character before p-10 permutations, since we know from the previous bullet point that the second wasted character can add at most 10 more permutations, for a total of (p-10)+10 = p permutations.

- We now repeat this process for higher and higher numbers of wasted characters: we find the maximum number of permutations that we can fit in a string with 3 wasted characters, using the results from the previous two bullets to speed up the search by ignoring strings that place 1 or 2 wasted characters too early.

- Etc.

The results of this computation are summarized in the following table:

Wasted characters | Maximum # of permutations
0  | 5
1  | 10
2  | 15
3  | 20
4  | 23
5  | 28
6  | 33
7  | 36
8  | 41
9  | 46
10 | 49
11 | 53
12 | 58
13 | 62
14 | 66
15 | 70
16 | 74
17 | 79
18 | 83
19 | 87
20 | 92
21 | 96
22 | 99
23 | 103
24 | 107
25 | 111
26 | 114
27 | 116
28 | 118
29 | 120

As we can see, it is not possible to place all 120 permutations in a string with 28 or fewer wasted characters, which proves that there is no superpermutation of length 152 in the n = 5 case. C code that computes the values in the above table is available here.

Update [August 18, 2014]: Robin Houston has found a superpermutation on 6 symbols of length 873 (i.e., the conjectured minimal length) with the interesting property that it never has more than one consecutive wasted character! The superpermutation is available here.

IMPORTANT UPDATE [August 22, 2014]: Robin Houston has gone one step further and disproved the minimal superpermutation conjecture for all n ≥ 6. See here.

where each $\lambda_i$ is a real scalar and the sets $\{|a_i\rangle\}$ and $\{|b_i\rangle\}$ form orthonormal bases of $\mathbb{C}^n$.

The Schmidt decomposition theorem isn’t anything fancy: it is just the singular value decomposition in disguise (the $\lambda_i$’s are singular values of some matrix and the sets $\{|a_i\rangle\}$ and $\{|b_i\rangle\}$ are its left and right singular vectors). However, it tells us everything we could ever want to know about the entanglement of the state: it is entangled if and only if it has more than one non-zero $\lambda_i$, and in this case the question of “how much” entanglement is contained within it is answered by a certain function of the $\lambda_i$’s.

Well, we can find a similar decomposition of mixed quantum states. If $\rho \in M_n \otimes M_n$ is a mixed quantum state then it can be written in its operator-Schmidt decomposition:

$$\rho = \sum_{i} \lambda_i A_i \otimes B_i,$$

where each $\lambda_i \geq 0$ and the sets $\{A_i\}$ and $\{B_i\}$ form orthonormal bases of $M_n$ with respect to the Hilbert–Schmidt inner product.
Once again, we haven’t really done anything fancy: the operator-Schmidt decomposition is also just the singular value decomposition in disguise in almost the exact same way as the regular Schmidt decomposition. However, its relationship with entanglement of mixed states is much weaker (as we might expect from the fact that the singular value decomposition can be computed in polynomial time, but determining whether a mixed state is entangled or separable (i.e., not entangled) is expected to be hard [1]). In this post, we’ll investigate some cases when the operator-Schmidt decomposition does let us conclude that $\rho$ is separable or entangled.

Proving a State is Entangled: The Realignment Criterion

One reasonably well-known method for proving that a mixed state is entangled is the realignment criterion [2,3]. What is slightly less well-known is that the realignment criterion can be phrased in terms of the coefficients $\{\lambda_i\}$ in the operator-Schmidt decomposition of $\rho$.

Theorem 1 [2,3]. Let $\rho \in M_n \otimes M_n$ have operator-Schmidt decomposition $\rho = \sum_i \lambda_i A_i \otimes B_i$. If $\sum_i \lambda_i > 1$ then $\rho$ is entangled.

Proof. The idea is to construct a specific entanglement witness that detects the entanglement in $\rho$. In particular, the entanglement witness that we will use is $W := I - \sum_i A_i \otimes B_i$. To see that $W$ is indeed an entanglement witness, we must show that $(\langle v| \otimes \langle w|) W (|v\rangle \otimes |w\rangle) \geq 0$ for all unit vectors $|v\rangle$ and $|w\rangle$. Well, some algebra shows that

$$(\langle v| \otimes \langle w|) W (|v\rangle \otimes |w\rangle) = 1 - \sum_i \langle v|A_i|v\rangle \langle w|B_i|w\rangle,$$

so it suffices to show that $\sum_i \langle v|A_i|v\rangle \langle w|B_i|w\rangle \leq 1$. To see this, notice that

$$\sum_i \langle v|A_i|v\rangle \langle w|B_i|w\rangle \leq \sqrt{\sum_i |\langle v|A_i|v\rangle|^2}\sqrt{\sum_i |\langle w|B_i|w\rangle|^2} = 1,$$

where the inequality is the Cauchy–Schwarz inequality and the equality comes from the fact that the sets $\{A_i\}$ and $\{B_i\}$ are orthonormal bases, so $\sum_i |\langle v|A_i|v\rangle|^2 = \big\| |v\rangle\langle v| \big\|_F^2 = 1$ (and similarly for $\{B_i\}$).

Now that we know that $W$ is an entanglement witness, we must check that it detects the entanglement in $\rho$ (that is, we want to show that $\mathrm{tr}(W\rho) < 0$). This is straightforward to show by making use of the fact that the sets $\{A_i\}$ and $\{B_i\}$ are orthonormal:

$$\mathrm{tr}(W\rho) = \mathrm{tr}(\rho) - \sum_i \lambda_i = 1 - \sum_i \lambda_i < 0.$$

It follows that $\rho$ is entangled, which completes the proof.

A more popular formulation of the realignment criterion says that if we define the realignment map $R$ by $R(|i\rangle\langle j| \otimes |k\rangle\langle l|) := |i\rangle\langle k| \otimes |j\rangle\langle l|$ (extended by linearity), and let $\|\cdot\|_{tr}$ denote the trace norm (i.e., the sum of the singular values), then $\|R(\rho)\|_{tr} > 1$ implies that $\rho$ is entangled. The equivalence of these two formulations of the realignment criterion comes from the fact that the singular values of $R(\rho)$ are exactly the coefficients $\{\lambda_i\}$ in its operator-Schmidt decomposition.
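In this formulation the criterion is a few lines of NumPy (function names mine); the realignment is just an index shuffle of the entries of $\rho$:

```python
import numpy as np

# The realignment map as an index shuffle:
# R(rho)_{(i,j),(k,l)} = rho_{(i,k),(j,l)}.  A state is entangled whenever
# the trace norm of R(rho) exceeds 1.
def realign(rho, dA, dB):
    return rho.reshape(dA, dB, dA, dB).transpose(0, 2, 1, 3).reshape(dA**2, dB**2)

def realignment_value(rho, dA, dB):
    return np.linalg.norm(realign(rho, dA, dB), 'nuc')  # sum of singular values

phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)      # maximally entangled
rho_ent = np.outer(phi, phi)
rho_prod = np.zeros((4, 4)); rho_prod[0, 0] = 1.0       # |00><00|
print(realignment_value(rho_ent, 2, 2), realignment_value(rho_prod, 2, 2))
# 2.0 (entanglement detected) and 1.0 (no detection)
```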

Proving a State is Entangled: Beyond the Realignment Criterion

We might naturally wonder whether we can prove that even more states are entangled based on their operator-Schmidt decomposition than those detected by the realignment criterion. The following theorem gives one sense in which the answer to this question is “no”: if we only look at “nice” functions of the coefficients $\lambda_i$ then the realignment criterion gives the best method of entanglement detection possible.

Theorem 2. Let $f$ be a symmetric gauge function (i.e., a norm that is invariant under permutations and sign changes of the entries of the input vector). If we can conclude that $\rho$ is entangled based on the value of $f(\lambda_1, \lambda_2, \ldots)$ then it must be the case that $\sum_i \lambda_i > 1$.

Proof. Without loss of generality, we scale $f$ so that $f(1, 0, 0, \ldots) = 1$. We first prove two facts about $f$.

Claim 1: for all mixed states . This follows from the fact that (which itself is kind of a pain to prove: it follows from the fact that the Schatten norm of the realignment map is , but if anyone knows of a more direct and/or simpler way to prove this, I’d love to see it). If we assume without loss of generality that then

as desired.

Claim 2: There exists a separable state for which equals any given value in the interval . To see why this is the case, first notice that there exists a separable state with and for all : the state is one such example. Similarly, there is a separable state with and for all : the state is one such example. Furthermore, it is straightforward to interpolate between these two extremes to find separable states (even product states) with for all and any value of . For such states we have

which can take any value in the interval as claimed.

By combining claims 1 and 2, we see that we could only ever use the value of $f(\lambda_1, \lambda_2, \ldots)$ to conclude that $\rho$ is entangled if $f(\lambda_1, \lambda_2, \ldots) > 1$. However, in this case we have

which completes the proof.

Theorem 2 can be phrased naturally in terms of the other formulation of the realignment criterion as well: it says that there is no unitarily-invariant matrix norm $\|\cdot\|$ with the property that we can use the value of $\|R(\rho)\|$ to conclude that $\rho$ is entangled, except in those cases where the trace norm (i.e., the realignment criterion) itself already tells us that $\rho$ is entangled.

Nonetheless, we can certainly imagine using functions of the coefficients $\lambda_i$ that are not symmetric gauge functions. Alternatively, we could take into account some (hopefully easily-computable) properties of the matrices $A_i$ and $B_i$. One such method for detecting entanglement, which depends on the coefficients $\lambda_i$ and the trace of each $A_i$ and $B_i$, is as follows.

Theorem 3 [4,5]. Let have operator-Schmidt decomposition

If

then is entangled.

I won’t prove Theorem 3 here, but I will note that it is strictly stronger than the realignment criterion, which can be seen by showing that the left-hand side of Theorem 3 is at least as large as the left-hand side of Theorem 1. To show this, observe that

and

which is nonnegative.

Proving a State is Separable

Much like we can use the operator-Schmidt decomposition to sometimes prove that a state is entangled, we can also use it to sometimes prove that a state is separable. To this end, we will use the operator-Schmidt rank of $\rho$, which is the number of non-zero coefficients $\lambda_i$ in its operator-Schmidt decomposition. One trivial observation is as follows:

Proposition 4. If the operator-Schmidt rank of $\rho$ is $1$ then $\rho$ is separable.

Proof. If the operator-Schmidt rank of $\rho$ is $1$ then we can write $\rho = A \otimes B$ for some $A, B \in M_n$. Since $\rho$ is positive semidefinite, it follows that either $A$ and $B$ are both positive semidefinite or both negative semidefinite. If they are both positive semidefinite, we are done. If they are both negative semidefinite then we can write $\rho = (-A) \otimes (-B)$ and then we are done.

Somewhat surprisingly, however, we can go further than this: it turns out that all states with operator-Schmidt rank $2$ are also separable, as was shown in [6].

Theorem 5 [6]. If the operator-Schmidt rank of $\rho$ is $2$ then $\rho$ is separable.

Proof. If $\rho$ has operator-Schmidt rank $2$ then it can be written in the form $\rho = A_1 \otimes B_1 + A_2 \otimes B_2$ for some $A_1, A_2, B_1, B_2 \in M_n$. Throughout this proof, we use the notation $\mathrm{tr}_1$ for the partial trace over the first subsystem, $\mathrm{tr}_2$ for the partial trace over the second subsystem, and so on.

Since $\rho$ is positive semidefinite, so are each of its partial traces. Thus $\mathrm{tr}_2(\rho)$ and $\mathrm{tr}_1(\rho)$ are both positive semidefinite operators. It is then straightforward to verify that

What is important here is that we have found a rank- tensor decomposition of in which one of the terms is positive semidefinite. Now we define

and notice that for some (in order to do this, we actually need the partial traces of to be nonsingular, but this is easily taken care of by standard continuity arguments, so we’ll sweep it under the rug). Furthermore, is also positive semidefinite, and it is separable if and only if is separable. Since is positive semidefinite, we know that for all eigenvalues of and of . If we absorb scalars between and so that then this implies that for all . Thus and are both positive semidefinite. Furthermore, a straightforward calculation shows that

We now play a similar game as before: we define a new matrix

and notice that for some (similar to before, we note that there is a standard continuity argument that can be used to handle the fact that and might be singular). The minimum eigenvalue of is then , which is non-negative as a result of being positive semidefinite. It then follows that

Since each term in the above decomposition is positive semidefinite, it follows that is separable, which implies that is separable, which finally implies that is separable.

In light of Theorem 5, it seems somewhat natural to ask how far we can push things: what values of the operator-Schmidt rank imply that a state is separable? Certainly we cannot expect all states with an operator-Schmidt rank of $4$ to be separable, since every state in $M_2 \otimes M_2$ has operator-Schmidt rank $4$ or less, and there are entangled states in this space (more concretely, it’s easy to check that the maximally-entangled pure state has operator-Schmidt rank $4$).

This left the case of operator-Schmidt rank $3$ open. Very recently, it was shown in [7] that a mixed state in $M_2 \otimes M_n$ with operator-Schmidt rank $3$ is indeed separable, yet there are entangled states with operator-Schmidt rank $3$ in $M_3 \otimes M_3$.
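Since the operator-Schmidt coefficients are the singular values of the realigned matrix, the operator-Schmidt rank is easy to compute numerically; a small sketch (function name mine):

```python
import numpy as np

# The operator-Schmidt coefficients of rho are the singular values of its
# realignment, so the operator-Schmidt rank is the rank of that matrix.
def operator_schmidt_rank(rho, dA, dB, tol=1e-12):
    R = rho.reshape(dA, dB, dA, dB).transpose(0, 2, 1, 3).reshape(dA**2, dB**2)
    return int((np.linalg.svd(R, compute_uv=False) > tol).sum())

phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
print(operator_schmidt_rank(np.outer(phi, phi), 2, 2))  # 4: max in M_2 (x) M_2
print(operator_schmidt_rank(np.eye(4) / 4, 2, 2))       # 1: a separable state
```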