Category: Physics

There is no bilaterally-symmetrical, nor eccentrically-periodic curve used in any branch of astrophysics or observational astronomy which could not be smoothly plotted as the resultant motion of a point turning within a constellation of epicycles, finite in number, revolving around a fixed deferent.

Epicycles were first used by the Greeks to reconcile observational data of the motions of the planets with the theory that all bodies orbit the Earth in perfect circles. It was found that epicycles allowed astronomers to retain their belief in perfectly circular orbits, as well as the centrality of Earth. The cost of this, however, was a system with many adjustable parameters (as many parameters as there were epicycles).

There’s a somewhat common trope about adding on endless epicycles to a theory, the idea being that by being overly flexible and accommodating of data you lose epistemic credibility. This happens to fit perfectly with my most recent posts on model selection and overfitting! The epicycle view of the solar system is one that is able to explain virtually any observational data. (There’s a pretty cool reason for this that has to do with the properties of Fourier series, but I won’t go into it.) The cost of this is a massive model with many parameters. The heliocentric model of the solar system, coupled with the Newtonian theory of gravity, turns out to be able to match all the same data with far fewer adjustable parameters. So by all of the model selection criteria we went over, it makes sense to switch over from one to the other.

Of course, it is not the case that we should have been able to tell a priori that an epicycle model of the planets’ motions was a bad idea. “Every planet orbits Earth on at most one epicycle”, for instance, is a perfectly reasonable scientific hypothesis; it just so happened that it didn’t fit the data. And adding epicycles to improve the fit to data is not bad scientific practice either, so long as you aren’t ignoring other equally good models with fewer parameters.

Okay, enough blabbing. On to the pretty pictures! I was fascinated by the Hilbert curve drawn above, so I decided to write up a program of my own that generates custom fractal images from epicycles. Here are some gifs I created for your enjoyment:

Negative doubling of angular velocity

(Each arm rotates in the opposite direction of the previous arm, and at twice its angular velocity. The length of each arm is half that of the previous.)
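The arm scheme described above is easy to sketch in plain Python. This is an illustrative reconstruction, not the Processing.py code that generated the animations; the function name and defaults are my own:

```python
import numpy as np

def epicycle_point(t, n_arms=20, omega_ratio=-2.0, r_ratio=0.5):
    """Position traced by the tip of a chain of epicycle arms at time t.

    Arm n rotates at angular velocity omega_ratio**n (a negative ratio
    flips the direction of each successive arm) and has length r_ratio**n.
    """
    z = 0j
    for n in range(n_arms):
        omega = omega_ratio ** n   # -2: opposite direction, doubled speed
        r = r_ratio ** n           # each arm half the length of the previous
        z += r * np.exp(1j * omega * t)
    return z.real, z.imag

# Sample the traced curve over one full period of the slowest arm
ts = np.linspace(0, 2 * np.pi, 2000)
curve = [epicycle_point(t) for t in ts]
```

Feeding `curve` to any plotting library (or drawing it frame by frame in Processing.py) reproduces the kind of fractal traces shown in the gifs; the other animations correspond to different choices of the ratio parameters.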

Trebling of angular velocity

Negative trebling

Here’s a still frame of the final product for N = 20 epicycles:

Quadrupling

ωₙ ~ (n+1)·2ⁿ

(or, the Fractal Frog)

ωₙ ~ n, rₙ ~ 1/n

ωₙ ~ n, constant rₙ

ωₙ ~ 2ⁿ, rₙ ~ 1/n²

And here’s a still frame of N = 20:

(All animations were built using Processing.py, which I highly recommend for quick and easy construction of visualizations.)

In a previous post, I mentioned self-defeating beliefs as a category that I am confused about. I wrote:

How should we reason about self defeating beliefs?

The classic self-defeating belief is “This statement is a lie.” If you believe it, then you are compelled to disbelieve it, eliminating the need to believe it in the first place. Broadly speaking, self-defeating beliefs are those that undermine the justifications for belief in them.

Here’s an example that might actually apply in the real world: Black holes glow. The process of emission is known as Hawking radiation. In principle, any configuration of particles with a mass less than that of the black hole can be emitted from it. Larger configurations are less likely to be emitted, but even configurations as complex as a human brain have a non-zero probability of being emitted. Henceforth, we will call such configurations black hole brains.

Now, imagine discovering some cosmological evidence that the era in which life can naturally arise on planets circling stars is finite, and that after this era there will be an infinite stretch of time during which all that exists are black holes and their radiation. In such a universe, the expected number of black hole brains produced is infinite (a tiny finite probability multiplied by an infinite stretch of time), while the expected number of “ordinary” brains produced is finite (assuming a finite spatial extent as well).

What this means is that discovering this cosmological evidence should give you an extremely strong boost in credence that you are a black hole brain. (Simply because most brains in your exact situation are black hole brains.) But most black hole brains have completely unreliable beliefs about their environment! They are produced by a stochastic process which cares nothing for producing brains with reliable beliefs. So if you believe that you are a black hole brain, then you should suddenly doubt all of your experiences and beliefs. In particular, you have no reason to think that the cosmological evidence you received was veridical at all!

I don’t know how to deal with this. It seems perfectly possible to find evidence for a scenario that suggests that we are black hole brains (I’d say that we have already found such evidence, multiple times). But then it seems we have no way to rationally respond to this evidence! In fact, if we do a naive application of Bayes’ theorem here, we find that the probability of receiving any evidence in support of black hole brains is 0!

So we have a few options. First, we could rule out any possible skeptical scenarios like black hole brains, as well as anything that could provide any amount of evidence for them (no matter how tiny). Or we could accept the possibility of such scenarios but face paralysis upon actually encountering evidence for them! Both of these seem clearly wrong, but I don’t know what else to do.

I think I feel somewhat less confused about self-defeating beliefs (at least when considering the black hole brain scenario; maybe I would feel more confused about other cases).

It seems like the problem might be when you say “imagine discovering some cosmological evidence that the era in which life can naturally arise on planets circling stars is finite, and that after this era there will be an infinite stretch of time during which all that exists are black holes and their radiation.” Presumably, whatever experience you had that you are interpreting as this cosmological evidence is an experience that you would actually be very unlikely to have, given that you exist in that universe, and so it shouldn’t be interpreted as evidence for existing in such a universe. Instead, you would have to think about what kind of universe you would be most likely to be in, given those experiences that naively seemed to indicate living in a universe with an infinity of black hole brains.

This could be a very difficult question to answer, but not a totally intractable one. This also doesn’t seem to rule out starting with a high prior in being a black hole brain, and it seems like you might even be able to get evidence for being a black hole brain (although I’m not sure what this would be; maybe having some crazy jumble of incoherent experiences while suddenly dying?).

I think this is a really good point that clears up a lot of my confusion on the topic. My response ended up being quite long, so I’ve decided to make it its own post.

*** My response starts here ***

The key point that I was stuck on before reading this comment was the notion that this argument puts a strong a priori constraint on the types of experiences we can expect to have. This is because P(E) is near zero when E strongly implies a theory and that theory undermines E.

Your point, which seems right, is: It’s not that it’s impossible or near impossible to observe certain things that appear to strongly suggest a cosmology with an infinity of black hole brains. It’s that we can observe these things, and they aren’t actually evidence for these cosmologies (for just the reasons you laid out).

That is, there just aren’t observations that provide evidence for radical skeptical scenarios. Observations that appear to provide such evidence prove not to do so upon closer examination. It’s about the fact that the belief that you are a black hole brain is by construction unmotivateable: this is what it means to say P(E) ~ 0. (More precisely, the types of observations that actually provide evidence for black hole brains are those that are not undermined by the belief in black hole brains. Your “crazy jumble of incoherent experiences” might be a good example of this. And importantly, basically any scientific evidence of the sort that we think could adjudicate between different cosmological theories will be undermined.)

One more thing as I digest this: Previously I had been really disturbed by the idea that I’d heard mentioned by Sean Carroll and others that one criterion for a feasible cosmology is that it doesn’t end up making it highly likely that we are black hole brains. This seemed like a bizarrely strong a priori constraint on the types of theories we allow ourselves to consider. But this actually makes a lot of sense if conceived of not as an a priori constraint but as a combination of two things: (1) updating on the strong experiential evidence that we are not black hole brains (the extremely structured and self-consistent nature of our experiences) and (2) noticing that these theories are very difficult to motivate, as most pieces of evidence that intuitively seem to support them actually don’t upon closer examination.

So (1) the condition that P(E) is near zero is not necessarily a constraint on your possible experiences, and (2) it makes sense to treat cosmologies that imply that we are black hole brains as empirically unsound and nearly unmotivateable.

Now, I’m almost all the way there, but still have a few remaining hesitations.

One thing is that things get more confusing when you break an argument for black hole brains down into its component parts and try to figure out where exactly you went wrong. Like, say you already have a whole lot of evidence that after a finite length of time, the universe will be black holes forever, but don’t yet know about Hawking radiation. So far everything is fine. But now scientists observe Hawking radiation. From this they conclude that black holes radiate, though they don’t have a theory of the stochastic nature of the process that entails that it can in principle produce brains. They then notice that Hawking radiation is actually predicted by combining aspects of QM and GR, and see that this entails that black holes can produce brains. Now they have all the pieces that together imply that they are black hole brains, but at which step did they go wrong? And what should they conclude now? They appear to have developed a mountain of solid evidence that when put together (and combined with some anthropic reasoning) straightforwardly imply that they are black hole brains. But this can’t be the case, since this would undermine the evidence they started with.

We can frame this as a multilemma. The general reasoning process that leads to the conclusion that we are black hole brains might look like:

(1) We observe nature.

(2) We generate laws of physics from these observations.

(3) We predict from the laws of physics that there is a greater abundance of black hole brains than normal brains.

(4) We infer from (3) that we are black hole brains (via anthropic reasoning).

Either this process fails at some point, or we should believe that we are black hole brains. Our multilemma (five propositions, at least one of which must be accepted) is thus:

(1) Our observations of nature were invalid.

(2) Our observations were valid, but our inference of laws of physics from them was invalid.

(3) Our inference of laws of physics from our observations was valid, but our inference from these laws of there being a greater abundance of black hole brains than normal brains was invalid.

(4) Our inference from the laws of there being a greater abundance of black hole brains than normal brains was valid, but the anthropic step was invalid.

(5) We are black hole brains.

Clearly we want to deny (5). I also would want to deny (3) and (4) – I’m imagining them to be fairly straightforward deductive steps. (1) is just some form of skepticism about our access to nature, which I also want to deny. The best choice, it looks like, is (2): our inductive inference of laws of physics from observations of nature is flawed in some way. But even this is a hard bullet to bite. It’s not sufficient to just say that other laws of physics might equally well or better explain the data. What is required is to say that in fact our observations don’t really provide compelling evidence for QM, GR, and so on.

So the end result is that I pretty much want to deny every possible way the process could have failed, while also denying the conclusion. But we have to deny something! This is clearly not okay!

Summing up: The remaining disturbing thing to me is that it seems totally possible to accidentally run into a situation where your best theories of physics inevitably imply (by a process of reasoning each step of which you accept is valid) that you are a black hole brain, and I’m not sure what to do next at that point.

A few posts ago, I talked about how quantum mechanics entails the existence of irreducible states – states of particles that in principle cannot be described as the product of their individual components. The classic example of such an entangled state is the two-qubit state |Ψ⟩ = 1/√2 (|00⟩ + |11⟩).

This state describes a system which is in an equal-probability superposition of both particles being |0⟩ and both particles being |1⟩. As it turns out, this state cannot be expressed as the product of two single-qubit states.

A friend of mine asked me a question about this that was good enough to deserve its own post in response. Start by imagining that Alice and Bob each have a coin. They each put their coin inside a small box with heads facing up. Now they close their respective boxes and shake them up in the exact same way. This is important (as well as unrealistic): we suppose that whatever happens to the coin in Alice’s box also happens to the coin in Bob’s box.

Now we have two boxes, each of which contains a coin, and these coins are guaranteed to be facing the same way. We just don’t know what way they are facing.

Alice and Bob pick up their boxes, being very careful to not disturb the states of their respective coins, and travel to opposite ends of the galaxy. The Milky Way is 100,000 light years across, so any communication between the two now would take a minimum of 100,000 years. But if Alice now opens her box, she instantly knows the state of Bob’s coin!

So while Alice and Bob cannot send messages about the state of their boxes any faster than 100,000 years, they can instantly receive information about each others’ boxes by just observing their own! Is this a contradiction?

No, of course not. While Alice does learn something about Bob’s box, this is not because of any message passed between the two. It is the result of the fact that in the past the configurations of their coins were carefully designed to be identical. So what seemed on its face to be special and interesting turns out to be no paradox at all.

Finally, we get to the question my friend asked. How is this any different from the case of entangled particles in quantum mechanics??

Both systems would be found to be in the states |00⟩ and |11⟩ with equal probability (where |0⟩ is heads and |1⟩ is tails). And both have the property that learning the state of one instantly tells you the state of the other. Indeed, the coins-in-boxes system also has the property of irreducibility that we talked about before! Try as we might, we cannot coherently treat the system of both coins as the product of two independent coins, as doing so will ignore the statistical dependence between the two coins.

(Which, by the way, is exactly the sort of statistical dependence that justifies timeless decision theory and makes it a necessary update to decision theory.)
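That statistical dependence is easy to see in a toy calculation. Here’s a minimal sketch (plain Python, my own illustration, not code from the original post):

```python
import itertools

# Joint distribution of the two coins: perfectly correlated
joint = {("H", "H"): 0.5, ("T", "T"): 0.5, ("H", "T"): 0.0, ("T", "H"): 0.0}

# The marginal distribution for each coin alone is 50/50...
marginal = {"H": 0.5, "T": 0.5}

# ...but treating the coins as independent (taking the product of the
# marginals) predicts a 25% chance of HT, an outcome that never happens:
product_dist = {(a, b): marginal[a] * marginal[b]
                for a, b in itertools.product("HT", repeat=2)}

print(product_dist[("H", "T")])   # 0.25, whereas joint[("H", "T")] is 0.0
```

So any description of the coins as two independent 50/50 coins throws away exactly the correlation that makes the system interesting.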

I love this question. The premise of the question is that we can construct a classical system that behaves in just the same supposedly weird ways that quantum systems behave, and thus make sense of all this mystery. And answering it requires that we get to the root of why quantum mechanics is a fundamentally different description of reality than anything classical.

So! I’ll describe the two primary disanalogies between entangled particles and “entangled” coins.

Epistemic Uncertainty vs Fundamental Indeterminacy

First disanalogy. With the coins, either they are both heads or they are both tails. There is an actual fact in the world about which of these two is true, and the probabilities we reference when we talk about the chance of HH or TT represent epistemic uncertainty. There is a true determinate state of the coins, and probability only arises as a way to deal with our imperfect knowledge.

On the other hand, according to the mainstream interpretation of quantum mechanics, the state of the two particles is fundamentally indeterminate. There isn’t a true fact out there waiting to be discovered about whether the state is |00⟩ or |11⟩. The actual state of the system is this unusual thing called a superposition of |00⟩ and |11⟩. When we observe it to be |00⟩, the state has now actually changed from the superposition to the determinate state.

We can phrase this in terms of counterfactuals: If when we look at the coins, we see that they are HH, then we know that they were HH all along. In particular, we know that if we had observed them a moment earlier or later, we would have gotten HH with 100% certainty. Given that we actually observed HH, the probability that we would have observed HH is 100%.

But if we observe the state of the particles to be |00⟩, this does not mean that had we observed it a moment before, we would be guaranteed to get the same answer. Given that we actually observed |00⟩, the probability that we would have observed |00⟩ is still 50%.

(A project for some enterprising reader: see what the truths of these counterfactuals imply for an interpretation of quantum mechanics in terms of Pearl-style causal diagrams. Is it even possible to do?)

Predictive differences

The second difference between the two cases is a straightforward experimental difference. Suppose that Alice and Bob identically prepare thousands of coins as we described before, and also identically prepare thousands of entangled particles. They ensure that the coins are treated exactly the same way, so that they are guaranteed to all be in the same state, and similarly for the entangled pairs.

If they now just observe all of their entangled pairs and coins, they will get similar results – roughly half of the coins will be HH and roughly half of the entangled pairs will be |00⟩. But there are other experiments they could run on the entangled pairs that would give different answers than the same experiments run on the coins.

The conclusion of this is that even if you tried to model the entangled pair as a simple probability distribution similar to the coins, you will get the wrong answer in some experiments. I described what these experiments could be in this earlier post – essentially they involve applying an operation that takes qubits in and out of superposition.

So we have both a theoretical argument and a practical argument for the difference between these two cases. The key take-away is the following:

According to quantum mechanics an entangled pair is in a state that is fundamentally indeterminate. When we describe it with probabilities, we are not saying “This probabilistic description is an account of my imperfect knowledge of the state of the system”. We’re saying that nature herself is undecided on what we will observe when we look at the state. (Side note: there is actually a way to describe epistemic uncertainty in quantum mechanics. It is called the density matrix, and is completely different from the description of superpositions.)

In addition, the most fundamental and accurate probability description for the state of the two particles is one that cannot be described as the product of two independent particles. This is not the case with the coins! The most fundamental and accurate probability description for the state of the two coins is either 100% HH or 100% TT (whichever turns out to be the case). What this means is that in the quantum case, not only is the state indeterminate, but the two particles are fundamentally interdependent – entangled. There is no independent description of the individual components of the system, there is only the system as a whole.

Is it possible to use quantum entanglement to communicate faster than light?

Here’s a suggestion for how we might achieve just such a thing. Two people each possess one qubit of an entangled pair in the state |Ψ⟩ = 1/√2 (|00⟩ + |11⟩).

The owner of the second qubit then decides whether or not to apply some quantum gate U to their qubit. Immediately following this, the owner of the first qubit measures their qubit.

If the application of U to the second qubit changes the amplitude distribution over the first qubit, then the measurement can be used to communicate a message between the two people instantaneously! Why? Well, initially 0 and 1 are expected with equal probability. But if applying U makes these probabilities unequal, then the observation of a 0 or 1 carries evidence as to whether or not U was applied. (In an extreme case, applying U could make it guaranteed that the qubit would be observed as |0⟩, in which case observation of |1⟩ ensures that U was not applied.)

In this way, somebody could send information across by encoding it in a string of decisions about whether or not to apply U to a shared entangled pair.

It might seem a little strange that doing something to your qubit over here could affect the state of the other qubit over there. But this is quantum mechanics, and quantum mechanics is very, very strange. Remember that the two qubits are entangled with one another. We are already guaranteed that what happens to one can affect the state of the other instantaneously – after all, if we measure the second qubit and find it in the state |0⟩, the first qubit’s state is instantaneously “collapsed” into the state |0⟩ as well. (This fact alone cannot be used to communicate messages, because before measuring the second qubit, its owner has no control over which of the two states it will end up in.)

So whether or not our scheme will work cannot be ruled out a priori. We must work out the math for ourselves to see if applying U to qubit 2 can successfully warp the probability distribution of qubit 1, thus sending information between the two instantaneously.

First, we’ll describe our single qubit gate U as a matrix.

a, b, c, and d are complex numbers. Can they have any possible values?

No. U must preserve the normalization of states it acts on. In other words, for U to represent a physically possible transformation of a qubit, it cannot transform physically possible states into physically impossible states.

What precise constraints does this entail? It turns out that the following two suffice to ensure the normalization condition: |a|² + |c|² = 1 and |b|² + |d|² = 1 (that is, each column of U must be normalized; these are the two constraints used in the final step below).

Instead, we need to describe the state of both qubits, which, I’ll remind you, is |Ψ⟩ = 1/√2 (|00⟩ + |11⟩).

A 2×2 matrix can’t operate on a vector with four components. What we need is a two-qubit quantum gate that corresponds to applying U to qubit 2 while leaving qubit 1 alone. “Leaving a qubit alone” is equivalent to applying the identity gate I to it, which just leaves the state unchanged.

So what we really want is the 4×4 matrix that corresponds to applying U to qubit 2 and I to qubit 1. It turns out that we can generate this matrix by simply taking the tensor product of U with I:

Alright, now we’re ready to see what happens when we apply this gate to |Ψ⟩!

This state sure looks different than the state we started with! But is it different enough to have carried some information between the qubits? Let’s now look at the probabilities for measuring qubit 1 in the states 0 and 1:

Remember the constraints on the possible values of a, b, c, and d we started with?

Which leads us to the final answer!

Sadly, it looks like our method won’t work to produce faster-than-light communication. No matter what gate we apply to the second qubit, it has no effect on the observed probabilities of the first. And therefore, no information can be sent by applying U.
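We can verify this numerically. The sketch below is my own (NumPy, not from the original post); note that tensor-ordering conventions vary, and here the first factor of the Kronecker product acts on qubit 1, so “U on qubit 2” is I ⊗ U:

```python
import numpy as np

rng = np.random.default_rng(0)

# Entangled state |Ψ⟩ = (|00⟩ + |11⟩)/√2 in the basis (|00⟩,|01⟩,|10⟩,|11⟩)
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)

# A random single-qubit unitary U (QR decomposition of a random complex matrix)
M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
U, _ = np.linalg.qr(M)

# Apply U to qubit 2 only: the two-qubit gate is I ⊗ U
I2 = np.eye(2)
psi_after = np.kron(I2, U) @ psi

# Probability that qubit 1 is measured as |0⟩: sum over the |00⟩, |01⟩ amplitudes
def p_first_zero(state):
    return abs(state[0]) ** 2 + abs(state[1]) ** 2

print(p_first_zero(psi), p_first_zero(psi_after))   # both 0.5
```

Whatever unitary you draw, qubit 1’s outcome probabilities stay at 50/50: exactly the no-signaling conclusion derived above.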

Of course, this does not rule out all possible ways to attempt to utilize entanglement to communicate faster-than-light. But it does provide a powerful demonstration of the way in which such attempts are defeated by the laws of quantum mechanics.

Is it possible to describe this two-qubit system in terms of the states of the two individual particles that compose it? The principle of reductionism suggests that it should be; after all, surely we can take any larger system and describe it perfectly well in terms of its components.

But this turns out not to be the case! The above state is a perfectly allowable physical configuration of two particles, but there is no accurate description of the states of the individual particles composing the system!

Multi-particle systems cannot in general be reduced to their parts. This is one of the shocking features of quantum mechanics that is extremely easy to prove, but is rarely emphasized in proportion to its importance. We’ll prove it now.

Suppose we have a system composed of two qubits in states |Ψ₁⟩ and |Ψ₂⟩. In general, we may write |Ψ₁⟩ = α₁|0⟩ + β₁|1⟩ and |Ψ₂⟩ = α₂|0⟩ + β₂|1⟩.

Now, as we’ve seen in previous posts, we can describe the state of the two qubits as a whole by simply smushing them together as follows: |Ψ₁⟩|Ψ₂⟩ = α₁α₂|00⟩ + α₁β₂|01⟩ + β₁α₂|10⟩ + β₁β₂|11⟩.

So the set R of all two-qubit states that can be split into their component parts is the set of all states α₁α₂|00⟩ + α₁β₂|01⟩ + β₁α₂|10⟩ + β₁β₂|11⟩ arising from all possible values of α₁, α₂, β₁, and β₂ such that all states are normalized, i.e. |α₁|² + |β₁|² = 1 and |α₂|² + |β₂|² = 1.

However, there’s also a theorem that says that if any two states are physically possible, then all normalized linear combinations of them are physically possible as well. Because the states |00⟩, |01⟩, |10⟩, and |11⟩ are all physically possible, and because they form a basis for the set of two-qubit states, we can write out the set A of all possible states: a|00⟩ + b|01⟩ + c|10⟩ + d|11⟩, with |a|² + |b|² + |c|² + |d|² = 1.

Now the philosophical question of whether or not there exist states that are irreducible can be formulated as a precise mathematical question: Does A = R?

And the answer is no! It turns out that A is much much larger than R.

The proof of this is very simple. R and A are both sets defined by a set of four complex numbers, and they share a constraint. But R also has two other constraints, independent of the shared one. That is, the two additional constraints cannot be derived from the first (try to derive them yourself! Or better, show that they cannot be derived). So the set of states satisfying the conditions for membership in R must be smaller than the set of states satisfying the conditions for membership in A. This is basically just the statement that when you take a set and impose a further constraint on it, you get a smaller set.

An even simpler proof of the irreducibility of some states is to just give an example. Let’s return to our earlier example of a two-qubit state that cannot be decomposed into its parts: |Ψ⟩ = 1/√2 (|00⟩ + |11⟩).

Suppose that |Ψ⟩ is reducible. Then for both |00⟩ and |11⟩ to have a nonzero amplitude, there must be a nonzero amplitude for the first qubit to be in the state |0⟩ and for the second to be in the state |1⟩. But then there can’t be zero amplitude for the state |01⟩. Q.E.D.!
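This argument can be restated as a simple computable criterion. The determinant test below, along with the function name and the example states, is my own illustration:

```python
import numpy as np

# A two-qubit state a|00⟩ + b|01⟩ + c|10⟩ + d|11⟩ is a product state
# (α₁|0⟩ + β₁|1⟩) ⊗ (α₂|0⟩ + β₂|1⟩) exactly when a·d − b·c = 0, since a
# product state has a = α₁α₂, b = α₁β₂, c = β₁α₂, d = β₁β₂ (so ad = bc).
def is_product(state, tol=1e-12):
    a, b, c, d = state
    return abs(a * d - b * c) < tol

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)        # (|00⟩ + |11⟩)/√2
prod_state = np.kron([1, 0], [0.6, 0.8])           # |0⟩ ⊗ (0.6|0⟩ + 0.8|1⟩)

print(is_product(bell), is_product(prod_state))    # False True
```

For the entangled state, a·d − b·c = 1/2 ≠ 0, which is just the Q.E.D. above in numerical form.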

More precisely: if |Ψ⟩ = (α₁|0⟩ + β₁|1⟩)(α₂|0⟩ + β₂|1⟩), then a nonzero amplitude for |00⟩ requires α₁α₂ ≠ 0, and a nonzero amplitude for |11⟩ requires β₁β₂ ≠ 0; but then the amplitude for |01⟩, namely α₁β₂, is also nonzero – contradiction.

So here we have a two-qubit state that is fundamentally irreducible. There is literally no possible description of the individual qubits on their own. We can go through all possible states that each qubit might be in, and rule them out one by one.

Let’s pause for a minute to reflect on how totally insane this is. It is a definitive proof that according to quantum mechanics, reality cannot necessarily be described in terms of its smallest components. This is a serious challenge to the idea of reductionism, and I’m still trying to figure out how to adjust my worldview in response. While the notion of reductionism as “higher-level laws can be derived as approximations of the laws of physics” isn’t challenged by this, the notion that “the whole is always reducible to its parts” has to go.

In fact, I’ll show in the next section that if you try to make predictions about a system by analyzing it in terms of its smallest components, you will not in general get the right answer.

Predictive accuracy requires holism

So suppose that we have two qubits in the state we already introduced: |Ψ⟩ = 1/√2 (|00⟩ + |11⟩).

You might think the following: “Look, the two qubits are either both in the state |0⟩, or both in the state |1⟩. There’s a 50% chance of either one happening. Let’s suppose that we are only interested in the first qubit, and don’t care what happens with the second one. Can’t we just say that the first qubit is in a state with amplitudes 1/√2 in both states |0⟩ and |1⟩? After all, this will match the experimental results when we measure the qubit (50% of the time it is |0⟩ and 50% of the time it is |1⟩).”

Okay, but there are two big problems with this. First of all, while it’s true that each particle has a 50% chance of being observed in the state |0⟩, if you model these probabilities as independent of one another, then you will end up concluding that there is a 25% chance of the first particle being in the state |0⟩ and the second being in the state |1⟩. Whereas in fact, this will never happen!

You may reply that this is only a problem if you’re interested in making predictions about the state of the second qubit. If you are solely looking at your single qubit, you can still succeed at predicting what will happen when you measure it.

Well, fine. But the second, more important point is that even if you are able to accurately describe what happens when you measure your single qubit, you can always construct a different experiment for which this same description will give the wrong answer.

What this comes down to is the observation that quantum gates don’t operate the same way on 1/√2 (|00⟩ + |11⟩) as on 1/√2 (|0⟩ + |1⟩).

Suppose you take your qubit and pretend that the other one doesn’t exist. Then you apply a Hadamard gate to just your qubit and measure it. If you thought that the state was initially 1/√2 (|0⟩ + |1⟩), you will now think that your qubit is in the state |0⟩. You will predict with 100% confidence that if you measure it now, you will observe |0⟩.

But in fact when you measure it, you will find that 50% of the time it is |0⟩ and 50% of the time it is |1⟩! Where did you go wrong? You went wrong by trying to describe the particle as an individual entity.

Let’s prove this. First we’ll figure out what it looks like when we apply a Hadamard gate to only the first qubit, in the two-qubit representation:

So we have a 25% chance of observing each of |00⟩, |10⟩, |01⟩, and |11⟩. Looking at just your own qubit, then, you have a 50% chance of observing |0⟩ and a 50% chance of observing |1⟩.

While your single-qubit description told you to predict a 100% chance of observing |0⟩, you actually would get a 50% chance of |0⟩ and a 50% chance of |1⟩.
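This prediction mismatch can be checked numerically. Here’s a small NumPy sketch (my own, not from the original post) comparing the naive single-qubit model with the full two-qubit calculation:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I2 = np.eye(2)

# Naive model: the first qubit alone, in the state (|0⟩ + |1⟩)/√2
naive = np.array([1, 1]) / np.sqrt(2)
naive_out = H @ naive
print(np.round(naive_out, 6))                  # [1, 0]: predicts |0⟩ with certainty

# Correct model: Hadamard on qubit 1 of the entangled state (|00⟩ + |11⟩)/√2
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
after = np.kron(H, I2) @ bell
probs = np.abs(after) ** 2
print(np.round(probs, 3))                      # 0.25 for each of |00⟩,|01⟩,|10⟩,|11⟩
```

The two-qubit calculation gives a 50/50 outcome for qubit 1, not the certainty the single-qubit description predicted.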

Okay, but maybe the problem was that we were just using the wrong amplitude distribution for our single qubit. There are many choices we could have made for the amplitudes besides 1/√2 that would have kept the probabilities 50/50. Maybe one of these correctly simulates the behavior of the qubit in response to a quantum gate?

But no. It turns out that even though it is correct that there is a 50/50 chance of observing the qubit to be |0⟩ or |1⟩, there is no amplitude distribution matching this probability distribution that will correctly predict the results of all possible experiments.

Quick proof: We can describe a general single-qubit state with a 50/50 probability of being observed as |0⟩ or |1⟩ as follows (up to an overall phase): |Ψ⟩ = 1/√2 (|0⟩ + e^iφ|1⟩), for some phase φ.

For any |Ψ⟩, we can construct a specially designed quantum gate U that transforms |Ψ⟩ into |0⟩ – for instance, the unitary whose first row is ⟨Ψ| and whose second row is an orthonormal completion.

Applying U to our single qubit, we now expect to observe |0⟩ with 100% probability. But now let’s look at what happens if we consider the state of the combined system. The operation of applying U to only the first qubit is represented by taking the tensor product of U with the identity matrix I: U ⊗ I.

What we see is that the two-qubit state ends up with a 25% chance of being observed as each of |00⟩, |01⟩, |10⟩, and |11⟩. This means that there is still a 50% chance of the first qubit being observed as |0⟩ and a 50% chance of it being observed as |1⟩.

This means that for every possible single qubit description of the first qubit, we can construct an experiment that will give different results than the model predicts. And the only model that always gives the right experimental predictions is a model that considers the two qubits as a single unit, irreducible and impossible to describe independently.
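The whole argument can be run numerically. Below is a sketch: a candidate single-qubit state with arbitrary phases, a unitary U built to send it to |0⟩, and a check that U ⊗ I on the entangled pair still leaves the first qubit at 50/50 (the specific phase values are arbitrary illustrative choices):

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)

# Any candidate single-qubit state with 50/50 outcome probabilities
theta, phi = 0.8, 2.3   # arbitrary phases; any values work
psi = (np.exp(1j * theta) * ket0 + np.exp(1j * phi) * ket1) / np.sqrt(2)

# U's first row is <psi|, its second row an orthonormal completion,
# so U is unitary and maps |psi> to |0>
psi_perp = (np.exp(1j * theta) * ket0 - np.exp(1j * phi) * ket1) / np.sqrt(2)
U = np.vstack([psi.conj(), psi_perp.conj()])

model = np.abs(U @ psi) ** 2            # ~ [1, 0]: the model predicts certain |0>

# Applied to the real entangled pair, the first qubit stays 50/50
after = np.kron(U, np.eye(2)) @ bell
probs = np.abs(after) ** 2
first_is_0 = probs[0] + probs[1]        # P(first qubit observed as |0>) = 0.5

print(model, first_is_0)
```

Whatever phases you pick, `model` claims certainty while `first_is_0` stays at 0.5, which is the promised contradiction.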

To recap: The lesson here is that for some quantum systems, if you describe them in terms of their parts instead of as a whole, you will necessarily make the wrong predictions about experimental results. And if you describe them as a whole, you will get the predictions spot on.

So how many states are irreducible?

Said another way, how much larger is A (the set of all states) than R (the set of reducible states)? Well, they’re both infinite sets with the same cardinality (they each have the cardinality of the continuum, |ℝ|). So in this sense, they’re the same size of infinity. But we can think about this by considering the dimensionality of these various spaces.

Let’s take another look at the definitions of A and R:

A = { (a, b, c, d) ∈ ℂ⁴ : |a|² + |b|² + |c|² + |d|² = 1 }
R = { (xz, xw, yz, yw) ∈ A : x, y, z, w ∈ ℂ, |x|² + |y|² = 1, |z|² + |w|² = 1 }

Each set is defined by four complex numbers, or 8 real numbers. If we ignored all constraints, then, our sets would be isomorphic to ℝ⁸.

Now, both share the same first constraint, which says that the overall state must be normalized. This constraint cuts one dimension off of the space of solutions, making it isomorphic to ℝ⁷.

That’s the only constraint for A, so we can say that A ~ ℝ⁷. But R involves two further constraints (the normalization conditions for each individual qubit). So we have three total constraints. However, it turns out that one of them can be derived from the others – two normalized qubits, when smushed together, always produce a normalized state. This gives us a net two constraints, meaning that the space of reducible states is isomorphic to ℝ⁶.

The space of irreducible states is what’s left when we subtract all elements of R from A. The dimensionality of this is just the same as the dimensionality of A. (A 3D volume minus a plane is still a 3D volume, a plane minus a curve is still two dimensional, a curve minus a point is still one dimensional, and so on.)

So both the space of total states and the space of irreducible states are 7-real-dimensional, while the space of reducible states is 6-real-dimensional.

You can visualize this as the space of all states being a volume, through which cuts a plane that composes all reducible states. The entire rest of the volume is the set of irreducible states. Clearly there are a lot more irreducible states than reducible states.
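For two qubits there is a handy algebraic test for sitting on that plane: reshape the amplitudes (a, b, c, d) into a 2×2 matrix; the state factors into a product of single-qubit states exactly when that matrix has rank 1, i.e. when ad – bc = 0. A quick numerical sketch (the test function and names are my own):

```python
import numpy as np

rng = np.random.default_rng(0)

def is_reducible(state, tol=1e-9):
    # (a, b, c, d) is a product state exactly when a*d - b*c == 0
    a, b, c, d = state
    return abs(a * d - b * c) < tol

# Smushing two normalized qubits together always lands on the reducible "plane"
q1 = rng.normal(size=2) + 1j * rng.normal(size=2)
q2 = rng.normal(size=2) + 1j * rng.normal(size=2)
product = np.kron(q1 / np.linalg.norm(q1), q2 / np.linalg.norm(q2))

# ...while the entangled state (|00> + |11>)/sqrt(2) from earlier does not
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)

print(is_reducible(product), is_reducible(bell))  # True False
```

A state drawn at random from the full 7-dimensional space satisfies ad – bc = 0 with probability zero, matching the volume-versus-plane picture.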

What about if we consider totally reducible three-qubit states? Now things are slightly different.

The set of all possible three-qubit states (which we’ll denote A₃) is a set of 8 complex numbers (16 real numbers) with one normalization constraint. So A₃ ~ ℝ¹⁵.

The set of all totally reducible three-qubit states (which we’ll denote R₃) is a set of only six complex numbers. Why? Because we only need to specify two complex numbers for each of the three individual qubits that will be smushed together. So we start off with only 12 real numbers. Then we have three constraints, one for the normalization of each individual qubit. And the final normalization constraint (of the entire system) follows from the previous three constraints. In the end, we see that R₃ ~ ℝ⁹.

Now the space of reducible states is six dimensions smaller than the space of all states.

How does this scale for larger quantum systems? Let’s look in general at a system of N qubits.

A_N is a set of 2ᴺ complex amplitudes (2ᴺ⁺¹ real numbers), one for each N-qubit basis state. There is just one normalization constraint. Thus we have a space with 2ᴺ⁺¹ – 1 real dimensions.

On the other hand, R_N is a set of only 2N complex amplitudes (4N real numbers), two for each of the N individual qubits. And there are N independent constraints ensuring that each individual qubit is normalized. So we have:

R_N ~ ℝ³ᴺ, a space with 4N – N = 3N real dimensions.

The point of all of this is that as you consider larger and larger quantum systems, the dimensionality of the space of irreducible states grows exponentially, while the dimensionality of the space of reducible states only grows linearly. If we were to imagine randomly selecting a 20-qubit state from the space of all possibilities, we would be exponentially more likely to end up with a state that cannot be described as a product of its parts.
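The two dimension counts above can be tabulated directly (the function names are mine; the formulas are exactly the ones just derived):

```python
# Dimension counts from the argument above
def dim_all(n):
    return 2 ** (n + 1) - 1   # all n-qubit states: 2^(n+1) - 1 real dimensions

def dim_reducible(n):
    return 3 * n              # reducible states: 4n - n = 3n real dimensions

for n in (2, 3, 20):
    print(n, dim_all(n), dim_reducible(n))
# n = 2 gives 7 vs 6;  n = 3 gives 15 vs 9;  n = 20 gives 2097151 vs 60
```

Already at 20 qubits the reducible states form a 60-dimensional sliver inside a space of over two million dimensions.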

What this means is that irreducibility is not a strange exotic phenomenon that we shouldn’t expect to see in the real world. Instead, we should expect that basically all systems we’re surrounded by are irreducible. And therefore, we should expect that the world as a whole is almost certainly not describable as the sum of individual parts.

Let’s more precisely define the Grover diffusion operator D we used for Grover’s algorithm, and see why it functions to flip amplitudes over the average amplitude.

First off, here’s a useful bit of shorthand we’ll use throughout the post. We define the uniform superposition over all N-qubit states as |s⟩:

|s⟩ = 1/√(2ᴺ) ∑ₓ |x⟩

We previously wrote that flipping an amplitude aₓ over the average of all amplitudes ā involved the transformation aₓ → 2ā – aₓ. This can be understood by a simple geometric argument:

Now, the primary challenge is to figure out how to build a quantum gate that returns the average amplitude of a state. In other words, we want to find an operator A such that acting on a state |Ψ⟩ = ∑ₓ aₓ |x⟩ gives:

A |Ψ⟩ = ∑ₓ ā |x⟩

If we can find this operator, then we can just define D as follows:

D = 2A – I

It turns out that we can define A solely in terms of the uniform superposition: A = |s⟩⟨s|.
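Assuming the standard construction A = |s⟩⟨s| and D = 2A – I, a quick NumPy check that A really replaces every amplitude with the average and D really flips each amplitude over it:

```python
import numpy as np

N = 3
dim = 2 ** N
s = np.ones(dim) / np.sqrt(dim)   # uniform superposition |s>

A = np.outer(s, s.conj())         # A = |s><s|
D = 2 * A - np.eye(dim)           # D = 2A - I, the diffusion operator

rng = np.random.default_rng(1)
psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi /= np.linalg.norm(psi)        # an arbitrary normalized test state

avg = psi.mean()
print(np.allclose(A @ psi, np.full(dim, avg)))   # A averages the amplitudes
print(np.allclose(D @ psi, 2 * avg - psi))       # D flips them over the average
```

Note that D is also unitary (D² = I follows from |s⟩⟨s| being a projector), so it is a legitimate quantum gate.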

The quantum gate Uf was featured centrally in both of the previous algorithms I presented. Remember what it does to an N-qubit state |x⟩, where x ∈ {0, 1}ᴺ:

Uf |x⟩ = (–1)^f(x) |x⟩

I want to show here that this gate can be constructed from a simpler, more intuitive version of a quantum oracle. This will also be good practice for getting a deeper intuition about how quantum gates work.

This will take three steps.

1. Addition Modulo 2

First we need to be able to implement addition modulo 2 of two single qubits. This operation is defined as follows:

x ⊕ y = (x + y) mod 2

An implementation of this operation as a quantum gate needs to return two qubits instead of just one. A simple choice might be:

|x, y⟩ → |x, x ⊕ y⟩
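A gate that keeps the first qubit and stores x ⊕ y in the second is exactly the CNOT gate, a 4×4 permutation matrix. A sketch of it built directly from the truth table:

```python
import numpy as np

# |x, y> -> |x, x XOR y> as a 4x4 permutation matrix (this is the CNOT gate)
XOR = np.zeros((4, 4))
for x in (0, 1):
    for y in (0, 1):
        XOR[2 * x + (x ^ y), 2 * x + y] = 1

# Check its action on each two-qubit basis state |xy>
for x in (0, 1):
    for y in (0, 1):
        out = XOR @ np.eye(4)[2 * x + y]
        print(f"|{x}{y}> -> index {np.argmax(out)}")  # i.e. |x, x XOR y>
```

Because every column has a single 1, the matrix is a permutation and hence unitary, which is what makes this a valid (and reversible) quantum gate.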

2. Oracle

Next we’ll need a straightforward implementation of the oracle for our function f as a quantum gate. Remember that f is a function from {0, 1}ᴺ → {0, 1}. Quantum gates must have the same number of inputs and outputs, and f takes in N bits and returns only a single bit, so we have to improvise a little. A simple implementation is the following:

|x⟩ ⊗ |b⟩ → |x⟩ ⊗ |b ⊕ f(x)⟩

In other words, we start with N qubits encoding the input x, as well as a “blank” qubit that starts as |0⟩. Then we leave the first N qubits unchanged, and encode the value of f(x) in the initially blank qubit.
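As a sketch, here is that oracle as a permutation matrix, for a hypothetical f that marks the input x = 3 (both f and the choice N = 2 are illustrative, not from the post):

```python
import numpy as np

N = 2
f = lambda x: 1 if x == 3 else 0   # hypothetical example function

# Oracle |x>|b> -> |x>|b XOR f(x)>: a permutation matrix on N+1 qubits
dim = 2 ** (N + 1)
O = np.zeros((dim, dim))
for x in range(2 ** N):
    for b in (0, 1):
        O[2 * x + (b ^ f(x)), 2 * x + b] = 1

# Starting from |x>|0>, the first N qubits are untouched
# and the blank qubit ends up holding f(x)
for x in range(2 ** N):
    out = O @ np.eye(dim)[2 * x]   # input |x>|0>
    print(x, np.argmax(out) // 2 == x, np.argmax(out) % 2 == f(x))
```

Writing the target qubit as b ⊕ f(x) rather than plain f(x) is what keeps the gate reversible: applying O twice returns every basis state to where it started.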

3. Flipping signs

Finally, we’ll use a clever trick. Let’s take a second look at the ⊕ gate.

Suppose we start with the second register in the state |y⟩ = 1/√2 (|0⟩ – |1⟩) instead of |0⟩:

|x⟩ ⊗ 1/√2 (|0⟩ – |1⟩)

Then we get:

1/√2 (|x⟩ ⊗ |f(x)⟩ – |x⟩ ⊗ |1 ⊕ f(x)⟩)

Let’s consider both cases, f(x) = 0 and f(x) = 1. If f(x) = 0, the state is unchanged. If f(x) = 1, the two terms swap places, which amounts to an overall factor of –1. Either way, the result is (–1)^f(x) |x⟩ ⊗ 1/√2 (|0⟩ – |1⟩).

Also, notice that we can get the state |y⟩ = 1/√2 (|0⟩ – |1⟩) by applying a Hadamard gate to a qubit in the state |1⟩. Thus we can draw:

Putting it all together

We combine everything we learned so far in the following way:

If we now ignore the last two qubits, as they were only really of interest to us for the purposes of building Uf, we get:

And there we have it! We have built the quantum gate Uf that we used in the last two posts.
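The whole construction can be verified numerically: feed |x⟩ ⊗ H|1⟩ through the oracle and check that the phase (–1)^f(x) kicks back onto |x⟩, which is exactly the action of Uf. A sketch, reusing the same hypothetical example function as before:

```python
import numpy as np

N = 2
f = lambda x: 1 if x == 3 else 0   # same hypothetical example function

# The oracle |x>|b> -> |x>|b XOR f(x)> from step 2
dim = 2 ** (N + 1)
O = np.zeros((dim, dim))
for x in range(2 ** N):
    for b in (0, 1):
        O[2 * x + (b ^ f(x)), 2 * x + b] = 1

y = np.array([1, -1]) / np.sqrt(2)  # |y> = H|1> = (|0> - |1>)/sqrt(2)

# The phase (-1)^f(x) "kicks back" onto the first register
for x in range(2 ** N):
    ket_x = np.eye(2 ** N)[x]
    out = O @ np.kron(ket_x, y)
    print(x, np.allclose(out, (-1) ** f(x) * np.kron(ket_x, y)))
```

Every input prints True: on the subspace where the ancilla is in state |y⟩, the oracle acts as Uf ⊗ I, so discarding the ancilla leaves exactly the sign-flipping gate used in the algorithms.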