Q: What exactly is the vacuum catastrophe and what effects does this have upon our understanding of the universe?

Physicist: The vacuum catastrophe is sometimes cited as the biggest disagreement between theory and experiment ever. They disagree by a factor of at least 10^107.

According to quantum field theory the energy of empty space can’t quite be zero. In fact, QFT gives us an exact value for how much energy empty space should have. Although we can never access that energy, it does have a gravitational effect.

One of the (many) things the Voyager probes did was allow us to estimate how strong those gravitational effects are. Unfortunately, they determined that the theoretical predictions are way, way, way off (too high).

It’s a catastrophe because QFT is otherwise a stunningly accurate theory (the most accurate ever, by far). But, at the end of the day, you have to fall back on observation, so something about our favorite theory is wrong.

One result of the Heisenberg Uncertainty Principle is that it’s impossible for a system to be in a zero-energy state. In a nutshell: if a particle definitely has zero energy, then it’s definitely not moving and its momentum is zero. However, to get that level of certainty you need the position to be completely uncertain and (for various reasons) that’s untenable. You can run through this mathematically, and you find that systems always have just a tiny bit more than zero energy: specifically, (1/2)hf, where f is the frequency of the particle/system in question. That little bit of energy is called the “ground state energy” or just “ground energy”.
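That (1/2)hf is easy to put numbers on. A quick sketch in Python (the green-light frequency is just an illustrative choice):

```python
# Ground state energy of a quantum harmonic oscillator: E0 = (1/2) h f
h = 6.62607015e-34  # Planck's constant, J*s

def ground_energy(f):
    """Lowest possible energy (in joules) of a mode with frequency f (in Hz)."""
    return 0.5 * h * f

# For example, a mode of green light, f ~ 5.6e14 Hz:
E0 = ground_energy(5.6e14)
print(f"{E0:.2e} J")  # ~1.9e-19 J, a little over 1 eV
```

Tiny for any one mode, which is exactly why the trouble only starts when you add up all of them.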

The same thing applies to all particle fields, but rather than generalize, I’ll just talk about light: the electromagnetic (EM) field. It turns out that every frequency of the EM field, at every point in space, is its own tiny system (not at all obvious; that falls out of the math). As a result, instead of a tiny ground state energy for a single system, in any given region of space you have lots of systems. These form the ground state energy density, which is more commonly known as the “zero point energy”.

As a quick aside, a lot of people get very excited about zero point energy, but shouldn’t. Setting aside the fact that harvesting it would violate the Uncertainty Principle (which is set in stone pretty well), to generate usable energy you still need to drop things from high energy states to low energy states. For example, there’s a tremendous amount of potential energy to be gained by dropping all of the ocean’s water to the bottom of the ocean (a waterfall as tall as the ocean is deep would generate a lot of hydroelectric power). Of course, first we’d need to pump all the water out, which costs at least as much energy as we’d gain by pouring it all back to the bottom (so the net energy gain is at best zero).
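The ocean-waterfall bookkeeping above can be made explicit. A minimal sketch (the 3,700 m figure is roughly the ocean’s average depth; everything else is just m·g·h both ways):

```python
g = 9.8  # gravitational acceleration, m/s^2

def pump_cost(mass, height):
    """Energy (J) spent lifting the water out of the ocean."""
    return mass * g * height

def drop_gain(mass, height):
    """Energy (J) recovered by dropping it back to the bottom."""
    return mass * g * height

mass, depth = 1000.0, 3700.0  # one tonne of water, roughly the average ocean depth (m)
net = drop_gain(mass, depth) - pump_cost(mass, depth)
print(net)  # 0.0 -- no free lunch
```

In practice pumps and turbines have losses, so the real net is negative, not zero.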

Back to the point: So there’s a ground state energy for each frequency of light. Looking at all the frequencies up to some maximum frequency f, you find that the ground state energy density is proportional to f^4 (again, not obvious).

But there are a lot of frequencies out there! As far as we know, there may be no upper limit, which would imply that the ground state energy is infinite. There are a lot of ad hoc estimates (that tend to be extremely high), based on the highest energy photons we can make with our accelerators, or the highest energy photons observed, or the highest energy photon it even makes sense to consider (if the frequency is too high, the wavelength is short enough that space gets “grainy”… sort of). All of these estimates maintain that the zero point energy is stupefyingly huge.
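To see just how stupefyingly huge, here’s a back-of-the-envelope version of the “space gets grainy” estimate: cut the frequencies off at the Planck scale and compare the resulting energy density to the observed vacuum energy density. The u = ħω⁴/(8π²c³) form of the cutoff integral and the ~6×10⁻¹⁰ J/m³ observed figure are the standard rough numbers, not exact values:

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2

# Planck (angular) frequency: roughly the highest frequency it even
# makes sense to consider before space gets "grainy"
omega_planck = math.sqrt(c**5 / (hbar * G))

# Zero point energy density with that cutoff: u = hbar*omega^4 / (8*pi^2*c^3)
u_zpe = hbar * omega_planck**4 / (8 * math.pi**2 * c**3)

# Observed vacuum (dark) energy density, roughly
u_obs = 6e-10  # J/m^3

print(f"predicted: {u_zpe:.1e} J/m^3")
print(f"observed:  {u_obs:.1e} J/m^3")
print(f"off by a factor of ~10^{math.log10(u_zpe / u_obs):.0f}")
```

With a Planck-scale cutoff the mismatch comes out around 10^120; lower cutoffs (like the highest photon energies our accelerators actually produce) give smaller but still absurd factors, which is why the disagreement is quoted as “at least” 10^107.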

However, all energy and matter creates gravity, so you’d expect that all that extra stuff would affect how gravity works. Specifically, you’d expect the velocity of orbiting objects to all be about the same, regardless of the size of the orbit (still: not obvious). But, to the best of our ability to measure (which is pretty good), no effect has been seen at all in terms of the movement of stars and planets and whatnot.

So why not just abandon the whole zero point energy idea? Why not say: “it’s clearly not around, so let’s move on”? Because you can detect it! Curveball!

The Casimir effect and a recent experimental setup to measure it. Normally the pressures of the (nearly) virtual particles around us balance out. But between two surfaces the lower frequency (longer wavelength) wave functions can’t exist, so they can’t add to the pressure. As a result the outside pressure is higher, and the surfaces are pushed together.

The electric field inside of a conductor is zero (in a super-conductor at least). This principle is responsible for things like the shininess of metal and Faraday cages. In between two conducting surfaces the electric field can only assume wavelengths shorter than the distance between the plates (and thus only frequencies above a certain cut-off), because the plates nail down the field by forcing it to be zero.
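The allowed standing waves between the plates are the ones that vanish at both surfaces, so the wavelengths are λ_n = 2d/n and the frequencies are f_n = nc/(2d). A quick sketch:

```python
c = 2.99792458e8  # speed of light, m/s

def allowed_frequencies(d, n_max=5):
    """First few standing-wave frequencies (Hz) that fit between
    conducting plates a distance d (meters) apart.  The field must be
    zero at both plates, so the wavelengths are lambda_n = 2d/n."""
    return [n * c / (2 * d) for n in range(1, n_max + 1)]

# Plates one micron apart: no EM mode below ~1.5e14 Hz fits between them
freqs = allowed_frequencies(1e-6)
print(f"cutoff frequency: {freqs[0]:.2e} Hz")
```

Everything below that cutoff contributes zero point energy outside the plates but not inside, which is where the pressure imbalance comes from.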

This is a little like saying you’d expect to find big, low-frequency, waves in the ocean, but not in a cup.

Since the region in between the conducting surfaces is missing all the “tiny systems” corresponding to those low frequencies, there’s a slightly lower energy density between two conducting surfaces than outside of them (never mind that both densities may be huge), and this manifests as a tiny pressure that pushes the surfaces together. If there were no zero point energy at all, then you wouldn’t see this effect.
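For ideal parallel plates that pressure has a famous closed form, P = π²ħc/(240d⁴), attractive. A quick numerical check at a one-micron separation:

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s

def casimir_pressure(d):
    """Magnitude (Pa) of the attractive Casimir pressure between
    ideal, perfectly conducting parallel plates d meters apart."""
    return math.pi**2 * hbar * c / (240 * d**4)

P = casimir_pressure(1e-6)  # plates one micron apart
print(f"{P:.1e} Pa")  # ~1.3e-3 Pa: tiny, but measurable
```

Note the 1/d⁴ scaling: halving the gap makes the pressure sixteen times stronger, which is why the effect only shows up at very small separations.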

So, just like quantum field theory predicted, there is some ground state energy (1 point for QFT). However, the theory also predicts that there should be so much energy that its gravitational effects would overwhelm the gravity of everything else (whoops).

As far as what this means for our understanding of the universe: we’re missing something. But this is old-hat for scientists. As a people, we’re used to dealing with unknowns and weird experimental results. It’s just that, in physics at least, the last century has been one big prediction/verification win after another. A stumbling block like this stands out because we’ve been doing so well.

The vacuum catastrophe may lead to another big paradigm shift, or a slight correction, or who knows. Other small weirdnesses, like nuclear decay and Mercury’s orbit, have led to the creation of entirely new fields, like particle physics and general relativity.

We’re probably just not taking something into account, but it’s a big something whatever it is.

It’s not that tiny.
Other effects, like gravity or van der Waals forces, can be carefully taken into account, one by one, and eliminated as variables. So, since the scientists doing the experiment are aware of the gravitational force, they can subtract it from their results.

No idea.
We should really get a cosmologist for this website.
I suspect you could fix it by changing the cosmological constant or something, which is basically the physicist’s way of saying “s’cool! Doesn’t count! No backsies!”

In response to Duke’s question, “What are the current theories attempting to explain this discrepancy?”, I’ve sketched one possibility in the last chapter (Chapter “Z”) of my book that you can get to by clicking on my name. Very briefly (leaving out links to the references that I provide in that chapter), Robert Klauber (author of the on-line book “Student Friendly Quantum Field Theory”) published a paper showing that the “zero-point catastrophe” is eliminated if one accepts not only the usual “retarded-wave solution” but also the “advanced-wave solution” to the field equations. As far as I recall, Klauber doesn’t pursue the idea in his paper, but such a procedure is the same as accepting that time runs in the opposite direction in (the negative-energy) background space. That may seem to be a major inadequacy of the proposed solution, but actually, papers by John Cramer (Univ. of Washington) show that permitting time to run in the negative direction in space (an idea originally proposed by Wheeler and Feynman for classical electrodynamics) also eliminates some of the familiar and long-standing “paradoxes” of quantum mechanics. So, in sum, one way to eliminate “the discrepancy” is to admit that, in space, time runs in the opposite direction from the direction with which we’re familiar: its future is our history; our future is its history! Weird, but it seems to work!

“Specifically, you’d expect the velocity of orbiting objects to all be about the same, regardless of the size of the orbit (still: not obvious). But, to the best of our ability to measure (which is pretty good), no effect has been seen at all in terms of the movement of stars and planets and whatnot.”

What if all that energy is there, but it’s just set as potential energy? Everything that we can “read” is the bleed-out. Say…the wavelengths of the energy that exists are on the opposite ends of the “graphs” of the universe and that where the energy meets, they keep each other in check. Let’s pretend for a second that the wavelengths in space show their energy as ohms and use that as an example: a wavelength of energy that displays 15 ohms of energy is met, at any point along the wavelength’s path, with a wavelength that displays -15 ohms of energy. The bits we could read energy levels from are where the wavelengths cross at uneven sections of their paths. Like…imagine sin(x) and cos(x), now imagine sin(2x) and cos(x). There are plenty of points that don’t cross at exact opposite points.
And since this is happening in the vacuum of space and across the infinite volume, and according to fourth and fifth dimension theories the “area” of the flat zones, of space, we can’t accurately measure everything that’s happening. It just happens too fast, or on a plane we just can’t register yet. I guess you can imagine it being measured similar to seismic pressure on the tectonic plates. When stuff “happens” to the particles (plates), we can see it and measure it. When it’s in a static state, it’s more difficult to measure. It is possible to, we just haven’t done it right yet. This would also be more difficult to measure by the fact that a near infinite amount of these interactions could be happening at any given moment and could interfere with any other interaction. And it doesn’t have to be just two wavelengths that meet up. It could be 2, 2^10, 37, 5, or any real number (n) that meet. And those would all interact in a totally different way. Heck, it could even be any non real number or series of wavelengths. That would give off some very anomalous readings, I’m sure.

One of the ways of understanding the zero point energy of photons is by applying the uncertainty principle to the harmonic oscillator. So any photon, which is in fact an excited state of the electromagnetic field, must also have a zero point energy. This can also be derived from the quantum mechanical derivation of the energy of a harmonic oscillator. However, if there is no photon present, for instance because it was not created by pair annihilation and subsequently red-shifted by the expansion of the universe, there is no reason to apply the uncertainty principle. There is no uncertainty for non-existing particles.

Some papers, like those on cavity QED, indicate a break in isotropy when Casimir cavities occur, and other papers suggest time symmetry is also broken, resulting in negative dilation and anomalous decay rates for radioactive gas atoms loaded into these Casimir structures. My posit is that the Casimir effect forces the longer wavelengths to bend space-time such that they still fit between the plates from their own local perspective, even though that measurement appears too small from our perspective outside the cavity… a nano-scale Doctor Who’s TARDIS. This could allow for a self-assembling Maxwellian demon where the HUP supplies the motive force for gas atoms while quantum mechanics, in the form of changing Casimir geometry, opposes molecular gas motion more than atomic; this molecular resistance to the sudden changes in isotropy induced by the geometry discounts the dissociation threshold. Casimir cavities and skeletal catalysts are basically the same thing depending on your background, physicist or chemist, but catalytic action would fit well with this proposed changing of space-time geometries. Perhaps metrics for catalysts should include categories for quantum geometries besides just surface areas and quality factors.

If you could inform me here, since I am almost certainly not the most knowledgeable when it comes to physics of this variety, it’s likely I’m missing something: is it not possible that, since there is actually zero point energy everywhere, that it actually DOES cause the extreme gravity as theorized, but it causes it so uniformly that the entire fabric of Space-Time is simply moved “down”, with only a slight tendency toward the center of the universe, where the ground energy is centered? That way it would only show extreme differences if you were closer to the edge of the physically inhabited universe, which we still don’t know how far, if at all existent, that boundary would be.
(Also I would like to make a reference to the somewhat recent article on the Alcubierre Warp Drive, and how having the entire field of Space-Time shifted down would affect that concept.)

Wow, what a great explanation. I’m one of the mindless masses out there that like to know about things. I really appreciate how the author (of whom I know nothing) manages to parlay specialist knowledge into something tangible for normal people, because there is currently so much knowledge that most people can only consume small bits of the stuff that are generally only in the specialist’s arena. It’s not 1000 CE, when one might have had a shot at actually understanding the sum of all human knowledge. I like how the author is meticulous about pointing out the non-trivial bits (the not obvious parts), because most of us would have no idea, and I also appreciate the down-to-earth tone (humorous, even) used in disseminating said knowledge. Further, I chuckled at a few points. When the uninitiated can laugh at a semi-technical matter instead of nodding off, you’ve done well.