If you’ve been around the block once or twice, you know that the speed of light in a vacuum — 299,792,458 meters-per-second — is the absolute maximum speed that any form of energy in the Universe can travel at. In shorthand, this speed is known as c to physicists.

Image credit: user Fx-1988 of deviantART.

But you or I, no matter how hard we try, will never attain that speed. There’s a simple reason for this: we have mass. And for an object with mass, you can accelerate it all you want, but it would take an infinite amount of energy to reach c, and I’m sorry, folks, there’s only a finite amount of energy in the Universe.

But that doesn’t mean we settle for 90% of c, or 99%, or 99.9999%. We always strive for that extra fraction of speed, that extra bit of energy, that extra push ever-closer to the unattainable limit. You may be most familiar with our latest attempts to do this at CERN, where we’ve recently discovered the Higgs Boson.

Image credit: LHC / CERN.

By smashing two protons into one another, one moving at 299,792,447 meters per second (just 11 m/s shy of the speed of light) in one direction and the other moving at the same speed in the opposite direction, we can produce incredibly energetic particles, bounded only by the energy available via Einstein’s E = mc². After the LHC’s upgrade is complete, that speed will increase to 299,792,455 m/s, which will make these the fastest protons ever created on Earth.

After all, a proton is a relatively heavy particle, some 1,836 times heavier than its orbiting friend, the electron! Even though we’ve created protons at higher energies than electrons, it takes only 1/1,836th of the energy (about 0.054%) to get an electron up to the same speed. Which means that LEP — the Large Electron-Positron Collider (and the LHC’s predecessor) — where they got electrons up to 104.5 GeV of energy (compared to the 6,500 GeV expected for the LHC after the upgrade), still holds the particle-accelerator speed record.

What is that speed? 299,792,457.9964 meters per second, or a whopping 99.9999999988% the speed of light, just 3.6 millimeters-per-second slower than light in a vacuum!
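As a sanity check, that speed deficit follows directly from the Lorentz factor: for E much greater than mc², the shortfall is c − v ≈ c/2γ². A minimal sketch, taking the beam energy from the text:

```python
c = 299_792_458.0      # speed of light, m/s (exact)
m_e = 0.511e6          # electron rest energy, eV
E_lep = 104.5e9        # LEP beam energy, eV (from the text)

gamma = E_lep / m_e                 # Lorentz factor, about 2 x 10^5
deficit = c / (2 * gamma ** 2)      # c - v, first-order expansion: ~3.6 mm/s
```

Plugging in, the deficit comes out to about 3.6 millimeters per second, matching the figure above.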

Image credit: LEP / CERN, via http://www.madrimasd.org/.

But that’s just here on Earth, with our lame superconducting electromagnet accelerators, powered by puny chemical energy sources. Compared to what comes out of the Universe, our terrestrial sources don’t stand a chance.

Image credit: NASA, ESA, Hubble Heritage (STScI/AURA).

Outer space is filled with collapsed stars, supernovae and supermassive black holes — including ones at the center of active galaxies, above — where magnetic fields billions of times the likes of anything ever to appear on Earth are routine. From all directions in space, cosmic rays — high energy particles, mostly protons — fly through the Universe at energies dwarfing anything we’ve ever created or even experienced here on Earth.

Image credit: Simon Swordy (U. Chicago), NASA.

Yes, there are fewer particles as we go to higher and higher energies, but the highest energies are no longer measured in terms of GeV (giga-electron-volts, or 10^9 eV), TeV (tera-electron-volts, or 10^12 eV) or even PeV (peta-electron-volts, or 10^15 eV). Instead, these energies can spike all the way up into the 10^19 eV range!
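To put those prefixes in everyday terms, converting to joules shows just how macroscopic these energies are; the comparison below is only the unit conversion, not a figure from the article:

```python
eV = 1.602176634e-19          # joules per electron-volt (exact, SI)

E_lhc_proton = 6.5e12 * eV    # one 6.5 TeV LHC proton: about a microjoule
E_cosmic_ray = 1e19 * eV      # one 10^19 eV cosmic ray: about 1.6 joules,
                              # a macroscopic energy in a single particle
```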

Image credit: Wikimedia Commons user Sven Lafebre.

Now, this number is really, really interesting! Because above about 4-or-5 × 10^19 eV, the Universe won’t let you stay at that energy! The problem, believe it or not, is that no matter how high the energy of the particle you made, it has to pass through the radiation bath left over from the Big Bang in order to reach you.

This radiation is incredibly cold, at an average temperature of some 2.725 Kelvin, or less than three degrees above absolute zero. The typical energy of each photon in that bath works out to just 0.00023 electron-volts, a tiny number. Every time a high-energy charged particle has a chance to interact with a photon, it has the same possibility that all interacting particles have: if it’s energetically allowed, by E = mc², then there’s a chance it can make a new particle!
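That quoted photon energy is essentially the Boltzmann constant times the CMB temperature; a one-line check:

```python
k_B = 8.617333262e-5    # Boltzmann constant, eV per kelvin
T_cmb = 2.725           # CMB temperature, K (from the text)

E_photon = k_B * T_cmb  # ~2.3e-4 eV, the figure quoted above
```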

And that particle doesn’t get its energy for free; it has to come from the system that created it! The lightest particle you can create from a collision like this is a neutral pion, which takes 135 MeV of energy to make. There’s a threshold for this that’s relatively easy to compute (previously done here), and what it tells you is that as long as you’re above a certain energy threshold — known as the GZK cutoff, named for Greisen, Zatsepin, and Kuzmin — you’re going to emit those pions until you’re below that energy threshold!

For a long time — until the last few years — we thought we’d observed particles that exceeded this threshold. That meant one of three things: they were being generated within our own galaxy (somehow), the only origin that would let them survive the trip to Earth; there was something wrong with our understanding of relativity (fat chance); or, as most people thought likely, there was a problem measuring these unprecedentedly high energies.

Lo and behold, the two most modern observatories/experiments looking for these — the Pierre Auger Observatory and the High-Resolution Fly’s Eye experiment — both now see the GZK cutoff clearly, and no cosmic rays above about 5 × 10^19 eV. Do you know what that means for the speed of a proton carrying that energy? It tells us that a proton traveling at the GZK limit has a speed of:

299,792,457.999999999999918 meters-per-second.
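That speed follows from the same first-order expansion used for colliders, c − v ≈ c/2γ², with the proton’s Lorentz factor at the cutoff. A sketch, taking 4 × 10^19 eV (the low end of the quoted threshold) as the energy:

```python
c = 299_792_458.0     # speed of light, m/s
m_p = 938.272e6       # proton rest energy, eV
E_gzk = 4e19          # cosmic-ray energy near the GZK cutoff, eV

gamma = E_gzk / m_p               # about 4 x 10^10
deficit = c / (2 * gamma ** 2)    # c - v: on the order of 1e-13 m/s
```

The deficit comes out to roughly 8 × 10^-14 m/s, consistent with the long string of nines above.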

Image credit: David Malin, UK Schmidt Telescope, DSS, AAO.

Or, to put that in perspective, if you raced a proton of this energy and a photon to the nearest star-and-back, the photon would arrive first… with the proton just 22 microns behind, arriving about 73 femtoseconds later.
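The race arithmetic can be redone directly, taking a round trip to Proxima Centauri (about 4.25 light-years each way) and reading the speed deficit off the GZK-limit speed quoted above:

```python
c = 299_792_458.0
light_year = 9.4607e15            # meters per light-year
distance = 2 * 4.25 * light_year  # round trip to the nearest star, meters
deficit = 8.2e-14                 # c - v for a GZK-limit proton, m/s (from the text)

lag_time = distance * deficit / c ** 2   # photon's lead, in seconds
lag_dist = c * lag_time                  # photon's lead, in meters: ~22 microns
```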

And every charged particle in the cosmos — every cosmic ray, every proton, every atomic nucleus — is limited by this speed! Not just the speed of light, but a little bit lower, thanks to the leftover glow from the Big Bang! So when you dream of traveling through the Universe, know that you can’t get arbitrarily close to the speed of light: the radiation left over from the Big Bang, low-energy though its microwave photons are, will fry you if you move too quickly. And that’s the cosmic speed limit for you, me, and everything else made of matter!


Comments

Fascinating stuff. Terrific graphs and charts. Don’t get me wrong. I understood almost NONE of it, but I am convinced you do, and that makes me very happy that science is moving the world’s knowledge forward at an awe-inspiring pace. Please keep doing what you do so well (and it sounds like you enjoy your work!).

This confuses me a bit, because one of the particles – the photon that’s part of the CMB – is massless. If two massive particles hit each other, the available energy for creation of new particles is computed from their combined inertial rest-mass frame, right? So isn’t it the case that the proper frame of reference here is that of the massive particle, in which it has zero kinetic energy?

Is the reason for the GZK limit that the CMB in one direction is blue-shifted so far that its photons have the energy to create these pions? If so, does the CMB effectively provide a sort of preferred reference frame for the universe which relativity would deny in its absence?

In classical mechanics, you work in the center-of-momentum frame for collisions. In relativistic mechanics, as long as you use relativistic momentum, you’re ok using whatever frame you want.

The quick-and-dirty formula — which comes about as a first order Taylor series expansion of the full relativistic treatment — tells you that when the quantity Sqrt(2 * E_CMB-photon * E_cosmic-ray) exceeds the mass of a pion, you’re going to make one. If you wanted to boost sufficiently into the center of momentum frame, you would get the same answer, only — as you intuit — the energy of the photon would be blueshifted incredibly, and the energy of the cosmic ray would be much lower.
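Plugging numbers into that quick-and-dirty formula lands right at the quoted cutoff; here the CMB photon energy is taken as roughly k_B × 2.725 K:

```python
m_pi0 = 134.977e6     # neutral-pion rest energy, eV
E_photon = 2.35e-4    # typical CMB photon energy, eV (~ k_B * 2.725 K)

# sqrt(2 * E_photon * E_proton) > m_pi0  =>  E_proton > m_pi0^2 / (2 * E_photon)
E_threshold = m_pi0 ** 2 / (2 * E_photon)   # ~4e19 eV, the GZK scale
```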

But the CMB does provide a reference frame, in that regard, which defines “the Universe at rest relative to cosmic expansion”, which we’re moving through at many hundreds of km/sec, which is why we see the CMB dipole we do.

The units of energy in HEP are abbreviated as “eV”, but they are written out as “electron-volts”, without a capital “V”. This follows the SI convention that base units are spelled out in lowercase, but the abbreviation is capitalized only when the unit is derived from a proper name (the non-SI “litre”, abbreviated “L”, being the only exception).

The GZK cutoff does indeed arise from neutral pion production (via gamma p -> Delta+ -> p pi0), but that is not because the pi0 is the “lightest particle you can create.” In fact, inverse bremsstrahlung (gamma p -> p e+ e-, where the proton is the “beam” particle here) starts at a threshold of just a few x 10^17 eV, two orders of magnitude lower.

The difference between those processes is the proton’s energy loss: brem only takes a couple of per mil of the proton’s energy, but the Delta+ resonance eats something like 20%!

I’ve always had a hard time with eV. I have no reference for them other than Michael’s last sentence. It would be nice to see a chart that put some of these eV values in human terms, so that I might at least attempt to wrap my head around the units of measurement we are using.

That is really interesting! I’ve never done the full calculation of interaction cross-sections at those energies, just some naive approximations. It makes a lot of sense to think that combination of interactions at various energies would be what dominates, but thanks for that very informative comment!

crd2, I think Sean really nailed the CP-violation: it’s there, it’s great that we measured it, it’s exactly what the standard model predicted, and it can’t explain our Universe’s matter/antimatter asymmetry. For a little humanity on what an electron-volt is, yes, it’s the amount of energy you’d gain accelerating one electron through a potential of one Volt, but it’s also going to take that electron from rest and turn it into an electron moving at ~593 km/s because you gave it an energy of 1 eV.
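That 593 km/s figure is just the classical kinetic-energy formula, which is a fine approximation at 1 eV since the speed is far below c:

```python
import math

m_e = 9.1093837e-31     # electron mass, kg
eV = 1.602176634e-19    # joules per electron-volt

# KE = (1/2) m v^2  =>  v = sqrt(2 * KE / m), valid here because v << c
v = math.sqrt(2 * 1.0 * eV / m_e)   # ~5.93e5 m/s, i.e. ~593 km/s
```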

My version of Excel can’t even store that number in one part, but if I’ve done my algebra right, a spaceship traveling through the universe at the GZK limit would have a time-dilation factor of around 5 × 10^10: more than a year would go by outside for every millisecond of shipboard time. Not bad!

I’m a lover of all things science, and enjoyed the post. I’m smart enough to know that I’m too dumb to grasp all this yet too dumb to accept some accepted theories (because I can’t get the answers I need to rationalize the trickier points in my head). For instance, I refuse to accept that increased speed slows time (cesium, schneezium…). I would like to be the one who figures out the ‘light as a wave or particle’ conundrum, but I don’t have any equipment. By the way, has the reaction speed of quantum pairs been measured? Food for thought…

@Marshall: The CMB dipole corresponds to a motion of about 200 km/s. The uncertainty in the energy scale for the data points near the GZK cutoff is much larger than that. Any directional difference in the spectral shape at the high end is going to be unmeasurable — consistent with no difference at all, within the uncertainties.

@Koogie Buffalo: The “argument from ignorance” is a well-known fallacy. “If I can’t understand X, then X must be wrong.” Your experience, which is what your intuition is based on, is limited to (a) large, non-quantum objects; (b) extremely low velocities; and (c) weak gravitational fields. You should not _expect_ to have a good intuition for the behaviour of small quantum systems, nor of the behaviour of extremely fast moving objects, nor of the behaviour of space and time in strong gravitational fields.

To understand those things, we use mathematical theories backed up by experimental test and verification. If you want to understand them yourself, then you need to invest the time and effort to study and learn those mathematical theories, the existing experimental data which support them, and the experimental techniques which you can use to explore new areas.

Hi Ethan,
very nice page. Wanted to send you a private message but couldn’t find your email. The cosmic ray flux graph you’ve put in is from Simon Swordy (as actually written in the graph), who unfortunately left us three years ago. Nice to see his graph still around, and would be nice to have his name attached to it.
BTW, it may very well be that the end of the spectrum seen at the highest energies is not the GZK cutoff but the end of the accelerating power of the sources. That may actually be the preferred scenario, given the Pierre Auger Observatory mass-composition measurements at the highest energies.

How can one be sure that the cosmic background radiation is not light from galaxies from far away? In other words, the more we look at higher redshift, the more distant galaxies we can see. Isn’t it reasonable to expect the darkness of the sky to be more and more filled with stars and galaxies the further in redshift we go?

@Serge D: You are invoking “Olbers’ Paradox” (http://en.wikipedia.org/wiki/Olbers%27_paradox), which I invite you to read about. We can definitely observe galaxies from extremely far away, and those galaxies are observed in visible and infrared wavelengths, not in microwaves. We also observe, in those “ultra deep field” campaigns, that the galaxies observed are discrete sources, easily distinguished from the continuous background.

The cosmic microwave background is a separate phenomenon. We know, from direct measurements, that it is an extremely accurate (to within one part in 100,000) blackbody radiation spectrum (http://en.wikipedia.org/wiki/Black-body_radiation) corresponding to a temperature of 2.72548±0.00057 K (degrees above absolute zero). It is not possible for the visible and UV spectrum of a galaxy, redshifted, to mimic a continuous blackbody spectrum.

I would encourage you to research the data, either in the form of non-specialist review articles (e.g., Wikipedia) or actual peer-reviewed papers, in order to confirm this conclusion for yourself.

To Michael Kelsey RE: “If I can’t understand X, then X must be wrong.” Perhaps I have mistaken your arrogance as rudeness, but my argument is not that X (and Y and Z, as well) are wrong, just that there is more to X, Y, and Z than is currently known, and that some confirmed results have been misinterpreted, due to these unknowns. I’m happy to admit my ignorance and stubborn enough to still believe we’re missing the elusive key that will change everything. I just hope I live long enough to witness a valid unified theory that will support some of my apparently outlandishly naive hypotheses. Or maybe I’m just deluded.

” but my argument is not that X (and Y and Z, as well) are wrong, just that there is more to X, Y, and Z than is currently known, and that some confirmed results have been misinterpreted, due to these unknowns.”

But if X=”all physical clocks slow down by a factor of 1/sqrt(1 – v^2/c^2) in an inertial frame where they are moving at speed v”, or “all processes can be described by laws of physics which are Lorentz-symmetric”, then you *are* saying that X is just wrong, no? There’s no “interpretation” about it, these are just predictions about quantitative physical measurements. And there is abundant experimental support for both–if you think it’s just a matter of cesium clocks as suggested by your earlier comment, you should understand that the evidence is much, much broader, covering basically any experiment involving things that move at high speeds (pretty much any result having to do with particle colliders provides evidence for relativity, since quantum field theory is relativistic by nature).

As far as moving clocks running slow, the easiest everyday evidence is found in the GPS system you use in your car.
GPS will not work accurately unless the time dilation caused by the satellites’ orbital speed (and the weaker gravity at their altitude) is taken into account in the calculations.

Self absorbed post – on average I understand about 25% to 75% of Ethan’s posts, which is absolutely brilliant. For 10% it would still be a great return on effort. Every now and again a few things I didn’t understand from previous posts come together. Love this blog to bits.

Thanks for the informative article and thanks for Michael Kelsey’s clarifying comments! Much appreciated!

To place the SI nitpick in context, it is a device to suppress unnecessary error akin to the one that doomed a US Mars mission. The capitalization of abbreviations when they are proper names is the remaining honoring of the person.

In the same context, there have been simplifying conventions (which I was taught), though I think they were dropped or never made it into generic versions. You still don’t give “10 eV” a plural s, no “10 eVs”, which simplifies (and here avoids conflict). Some versions went further and recommended pronouncing it “10 electronvolt”, the rationale being that the unit is singular and the number carries the plural implicitly.

@Koogie Buffalo:

“Perhaps I have mistaken your arrogance as rudeness,”.

Science is elitist, but it has also been found to be correct by its means of self-correction, the marketplace of ideas. If there is arrogance in stating what we know (the ideas that won that marketplace by being the most highly prized), it is arrogance based on verifiable facts.

“but my argument is not that X (and Y and Z, as well) are wrong, just that there is more to X, Y, and Z than is currently known, and that some confirmed results have been misinterpreted, due to these unknowns.”

Any way you slice that, what you are saying is that X, Y, Z is wrong.

But the main problem is that your argument is based on the purported existence of unknown unknowns. What we actually have are known unknowns; however it turned out that way (who would expect finite resources to converge on absolute facts among, a priori, infinite possibilities?), it doesn’t matter. The laws underlying the physics of everyday life are completely understood:

“But there’s no question that the human goal of figuring out the basic rules by which the easily observable world works was one that was achieved once and for all in the twentieth century.

You might question the “once and for all” part of that formulation, but it’s solid. Of course revolutions can always happen, but there’s every reason to believe that our current understanding is complete within the everyday realm.”

[Sean Carroll, theoretical physicist.]

Meaning, it is highly unlikely you can replace everything we know with something else entirely. When the OPERA experiment looked for either problems with relativity (faster-than-universal-speed-limit neutrinos) or measurement error, they found measurement error.

To sum up: the relativity regime of the vacuum the cosmic rays travel through is everyday, the energy of the particles is less than that of the everyday car you drive, the blackbody radiation of hot bodies (like the Sun) that testifies to quantum nature is literally an everyday sight, and the relativistic combination with quantum mechanics that gives particle physics its quantum fields is everyday too (or so I assume).

So there is nothing there that touches relativity or quantum mechanics.

It is, by the way, the quantum fields that bury the presumed “duality” between particle and wave (or so I assume, not having studied them):

“A particle is a nice, regular ripple in a field, one that can travel smoothly and effortlessly through space, like a clear tone of a bell moving through the air.”

“Even to say a particle like an electron is a ripple purely in the electron field is an approximate statement, and sometimes the fact that it is not exactly true matters.”

“This goes on and on, with a ripple in any field disturbing, to a greater or lesser degree, all of the fields with which it directly or even indirectly has an interaction. [See Fig. 7.]

So we learn that particles are just not simple objects, and although I often naively describe them as simple ripples in a single field, that’s not exactly true.”

Dark or black in biology refers to the colour we see in the absence of light within the limited range of wavelengths perceived by the human eye. The “dark or black” that we see is an artificial colour created by the human brain (like pink, gray or white). I guess in physics a dark sky has another meaning than in biology, i.e. no photons received from a sky area regardless of their wavelength. In that sense, no part of the sky is really dark, since you can always find some photons at some wavelength, whether microwave or not, especially if we observe long enough (days, weeks, months, years). Now, when visible light is redshifted we find discrete sources (quasars, galaxies). When we look at the microwave universe, the photons received cover the entire sky, if I understand well. Other diffuse background light, like the cosmic infrared background, might be linked to specific sources. How can one be sure that the same story will not be told once the microwave radiation is mapped with higher precision?

@Serge D: “Black-body” is a specific, technical term in physics. Since you are not familiar with that term, please read the Wikipedia article I referenced, which explains what a black-body is, presents the specific mathematical shape of the spectrum, and provides references for further reading.

Your hypothesis has already been demonstrated to be wrong by the detailed measurements of the black-body spectrum of the CMB, which cannot be reproduced (and certainly not reproduced to one part in 100,000) by discrete high-energy sources scattered over the sky.

“Let no one ignorant of geometry enter” — over the door of Plato’s Academy

Of course, there’s no help for the willfully ignorant — they have formed America’s long tradition of anti-intellectualism which shows no sign of going away.

Nevertheless, responsible popular and semi-popular books can encourage the young to learn and the not-well-informed 21st century American to appreciate advanced physical science. The next three, for example,

Paul Nahin. Time Machines — time travel in physics, metaphysics and science fiction. AIP Press. 1998.
Older, but covering the same landscape as the above up to 1998. Also with mathematical appendices giving algebraic derivations in more detail than Everett and Roman.

Dray, Tevian. The geometry of special relativity. CRC Press 2012.
More narrowly focused and more mathematically demanding — Dray’s swift moving, slender volume of 120+ pages will outpace and challenge a novice — but it offers a review of special relativity with a well-motivated exposition of hyperbolic functions and a refreshing simplification of creaky formulas to match the underlying geometry.

Now one intermediate volume for applied mathematical amateurs. Among books which are not textbooks — which are aimed at a mathematically literate (that is, numerate) readership, Paul Nahin has written a series of volumes for Princeton U Press. Among which:

Dr Euler’s fabulous formula. Princeton 2006
contains a 160 page tour de force exposition of Fourier series and Fourier integrals, with a high ratio of symbols to text. Nahin requires his readers to have a background: two years of calculus, a course in differential equations, elementary matrix algebra, and elementary probability theory. Not for everyone obviously, but there is not one iota of pretension or assumed superiority from Nahin — applied mathematics with no apology for short-cuts.

Finally, two “elementary” texts for self instruction or as an alternative to crappy assigned texts on Quantum Mechanics and General Relativity:
N. Zettilli. Quantum Mechanics: concepts and applications 2nd ed Wiley 2009.
filled with worked-out examples — high ratio of symbols to text — it’ll test your calculus and algebraic skills too. Fresh approach starting with matrix mechanics and Dirac’s notation for finite bases — puts matrix algebra to work and saves traditional Schroedinger wave equations until they too can be easily assimilated into Dirac notation. Not enough has been said about how felicitous notation speeds cognition.

B. Schutz. A first course in general relativity. 2nd ed 2009.
Schutz provides the fundamental mathematical tools and theoretical infrastructure in a scant 200 pages. Rather than impose textbook tradition with unwieldy and ugly contravariant tensors — Schutz follows the mother ship Gravitation (MTW 1973 1,200+ pages) in a geometricized exposition in which vectors meet their soul-mate one forms in tensorial bliss. His rather offhand introduction to Einstein’s index convention can be augmented by doing the “index-gymnastics” in Gravitation.

my argument is not that X (and Y and Z, as well) are wrong, just that there is more to X, Y, and Z than is currently known, and that some confirmed results have been misinterpreted, due to these unknowns

No doubt there are many things about our current scientific theories that will turn out to be wrong. After all, all our past scientific theories have turned out to be wrong. 😉

But there are several points worth considering:
1. As models/approximations, our theories have tended to get better. Maybe relativity is wrong, but we can be pretty sure it’s a better or more accurate approximation than Newtonian mechanics or any other idea we’ve previously held.

2. A superior replacement theory is highly unlikely to predict that stuff we’ve already observed to occur does not occur. Because if it predicts something radically different from what we observe to be true, it’s not a superior theory, is it? So, in the same way that NM gives the same answers as relativity at low speeds and gravities, we can be pretty certain that if relativity gets replaced tomorrow, the replacement will give the same answers as relativity for particle behavior that we’ve observed to be fully consistent with relativity. IOW, the replacement is very likely to also predict that clocks run differently at different gravitational strengths, etc. Because this is an observed occurrence, and a “better theory” is going to have to predict that as well as new things.

3. There is currently no better theory. If you know of one, please mention it.

4. Science will continue to use the best available theory until a better one comes along. On its own, the statement “I think it’s wrong” is scientifically worthless/useless at this time, even if your opinion turns out correct. For your complaint to have scientific use or merit, you need to propose a competing theory that (at a minimum) explains what we already see to be true plus makes some novel prediction (that relativity doesn’t make) that we can test.

Ethan, if two particles are each moving at very near lightspeed toward each other, is one moving at lightspeed relative to the other? Or do they both have to be traveling at lightspeed to reach relative lightspeed?

If either one moves at lightspeed, then their relative velocities will be seen to be the speed of light from either one’s viewpoint. But if both are slower, then both will see the other as being slower-than-lightspeed.
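This is the relativistic velocity-addition rule, w = (u + v)/(1 + uv/c²); a minimal sketch (the 0.99c inputs are just illustrative, not from the thread):

```python
C = 299_792_458.0   # speed of light, m/s

def add_velocities(u, v, c=C):
    """Relativistic velocity addition: the result never reaches c
    unless one of the inputs is already exactly c."""
    return (u + v) / (1 + u * v / c ** 2)

w = add_velocities(0.99 * C, 0.99 * C)   # still below c, not 1.98c
```

Two particles each at 0.99c approach each other at about 0.99995c, and feeding in c itself returns exactly c, as the comment says.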

St. Thomas teaches that angels travel at the speed of thought, meaning they can travel anywhere in the universe instantly with no passage of time. Compared to that velocity of speed, the speed of light (albeit impressing) seems rather draggy.

I say you can cheat the cosmic speed limit by insulating yourself from spacetime. Don’t try to beat time, but let it flow around you: imagine you are in a stream, flowing with it. Form a bubble around yourself and be still; remove the bubble and you’re further along in time. Because the universe is expanding, it would be futile to try to reach a vastly far-away point, drop out, then pop back in.

Light, in a vacuum, is traveling at 299,792,458 meters-per-second except when it isn’t. When it isn’t traveling at 299,792,458 meters-per-second it is going slower or faster… which according to our wonderful manipulation of numbers is impossible… impossible only because then it would not fit the equations… the equations which have to equal something or we’re really, really in trouble…
Of course that is when the miracle happens or not. Kind of like atoms made of 99.9999% space and the other 0.0001?… is also space. So, just exactly what are we talking about here???
Oh, of course the men behind the curtain… pay no attention to them. They do not exist… except of course, when they do.
2+2 does not always equal 4…. it depends on a variety of other factors that may or may not be given or included… don’t ya love it?
coffee anyone?

Then how come the Plejarans get here in 7 hours, or in a millionth of a second, depending on the ship in use to cover the 550 light years. .How? Because they are 12,500 years ahead of us on tech. that’s how. http://www.thayfly.com

You are entitled to your own ideas. However, you are not entitled to your own facts. The simple fact is, whether you accept it or not, time dilation has been directly observed. As others have pointed out, GPS doesn’t work without accounting for time dilation, certain subatomic particles formed from cosmic rays don’t have sufficient lifetimes to be observed in ground based laboratories unless time slows down for them because of their motion relative to the earth, and probably most convincing for you, actual clocks (very accurate atomic ones) were flown around the world on airplanes and their times compared with stationary clocks. In all cases, observations are consistent with relativity.

It’s really just a natural consequence of the fact that the speed of light is the same for all observers. That fact was also proven by observation, namely the Michelson-Morley experiment. So far, all observations have supported relativity. It’s possible that there will be some new observation that is not consistent with relativity, but there’s essentially no chance that time dilation will go away; it’s been directly observed, so any new theory that denies time dilation cannot be accepted since it would conflict with observation.

The cause of speed/energy limitation of this Universe is in the rate of its space expansion. nothing can move faster than the ‘speed’ of space expansion, not even the light.

Whether the speed of light is exactly the same as the Universe’s rate of expansion, 299,792,458 *new* meters per second (cubed, or more likely squared, due to the black hole’s surface information bounding condition, but that’s another topic), is a question that depends on the value of the initial latency (t0) and the Tau parameter of the underlying space/surface (low-pass filtered space; check the engineering equations, as well as particle potential-barrier tunneling equations).

In other words, whether initial latency (t0) truly equals 0 or not is still undecided. It also most probably doesn’t make any practical difference.

What may be interesting to ponder is why black holes preserve their whole history, from the very moment they were formed. Does information at the 0- boundary get frozen in the black hole’s interior, and what happens to this information as the black hole shrinks by radiating away its energy? Does the black hole’s history start unraveling, running backwards in a manner of speaking, without anyone noticing anything?

Funny things, these black holes, aren’t they?

Any intelligence frozen inside a black hole would never know if its history ran forward or backward. Its memory would always contain only previous histories, never the future ones that were potentially radiated out. And if one were, purely theoretically speaking of course, able to expand and shrink a black hole at will, one could, again purely theoretically, manipulate the black hole’s histories. Anyone inside that black hole would be none the wiser.

In a way, a quick-save feature is built in by design, but a quick load demands deletion by radiation.

I have a problem with the “massless particle” idea. Photons travel at the speed of light because they are supposed to be massless. I am having a hard time with this concept. If something exists, then it must have mass. If it can be seen or detected, then it exists, and therefore it has to have mass. I am no physicist, just a normal guy trying to understand a basic concept, looking at it from the side of reason.

“The cause of speed/energy limitation of this Universe is in the rate of its space expansion. nothing can move faster than the ‘speed’ of space expansion, not even the light.”

Sure it can. The expansion is about 74 km/s/Mpc. There is about 4 cm between the tip of my thumb and my hand, which is about 3.5E-25 Mpc, so the expansion between my thumb tip and hand is about 2.3E-26 m/s. C between the tip of my thumb and hand is not just bigger than that, it’s 35 orders of magnitude bigger than that. Which qualifies your statement as a candidate for the ‘not even wrong’ category.

One is invariant, the other varies with the distance between two points. I am not sure why you are equating them or why you think one fixes the other.
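The arithmetic in the comment above is easy to check. A minimal sketch in Python (the 74 km/s/Mpc figure comes from the comment; the megaparsec conversion is a standard constant):

```python
import math

# Hubble expansion speed across a 4 cm gap, compared to the speed of light.
H0_KM_S_MPC = 74.0            # Hubble constant quoted in the comment, km/s/Mpc
MPC_IN_M = 3.0857e22          # metres per megaparsec
C = 299_792_458.0             # speed of light, m/s

H0_SI = H0_KM_S_MPC * 1000.0 / MPC_IN_M   # expansion rate, per second
d = 0.04                                  # 4 cm, in metres

v_expansion = H0_SI * d                   # recession speed over that gap, m/s
print(f"expansion speed over 4 cm: {v_expansion:.2e} m/s")

orders = math.log10(C / v_expansion)
print(f"c is larger by ~{orders:.0f} orders of magnitude")
```

This gives roughly 1E-19 m/s, with c larger by about 27 orders of magnitude, which is in the spirit of the partial retraction further down the thread.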

Actually that’s very close to a real solution to General Relativity that allows FTL travel — the Alcubierre Drive! It essentially creates a “pocket” of spacetime in which your spaceship can sit, completely motionless so as not to offend the speed of light at all, while spacetime around the pocket expands and contracts in such a way as to move the pocket at super-luminal velocity, dragging you with it. It is, at the highest level, basically the Star Trek Warp Drive (which was actually Alcubierre’s inspiration).

While this is a real, legitimate solution to the equations of GR, there are a couple of problems with actually doing it in our universe, namely:

1. It would require a ridiculous amount of energy.
2. Much of that energy has to be in the form of “negative mass”, which isn’t known to exist.
3. The solution only describes an existing Alcubierre region. How you actually go about creating one within ‘normal’ space is unknown.
4. The bubble inside the drive is causally disconnected from the rest of the universe. So stopping and getting off at your destination is a problem.

You’ll be pleased to know that some bright souls recently worked out that you could create the drive using a much more reasonable amount of energy — the mass-energy of, say, Jupiter, rather than that of many stars.

So now that the practical engineering problem has been solved, all that’s left are figuring out the theoretical impossibilities! Wait…

Mass is a property, like charge or spin. Some particles have no electric charge. Some have spin 0. Some have mass 0.

Possibly this issue is linguistic – maybe ‘particle’ is not the best word to use. Unfortunately I’m not sure there is any better term; normal language (at least normal English) doesn’t have a concise, intuitive term for a fundamental, um, ‘thing’ that doesn’t have mass.

“If something exists then it must have mass. If it can be seen or detected then it exists therefore it has to have mass. I am no physicist just a normal guy trying to understand a basic concept looking at it from the side of reason.”

The way out of this seeming paradox is to realize that your first premise/assertion is wrong; things can exist without mass.

***
Partial Retraction: my math in @47 is very wrong – I think I divided something when I should’ve multiplied. The real speed of expansion of space over a 4 cm distance is about 1E-19 m/s…Notaphysicist is only wrong by ~27 orders of magnitude, not 35. My, um, apologies.

I thought spacetime was space and time linked together, as proven by the atomic-clock experiment (two identical clocks, one put on a plane, the other left on the ground; the airborne clock was a bit slower after the flight). My understanding was that this had something to do with traveling in, or counter to, the direction of the universe/direction of time, but after reading your “size of the universe” entry I think I must be very wrong. However, time does vary with speed and motion inside spacetime, so how does this affect the speed of protons? (and photons?)

Also (sorry if this is slightly off topic, but it’s hard to talk about speed without size in a spacetime): how does the expansion of the universe affect the speed of matter and non-matter particles? Until recently, I thought that since the universe was 13.7 billion years old (more or less), it was 13.7 billion light years “big,” because space and time are one. I also thought the universe was expanding from an invisible unknown center, and that if you flew a spaceship very fast in the opposite direction of the universe you would go back in time (and get younger until you’d vanish). I was very wrong on that and now I’m very confused. Apparently it’s 46.5 billion light years (just for fun I divided 46.5 by 13.7 and got 3.394160584… rounded, that’s 0.25 off pi, but that may mean absolutely nothing). If the universe can expand faster than the speed of light even when everything in it cannot, what is the name of the special force acting on the whole universe, and how does it affect the speed (or apparent speed) of particles? I’m very interested and very confused.

Alissa, I’m not a physicist, but in lay terms this is my understanding (which may also be “not even wrong,” so any working scientists here are welcome to slap my wrist for this one):

Some of the trouble comes from the linguistics of the word “particle,” which we usually understand to mean “something very small but having measurable volume,” for example a particle of soot that we can examine under a microscope and measure its length, width, and depth.

But when used in physics and referring to photons, the way I think of a “particle” is as a geometric point, having zero dimension: no length, no width, no depth: almost a kind of mathematical abstraction. We can “draw” a point on a piece of paper by making a dot, but the dot is just a visualization, not an actual representation of the real nature of the point.

The photon doesn’t have dimension: no length, width, or depth, but it does carry the electromagnetic force, just as electrons do when passing through wires.

The way to begin to understand this, is to start by realizing that the word “particle” in its normal usage produces something like an “optical illusion” in our thinking: we’re so accustomed to thinking of particles as solid objects, that it’s difficult to unhook from that idea when dealing with photons and the math that describes them.

Here’s another analogy for that:

You can draw a circle on a piece of paper, and then get some string and make a matching circle, and then stick your fingers into the circle of string and pull it into a square. Then you can use something smaller than your fingers to make the square appear to be a very accurate square. You can do simple arithmetic to calculate the area of the circle you had to begin with, and also to calculate the area of the square. If you’re doing architecture, an exercise like that, done on a computer, might be sufficient to compare a building with a square footprint to one with a circular footprint.

BUT… in math, it doesn’t work! In math, there is no exact construction (with compass and straightedge) that will do the same thing for you — hence the expression “you can’t square the circle” — because pi is transcendental. And there are plenty of people over the course of history who have tried, and who believed they got it, but they didn’t really. Some of them went quite crazy believing they got it. The reason is that your empirical circle (made of string) and the square you made from the same length of string are not identical with the precise mathematical values that describe a circle and a square as abstract mathematical objects. Math is too precise to allow for the tiny variations in our model made of string, or its computer equivalent.

It’s the same thing with particles and points. A dot on a page is a visualization for a point, but an actual point in the mathematical sense has no visual equivalent such as a dot. Even though photons reach your retina and convey the visual impression of light and color.

Re. the airplane/clock experiment:

What you said about “traveling counter to the direction of the universe or time” isn’t correct. What’s correct is “traveling in any direction.” (The universe doesn’t have any one direction: it’s expanding in all directions simultaneously.) The mere fact of motion is what causes the change in the measurement of time. (In the actual experiment the eastward and westward flights gave different offsets, because the Earth itself rotates and altitude matters too, but motion in any direction produces the kinematic effect.) Theoretically you could repeat the experiment with a very slow airplane instead of a jet (or even with an automobile on a circular racetrack), but you’d have to stay in the air (or on the racetrack) for much longer to observe the difference caused by the slower motion.
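The size of the airplane effect can be sketched with the low-speed approximation of time dilation. This is only the kinematic piece (the real Hafele–Keating experiment also had a comparable gravitational contribution from altitude), and the speed and duration here are illustrative assumptions:

```python
# Kinematic time dilation for a jet flight, low-speed approximation.
C = 299_792_458.0          # speed of light, m/s
v = 250.0                  # assumed jet cruising speed, m/s
t = 10 * 3600.0            # assumed 10-hour flight, in seconds

# For v << c, the Lorentz factor is ~1 + v^2/(2 c^2), so the onboard
# clock lags the ground clock by roughly t * v^2 / (2 c^2).
lag = t * v**2 / (2 * C**2)
print(f"onboard clock lags by ~{lag * 1e9:.1f} nanoseconds")
```

With these numbers the lag is on the order of ten nanoseconds, which is why atomic clocks are needed to see it.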

Re. the universe:

There wasn’t a “center.” This is a tough one to get a grip on, because it’s too easy to think of the Big Bang as an “explosion,” and an explosion always has a center: for example with a piece of dynamite blasting a boulder. The contractor sets off the dynamite and you see the burst as the pieces of rock fly away from the center of the explosion, and you see a cloud of smoke doing likewise.

But the expansion of the universe is the expansion of spacetime itself: there is no “outside” vantage point from which to watch. So it’s as if the “explosion” is happening everywhere that exists, simultaneously. By analogy to that cloud of smoke when the contractor sets off the dynamite: the points inside the cloud of smoke are “all there is” to the universe, except that the explosion isn’t coming from one place (the dynamite), it’s coming from every point in space simultaneously.

Sometimes it’s useful to just try to get a visualization of a phenomenon, even if you don’t really understand it yet, and then plug it into a larger theory and try to get a visualization for that in turn: there comes a point where the whole thing “makes sense,” and then you can go back to the parts that didn’t make sense before, and see how they fit into the bigger picture. In other words, don’t let yourself get stuck on one particular step you don’t understand: keep going to the next step and the next from that, and eventually the piece you were stuck on will make sense in retrospect, and you won’t be stuck.

I had to do something like that to understand the Copenhagen interpretation of quantum theory well enough to know that it has nothing to do with “the presence of the observer’s mind” and everything to do with “the minimum conditions necessary to make a measurement.” But in retrospect it seems obvious.

“However, time does vary with speed and motion inside spacetime, so how does this affect the speed of protons? (and photons?)”

No time passes for a photon. No distance is crossed, either (Lorentz contraction), so the speed of the photon, in its own frame of reference, is 0/0, which is defined only if you know how the values approach zero in both cases; and for Special Relativistic effects, that works out to be the speed of light in a vacuum, c.

(An example of defining 0/0: let’s say the expression is x/x. It is ALWAYS 1, correct? Well, what if x = 0? Then the expression is 0/0, which is “undefined”. But we KNOW that x/x == 1 for every other x. In advanced maths, the solution is to take x/x in the limit x → 0, and the proof of this can be shown. 0/0 is undefined because, in that statement alone, there is no definition of how you got a zero on either end.)
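For particles that do have mass, the same machinery gives a finite Lorentz factor. A quick sketch using the LEP electron speed quoted in the article (3.6 mm/s shy of c); the cross-check against the quoted beam energy is approximate:

```python
import math

C = 299_792_458.0            # speed of light, m/s
v = C - 3.6e-3               # LEP electrons: 3.6 mm/s shy of c (from the article)

beta = v / C
gamma = 1.0 / math.sqrt(1.0 - beta**2)   # Lorentz (time-dilation) factor
print(f"gamma ~ {gamma:.3g}")

# Cross-check: gamma should be close to beam energy / rest energy,
# i.e. 104.5 GeV / 0.000511 GeV, both figures quoted in the article.
print(f"energy ratio ~ {104.5 / 0.000511:.3g}")
```

Time aboard such an electron runs about two hundred thousand times slower than in the lab, and lab distances are contracted by the same factor.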

“How does the expansion of the universe affect the speed of matter and non matter particles?”

If you place a dot on an expanding table and expand the table, the dot, despite not moving AT ALL, is being translated in your space (though not in its own) at some speed.

This is the same thing with the expansion of space and the objects within it.

The scenario really requires you to do the maths, though, since the language we use for everyday use was never intended to be able to explain this and our everyday experience never reaches anywhere like this scenario to give us a mental picture.

You really are going to have to take it as given or learn the maths to do the proof yourself.
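The table analogy also answers the earlier question about the universe “expanding faster than light”: since the recession speed grows with distance, there is a distance at which it reaches c. A rough sketch, assuming the 74 km/s/Mpc figure used earlier in the thread:

```python
# Distance at which the Hubble-law recession speed reaches c.
C_KM_S = 299_792.458       # speed of light, km/s
H0 = 74.0                  # assumed Hubble constant, km/s/Mpc

# Hubble law: v = H0 * d, so v = c when d = c / H0.
d_hubble_mpc = C_KM_S / H0
print(f"Hubble distance ~ {d_hubble_mpc:.0f} Mpc")

# In light years (1 Mpc is about 3.262 million light years):
print(f"that is ~ {d_hubble_mpc * 3.262e6:.2e} light years")
```

Galaxies beyond roughly 4,000 Mpc recede faster than light, yet nothing is moving *through* space faster than c, so relativity is untouched.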

Re. Wow: Actually your analogy to a dot on an expanding table is excellent. I suspect that people can build a scientific/rational worldview on the basis of “learning by analogy” even if they can’t get the maths. For instance once someone gets the piece about the red shift, they’re vaccinated against young-Earth creationism.

One thing I suspect would very much help more of us to understand mathematics, is a “phonetic reading” approach. When I see certain equations, I might know that some of those symbols are from the Greek alphabet, but I don’t have a “sound” for them, so I can’t use audio imaging to overcome the dyslexic errors in my visual processing the way I do with language. But if we had, for example in middle school, a section in math where we learned the sound of all of those characters and symbols, we could “read” the equations out loud, and then read them in our mind’s ear, as it were. That would help immensely.

Also having mathematical symbols built into standard computer keyboards in a manner that’s cross-platform, so we could write equations as easily as writing any other text. At first there wouldn’t even be a need for software to “spell check” one’s equations: the ability to write them at all would be enormously helpful. This is the kind of thing that Bill Gates could tackle, and make it stick by making it standard in Windows, and then releasing the copyright so Apple and the open-source community could use it.

Re. squares & circles: what I was going for, was a bit of vaccination against some of the kookery out there, that so many laypeople get drawn into because it claims to have special knowledge: as far as I know, claims to square the circle are a paradigm case of kookery and I was trying to illustrate how one might fall into it innocently. So if you’re right about r^2 = x^2 + y^2, and it’s not a clever joke of some kind that I can’t parse, then I guess that part gets revised accordingly.

True, the classical problem of squaring the circle does not have a positive solution. When I was in graduate school faculty used to get badly written papers in which people claimed to have accomplished it. They didn’t accept the idea that a proof could demonstrate something was impossible.

Ethan: If the cosmic speed limit today depends on the energy of the microwave-background photons, the speed limit must have been lower in the past, because the energy of the background photons was higher. So if we observe very distant objects, does the speed limit drop to a magnitude that could in principle be measured?

@Semmel: The “cosmic speed limit” (i.e., the speed of light in vacuum) does _not_ depend on the energy of the microwave background photons. So far as we can tell (our best limits come from comparing a large number of spectral absorption lines in distant quasars), the speed of light, and other physical constants, were the same in the distant past as they are now.

The energy of the CMB _was_ higher in the past! That is a fundamental prediction of the Big Bang model, and our observation of the CMB itself was its validation! The cosmological expansion stretches out the wavelengths of the primordial (CMB) photons as they cross the universe. The wavelengths we see today are much longer (and the energies correspondingly lower) than when they were created 13.7 billion years ago.
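The scaling itself is simple: the CMB temperature, and hence the typical photon energy, is higher in the past by a factor of (1 + z). A minimal sketch, assuming today's measured temperature of about 2.725 K and recombination at z of roughly 1089:

```python
# CMB temperature scaling with redshift.
T0 = 2.725            # CMB temperature today, kelvins (assumed measured value)
z_recomb = 1089       # approximate redshift of recombination

# Temperature (and mean photon energy) scale as (1 + z):
T_then = T0 * (1 + z_recomb)
print(f"CMB temperature at recombination: ~{T_then:.0f} K")
```

At recombination the background was around 3,000 K, i.e. roughly a thousand times hotter, and its photons a thousand times more energetic, than today.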

Then I must have understood something wrong. That’s how I got it: the background-radiation photons interact with very fast protons etc. The combined energy of protons and photons must exceed the energy necessary to create new particles, so quantum theory can do its thing and create something different from the proton and photon. If the energy of the background photons is higher, the energy of the protons doesn’t need to be as high as today to get the interaction going. If you say that the “speed limit” doesn’t depend on the energy of the background photons, then I don’t understand it any more. 🙁

The problem is that pair production from a photon alone can conserve energy but not momentum; there needs to be some mass nearby that it can interact with, to “palm off” some momentum and keep the energy balanced, whilst allowing the photons to move at the speed of light.

What I think you’re missing is that redshift will, at a given Z value, stretch the wavelengths by a factor of (1+Z).

Since energy depends on wavelength, and pair production requires you have at least enough energy to produce the matter/anti-matter pair, that will be achieved at lower Z values if the radiation you’re considering is of a higher energy.

Semmel isn’t talking about the speed of light in a vacuum. They were asking about the “cosmic speed limit” discussed in the article, about charged particles being limited in velocity by interaction with the CMB. The one that causes the GZK cutoff.

And it does seem reasonable that when the energy of the CMB was higher that the cutoff would be lower, and I’d be interested what the implications were.

My guess is it makes almost no difference back to when everything was gamma rays. Wolfram Alpha tells me that to affect the energy of interactions near the cutoff by 0.01% (so 1E15 eV), the photon would need to be at a frequency of 1E29 Hz.
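For concreteness, the GZK threshold itself can be estimated from the kinematics of a proton hitting a CMB photon head-on and producing a Delta resonance. A rough order-of-magnitude sketch; the typical CMB photon energy and the illustrative redshift are assumptions, and the particle masses are standard values:

```python
# Rough GZK threshold: proton + CMB photon -> Delta(1232) resonance.
M_DELTA = 1232e6       # Delta(1232) mass, eV/c^2
M_PROTON = 938.272e6   # proton mass, eV/c^2
E_CMB = 6e-4           # typical CMB photon energy today, eV (assumed)

# Head-on threshold condition s = (m_Delta c^2)^2 gives, for E_p >> m_p c^2:
#   E_p ~ (m_Delta^2 - m_p^2) c^4 / (4 E_gamma)
E_threshold = (M_DELTA**2 - M_PROTON**2) / (4 * E_CMB)
print(f"GZK-like threshold today: ~{E_threshold:.1e} eV")

# At redshift z the CMB photons were (1 + z) times more energetic,
# so the threshold drops by the same factor:
z = 9   # illustrative redshift
print(f"at z = {z}: ~{E_threshold / (1 + z):.1e} eV")
```

This lands in the 10^20 eV ballpark quoted for the cutoff, and shows directly why a hotter CMB in the past pushes the threshold down.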

By the way, here is a link to a paper by the HiRes team with graphs that more clearly show the cutoff in their measurements. It can be kinda hard to see the “knee in the curve” with log-log plots of the whole energy spectrum. 🙂

@CB, @Wow, @Semmel: CB is right, and it’s my fault for misconstruing the reference 🙁 The GZK cutoff (the “maximum energy” for cosmic ray protons due to interactions with the CMB) does depend somewhat on the background photon energy, but that dependence is quite soft.

University of Tokyo’s Akeno Giant Air Shower Array – 111 particle detectors spread out over 100 square kilometres – has detected several cosmic rays above the GZK limit. In theory, they can only have come from within our galaxy, avoiding an energy-sapping journey across the cosmos. However, astronomers can find no source for these cosmic rays in our galaxy. So what is going on?

Alan Watson, an astronomer at the University of Leeds, UK, and spokesman for the Pierre Auger project, is already convinced there is something worth following up here. “I have no doubts that events above 10^20 electronvolts exist. There are sufficient examples to convince me,” he says. The question now is, what are they? How many of these particles are coming in, and what direction are they coming from? Until we get that information, there’s no telling how exotic the true explanation could be.

Michael Kelsey wrote (#4, April 26, 2013):
> […] written out as “electron-volts”, without a capital “V”. This follows the SI convention that base units are spelled out in lowercase, but the abbreviation is capitalized only when the unit is derived from a proper name

@Frank Wappler: The exception for “degrees Celsius” is the _opposite_ of the exception for litres: by convention, the word “Celsius” in the unit ought not be capitalized, but it is, historically, as are _all_ of the non-absolute-scale temperature units (degrees X, where X = Fahrenheit, Celsius, or Rankine). Note that the SI unit is just plain “kelvins”, not “degrees Kelvin”, so at least the two cases are consistent.

Regarding energy loss _by_ neutrinos interacting with the CMB, I’m not aware of a sensible published limit. The only way for neutrinos to interact with the CMB is at least at fourth order (a box diagram involving leptons and W’s, analogous to photon-photon scattering), which means it will be highly suppressed.

Now, the _relic_ neutrino background is different. Since neutrinos would have become free-streaming much earlier than photons (at the ~TeV-scale electroweak freeze-out), the CNB or CνB (with a lowercase Greek ‘nu’, not a Latin ‘v’) is predicted to be at a lower temperature than the CMB, less than 2 K.

In principle, there is a GZK-like “cutoff” for cosmic rays interacting with the CNB, but at a much higher energy. Consequently, the neutrino cutoff is irrelevant in this universe, since the photon GZK limit hits first.

Michael Kelsey wrote (May 6, 2013):
> The exception for “degrees Celsius” is the _opposite_ of the exception for litres

For a suitable sense of “_opposite_”, right. It’s an exception to the first part of the convention, as stated (#4, April 26, 2013): “the SI convention that […] units are spelled out in lowercase, but […]”.

(Also, note again that the BIPM table just mentioned recognizes “degree Celsius” as a “coherent derived unit”, thus apparently “of the SI”.)

> Regarding energy loss _by_ neutrinos interacting with the CMB, I’m not aware of a sensible published limit. The only way for neutrinos to interact with the CMB is at least at fourth order (a box diagram involving leptons and W’s, analogous to photon-photon scattering),

I’d consider neutrinos interacting with CMB already by third order processes: neutrino giving off a virtual Z; photon converting to anti-lepton and virtual lepton; virtual lepton absorbing virtual Z, leaving as lepton. Or even: neutrino converting to lepton and virtual W, and so on. (Of course, none of this is strictly scattering; but surely interaction.)

> Now, the _relic_ neutrino background is different.

Of course. I wonder which might be more relevant for neutrinos of initially sufficiently large energy/speed, in terms of (possible) “speed limits” and “traveling through the universe” considered in the ScienceBlogs article above; within your favorite (and/or “standard”) model of distributions of CMB and relic neutrinos.

“The GZK cutoff (the “maximum energy” for cosmic ray protons due to interactions with the CMB) does depend somewhat on the background photon energy, but that dependence is quite soft.” – too bad.. but thanks for the answer! 🙂

Regarding the neutrino background… how could that ever be detected? Our ability to determine the direction from which neutrinos come is quite poor, so we don’t know exactly where they come from. Because of that, we can’t really subtract all the non-CNB sources. So how is this ever going to be measured?

Here “lepton” is to be understood as “lepton with el.-mag. charge”, please.
(This shorthand/jargon can be useful in some circumstances; while neutrinos are of course also classified as leptons, without el.-mag. charge.)

G.
Thank you, that helps immensely. I think I understand now. Even better, the visual analogy I have in my head is so much cooler and more amazing than my previous incorrect one. I think I just visually understood time, and why we can only go forward in it (unlike the space dimensions, which all have two directions: up, down, etc.). I also understood that the universe has no center, and why time slows down if you travel fast. Awesome, I love science! I know it’s obvious when you think about it, but it took me a while to grasp the concept that time “started” with the big bang (at least our time) and that time and the expansion of the universe are essentially the same thing. I still don’t completely understand how we know the universe has expanded to 46.5 billion light years right now, but I’ll tackle that another day.

[…] Space is just too big. The distance between star systems is just too far. The speed of light is too unbreakable. The fuel for energy it would take to get through space is just too much. And finally, our biology […]

My knowledge in this matter is minuscule compared to most around here, but I had a naive question. If light travels at almost the cosmic speed, and light is a wave, doesn’t the photon traverse a lot more distance than when you measure along a straight line? So isn’t the speed of the photon actually way more than what we perceive the light’s speed to be? Where am I going wrong?

Light IS photons, so I’m not sure what you are getting at here. Photons travel at c. They cannot travel at any other speed; they ARE light so they travel at the speed of light by definition. (BTW, that’s not the same as the cosmic speed limit that Ethan is discussing here. That cosmic speed limit is the upper limit for the speed of charged particles and is slightly lower than the speed of light.)

Thanks Sean.
Thanks for correcting my understanding on the cosmic speed limit.
My original question was something I used to wonder all along, as the speed of light, or of anything that travels as a wave, is measured in terms of linear distance. Depending on the frequency of the wave, wouldn’t it travel a much greater distance than the one measured? Like, say, the photon would move much more than the linear distance that is measured, as it would be traversing the path of the wave.

Well, you are being a little dumb, but if you’ve not been told, it’s not a problem.

First of all, there is no line at the top of the wave that the photon travels on. When you draw a wave with a pencil on paper, your pencil may be moving faster than the point along the straight line, but that’s not what the *wave* actually does. And in real waves, there is no pencil.

The linear distance is the distance the wave travels, not the length of the wobbling line you drew.

And for light, the “wave” you see drawn is the electric field strength. That is not the light itself: the field oscillates in place, perpendicular to the direction of travel, and nothing physically zig-zags along that wiggly curve as the light moves.
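The misconception can even be quantified: if a photon really did ride along the drawn sine curve, it would cover noticeably more path per wavelength than the straight-line distance. A quick numerical sketch, with arbitrary illustrative amplitude and wavelength:

```python
import math

wavelength = 1.0     # arbitrary units
amplitude = 1.0      # arbitrary: a drawn wave, not a physical photon path
N = 100_000          # number of piecewise-linear integration steps

# Arc length of one period of y = A * sin(2*pi*x / L), summed piecewise.
arc = 0.0
prev_x, prev_y = 0.0, 0.0
for i in range(1, N + 1):
    x = wavelength * i / N
    y = amplitude * math.sin(2 * math.pi * x / wavelength)
    arc += math.hypot(x - prev_x, y - prev_y)
    prev_x, prev_y = x, y

print(f"drawn path per wavelength: {arc:.3f} (straight-line distance: {wavelength})")
```

The drawn curve is several times longer than the straight line, but the photon does no such thing: the drawn wave is the oscillating field strength, and the light covers exactly the straight-line distance at c.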