"Takes 1 part pop culture, 1 part science, and mixes vigorously with a shakerful of passion."
-- Typepad (Featured Blog)

"In this elegantly written blog, stories about science and technology come to life as effortlessly as everyday chatter about politics, celebrities, and vacations."
-- Fast Company ("The Top 10 Websites You've Never Heard Of")

the importance of being nitpicky

Jen-Luc Piquant and I are all jacked up on caffeine and frustration-derived adrenalin at the APS April meeting here in Dallas, after a fruitless hours-long effort to find a high-speed internet connection that works consistently. We would like to state categorically that the Dallas Hyatt Regency has the worst high-speed wireless internet connection we have yet encountered in our many jaunts to physics conferences.

The signal is weak, and sometimes disappears altogether. Sometimes one gets a signal, but no connection. And when one does get a connection, it is unbearably slow, so much so that we briefly considered resorting to dial-up. (Oh, the horror!) There are rumors of one good "hot spot" in the hotel bar, right by the piano, that seems to work well, so whenever sessions let out, there is a mad rush to that area by physicists desperate to check their email before the next set of papers. It looks suspiciously like a pre-arranged flash mob, when in fact it's more like a weird kind of emergent behavioral phenomenon dictated by dire circumstances of Internet withdrawal.

And this is why we have been delayed in posting something about the ongoing conference. "Really," Jen-Luc huffs, in haughty high dudgeon, "How can we possibly blog under such barbaric conditions?" Nonetheless, we shall try, although we might lack our usual polish. Because while there has been much wailing and gnashing of teeth in the press room about the lack of "real news" at this year's meeting, that doesn't mean there isn't a bunch of really cool stuff going on. It's just hidden in the nooks and crannies, rather than displaying itself brazenly in the center of the town square -- although for some reason, all the sessions on cosmology and dark matter/energy have been standing room only. (People were actually standing on chairs in the hall to hear Cosmic Variance's own Sean Carroll talk about "the future of theoretical cosmology," which was delivered with characteristic panache.)

It occurred to me today that perhaps we focus a bit too much, as science writers, on major results or breakthroughs -- so much so that we miss a lot of the tiny, incremental advances that are constantly taking place, year after year, which eventually add up to the major "newsworthy" results that everyone makes such a fuss about. But what about all the other forgotten, unsung experiments (or planned experiments that sadly never saw the light of day), the fascinating conjectures, colorful minute details, and amusing anecdotes? These, too, are an essential part of physics, and a big part of what makes the field so fascinating.

Okay, I know I sometimes whinge about certain scientists being excessively nitpicky about minute technical details. That's relevant to a discussion of public communication of science. But in the actual practice of science itself, those nitpicky elements are indeed absolutely crucial. And while it's hard to "sell" those kinds of stories to the press, it's not impossible.

I was reminded of the importance of being nitpicky in physics at a press conference yesterday on experimental attempts by Eric Adelberger's group at the University of Washington to find violations in one of the most fundamental aspects of special relativity: Lorentz invariance. (For more specific detail about this experiment, and several others, go here.) That's the bit about the laws of physics being the same for all observers, regardless of frame of reference. It's something we all kind of take as a given these days, but before 1905, it was by no means accepted. Or even obvious. Physicists of prior eras firmly believed that light would show the effects of motion, but experiment after experiment failed to produce this result; Michelson and Morley drove the final nail into the coffin when they, too, failed to observe the predicted effect. Ever since, experiment after experiment has validated this particular aspect of special relativity.

So, if special relativity, as a theory, has already been confirmed, repeatedly, one might ask, why even bother to keep testing? The same question came up earlier this year with the announcement of the most precise experimental confirmation to date of another Einstein workhorse, E=mc^2. To someone unversed in the scientific method -- and they are legion, as evidenced by all those folks who think saying evolution is "just a theory" means it's incorrect -- it seems like a waste of time to keep testing something we already know is right.

This is why: one of the best things about physics is that it never assumes it has all the answers about the universe -- just the best answer we can verify for now. The longer a theory is in play, and the more experimental evidence is compiled to support it, the more likely it is that the theory is correct... and the more emphatically physicists will defend it. But there is always the tiniest possible chance that some experiment, somewhere down the line, will find a violation of a basic principle at, say, the eighth decimal place. And that, as Feynman would say, is when things become "very IN-teresting." Even a slight departure from the expected behavior could signal the start of a new line of inquiry that might one day revolutionize our understanding of the universe. Adelberger's group designs all kinds of experiments to test a wide range of established physics theories. And it just seemed natural to ask: what would it take to dislodge special relativity, or at least shake up its foundations? (The answer appears to be, quite a lot. And we haven't found it yet.)

Granted, this is not the kind of thing that tends to make non-scientists sit up and take notice. Back in 2000, at the APS April Meeting in Long Beach, California, I attended a press conference reporting on recent conflicting measurements of the gravitational constant, affectionately known as "Big G." Adelberger's people made one of those measurements. The scientists on hand were excited about the discrepancy -- to them this was fascinating physics, for good reason -- but to the assembled reporters, and to any lone members of the public who might have wandered by, it all seemed irrelevant. Finally, one reporter from a local newspaper asked (and I paraphrase), "So, why even bother doing this experiment, if you already have a reasonably good measurement that works just fine for all practical applications, and won't be affected at all by this bit of improvement in our knowledge?"

There were many ways the researchers could have answered this question -- most obviously, it could become relevant as theoreticians come closer to devising a theory of everything that incorporates both gravity and quantum mechanics -- but they froze, like deer in headlights, until one of them said, "Um, because it's FUN!" I found the comment charming, actually, because he clearly had enjoyed working on this problem, and it showed. But I was in the minority. Spending precious research dollars re-doing centuries-old experiments so you can more precisely measure a fundamental constant to yet another decimal point sounds a bit dodgy to the average taxpayer.

Fortunately, the press did find an angle: "Earth loses weight!" a BBC News headline screamed, and media outlet after media outlet followed its lead. One of the "applications" of knowing the gravitational constant is that it enables scientists to determine exactly how much the earth "weighs." Doing so is no mean feat, since gravity is so much weaker than the other fundamental forces. Any such measurement must be carefully isolated and performed in a vacuum, although almost nothing can completely shield an experiment from minute outside gravitational influences. (In fact, during one such experiment at the University of California, Irvine, tiny "wriggles" showed up in the data which turned out to be caused by the sprinkler system just outside the physics lab building.)

Context is everything, of course, and there's an excellent account from 1998 about the history behind all of this by David Kestenbaum in Science magazine. A physicist named Henry Cavendish was the first to make this measurement -- by candlelight, no less, how romantic -- using a small suspended lead barbell hanging from a twisting fiber. The tiny twisting motions that he observed revealed the strength of gravity between the masses in his experiment: the barbell's small weights and two larger weights the size of bowling balls.

Some version of Cavendish's torsion balance has been employed to measure G ever since. In fact, if you're patient enough, and reasonably well-versed in physics, you can undertake your own basement experiment to make this exact same measurement. If nothing else, it should serve to demonstrate just how impressive Cavendish's achievement was.
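For the patient basement experimenter, the arithmetic is the easy part. Here's a minimal Python sketch of the standard torsion-balance relation, G = 2π²Lr²θ/(MT²); the specific numbers below (deflection angle, period, masses) are purely hypothetical illustration values, not Cavendish's actual data:

```python
import math

def big_g(L, r, theta, M, T):
    """Estimate G from torsion-balance observables.

    L:     length of the suspended barbell rod (m)
    r:     center-to-center distance between each small and large mass (m)
    theta: equilibrium twist angle of the fiber (rad)
    M:     mass of each large attracting sphere (kg)
    T:     period of the balance's free torsional oscillation (s)

    At equilibrium, the gravitational torque G*m*M*L/r^2 balances the
    fiber's restoring torque k*theta. Measuring the oscillation period
    T = 2*pi*sqrt(I/k), with I = m*L^2/2, lets both the fiber constant k
    and the small mass m cancel out, leaving:
        G = 2*pi^2 * L * r^2 * theta / (M * T^2)
    """
    return 2 * math.pi**2 * L * r**2 * theta / (M * T**2)

# Hypothetical benchtop numbers, chosen only for illustration:
G = big_g(L=0.10, r=0.05, theta=7.3e-3, M=1.5, T=600.0)
print(f"G ~ {G:.2e} m^3 kg^-1 s^-2")  # lands near 6.67e-11
```

The hard part, of course, is the experimental nitpicking: measuring a milliradian twist and a ten-minute oscillation period without the sprinkler system ruining your data.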

The new measurements, while not completely in agreement with each other, nonetheless had a major impact on the calculated weight of the earth. Our pretty blue planet "lost" something on the order of 10 billion billion tons overnight -- just by tweaking the value of one tiny constant by a few increments here and there. "Weight," apparently, is somewhat "relative" as well. And that gave reporters the hook they needed. In fairness, it should be noted that even many physicists are a bit blase about things like making more accurate measurements of G. In Kestenbaum's 1998 article, Clive Speake of the University of Birmingham in England is memorably quoted as saying, "Nobody gives a damn about Big G." (Tell that to the Earth, which lost several dress sizes in 24 hours and gained a big boost in self-esteem.) Kestenbaum himself likened these kinds of experiments to a sort of extreme sport in physics: "the Mount Everest of precision measurement." Why keep making these measurements of G? Because it's there. Who cares if thousands of others have climbed that mountain before?
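To see where the "Earth loses weight" headline comes from: once you have G, the planet's mass follows from the surface acceleration via g = GM/R², i.e. M = gR²/G. A quick back-of-the-envelope sketch in Python (the 0.15% shift in G below is a hypothetical illustration, not the actual revision reported at the meeting):

```python
# Back-of-the-envelope: Earth's mass from G, and how a tiny
# tweak to G "changes" the planet's mass overnight.
g = 9.81          # surface gravitational acceleration, m/s^2
R = 6.371e6       # mean radius of the Earth, m
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2

M = g * R**2 / G                     # rearranged from g = G*M/R^2
print(f"Earth's mass: {M:.2e} kg")   # ~5.97e24 kg

# Suppose a new measurement nudged G upward by a hypothetical 0.15%:
G_new = G * 1.0015
M_new = g * R**2 / G_new
lost_tons = (M - M_new) / 1000.0     # convert kg to metric tons
print(f"Earth 'loses' ~{lost_tons:.1e} tons")  # ~9e18 tons: billions of billions
```

Since g and R stay fixed, any fractional increase in G translates directly into the same fractional "weight loss" for the planet -- hence the overnight diet.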

Of course, sometimes what you get out of a meeting isn't so much from the sessions themselves, or even the organized press conferences, but from the random casual encounters and conversations with scientists that invariably take place. For instance, while hanging out in the hotel lobby coffee bar last night, I started chatting with a few random cosmologists, who graciously answered my sophomoric "why is the sky blue" questions about why entropy equations keep turning up in everything from economic theories to parabolic arches -- and I thank them for their patience and clarity in doing so. Sure, the laws of physics are universal and all, and should therefore apply regardless of the system. But they said there does seem to be something special about the laws of thermodynamics: they are deemed the least likely to be proven wrong by future experiments. (I'm afraid I was just the slightest bit tipsy and am thus a bit fuzzy today as to why that is. So perhaps this isn't the best example of a positive educational encounter.) Which doesn't mean those laws won't continually be tested, at least by would-be inventors of free-energy machines.

I hope, if nothing else, this post will give any non-scientists who are reading it a small sense of why scientists are so obsessed with fine details. Sure, it can be annoying in a casual social setting, or when you're trying to make a much broader point that gets sidetracked by an argument over one tiny choice of wording. But there's a positive flip side to that particular coin. That same precision and attention to detail has given rise not just to grand theories about the universe, but to the inner workings of most modern technology. Except, of course, the wireless network at the Dallas Hyatt Regency. Whoever set that up was clearly not nitpicky enough, and we are paying the price for their slackerdom. A pox upon them, I say. Where are the nitpicking physicists when you need them?

Comments

Re: talking to reporters, Feynman had a great story about what happened when he won the Nobel Prize. He was constantly bombarded with questions about just why he had won, and quantum electrodynamics is not the sort of thing easily summarized in a sound bite. He eventually thought up a stock phrase about electrons and light and eliminating the infinities in calculations ("sweeping them under the rug"). A taxi driver suggested a blunter alternative: "Listen, buddy, if I could tell you in thirty seconds what it was about, it wouldn't be worth the Nobel Prize."

I found this evasion useful myself, at certain times. "Listen, Mom, if I could tell you over Christmas break what I learned, it wouldn't be worth a degree from MIT."

It's a cheap shot, of course. Feynman, Bohr and lots of other people have endorsed the opposite view, or a moderate form thereof: if you can't explain what you're doing to a patient person with some rudimentary curiosity, you probably don't know what you're doing.

If anybody asked **me** why it was important to measure gravity so carefully, I'd say, "Because there's a chance it will show us that there exist directions of movement at right angles to reality." For a quick, popularized overview of this idea, see here:

Entropy, S = -\sum_i p_i \log(p_i), is a measure of how broad a probability distribution is, much like the variance of a probability distribution. (Except it's better than variance because it has nicer mathematical properties.) Energy is like how expensive it is to do something, like the cost of doing business. The laws of thermodynamics relate energy and entropy (cost and spread).
Energy and entropy are such basic quantities that it is not surprising that they can be applied to so many systems.
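To make the "entropy measures broadness" point concrete, here's a tiny Python sketch (the two distributions are purely illustrative) showing that a spread-out distribution carries more entropy than a sharply peaked one:

```python
import math

def entropy(p):
    """Shannon entropy -sum_i p_i * log(p_i), in nats.
    Terms with p_i = 0 contribute nothing, by convention."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

broad  = [0.25, 0.25, 0.25, 0.25]   # uniform: maximally spread out
peaked = [0.97, 0.01, 0.01, 0.01]   # one nearly certain outcome

print(entropy(broad))   # ln(4) ~ 1.386, the maximum for 4 outcomes
print(entropy(peaked))  # ~0.168, much lower
```

The same functional form turns up wherever probability distributions do -- statistical mechanics, information theory, economics -- which is part of why entropy keeps showing up in unexpected places.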

One interesting thing about Lorentz invariance is that it's possible for it to be true at the fundamental level, yet apparently violated at the experimental level, because of something called "spontaneous symmetry breaking" (see http://en.wikipedia.org/wiki/Spontaneous_symmetry_breaking ). This is analogous to what happens when a heated ferromagnet cools off and groups of nearby dipoles in the magnet spontaneously line up, creating "magnetic domains" within the ferromagnet, so that a tiny observer inside a particular domain in the cooled-off magnet would see a magnetic field with a preferred direction in his little "universe". Of course there is nothing in the fundamental laws of physics that causes this direction to be preferred, and the direction would be different in different magnetic domains. Similarly, it's thought that some fields that were symmetrical at the time of the Big Bang could have frozen out with a random preferred reference frame in our region of space (if this was the case, the preferred frame would be different in sufficiently distant regions of the universe, maybe beyond the observable universe), so that Lorentz symmetry might seem to be violated -- you can check out http://physicsweb.org/articles/world/17/3/7 for more details.

Physics Cocktails

Heavy G

The perfect pick-me-up when gravity gets you down.
2 oz Tequila
2 oz Triple sec
2 oz Rose's sweetened lime juice
7-Up or Sprite
Mix tequila, triple sec and lime juice in a shaker and pour into a margarita glass. (Salted rim and ice are optional.) Top off with 7-Up/Sprite and let the weight of the world lift off your shoulders.

Any mad scientist will tell you that flames make drinking more fun. What good is science if no one gets hurt?
1 oz Midori melon liqueur
1-1/2 oz sour mix
1 splash soda water
151 proof rum
Mix melon liqueur, sour mix and soda water with ice in shaker. Shake and strain into martini glass. Top with rum and ignite. Try to take over the world.