"Takes 1 part pop culture, 1 part science, and mixes vigorously with a shakerful of passion."
-- Typepad (Featured Blog)

"In this elegantly written blog, stories about science and technology come to life as effortlessly as everyday chatter about politics, celebrities, and vacations."
-- Fast Company ("The Top 10 Websites You've Never Heard Of")

[NOTE: We are seriously under the weather and hence original posting must wait until this godawful headache subsides. "Bring on the magic mushrooms!" sez Jen-Luc Piquant, although we are not quite that desperate. Yet. In the meantime, we feel the need to indulge in silliness, and offer this repost on the science of pirates. Yeah, you heard me: science and pirates!]

If you're anything like me, you thrilled to Johnny Depp's glorious performance as the winsome, piratical rogue, Captain Jack Sparrow, in 2003's Pirates of the Caribbean. In fact, last summer Jen-Luc Piquant and I were waiting on tenterhooks for the next installment in the ongoing saga, Dead Man's Chest, to arrive in the DC area, even though Jen-Luc correctly pointed out that it's a rare case when sequels live up to the original. (She held out no hope whatsoever for the third installment, and if the reviews are accurate, she was right.)

Still, what is it about pirates that holds such universal appeal, for all of us, not just for the lovely Elizabeth Swann? Jen-Luc maintains it's all about "the look" -- swashbuckling boots, colorful jacket and bandanna, a jaunty eye patch -- and has adapted her own personal style for the occasion. I would make the case that it's the salty pirate speech; in fact, you can translate this entire blog post into Pirate-Speak by clicking on this link and typing in the appropriate URL. (It's also amusing to type in the URL of, say, the typically fusty George Will's latest column; his conservative musings take on an entirely different flavor when rendered in Pirate Speak.)

British comedic author Gideon DeFoe tackles this very issue in the opening chapter of his most excellent book, Pirates! In an Adventure with Scientists. (The paperback version just released in the US comes bound together with DeFoe's own sequel, Pirates! In an Adventure with Ahab, but in this post, we're focusing on the scientists, for obvious reasons.) The pirates in question, those scurvy knaves, are lolling about the deck of their ship, debating the best thing about being a pirate. One says the looting, another marooning, still another sings the praises of pirate grog, and a fourth insists it's the Spanish Main. A brawl inevitably develops, cut short by the appearance of the Pirate Captain, who settles the dispute by declaring that the best part of being a pirate is... the sea shanties.

This might be a good place to point out that the Pirate Captain is not the brightest bulb in the Yuletide tree, and that pirates are not known for their sophisticated musical tastes. DeFoe's book, however, is bloody brilliant. Sure, it has a silly premise: the pirates mistakenly loot the H.M.S. Beagle -- on its second voyage, to the Galapagos Islands, circa 1831 -- believing it to be carrying gold rather than exotic natural specimens. (Note that Jen-Luc has adopted a cute little lizard rather than the customary parrot, in keeping with the Galapagos theme.) They sink the ship, and feel kinda bad about it. So they agree to transport Charles Darwin, Captain Robert FitzRoy, and Mister Bobo (Darwin's trained "Man-Panzee") back to Victorian London.

Like I said, a silly premise. But DeFoe includes fascinating factual tidbits in the footnotes, so he's no slouch when it comes to history, scientific or otherwise. According to one footnote, Darwin memorably described the Beagle voyage in a letter as being "one continual puke." There is also a footnoted mention of John Venn (born in 1834, a few years after the book's events supposedly took place), a British logician and philosopher best known for introducing "Venn diagrams" around 1881. It's nice to see such a fine meshing of silliness with snippets of serious science, even if the price is an occasional anachronism. It's all in the name of good clean fun. And did I mention there's a feisty damsel in distress named Jennifer, who ends up joining the pirate crew? Really, what's not to love in such a book?

In the course of their adventure, the pirates crash London's Royal Society, donning pens, rulers and white lab coats to disguise themselves as scientists. The pirates show an uncanny knack for engaging in scientific discourse, "nodding politely and saying 'Really?' a lot as they listened to [the scientists] drone on about their latest inventions and discoveries." Sounds like the average scientific press conference, doesn't it? Personally, I think the technical sessions at meetings would be livened up significantly if the speakers were clad in Pirate garb. They should also be armed with cutlasses so they could -- as the Pirate Captain is wont to do -- use said weapons to run through any especially obstreperous colleagues in the assembly.

A bit of the science behind nautical navigation is to be expected, of course. The Pirate Captain's cabin is equipped not just with the usual nautical maps and charts, but also an astrolabe. Astrolabes are very ancient instruments -- possibly dating as far back as the Second Century, B.C. -- for determining the time and position of the stars in the sky. They were mostly used in astronomical studies, not for navigation, but there was a mariner's astrolabe, a simple ring marked in degrees for measuring celestial altitudes.

In DeFoe's book, the Captain likes to fiddle with his astrolabe for show, pretending he can carry out complex calculations in the midst of casual conversation, but he isn't entirely sure of the difference between an astrolabe and a sextant. The sextant wasn't invented until the 18th century, and quickly displaced the mariner's astrolabe for navigational purposes because it was much more precise. A sextant measures the angle of elevation of a celestial object above the horizon. This angle, combined with the time of measurement, enables the navigator to calculate a precise position line on a nautical chart. For example, a sextant could be used to sight the sun at high noon in order to determine one's latitude. Hold the thing horizontally, and you can measure the angle between any two objects: say, a couple of lighthouses, giant Galapagos sea turtles, or mermaids lazily sunning themselves on conveniently located boulders.
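For the curious, the noon-sight arithmetic that a sextant makes possible is simple enough to sketch in a few lines of Python. (This is a hypothetical illustration, not period-accurate navigation: the declination figure would come from a nautical almanac, and the formula below assumes the noonday sun lies due south of the observer.)

```python
def noon_latitude(observed_altitude_deg, sun_declination_deg):
    """Latitude from a noon sun sight, assuming the sun is due south:
    latitude = 90 - observed altitude + sun's declination."""
    return 90.0 - observed_altitude_deg + sun_declination_deg

# If the sun peaks at 60 degrees above the horizon on a day when the
# almanac lists its declination as +10 degrees:
print(noon_latitude(60.0, 10.0))  # 40.0 -- roughly 40 degrees North
```

Even the Pirate Captain could manage that one, astrolabe confusion notwithstanding.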

There's also mention of the famous Beaufort wind force scale, a 19th century means of empirically describing wind intensity based on observed sea conditions. It was the brainchild of Sir Francis Beaufort, a British naval officer and friend of Darwin who sought to remove the subjectivity from wind observations at sea by describing wind conditions according to their effect on the sails of a man-of-war, then the main ship of the Royal Navy. The original Beaufort scale ranged from 0 to 12 (later extended to 17), and its descriptions ranged from "just sufficient to give steerage" to "that which no canvas can withstand." As the albino pirate correctly points out, a Beaufort scale ranking of 6 would be a "strong breeze," while 8 would indicate a "fresh gale" -- or, per the Pirate Captain, "that which will make a pirate's trousers billow about so it looks like he has fat legs." (Hurricanes, in case you're interested, begin at 12 on the Beaufort scale, which corresponds to a Category 1 hurricane on the modern Saffir-Simpson Hurricane Scale. Even as far back as 1712, the Caribbean and Gulf of Mexico were beset by hurricanes. That year, according to DeFoe's informative footnote, a single storm destroyed some 38 ships moored in Port Royal's harbor.)
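Since the scale is ultimately just an empirical lookup from force number to description, it can be sketched as a simple table. (The short names below follow the older descriptors quoted above -- "strong breeze," "fresh gale" -- rather than Beaufort's original sail-based criteria, and are a convenience of mine, not DeFoe's.)

```python
# The original 0-12 Beaufort scale as a lookup table, using the
# older short descriptors for each force number.
BEAUFORT = {
    0: "calm", 1: "light air", 2: "light breeze", 3: "gentle breeze",
    4: "moderate breeze", 5: "fresh breeze", 6: "strong breeze",
    7: "moderate gale", 8: "fresh gale", 9: "strong gale",
    10: "whole gale", 11: "storm", 12: "hurricane",
}

print(BEAUFORT[6])   # strong breeze
print(BEAUFORT[8])   # fresh gale -- trouser-billowing territory
print(BEAUFORT[12])  # hurricane
```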

Beaufort isn't the only historical personage to make a cameo appearance in DeFoe's novel. FitzRoy really did captain the Beagle and select Darwin as the onboard naturalist, despite purportedly not liking the shape of Darwin's nose. (Hey, that could get really irritating on a long sea voyage, particularly on a tiny ship like the Beagle, which was a mere 90 feet long.) He was an amateur meteorologist, eventually heading the British Meteorological Department and pioneering the printing of a daily weather forecast in newspapers. Alas, the unfortunate FitzRoy did indeed commit suicide in 1865 by slitting his own throat, ostensibly in a fit of depression over not being selected as Chief Naval Officer in the Marine Department.

And while crashing the Royal Society, the pirates encounter James Glaisher, an English meteorologist who tells them of his passion for "lighter-than-air" ships, a.k.a. "dirigibles." The concept dates back to 18th century France, when the Montgolfier brothers (paper makers by trade) noticed that smoke from a fire built under a paper bag would cause the bag to rise into the air. The science behind this is simple: the hot air inside expanded, and thus weighed less, by volume, than the surrounding air. The Montgolfiers built the first hot-air balloons around 1782. Another Frenchman, Henri Giffard, built the first dirigible, inflated with hydrogen, a gas that is naturally lighter than air at normal temperatures. Alas, as the 1937 Hindenburg disaster revealed, hydrogen is also highly flammable; modern airships use helium, an "unburnable" gas.
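The "hot air weighs less" claim is easy to put numbers on: at constant pressure, the ideal gas law makes air density scale inversely with absolute temperature. A back-of-the-envelope sketch (the sea-level density and temperatures below are illustrative assumptions, not anything from DeFoe):

```python
# Rough lift of hot air, assuming ideal-gas behavior at constant
# (sea-level) pressure, so density scales as 1/T.
RHO_AIR = 1.29     # kg per cubic meter of ambient air at ~0 C
T_AMBIENT = 273.0  # ambient temperature, kelvin
T_HOT = 373.0      # air heated to ~100 C inside the envelope, kelvin

rho_hot = RHO_AIR * T_AMBIENT / T_HOT  # density of the heated air
lift_per_m3 = RHO_AIR - rho_hot        # buoyant lift, kg per cubic meter

print(round(lift_per_m3, 3))  # 0.346 -- about a third of a kilogram per cubic meter
```

Which is why hot-air balloons have to be so enormous: at a third of a kilogram of lift per cubic meter, hoisting a basketful of Montgolfiers takes a lot of heated air.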

Glaisher was indeed a pioneering balloonist, making numerous ascents between 1862 and 1866 to measure the temperature and humidity of the atmosphere at the highest possible levels. On one such flight, he and his pilot, Henry Coxwell, set a world record of 29,000 feet, nearly losing their lives in the process. Glaisher passed out from the lack of oxygen, while Coxwell's hands were so stiff with cold he could barely manage to free a tangled valve and thereby halt their ascent to even higher (and more deadly) altitudes. (An artist's rendition of Glaisher's harrowing experience can be seen at right.) They still hold a few world records in this area -- not that this was Glaisher's primary motivation, nosiree. As the fictional Glaisher explains to DeFoe's assembled pirates, "What is science for? Pushing back frontiers! The thrill of discovery! Advancing the sum total of human knowledge and endeavour! And looking down ladies' tops!"

I actually learned something new from DeFoe's footnotes regarding dirigibles, namely, that helium wasn't technically "discovered" on earth until about 1895, despite being abundant in the universe. I also learned that America, in particular, faces a looming helium shortage. It turns out that almost all of the global supply of helium is located within 250 miles of Amarillo, Texas; it's distilled from accumulated natural gas, extracted during the refining process. Since the 1920s, the US has considered its helium stockpile as an important strategic natural resource, amassing some 32 billion cubic feet in an underground bunker in Texas, but now, it's selling off that stockpile bit by bit to interested industrial buyers.

Helium is used for arc welding and leak detection, mostly, although NASA uses it to pressurize space shuttle fuel tanks. Liquid helium cools infrared detectors, nuclear reactors, and the superconducting magnets used in MRI machines, too. The fear is that, at current consumption rates, that underground bunker will be empty within 20 years, leaving the earth almost helium-free by the end of the 21st century. This could be bad for US industry, not to mention future patients in need of MRI diagnostics. It also bodes ill for the prospect of fusion using helium-3, a rare isotope that is missing a neutron. Physicists have yet to achieve pure helium-3 fusion, but if they did, we'd have a clean, virtually infinite power source. Or so the theory goes. Still, there is hope: the moon's soil is chock-full of helium, thanks to the solar wind. In fact, every star emits helium constantly, suggesting that one day, spaceships will carry on a brisk import and export trade to harvest this critical element. And I daresay it will pave the way for a lucrative "space pirate" business as well.

Perhaps my favorite scene is when the Pirate Captain chases a villainous Bishop through London's Natural History Museum. The latter flings armloads of trilobites culled from the display cases at him, and when the chase moves to the Mineral Room, both men resort to projectiles of various mineral elements, choosing them according to atomic weight. For example, the Bishop hurls a chunk of iron (atomic weight: 55.85), and the Pirate Captain counters with a chunk of nickel (atomic weight: 58.69). Really, how many authors who write silly books about pirates can rattle off the atomic number (44) and atomic weight (101.07) of a rare transition metal like ruthenium? (Jen-Luc pipes in with the pointless information that trace amounts of ruthenium are often added to titanium to improve its corrosion resistance.) Or osmium -- atomic weight: 190.2 -- for that matter?
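The escalation rule of the Mineral Room duel -- always answer with something of higher atomic weight -- can be captured in a toy snippet, using the weights quoted above. (A bit of silliness of my own, in the spirit of the book.)

```python
# Toy version of the Mineral Room duel: a throw can only be countered
# by an element of higher atomic weight (values as quoted in the post).
ATOMIC_WEIGHT = {"iron": 55.85, "nickel": 58.69,
                 "ruthenium": 101.07, "osmium": 190.2}

def counters(thrown):
    """Every element heavy enough to one-up what was just thrown."""
    w = ATOMIC_WEIGHT[thrown]
    return sorted(name for name, weight in ATOMIC_WEIGHT.items() if weight > w)

print(counters("iron"))       # ['nickel', 'osmium', 'ruthenium']
print(counters("ruthenium"))  # ['osmium'] -- the Bishop is running out of options
```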

A few winks at Darwin's expense are inevitable. To make room for Darwin and his crew on the pirate ship, the Pirate Captain makes a few crew members walk the plank. When Darwin objects to the brutality, the Pirate Captain assures him that only "fools and lubbers" would be sacrificed, concluding, "It's for the good of the species." And upon arriving in London, and visiting the Royal Society, the Pirate Captain nobly puts his gift of showmanship to work on Darwin's behalf, assuring the young naturalist that good science isn't enough: "You need a gimmick! A bit of controversy! It's all about the presentation."

These days, of course, Darwin doesn't want for controversy, as that thinly-veiled form of creationism, Intelligent Design, simply refuses to die in school districts across the country. When On the Origin of Species was first published, Darwin wasn't especially surprised that people -- even his fellow naturalists -- resisted many of his conclusions; there's a letter now up for auction by Sotheby's of London that says as much. But I think he'd be more than a little dismayed to see how people continue to resist the modern theory of evolution, in the face of overwhelming scientific evidence to support it. There's certainly no debate among respected scientists: just last week, the national academies of some 67 countries issued a joint statement urging schools, parents, teachers, etc. to stop denying the scientific facts of the origin and evolution of life on earth.

The problem is that evolution has been vilified as a lie propounded by godless heathens -- you know, like P.Z. Myers, who routinely incurs the ire of Believers (not just Creationists, either) with his staunch scientific atheism. In fact, there's a big brouhaha brewing over at P.Z.'s blog right now over his stance on science and religion, specifically over his recent post about whether it's possible for any good scientist to be anything other than an atheist. P.Z. might have a sharp-ish way of expressing himself at times, but it's pretty tough to argue with the sense of this comment:

If a scientist applies the same kind of critical thinking she uses in her work to religion, she gets the same answer an atheist does.... [S]he doesn't have to apply that kind of thinking to every aspect of her life, of course, and none of us do. If she wants to claim she's happy to be a Presbyterian and accepts it as a matter of simple faith, there is no argument, the case is closed, and she can go about her business unhassled by science.

(Admit it, now you're just dying to hop on over there and see for yourself. But please -- not before you finish this post.)

Darwin himself, it should be noted, wasn't an atheist; he described himself in his autobiography as a theist at the time of writing Origin of Species, although by then he had summarily rejected William Paley's famed "argument from design" which had so influenced him in his youth. In his later years, Darwin was resolutely irreligious, moving firmly into the agnostic camp by the time he died. But he never seemed to feel there was much conflict between science and religion provided -- and this is a critical proviso -- that religion remained a strictly personal matter and was kept separate from science. For Darwin, "the question of god's existence was outside the scope of scientific inquiry."

That seems an eminently sensible viewpoint. Unfortunately, lots of people seem incapable of making these critical distinctions, which is why we end up with situations like the controversy that erupted over setting science standards in Kansas, for example, or the recent landmark court case in Dover, Pennsylvania. That decision stated unequivocally that Intelligent Design was not a science, and should not be included in the science standards set by state Boards of Education. It doesn't get more clear than that, yet the manufactured "controversy" rages on. What's it gonna take to stamp out this pseudo-scientific nonsense once and for all?

We could always make the particularly pig-headed walk the proverbial plank: natural selection in practice, not just theory. Or perhaps scientists should follow the Pirate Captain's lead in DeFoe's novel, and stage a WWF-style showdown between science -- represented in the novel by Darwin's Man-Panzee, Mister Bobo -- and religion, personified by the "Holy Ghost" (actually a pirate named Scurvy Jake in disguise). "The science you are doing is too shocking by half!" the Holy Ghost declares with righteous indignation. "I will lay the smackdown on your wicked ways!" Then Mister Bobo hits him over the head with a folding chair, knocking him out cold, and is declared the victor. See? Science always triumphs in the end. Darwin becomes the toast of London and quite a favorite with the ladies, while Mister Bobo gets featured on the cover of Nature. It's as good a strategy for combating Intelligent Design as any. I say let's set up a booking in Vegas right now and put the smackdown on pseudoscience.

In 1769, Gaspar Portola, the Spanish governor of Baja California, led an excursion across the Los Angeles River and down what is now Wilshire Boulevard in modern-day La-La Land. According to a journal kept by one of the expedition members -- a priest named Juan Crespi -- the travelers "saw some large marshes of a certain substance like pitch, they were boiling and bubbling... and there is such an abundance of it that would serve to caulk many ships." This is the first recorded sighting of the famed La Brea Tar Pits.

The landscape has changed a bit since then. Okay, it's changed a lot. Shortly after I moved to Los Angeles this past April, we were driving down Wilshire Boulevard en route to the city's high-end shopping mecca, Rodeo Drive, when we passed a sign reading "La Brea Tar Pits." Initially I assumed it was an advertisement, but the Spousal Unit assured me that no, in fact, the tar pits were right there, just off Wilshire, and couldn't I detect that telltale whiff of methane in the air? So one of the most well-known U.S. sites for paleontological fossils is located smack in the middle of suburban sprawl, amid condominium complexes, a Rite-Aid, and various fast food joints like Baja Fresh, Koo Koo Roo and IHOP. You can admire the pretty Pleistocene fossils, then pop on over to LA's Disney-esque shopping complex, The Grove, stopping off at Whole Foods for some organic produce on the way home.

You'd think that with the place so close, and me being such a big science geek and all, it would be one of the first places I'd visit. Alas, you'd be wrong. We didn't get around to visiting the La Brea Tar Pits and accompanying Page Museum until last weekend, when fellow physics blogger Chuck (a.k.a., that lounging Lab Lemming from the Land of Oz) blew through town with his family, on their way to visit relatives elsewhere in the U.S. We had a great time strolling through the exhibits, admiring the rather seedy (by now) animatronic mammoth, and placing bets on whether the ground sloth or the saber-toothed tiger would win their sculpted pitched battle. True, they are frozen in time in the Page Museum, but I maintain that the tiger is on the verge of victory: its enormous canines are in the perfect position to rip out the sloth's jugular.

Technically, that substance oozing out of the ground in Hancock Park isn't really tar (a byproduct of the distillation of coal or peat); it's asphalt, the lowest grade of crude oil. The bubbling is due to methane gas, a byproduct of the decomposing remains of plants and animals trapped in the pits, which turn into crude oil. Which explains the origins of the large petroleum reservoir, the Salt Lake Oil field, located just below the surface, north of Hancock Park. According to the museum's various informative placards, the oil was formed from marine plankton that found themselves summarily deposited in an ocean basin between 5 and 25 million years ago, and eventually time and pressure converted that organic matter into oil. Beginning some 40,000 years ago, petroleum began seeping to the surface around Hancock Park, forming hundreds of sticky pools of ooze. And the La Brea Tar Pits were born.

Anyway, for tens of thousands of years, animals would come sniffing around the pits -- usually predators and scavengers drawn by animals already trapped in the tar, thinking them easy prey -- and would in turn become trapped in the sticky ooze themselves. They would gradually get sucked down into the pit and asphyxiate, and over time their remains became fossilized as the lighter fractions of the petroleum evaporated, leaving the bones trapped in a more solid substance. And there they stayed, awaiting discovery by modern paleontologists thousands of years later.

The centerpiece of the La Brea grounds is a large Lake Pit outside, where methane gas still bubbles to the surface every few minutes or so, making the whole place smell like freshly laid asphalt. To capture the tragedy of senseless animal deaths, traumatizing young children in the process, the Lake Pit also features a diorama of a mammoth vainly struggling to free itself from the muck, while what one can only assume are its mammoth-y family members look on in helpless horror.

One might be forgiven for thinking, "C'mon, how hard can it be to get out of a tar pit?" After all, Arnold Schwarzenegger fell into the tar pits in Last Action Hero and just swam out and wiped himself clean. The museum knows we're thinking this, and has helpfully set up a terrific hands-on demonstration of just how sticky this gooey stuff really is. We struggled mightily to pull metal plungers of different sizes and weights out of a vat of molten asphalt. Point taken. And the pits have their own form of camouflage, since they're covered by a layer of water, or dirt, or dust. It would be very easy to think you were just strolling through a marshy sort of puddle, and suddenly find, too late, that there's sticky asphalt underneath.

The local Native Americans used the sticky asphalt as a glue and to waterproof their baskets and canoes. When Westerners arrived, they mined the tar and used it as a roofing material. From the beginning, bones were occasionally found in the tar, usually dismissed as belonging to unfortunate cattle who'd become trapped in the pits. It was not until 1875 that the geologist William Denton visited the tar pits and identified the canine tooth of a saber-toothed cat. The rest of the scientific community just ignored his conclusion. Timing is everything, and Denton was too far ahead of the curve. The first bona fide scientific excavation of the pits didn't begin until 1901, when William W. Orcutt, a geologist who was investigating oil resources in the vicinity, noted that the bones in the asphalt seeps belonged to many extinct species.

And so the pillaging of these priceless artifacts began. Between 1905 and 1915, literally millions of bones were taken out of the ground. In 1913, the landowner, George Alan Hancock, feared that the fossils would be taken from the community and scattered widely. So he granted exclusive rights to excavate the fossil resources to Los Angeles County’s fledgling Natural History Museum—but only for two years. Between 1913 and 1915, museum crews made nearly a hundred excavations, collecting roughly a million bones. The work was messy, and not without its perils: excavators used boiling kerosene to clean the sticky bones, which would sometimes accidentally catch fire and singe the eyebrows of workers. But it was worth it. In all, the species count from the excavations in the early 1900s included 133 birds, 63 insects, 43 mammals, and 29 plants, plus a handful of amphibian, mollusk, reptile, and water flea species.

Surprisingly, there are far more fossils of carnivores and predators than of herbivores -- roughly 9:1, nowhere near the ratio likely typical of the animal population living in the area back then. The prevailing scientific explanation is the aforementioned one: predators and scavengers were drawn to the scent of trapped, rotting corpses and became trapped themselves. Predictably, though, those wacky Young Earth Creationists maintain that the tar pits are evidence of a global flood. Noah's Ark totally happened, dude! Just like in Evan Almighty! By their "logic" (note judicious use of scare quotes to indicate sarcasm), the high ratio of carnivore fossils is due to the fact that they were carried there from somewhere else by a huge flow of water. The fact that the tar pits date back some 40,000 years, and are therefore older than the supposed Young Earth by a factor of four, is considered to be a trifling dating error, or some mischievous test of faith by the Deity. Sigh.

Anyway, the animals didn't seem to learn from their mistakes, which could explain why they eventually became extinct: camels, mammoths, mastodons, long-horned bison, and the saber-toothed cat are no longer found in North America, and Chuck informed me (and the museum exhibit confirmed) that the horse, originally native to the area, died out and was only later reintroduced by Spanish settlers. The dire wolf, in particular, seems to have been a bit lacking on the brain trust front: skulls and fossilized remains of the dire wolf are among the most common finds in the La Brea tar pits (over 3000 found to date, and still counting), and the Page Museum has an entire wall displaying nothing but hundreds of dire wolf skulls recovered over the years.

There's one species that is not well represented among the fossilized remains recovered from the tar pits: Homo sapiens. You'd never know that from how often the tar pits feature in various thrillers and murder mysteries. For instance, in the forgettable 1990 film Bad Influence, two men try to cover up a murder committed by a third friend -- why? misplaced guy loyalty, I guess -- by tossing the dead woman into the La Brea tar pits, where she is discovered the following day. In fact, there has been only one human being recovered from the pits, known as La Brea Woman; carbon dating revealed she died some 9000 years ago.
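That "some 9000 years" figure comes out of radiocarbon dating, whose arithmetic is pleasantly simple: carbon-14 decays with a half-life of roughly 5,730 years, so the fraction of C-14 left in a sample fixes its age. A quick sketch (the 34% figure below is purely illustrative, not La Brea Woman's actual measurement):

```python
import math

C14_HALF_LIFE = 5730.0  # years, approximate half-life of carbon-14

def radiocarbon_age(fraction_remaining):
    """Age in years from the fraction of original C-14 still present:
    N/N0 = (1/2)**(t / half-life), solved for t."""
    return -C14_HALF_LIFE * math.log(fraction_remaining) / math.log(2.0)

# Illustrative only: a sample retaining about 34% of its original C-14
# works out to just shy of nine thousand years old.
print(round(radiocarbon_age(0.34)))  # 8918
```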

Technically, just her skull and a few other bones were recovered, and even so, the femur was stolen in the 1970s while the remains were in transit from their original home at the Natural History Museum to the newly constructed Page Museum. (Jen-Luc Piquant wonders aloud what kind of sick mind would steal a fossilized femur, and then recalls that her pal El Finster brazenly sports a bona fide human skull on his bookshelf. Authorities might want to check his office for any missing fossilized remains.)

Anyway, scientists believe La Brea Woman didn't end up in the tar by accident: there's strong circumstantial evidence that she was murdered, her skull bashed in with a blunt instrument, most likely with an artifact conveniently recovered a few inches from her skull. This makes her the first documented murder victim in Los Angeles. They killed her little dog, too: canine bones were also found near her remains. Other than that, very little is known about La Brea Woman; the most extensive discussion I could find was this 2006 article in the Los Angeles Times by Amy Wilentz. That's where I learned that the museum used to have an exhibit devoted entirely to La Brea Woman until just a few years ago, when the curator decided to remove it amid concerns of offending local Native Americans. Like many of the exhibits in this woefully under-funded museum, the original exhibit was a bit archaic, featuring mirrors and spotlights to create a "Pepper's Ghost" kind of illusion: people would see her skeleton, then the special effects would kick in to create a mannequin version of how she might have looked in life.

Wilentz's article also informs us that the skeleton on display wasn't actually La Brea Woman; after all, most of her remains weren't found. Instead, they obtained the body of "a modern Pakistani female" of similar age and height, treated the bones so they looked like the dark bronze color that typifies tar pit bones, shortened the femurs, and attached the original skull. And the artistic depiction of how she would have looked in real life? Not so much with the accuracy. Far from being, in Wilentz's words, "a young, attractive sensuous tanned brunette with long, long hair strategically covering her nipples," La Brea Woman was a bit on the homely side, and probably considered middle-aged by 20 (most "elders" from that period were around 30; life was cheap and very, very short in the Pleistocene). She had an ectopic tooth protruding above her lip -- one of her few remaining teeth -- and an impacted molar in her jaw. Yet her remains have inspired subplots in at least two novels: Michael Connelly's City of Bones, and Robert Masello's The Bestiary. Something about La Brea Woman and her all-too-human fate speaks to us across the ages.

One might assume that after nearly a century of digging, the La Brea Tar Pits would be picked clean by now, but such is not the case. The focus has merely shifted from the large skeletons of mammoths, ground sloths, and saber-toothed tigers, to birds, bone fragments, seeds, pollen, insects, fish, rodents, and other objects so tiny, it's hard to believe the scientists were even able to identify them. The focus of the museum's current excavation activities is Pit 91, measuring 28 feet square and descending about 14 feet into the earth. Every summer, the pit is open for business from June to the beginning of October, so visitors can stand in a special observation area and watch local paleontologists and a few lucky volunteers sift through the muck armed with dental picks, trowels, small chisels and brushes, in search of fossilized treasure.

And what a treasure trove the pit continues to be! In 2006 alone, the teams collected over 1000 fossils in just two months, including three saber-toothed cat skulls, four dire wolf skulls, bones from sloths, horses, bison, coyotes and birds, plus insects and a few plant fossils. (Since excavation of Pit 91 began in 1915, over 250,000 fossils have been recovered.) After the fossils are removed, the surrounding sediment is placed in screen baskets, then boiled in solvent to remove the asphalt, revealing a mix of sand, pebbles, small pieces of fossils, and microfossils like seeds. I feel for whoever has the thankless task of cleaning, sorting, identifying, and labeling all this flotsam and jetsam.

Okay, I'm being prematurely dismissive: those bits and pieces can actually tell us a lot about the area during the Pleistocene, specifically its habitats and climate (apparently LA was cooler and moister back then). Most recently, scientists have discovered 200-300 previously unknown species of bacteria, extremophiles that flourish in the harsh conditions of the asphalt pits: no water and very little oxygen, but loads of yummy toxic chemicals! They chomp away at the petroleum, breaking down even nasty compounds like polychlorinated biphenyls (PCBs) with special enzymes, and then releasing the telltale methane that bubbles to the surface. (Technically, the bacteria suffer from gas and post-meal bloating, and no wonder, considering their diet.) A team from the University of California, Riverside, led by David Crowley, identified the bacteria by sequencing recovered DNA: they froze the tar with liquid nitrogen and pulverized it into a powder, exposing the bacteria, and thus were able to extract the DNA. It's quite possible that this discovery could lead to new methods for cleaning up oil spills and enhancing oil recovery.

The tar pits have had more than their share of 15 minutes of fame in popular culture, usually in wildly speculative contexts -- never more so than in 1997's Volcano, in which a volcano sprouts out of the Lake Pit and spews a river of hot lava down Wilshire Boulevard, no doubt engulfing loads of unsuspecting well-heeled shoppers. (Young Earth Creationists would most likely attribute this to the avenging hand of a wrathful Deity.) But the reality is haunting in its own quiet way, with no need for exaggerated pyrotechnics. Wilentz describes the museum, for all its noble educational intentions, as "an homage to an oil reserve where millions of creatures died," a structure that has been built "above what can only be described as a mass grave." The La Brea tar pits are LA's very own heart of darkness, the epitome of "Nature, red in tooth and claw." After all, asphalt doesn't much care how many species perish in its sticky depths.

Orlando, Florida, is best known as a family vacation destination thanks to the megacomplex of Disney World, but last week both locals and out-of-towners had another option for wholesome "edu-tainment": a Plasma Science Expo, held in conjunction with the annual meeting of the American Physical Society's Division of Plasma Physics (DPP). This is the second time (at least) the DPP has done something like this, and once again it was hugely popular. The event ran for two days, and offered loads of hands-on activities and demonstrations to teach the Orlando populace about this ubiquitous fourth state of matter known as plasma.

I should point out, for non-physics types, that this is not the blood-related plasma we hear about on various TV forensics shows, but a cloud of gas that becomes "ionized," i.e., the gas is heated to the point where the electrons are ripped free of atoms and molecules, making the gas highly conductive, just like certain metals. Jen-Luc Piquant is still mourning having missed out on making her own impressive lightning arc, or manipulating a glowing plasma with magnets. We hope Orlando residents appreciated the opportunity in our stead (grumble, whine).

Plasmas pop up semi-frequently at the cocktail party (see prior posts here and here), in part because we think they're really cool. Mostly, though, it comes up because plasmas are pretty darned important in all kinds of different ways, not least of which would be fluorescent lighting, and that cutting-edge plasma TV some lucky readers might have in their homes (although rumor has it those TVs are a major energy suck). Those are just two examples of things we make with man-made plasmas. We also use them for arc welding and in various semiconductor manufacturing processes. Nature makes her own plasmas. Did we mention the sun? I'd say that's a pretty crucial object. Solar flares, lightning bolts, and vivid natural displays like the Northern Lights are all examples of naturally occurring plasmas, which make up 99% of the visible universe. (One used to be able to just say plasmas were the most abundant state of matter in the universe, but thanks to dark matter and dark energy, we must now qualify that statement by specifying "the visible universe." A bit tiresome, but science marches on, and one must keep pace with the change.)

It's this conductivity of plasmas, and their similarity to certain metals, that makes plasmas attractive as an alternative material for the antenna of the future. The 1500+ DPP attendees heard all about it from Igor Alexeff, a professor emeritus of the University of Tennessee and currently chief scientist of a small start-up company in Brookfield, Massachusetts called Haleakala R&D. (You can find a press release on this and several other DPP session topics here.) He's developed a series of prototype plasma antennas that are, per the meeting press release, "stealthy, reconfigurable, and jamming-resistant." Nor is Haleakala the only company working to develop a viable product: there's a British company called Plasma Antennas in Oxford, England, and a strong research effort underway at Australian National University, for example.

We're all familiar with conventional metal antennas because they're everywhere: in our satellite dish, on our cell phones, atop radio transmission towers, built into our laptops, etc. Most of us rarely stop to think about how all this wireless connection really works. An antenna is just a conductive, solid metal wire sized in such a way to emit and receive electromagnetic radiation at one or more selected frequencies -- usually in the radio end of the spectrum (which is actually a pretty broad range).

Everything vibrates at a natural resonant frequency, and an antenna exploits this effect to transmit and receive electromagnetic waves at a selected frequency. Basically, the energy of the incoming wave couples to the tuned structure, just like those old Memorex commercials where Ella Fitzgerald shatters a crystal wine glass with her voice. She can do it because she sings and holds a note that resonates at the natural resonant frequency of the glass. (Admittedly, they cheated a little and amplified the decibels, but you could call that artistic license.) You "tune" a simple dipole antenna, for instance, by splitting the metal wire or rod into two equal arms insulated from each other. The basic rule of thumb is that its total length should be half the wavelength of the signal you wish to receive; a grounded transmission tower acts as a quarter-wave monopole, half that length again. Which is why the tower for an AM station broadcasting at 680 kHz might stand 361 feet, while the antenna in your 900 MHz cell phone is only about 3 inches.
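Those rule-of-thumb figures are easy to check with a little arithmetic. A minimal sketch (the 680 kHz and 900 MHz frequencies come from the examples above; treating the tower and the phone antenna as quarter-wave monopoles is my assumption for matching the quoted numbers):

```python
# Back-of-the-envelope antenna sizing from the wavelength rule of thumb.
# Assumption: a grounded broadcast tower (and a phone's stub antenna) act
# as quarter-wave monopoles, i.e., half the length of a half-wave dipole.

C = 299_792_458.0  # speed of light, m/s


def half_wave_dipole_m(freq_hz: float) -> float:
    """Total length of a half-wave dipole, in meters."""
    return C / (2.0 * freq_hz)


def quarter_wave_monopole_m(freq_hz: float) -> float:
    """Height of a quarter-wave monopole (grounded tower), in meters."""
    return C / (4.0 * freq_hz)


M_TO_FT = 3.28084
M_TO_IN = 39.3701

tower_ft = quarter_wave_monopole_m(680e3) * M_TO_FT  # AM station at 680 kHz
phone_in = quarter_wave_monopole_m(900e6) * M_TO_IN  # 900 MHz cell phone

print(f"680 kHz tower: {tower_ft:.0f} ft")          # ~362 ft
print(f"900 MHz phone antenna: {phone_in:.1f} in")  # ~3.3 in
```

The same formula with a thousandfold higher frequency yields a thousandfold shorter antenna, which is the whole story behind towers versus pocket phones.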

Metal antennas have withstood the test of time because scientists really haven't found anything that works better, once all the technological, economic, and practical use parameters are figured into the equation. But there are some niggling drawbacks to metal antennas, like "ringing" and vulnerability to interference, not to mention a pronounced lack of "stealthability"; plasma antennas can address those issues. A plasma antenna replaces the metal wire conducting element of a standard antenna with ionized gas enclosed in a sealed glass or ceramic tube: argon, neon, helium, krypton, mercury vapor, or xenon, to name a few gases that have been used in experiments to date. Apply a voltage at a radio-wave frequency to the tube, and you get that all-important coupling effect, resulting in the gas inside becoming ionized. Voila! You've got yourself a working plasma antenna!

Physics history buffs might recognize the concept as a high-tech version of a Crookes tube. Invented by Sir William Crookes (who mistakenly believed they were producing something akin to ectoplasm from the "spirit world"), these devices were all the rage in the late 19th century, when traveling science lecturers -- the physics equivalent of wandering bards -- used them to wow audiences with demonstrations of these mysterious "cathode rays." A Crookes tube is just a glass tube from which most of the air has been pumped out to create a partial vacuum. Apply a high voltage across the tube's electrodes, and the interior glows in pretty fluorescent colors -- the result of energized electrons interacting with residual gas inside the tube and emitting radiation. The modern cathode ray tube operates on the same principle. (You can see a Java animation here.)

We mentioned earlier that plasmas have very high densities of electrons, ergo, they are excellent conductors. But they only conduct when the voltage is turned on, unlike metal antennas, which conduct all the time. This means that once you turn off the voltage that creates the plasma in the first place, the substance reverts to a neutral gas. The "antenna" essentially disappears, until you turn the voltage back on again. Now you see it, now you don't. Behold, the Amazing Vanishing Antenna!

Perhaps this sounds like a bad thing, but depending on the application you have in mind, it's actually a huge advantage -- for, say, US Navy ships keen on avoiding detection by enemy radar. A regular metal antenna will backscatter any incoming radar signals, giving away the ship's presence and enabling a potentially hostile craft to pinpoint its location. In times of warfare, this is very bad. So it's not surprising that the US military is keenly interested in developing plasma antenna technology.

Ships using surface wave radar also often run into problems with the tall metal masts installed on board, since the masts can interfere with one another. Conventional metal antennas pick up interfering signals quite easily, along with noise bouncing off nearby objects. The fact that a plasma antenna ceases to exist when it is turned off and not in use significantly reduces this type of interference. Metal antennas also tend to "ring": they continue to radiate energy even after the incoming energy stops, and the oscillations need time to die down. This is especially problematic for short-range ground penetrating radars used in petrochemical and mineral exploration. Thanks to their rapid switchability, plasma antennas don't ring.

That on/off switching ability also reduces a plasma antenna's vulnerability to "jamming," a common military counter-measure. It involves deliberately confusing the radar signal by hitting it with intense signals at just the right frequency to make its readings unreliable. There's a nifty example of this kind of thing in Nature: Bats hunt and navigate by emitting ultrasonic pulses and using the returning echoes to determine the location, speed and distance of nearby objects or prey. But certain insects have evolved a kind of frequency jamming ability to disorient a hungry bat. I wrote previously about the big brown bat's unique strategy for protecting itself against jamming. Although they prefer higher frequencies, when something jams their preferred range, the bats shift down to lower frequencies to compensate.

Both sonar and radar systems are vulnerable to this kind of jamming; they just use different frequency ranges (ultrasound versus radio). And since the US military relies heavily on both these technologies, they are keenly interested in anything that can reduce that vulnerability. Some scientists are studying the big brown bat in hopes of figuring out how to combat jamming in the ultrasonic frequency range; those interested in combating radar jamming are looking to things like plasma antennas. "Plasma antennas have a high frequency cut-off that can be adjusted electrically," says the DPP press release. "Thus, a plasma antenna can be transmitting and receiving signals while intense incoming high-frequency signals pass freely through it without interacting."

Loose translation: you can configure a plasma antenna to be receptive to certain frequencies while blocking out others. Plasma antennas boast terrific reconfigurability thanks to the development of so-called plasma "windows" -- again, something made possible by the fact that the plasma antenna "disappears" once it is turned off, or its electron density becomes too low for efficient conductance. Plasma windows are "apertures," or areas around the antenna's surrounding plasma "blanket" that "open" when the electron density in the plasma is lowered or turned off entirely, making it transparent to antenna radiation. The "windows" will "close" when the plasma density becomes high enough again to reflect antenna radiation well. One window creates a single antenna "lobe," while multiple windows can create multiple antenna "lobes." So you can "tune" your plasma antenna to whatever frequency you wish.
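For readers who want a number to hang on all this: that adjustable "high frequency cut-off" is what physicists call the plasma frequency, and it depends only on the electron density in the tube -- waves below it bounce off the plasma, waves above it sail straight through. A minimal sketch of the relationship (the sample densities below are illustrative, not taken from the DPP presentations):

```python
import math

# Physical constants (SI)
E_CHARGE = 1.602176634e-19   # electron charge, C
EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
M_E = 9.1093837015e-31       # electron mass, kg


def plasma_frequency_hz(n_e: float) -> float:
    """Electron plasma frequency f_p for electron density n_e (per m^3).

    Radio waves below f_p are reflected by the plasma; waves above f_p
    pass through it -- the adjustable cut-off of a plasma antenna.
    """
    omega_p = math.sqrt(n_e * E_CHARGE**2 / (EPS0 * M_E))
    return omega_p / (2.0 * math.pi)


# Illustrative densities (assumed, not from the article): raising the
# electron density raises the cut-off, which is how the antenna is tuned.
for n_e in (1e14, 1e16, 1e18):
    print(f"n_e = {n_e:.0e} m^-3  ->  f_p = {plasma_frequency_hz(n_e) / 1e6:.1f} MHz")
```

Turning the drive voltage up or down changes the electron density, which slides the cut-off up or down -- exactly the "windows" trick described above.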

That's another huge advantage, because one plasma antenna can perform the same function as several conventional metal antennas. Either you can reduce the number of antennas you need, cutting down on clutter and weight, or you can exploit this feature to build an array of many small plasma elements that the user could reconfigure simply by turning one (or more) of the various elements on or off. The various windows can even be opened and closed in a particular sequence to steer the antenna beam. Alexeff is currently developing a "smart" plasma antenna that essentially steers the antenna beam through 360 degrees to scan a given region for transmitting antennas, then locks onto a signal -- much like the built-in wireless card in your laptop scans the vicinity for the strongest wireless network signal.

An array of such smart plasma antennas, when coupled with cutting-edge signal processing software, could maximize signal strength and reduce interference of signals from cell phone towers. This means fewer dropped calls and less "Are you still there? Can you hear me now?" moments for cell phone users everywhere. That steering mechanism could be handy for consumers, too. Figure out how to control a miniaturized plasma antenna array with your wireless laptop, and it may one day be possible to share your computer data wirelessly with just one other user in a crowded room filled with wireless laptop users. You'd just need to close or create "windows" as needed in order to direct the signal to your targeted user.

If one didn't understand the underlying science at work, that last bit would sound suspiciously like mental telepathy: you know, the ability to direct your thoughts in such a way that only the intended recipient can "hear" them. There is, needless to say, no scientific basis for such a thing, or for its partner in Pseudoscientific Crime, telekinesis. This is something I discussed at length in one of the chapters in The Physics of the Buffyverse: for starters, electromagnetic brain waves aren't remotely powerful enough to send a strong enough signal to reach a distant recipient, even assuming you could focus the mental signal like a laser beam -- or "steer" it to one's intended target.

Sure, we can measure brain waves using electroencephalography (EEG), but these signals are so faint they have to be amplified thousands of times over before our cutting-edge machines can detect them and turn them into readable data. You then must figure in the inverse square law, which basically says that a signal is strongest at its source and weakens rapidly as it spreads outward, because the same amount of energy (it's conserved!) is being spread across an ever-greater area. And it decreases fast, y'all! If the intended recipient of the telepathic signal is twice as far from the person trying to send the "communication," the signal will be only one-quarter as strong. So just as with an EEG, you'd need to amplify the signal a lot. (Incidentally, this is an issue also faced by the plasma antenna.)
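The inverse square law is simple enough to demonstrate in a few lines. A minimal sketch (the 1-watt source is just a convenient placeholder):

```python
import math

# Inverse square law: a signal's intensity falls off as 1/r^2, because the
# same radiated power spreads over a sphere of area 4*pi*r^2.


def intensity(power_w: float, r_m: float) -> float:
    """Intensity (W/m^2) at distance r from an isotropic source."""
    return power_w / (4.0 * math.pi * r_m**2)


i1 = intensity(1.0, 1.0)  # 1 watt source, 1 meter away
i2 = intensity(1.0, 2.0)  # same source, twice as far

print(i1 / i2)  # -> 4.0: doubling the distance cuts intensity to a quarter
```

Triple the distance and the signal drops to a ninth; no amount of wishful thinking rescues a telepathic whisper from that kind of arithmetic.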

The upshot is that unless a telepathic (and/or telekinetic) person had a means of amplification, not to mention embedded implants to serve as transmitter and receiver, plus that all-important directivity (i.e., "steering" ability), such a feat is quite frankly impossible. Sorry if that bursts anyone's bubble. I know, I know, the idea of mental telepathy is so very seductive, but wishful thinking does not good science make. The world is not magic, remember?

That said, science offers something better than magic. Mental telepathy might be a silly notion, but the concept has inspired some pretty exciting new technology. Researchers really are experimenting with implanted computer interfaces, such as the folks at Duke University who outfitted macaque monkeys with the things in 2000 so the monkeys could move a robotic arm from a remote location, or the Emory University researchers who implanted a transmitting device into the brain of a stroke patient, linking the motor neurons to silicon so he could move a cursor on a computer monitor just by thinking about it. Plus, we have just learned -- courtesy of Greg Laden -- that some wacky researchers at the University of Arizona have built robots driven by the tiny brains of moths.

Wow. That's some pretty awesome stuff right there. Add in breakthroughs like the configurability, switchability, and directivity potentially offered by plasma antennas, and one day we might not have telepathy and telekinesis, per se, but the technological equivalent. And it will have nothing whatsoever to do with magic.

We've all become so accustomed to seeing CT scans and MRI procedures performed on TV medical dramas, or in film, that we rarely give much thought to what happens when the patient, for whatever reason, can't have one of those standard imaging procedures. Babies, for instance, are small, squirmy, and especially vulnerable to the ionizing radiation associated with many medical imaging procedures. Fortunately, researchers all over the world are hard at work developing alternative imaging techniques all the time, many quite similar in concept, but tailored for specific applications -- like imaging the infant brain.

Let's just take the three most common medical imaging technologies as a reference point: X-rays, CT scans, and magnetic resonance imaging (MRI). Conventional X-rays have been around for more than a century, since Wilhelm Roentgen first discovered the invisible rays in 1895 and realized they could be used to image bone. It took a while for folks to realize the rays were, in fact, quite dangerous in high doses, but thanks to improved technologies and shielding measures, for normal, healthy adults, X-rays aren't too problematic. CT scans are similar, in that they rely on X-rays, except instead of imaging the outlines of bones and organs, a CT scan machine builds a fully 3D computer model of the inside of a patient's body, by taking a series of images one narrow "slice" at a time and then reconstructing those slices into the final image.

When MRI came along in 1977 -- the first human scan took place on July 3rd of that year, and the machine that took the image, dubbed "Indomitable," is now on display at the Smithsonian Institution -- it became possible to take clear and detailed images of internal organs and soft tissues for the first time, without the use of ionizing radiation. A quick summation (for those who've never taken the time to peruse this handy explanation at How Stuff Works): MRIs use radiofrequency waves combined with a strong magnetic field to take images by manipulating the "magnetic moments" of hydrogen atoms -- one of the most abundant atoms in the human body because of the body's high water content. The rf waves get the protons in the hydrogen atoms all excited (okay, not "excited" in the colloquial sense, but that is the technical term for it), but eventually they relax back into their normal state, and when they do so, they emit powerful radio signals. These are detected and fed into a computer, which translates the data into a high-contrast image showing differences in the water content and distribution in various bodily tissues.
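For the curious, the rf frequency needed to get those protons "excited" is set by the magnet's field strength, via what's known as the Larmor relation -- standard MRI physics, though not spelled out above. A quick sketch (42.58 MHz per tesla is the accepted value for hydrogen):

```python
# Larmor relation: protons in a magnetic field B resonate at f = gamma * B,
# where gamma is about 42.58 MHz per tesla for hydrogen. This sets the rf
# frequency an MRI machine must transmit at a given field strength.

GAMMA_H_MHZ_PER_T = 42.58  # proton gyromagnetic ratio / (2*pi), MHz/T


def larmor_mhz(b_tesla: float) -> float:
    """Proton resonance frequency in MHz for a field of B tesla."""
    return GAMMA_H_MHZ_PER_T * b_tesla


# Field strengths in the typical clinical range:
for b in (0.5, 1.5, 2.0):
    print(f"{b} T  ->  {larmor_mhz(b):.1f} MHz")
```

So a typical clinical scanner works in the tens-of-megahertz range -- squarely in radio territory, which is why MRI involves no ionizing radiation at all.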

There are some drawbacks to MRI, though, namely the powerful magnetic fields: on the order of 0.5 to 2 tesla (5000 to 20,000 gauss, just to confuse the issue with more scientific units). So any metal object can become a dangerous projectile if it finds its way into the scanning room: paper clips, pens, keys, even medical equipment like stethoscopes, IV poles, or oxygen tanks. In this day of implants and prosthetics, metal objects can actually be inside the body of the prospective patients: certain older dental implants, for example, or aneurysm clips in the brain, or pacemakers. The latter are used to treat arrhythmia (abnormal heart beat), which affects as many as 2.2 million Americans alone, but older models can be made of magnetic materials. There is also a risk of burning heart tissue, because some devices use leads: electrical components capped with metal to connect the device to the heart muscle.

A year or so ago, a team at Johns Hopkins University figured out some ingenious ways to safely perform MRI scans on people with implanted defibrillators and pacemakers. It wasn't a single solution, but rather a combination of methods: some very simple and obvious, like turning off a defibrillator's shocking function for the 30 to 60 minutes it takes to complete the scan; lowering the strength of the electromagnetic field and the amount of electrical energy used at peak MRI scanning; and figuring out how to "blind" the implanted devices to their external environment, making them impervious to misfiring from the MRI machine's rf field.

But babies still present a sticky wicket when it comes to medical imaging. X-rays are just too potentially dangerous to infants because of the ionizing radiation (ditto for pregnant women), so doctors are reluctant to use that tried-and-true technology. Babies tend to find CT scanning equipment big and loud, and therefore upsetting; also, again, the x-ray exposure isn't really a good idea for their tiny developing bodies. MRI is equally big and loud, made more difficult because babies tend to squirm in distress at being confined in the Big Magnetic Donut From Hell. (Heck, plenty of adult patients find the MRI procedure a bit upsetting, and/or have trouble holding still long enough to take a decent set of images.)

If the infant is premature and confined to an incubator, getting a decent brain scan to check for possible damage or irregularities is especially daunting. Doctors understandably want to minimize any time the "preemie" must be handled or exposed to loud noises (or to the germ-laden environment outside the incubator), since either can cause irregular heart rate and breathing patterns -- potentially life-threatening to such a fragile creature. And yet, as many as half of early premature babies suffer from very subtle abnormalities of the brain that can be linked to later developmental problems -- which often don't manifest until at least 10 months of age, at which point it may be too late to medically intervene. It's quite common for premature babies to fall victim to infection from the wall of the uterus or placenta, or have an inflammatory response at birth that affects the brain. Sometimes the damage is caused by something so simple as an under-developed cardiovascular system that just isn't strong enough to pump enough blood to the brain during those crucial few weeks after premature birth.

So getting an MRI scan early on could alert doctors to such conditions much more quickly. Fortunately, a couple of years ago, scientists at the University of California, San Francisco (UCSF) collaborated with General Electric to design a special incubator compatible with MRI machines. It's made entirely of plastic, aluminum or brass -- i.e., non-magnetic materials -- and small enough to be easily transported. We're talking about a double-paned Plexiglass capsule, with fresh air piped in from the outside. The infant is usually sedated, since the MRI scan can take an hour or so. Also, it's still very noisy. At least it's now easier to get the images needed to identify potential problems while effective treatment is still an option.

But what about imaging the brain when we're awake? For decades, even after the invention of CT scans and conventional MRIs, there was no way to image or observe the brain in action. Then came functional MRI. It's pretty much the same technology, with a twist: it identifies those regions of the brain where blood vessels are expanding and other chemical changes are taking place, or perhaps a few extra shots of oxygen are being delivered. That's important information, because it's an indication of metabolic activity: that region of the brain is processing information and issuing "commands" to the body. So by studying which areas show increased activity, scientists can learn which areas of the brain are activated as the patient performs any given task. There have been rather large numbers of fMRI studies performed in recent years, on everything from glossolalia (speaking in tongues) and how the brain reacts to chocolate, to the rather breathlessly reported news of an fMRI study of political affiliations that threw some folks in the scientific blogosphere into a tizzy this week over the questionable reporting.

Electroencephalography (EEG) has been the mainstay for infant brain imaging because it's safe and registers changes in brain activity very quickly. Alas, the technique can't reliably and precisely pinpoint where in the brain the activity originates. There have been a few fMRI brain scans of infants performed while said infants were asleep or sedated, but this doesn't tell researchers as much as they'd like to know about the developing infant brain. Ideally, they would like to be able to scan babies' brains as they sit comfortably on a parent's lap and/or interact with their environment. Then they could really see the baby brain in action. Infancy, after all, is a critical developmental period. That's when connections between neurons first form and break, nerve cells branch out to form networks, and various parts of the brain take on their specialized roles in vision, language, and other complex cognitive functions. When that early processing goes wrong, it can lead to learning disabilities or language impairments, among other complications.

In more recent years, high-density diffuse optical tomography (DOT) and its cousin, diffuse optical spectroscopy (DOS), have come into favor for infant brain imaging. This approach was originally developed in the 1990s by various research groups in the US, Europe, and Japan, and innovations have come fast and furious ever since. Last year, researchers at the Washington University School of Medicine in St. Louis (WUSTL), for example, announced they had developed a DOT system specifically to study the infant brain. Their system is much smaller -- about the size of a small refrigerator -- quieter and more portable than MRI or CT scanning machines, and the goal is to shrink it down even more, to about the size of a microwave. It's not just useful for basic research, either: the WUSTL system makes it possible to monitor infants with brain injuries in their incubators, making it easier to keep track of their progress and provide better treatment.

Instead of ionizing radiation like x-rays, DOT uses harmless light from the near-infrared portion of the electromagnetic spectrum. Unlike x-rays or ultrasound, near-IR light passes through bone with little attenuation, so scientists can use the diffusing light to determine blood flow and oxygenation in the blood vessels of the brain. When these characteristics increase, it indicates that the particular area of the brain being scanned is contributing in some way to the mental task at hand.

Ah, but how does one scan an infant? You can see the system in action here. It's as simple as attaching a flexible cap to the baby's head, covering whatever area the doctors want to image. (I should note that this nifty image isn't the WUSTL system, but a similar approach using near-infrared spectroscopy developed at Helsinki University of Technology in Finland. It's just such a cool, slightly creepy photo, I had to use it.) The cap might look simple on the outside, but inside it contains fiber optic cables. Some of those shed light on the head; by determining how the light is diffused or scattered, researchers can glean useful information about brain activity.

Light passes out of one fiber optic cable, diffuses through the tissue, and is received by another cable. Yes, light does diffuse through tissue, as anyone who has ever held a flashlight up to his hand can attest. According to Joseph Culver, an assistant professor of radiology at WUSTL, "The flashlight's white light becomes visibly reddened because there's a window in the near-IR region of the spectrum where human tissue absorbs relatively little of the light." From this diffusion data, the machine's computer creates a 3D tomographic image, using the balance of oxygenated versus deoxygenated hemoglobin in the blood to map brain activity.
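That last step -- turning light measurements into oxygenation -- rests on the fact that oxygenated and deoxygenated hemoglobin absorb near-IR light differently at different wavelengths. A minimal sketch of the two-wavelength bookkeeping (a modified Beer-Lambert approach; the extinction coefficients, path length, and measured values below are illustrative placeholders, not the WUSTL system's actual calibration):

```python
# Two-wavelength near-IR spectroscopy sketch: measure the change in optical
# density at two wavelengths, then solve a 2x2 linear system for the changes
# in oxygenated (HbO2) and deoxygenated (Hb) hemoglobin concentration.
# All numbers below are illustrative placeholders, not real calibration data.


def solve_2x2(a11, a12, a21, a22, b1, b2):
    """Solve [[a11, a12], [a21, a22]] @ [x, y] = [b1, b2] by Cramer's rule."""
    det = a11 * a22 - a12 * a21
    return (b1 * a22 - a12 * b2) / det, (a11 * b2 - b1 * a21) / det


# Hypothetical extinction coefficients (per mM per cm) at two wavelengths:
#                 HbO2   Hb
EXT_760NM = (0.6, 1.5)  # deoxygenated Hb absorbs more below ~800 nm
EXT_850NM = (1.1, 0.8)  # oxygenated HbO2 absorbs more above ~800 nm
PATHLENGTH_CM = 6.0     # effective optical path through tissue (assumed)


def hemoglobin_changes(d_od_760, d_od_850):
    """Return (delta_HbO2, delta_Hb) in mM from optical-density changes."""
    return solve_2x2(
        EXT_760NM[0] * PATHLENGTH_CM, EXT_760NM[1] * PATHLENGTH_CM,
        EXT_850NM[0] * PATHLENGTH_CM, EXT_850NM[1] * PATHLENGTH_CM,
        d_od_760, d_od_850,
    )


d_hbo2, d_hb = hemoglobin_changes(0.010, 0.014)
print(f"delta HbO2 = {d_hbo2:+.4f} mM, delta Hb = {d_hb:+.4f} mM")
```

The real systems do this over hundreds of source-detector pairs at once, which is where the "tomography" part comes in.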

I mentioned the Helsinki NIRS technology above; that's certainly another strong contender in the drive for safer infant brain imaging, although it's pretty much diffuse optical spectroscopy, as far as I can tell. (A rose by any other name, and all...) It's already commonly used to monitor cerebral blood flow in preemies, and now it's moving into studies of brain activity during specific cognitive or sensory tasks. Here in the US, researchers at Texas A&M University worked with neuroscientists at Massachusetts General Hospital to develop a DOT imaging apparatus that also uses a cap or headband. Theirs employs a number of light-emitting diodes and light-sensing detectors to emit and detect near-IR light, looking, as always, for changes in blood oxygen levels to form the images. (Massachusetts General is also developing a DOT system designed to work in conjunction with conventional x-ray mammography to detect breast cancer earlier, with fewer false positives and less need for follow-up biopsies, but that's a bit off-topic for this post.)

The WUSTL system is a little different from other approaches because it combines diffuse optical imaging with tomography -- i.e., a computerized approach to the data analysis that makes it possible to image sections at greater depths. The greater density of the fiber optic cables they used is what makes this possible. All this is great news, but the WUSTL system isn't quite ready for prime time: it's still being tested for safety and effectiveness. The researchers did conduct one study on human volunteers to demonstrate their system could achieve sufficient resolution for functional brain imaging. They were able to link stimulation of parts of the visual field to the activation of corresponding areas in the brain's visual cortex -- a classic functional brain imaging technique called retinotopic mapping that was also used to test the validity of earlier technologies like fMRI and positron emission tomography (PET).

I think it's safe to say that the innovations will continue to develop. Some day, all those parents wondering, "What is my baby thinking?", might be able to turn to cutting-edge, non-invasive, optical imaging technology for the answer.

There was exciting news from the Lawrence Berkeley National Laboratory last week, as researchers announced that they had performed the world's smallest double-slit experiment and determined that quantum (subatomic) particles will start behaving in accordance with classical (macroscale) physics at the size scale of a single hydrogen molecule. Quantum physicists are no doubt excitedly discussing these marvelous results with a passion most people reserve for Super Bowl Sunday. But the average reader's eyes probably just glaze over with incomprehension, leaving him/her to wonder what all the fuss is about. Truthfully? It's tough to grasp the significance of this latest quantum wrinkle without a bit of background about Thomas Young's original 1802 experiment (now the poster child of the quantum concept of particle/wave duality), as well as the historical scientific debate that raged around the nature of light. Hence today's Monster Post.

Particle or wave? That was the question. It proved to be an especially contentious issue; the debate raged for millennia, in fact. Pythagoras, in 5th century BC Greece, was staunchly "pro-particle," while Aristotle (who lived a couple hundred years later) was ridiculed by contemporaries for daring to suggest that light travels as a wave. The confusion was understandable, because empirical observations of the behavior of light contradicted each other. On the one hand, light traveled in a straight line and would bounce off a reflective surface. That's how particles behave. But it also could diffuse outward, and different beams of light could cross paths and mix together. That's undeniably wave-like behavior. In short, light had a split-personality disorder.

By the 17th century, many scientists had generally accepted the wave nature of light, but there were still holdouts in the research community -- among them no less a luminary than Sir Isaac Newton, who argued vehemently that light was composed of streams of particles that he dubbed "corpuscles." In 1672, colleagues persuaded Newton to publish his conclusions about the corpuscular nature of light in the Royal Society's Philosophical Transactions. He seemed to assume that his ideas would be greeted with unanimous cheers, and was rather put out when Robert Hooke and the Dutch physicist Christiaan Huygens were reluctant to jump on the Isaac Bandwagon. The result was an acrimonious, four-year debate. Huygens differed with Newton on such key points as how the speed of light changes as light goes from a less dense medium like air to a denser material like glass: Newton said it should increase; Huygens said it should decrease. The issue remained largely untested because at the time there was no good way to measure the changes in speed.

Ultimately, Newton's stature as one of the greatest physicists of all time ensured that his notion of streams of corpuscles won out over the wave theory of light -- until that cheeky over-achieving upstart, Thomas Young, appeared on the scene almost a century later. Young was the oldest of 10 children born to a Quaker family in Somerset, England, and proved to be alarmingly precocious. He could read by the age of 2, learned Latin by age 6, and by the time he was 14, he'd added Greek, French, Italian, Hebrew, Chaldean, Syriac, Samaritan, Arabic, Persian, Turkish, and Amharic to his linguistic repertoire. His facility with languages served him well later in life, when he became fascinated with Egyptian hieroglyphics and played a key role in cracking the code of the Rosetta Stone by deciphering several Egyptian cartouches.

Young first studied medicine at Cambridge, then earned a physics doctorate in Gottingen before setting up shop as a physician in London. By age 28, he'd been appointed a professor of natural philosophy at the Royal Institution, delivering lectures about his experiments in everything from optics, acoustics, climate, and the nature of heat, to electricity, hydrodynamics, astronomy, gravitation, and measurement techniques. The term "polymath" hardly seems to do him justice; his fellow students at Cambridge used to call him "Phenomenon Young." No wonder his epitaph at Westminster Abbey salutes him as "...a man alike eminent in almost every department of human learning."

Ah, but could this brilliant young phenomenon take on The Goliath of Physics and win? Young was actually a huge fan of Newton and based his early work on color and vision on the insights Newton gleaned from his experimentum crucis. But that didn't mean he accepted the Great Man's conclusions without question. His pivotal experiment didn't start out as the poster child for the quantum concept of wave/particle duality; like every other scientist of his day, Young found the notion that light might be both simply inconceivable. So he designed an experiment he believed would settle the matter once and for all.

Naturally, a darkened room was involved, along with a light source (probably a candle, or sunlight, this being the early 19th century). Young shone the light onto a barrier in which he'd cut two narrow, parallel slits, a fraction of an inch apart. On the other side was a white screen. He reasoned that if light were made of particles, as Newton claimed, the screen would show two bright parallel lines where the light particles had passed through one slit or the other. But if light were a wave, it would pass through both slits, separating into secondary waves that would then recombine on the other side -- i.e., they would interfere with each other.

It's a bit like water waves, which have crests and troughs. As the secondary light waves recombine on the other side, wherever two crests (or two troughs) line up exactly, they produce a bright spot of light. Wherever a crest and a trough line up exactly, they cancel each other out, leaving a dark spot on the screen. The resulting "interference pattern" is thus a series of alternating dark and light bands. And that's exactly what Young observed, even making his own sketch of the interference pattern. Light, per his experiment, was undeniably a wave.
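For the numerically inclined, the geometry of those bands boils down to one rule: bright bands appear wherever the path difference between the two slits is a whole number of wavelengths. Here's a minimal Python sketch of the idealized far-field pattern -- all the numbers are my own plausible choices, not Young's actual apparatus:

```python
import math

# Idealized far-field double-slit intensity (narrow slits, monochromatic light).
# Bright bands occur where the path difference d*sin(theta) is a whole number
# of wavelengths; dark bands where it is a half-odd number.
wavelength = 550e-9   # green light, metres (illustrative)
d = 0.5e-3            # slit separation, metres (illustrative)
L = 1.0               # distance from slits to screen, metres

def intensity(x):
    """Relative brightness (0..1) at position x on the screen."""
    theta = math.atan2(x, L)
    phase = math.pi * d * math.sin(theta) / wavelength
    return math.cos(phase) ** 2

# Bright bands repeat on the screen with spacing x = L * wavelength / d
fringe = L * wavelength / d
print(f"fringe spacing: {fringe*1e3:.2f} mm")                   # 1.10 mm
print(f"centre of screen (bright): {intensity(0):.2f}")         # 1.00
print(f"half a fringe out (dark): {intensity(fringe/2):.2f}")   # 0.00
```

The fringe-spacing formula L·λ/d is also why the slits must be so close together: widen the separation and the bands crowd together until they're too fine to see.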

Young was understandably pretty chuffed at the success of this experiment, which offered the strongest evidence to date in favor of the wave theory of light. He applied his findings to explain the shifting colors found in thin films, such as soap bubbles, and even tied the seven colors of Newton's rainbow to wavelength, calculating what each color's approximate wavelength would have to be to produce that particular color of light.
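Running that fringe geometry in reverse gives a feel for how Young could pull wavelengths out of his measurements: measure the band spacing for each color, and the wavelength follows. A hedged sketch (the slit separation, screen distance, and spacings below are my illustrative numbers, not Young's data):

```python
# Inverting the fringe formula: bright bands repeat every x = L * wavelength / d,
# so measuring the band spacing x gives wavelength = x * d / L.
d = 0.5e-3   # slit separation, metres (illustrative)
L = 1.0      # slits-to-screen distance, metres (illustrative)

def wavelength_from_fringes(fringe_spacing_m):
    """Recover the wavelength from a measured fringe spacing."""
    return fringe_spacing_m * d / L

# Hypothetical measured spacings for three colours, in millimetres
for colour, spacing_mm in [("red", 1.35), ("green", 1.10), ("violet", 0.80)]:
    lam = wavelength_from_fringes(spacing_mm * 1e-3)
    print(f"{colour}: ~{lam*1e9:.0f} nm")  # red 675, green 550, violet 400
```

Longer wavelengths (red) spread the bands farther apart than shorter ones (violet), which is the lever Young used to put approximate numbers on Newton's rainbow.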

Alas, his euphoria was short-lived: the pro-Newton crowd lost no time in bashing Young's experimental findings. One simply didn't question the Great One, even 80 years after Newton's death. Online encyclopedist David Darling memorably described it as "the scientific equivalent of hara-kiri." Young was too, well, young to know better. Newton's place in the pantheon ensured that the scientific community largely ignored Young's pivotal experiment for a good 10 years, bolstered by a simply savage review of his work in the Edinburgh Review (published anonymously in 1803, later revealed to have been authored by one Lord Henry Brougham, a big-time Isaac acolyte).

Fortunately for the wave-friendly fans of light, French physicist Augustin Fresnel conducted a series of more comprehensive demonstrations of Young's basic experimental setup, succeeding (where Young had failed) in convincing the world's scientists that light really was a series of waves, rather than streams of tiny particles. And in the mid-19th century, another Frenchman, Leon Foucault, proved that Huygens had been correct -- and Newton mistaken -- in his assertion that light travels more slowly in water than in air. Given the acrimony Huygens experienced from Newton for sticking to his guns on this issue, one would understand if the Dutch scientist indulged in a little "Nyah, nyah, nyah" type of gloating from beyond the grave. (It probably helped that the French were a bit less worshipful of Newton than the Brits. Jen-Luc Piquant urges us to remember that even the greatest scientists are often wrong. Huygens, in fact, was partially responsible for advancing the notion that light waves travel via an invisible substance called the luminiferous ether, later disproved by the famed Michelson-Morley experiment in 1887.)

There were a bunch of other breakthroughs going on at the same time, of course, and taken together, everything added up to strong support for the "light is a wave" school of thought. Case closed. Or so physicists thought as the 19th century drew to a close. But light had a few more surprises in store for them with the birth of quantum mechanics. It's too long a story to go into here, but Max Planck, Albert Einstein, and Arthur Compton were among the luminaries whose work led to the realization that light was both particle and wave: specifically, light is made of photons that collectively behave as a wave.

Sounds simple enough. Except quantum mechanics is never that simple. The revolution didn't end there. Quantum theory predicted that even an individual photon could behave like a wave, and essentially interfere with itself. For a long time, there was no way to test this prediction. But eventually technology and scientific instrumentation advanced to the point where researchers could emit and detect single photons. The modern version of the experiment looks like this. First, we need a researcher -- let's say, Paris Hilton, just to stretch your powers of imagination a little. Paris sets up a simple light source in front of a barrier with two small slits cut into it, with a light-sensitive screen on the other side to record the pattern of incoming light. Paris turns on the light source and -- once she recovers from being hypnotized by the shiny beams -- sends a series of photons, one photon at a time, toward the two slits in the barrier.

We're talking about single particles here, so the photons should only be able to go through one slit or the other, and just strike the screen like so many tiny ping-pong balls. Instead, Paris is stunned to find that the light forms that telltale interference pattern -- alternating bands of dark and light -- on the screen on the other side. What the heck? This means that those single photons are behaving like waves; each photon somehow travels through both slits and interferes with itself on the other side.

Now Paris wants to know more. This is a woman who reads Sun Tzu, after all; her natural curiosity drives her to repeat the experiment with an extra twist: she places particle detectors by each of the slits, so that she can verify that the photons do, in fact, each go through both slits at the same time. Except this time, she doesn't get the interference pattern; she gets the ping-pong ball effect, which means that each photon is now behaving like a single particle, passing through one slit or the other. Are the photons just messing with her? Unable to cope with the quantum conundrum, Paris Hilton's head explodes. Millions rejoice. Tabloids mourn. And those mischievous photons give an evil cackle of delight at having claimed another victim.

The good news is, the photons aren't deliberately messing with our heads. There is an explanation for the two results, but it's an explanation that defies common sense. Instead of merely tweaking her first experiment, thanks to the addition of the particle detectors, Paris unwittingly performed a completely different experiment the second time around. In the first version, she's making a wave measurement; in the second, she's making a particle measurement. The kind of measurement she chooses to make determines the outcome of the experiment. Basically, if Paris just lets the photons travel from the light source to the screen undisturbed, they behave like waves and she sees the interference pattern. But if she observes them en route, she knows which path the photons took; this knowledge forces them to behave like particles, passing through one slit or the other. Paris can construct her experiment to produce an interference pattern, or to determine which way the single photons went. But she can't do both at the same time. Heisenberg's Uncertainty Principle rears its ugly head.
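The statistics of Paris's two experiments can be mimicked with a toy simulation -- strictly a cartoon, not real quantum machinery, and every number in it is my own invention. Without which-path detectors, single hits pile up into a fringed distribution; with detectors, the fringes wash out:

```python
import math
import random

# Toy Monte Carlo of the single-photon double slit (a cartoon of the
# statistics, not actual quantum mechanics). Each "photon" lands at a
# screen position drawn, via rejection sampling, from one of two densities:
#   - no detectors:   smooth envelope times cos^2 fringes (wave behavior)
#   - with detectors: the smooth envelope alone (particle behavior)
random.seed(42)

D = 6.0      # fringe frequency scale (arbitrary units)
SIGMA = 1.0  # width of the overall envelope

def sample_hits(which_path, n=10_000):
    """Return n screen positions, with or without which-path information."""
    hits = []
    while len(hits) < n:
        x = random.uniform(-4, 4)
        envelope = math.exp(-x * x / (2 * SIGMA * SIGMA))
        p = envelope if which_path else envelope * math.cos(D * x / 2) ** 2
        if random.random() < p:
            hits.append(x)
    return hits

def fringe_contrast(hits):
    """Mean of cos(D*x): near zero for a smooth blob, clearly positive if fringed."""
    return sum(math.cos(D * x) for x in hits) / len(hits)

print("no detectors  :", round(fringe_contrast(sample_hits(False)), 2))  # fringes present
print("with detectors:", round(fringe_contrast(sample_hits(True)), 2))   # fringes washed out
```

The point of the contrast statistic is that any individual run records only single, particle-like hits; the wave behavior shows up only in where thousands of hits accumulate.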

Hence the opening line of the UC-Berkeley press release: "The big world of classical physics mostly seems sensible: waves are waves and particles are particles, and the moon rises whether anyone watches or not. The tiny quantum world is different: particles are waves (and vice versa), and quantum systems remain in a state of multiple possibilities until they are measured -- which amounts to an intrusion by an observer [Paris Hilton!] from the big world -- and forced to choose: the exact position or momentum of an electron, say."

There are a lot of really big ideas contained in those two sentences, more than we can even attempt to discuss intelligently in a single blog post. Vast tomes have been written about this, countless papers are published each year in academic journals -- including the one describing the latest version of the double-slit experiment that the Berkeley Lab group performed. (Actually, LBL collaborated with scientists at the University of Frankfurt in Germany, Kansas State University and Auburn University.) We were most impressed with the sheer ingenuity of how they constructed their experimental set-up. They used the two proton nuclei of a hydrogen molecule as the two "slits," separated by a mere ten-billionth of a meter.

The tricky part is to separate the component parts of the hydrogen molecules in the first place. How the heck did they manage that? It helps if you have access to a couple of x-ray beam lines at LBL's Advanced Light Source. All you need to do (are you taking notes, Paris?) is send a stream of hydrogen gas through the apparatus into an "interaction region" (the equivalent of an enclosed chamber, would be my guess), where some of the hydrogen molecules run afoul of that nasty x-ray beam, which has sufficient energy to knock off each hydrogen molecule's two negatively charged electrons. Without that negative charge to balance things out, the two positively charged protons that form the nucleus of the molecule blow apart from the powerful mutual repulsion. The LBL researchers then used an electric field to separate the particles according to charge, sending the protons to one detector and the electrons to a detector in the opposite direction. Genius!

LBL researcher Ali Belkacem calls this "a kinematically complete experiment," because it accounts for every single particle, enabling them to figure out all kinds of things, like "the momentum of the particles, the initial orientation and distance between the protons, and the momentum of the electrons." It's not just photons that exhibit wave/particle duality: electrons do, too. So even a single electron is capable of interfering with itself. Just as in the classical version of the experiment, the scientists could study the electrons as particles, or as waves. For instance, they found that once the electrons were knocked off the hydrogen molecule, one was fast and one was slow, giving them an assortment of both fast and slow electrons to work with.

Mostly they were interested in the interference pattern, particularly at what point it disappeared. They essentially turned the slower electrons into teensy particle detectors by boosting their energy levels just a tad. For reasons that remain unclear to me -- I invite any quantum physicists to offer their explanation in the comments -- this turns the slow electrons into "observers." They are "big" enough to interact with the classical domain. So the interference pattern disappears and the electrons behave almost like a classical system. I say "almost," because apparently they still retain some signs of entanglement (what Einstein called "spooky action at a distance"). [UPDATE: Chad at Uncertain Principles goes into more of the technical details behind this new experiment, while the mysterious Statistical Deviations blogger offers a possible explanation of how boosting a slow electron's energy slightly makes it "big" enough to act as an "observer."]

So there you have it: the world's smallest double-slit experiment. And now we must go rest our poor aching head, perhaps by watching a couple of sitcoms or reading about Paris' latest tabloid exploits (aiding drunken elephants in Africa? I think not). She'd get into far less trouble if she'd just stick with her quantum physics research. Then again, when's the last time Larry King bothered to interview a quantum physicist?

Last year a close friend of mine lost his elderly father to complications from Parkinson's Disease. This was a once-vibrant, keenly intelligent, accomplished man who'd been ravaged by a steady decline in neural and motor function, plus the unpleasant side effects that accompanied some of the medications to treat his condition. The decline occurred over the course of 15 long years, and while losing his father was devastating for my friend -- as losing a loved one always is -- I suspect that his father's passing was also something of a relief, since it was tough to watch that slow inevitable decline, knowing the man his father had once been. [UPDATE: We join our Spousal Unit and countless others in the physics community who mourn the passing of Harvard physicist Sidney Coleman, who suffered from Parkinson's for several years, and was a truly incomparable human and scientist.]

It's safe to say that most people have heard of Parkinson's Disease, at least in passing, if only because actor Michael J. Fox (Family Ties, Spin City, the Back to the Future film franchise) suffers from it, and established an eponymous Foundation for Parkinson's Research to promote R&D to treat and hopefully one day cure the disease. To his credit, Fox hasn't shied away from going public with his condition, or from occasionally displaying the terrible impact it has had on his motor functions. Sure, for a guest TV stint, while he was still in the early stages, he kept his hands in his pockets to hide their shaking, but his controversial TV ad espousing stem cell research last year was a poignant reminder of just how debilitating the disease can be.

Sadly, people lost no time politicizing the personal, using as an excuse the fact that the ad was, technically, associated with a Missouri Democratic effort to pass a bill on stem cell funding. That belligerent blowhard, Rush Limbaugh, predictably wasted no time denouncing Fox's on-camera behavior as "exaggerated," concluding that either he hadn't been taking his medications, or he was "acting" to gain the sympathy vote. To which one can only respond (and many people did): oh, please. Wherever you stand on the issue of stem cell research (we are heartily in favor, for the record, and pleased that the state of California supports it, too), accusing someone who has such a debilitating disease of "faking it" to further your own political agenda is ethically beyond the pale. Because Parkinson's Disease is first and foremost a personal tragedy, not just for the sufferer, but for his/her loved ones as well. It's a bit like Alzheimer's in that respect -- another degenerative disease that could benefit from increased stem cell research.

Parkinson's is officially defined as a progressive, degenerative neurological disorder of the central nervous system that afflicts somewhere between 0.5 and 1.5 million people in the US alone, and as many as 6 million people worldwide. The name derives from the British physician James Parkinson, who was the first to formally characterize the disease (then known as paralysis agitans) and its symptoms in 1817 in his treatise, An Essay on the Shaking Palsy.

There's a region of the brain called the substantia nigra, whose cells produce the chemical neurotransmitter dopamine, which is responsible for transmitting signals within the brain that allow for coordination of movement. Without enough dopamine, neurons fire without the usual level of control, so sufferers become less able over time to direct or control their movements. Nobody knows exactly what causes the loss of dopamine and, hence, the disease, but studies point to a combination of genetic and environmental factors. Early signs include the onset of tremor: a telltale slight shaking in the hand, sometimes even in the legs. There may also be some cognitive or speech impairment: mild memory loss, perhaps, or soft, mumbling speech.

One of the most common symptoms of advancing Parkinson's is increasing difficulty walking: as the disease progresses, sufferers tend to take little stutter steps, exhibiting a shuffling walk, an unsteady gait, and a stooped posture. There's a medical term for it: akinesia (or bradykinesia, if you're referring to the whole range of slowed movement that accompanies the progression of the disease).

Usually this is treated with a drug called levodopa, or L-dopa, a chemical precursor of dopamine that can be found naturally in plants and animals. The brain's nerve cells convert L-dopa into dopamine, often reversing many of the more disabling symptoms of Parkinson's. (One cannot, alas, just administer dopamine directly, because it can't cross the protective blood-brain barrier -- that meshwork of tightly packed cells in the walls of the brain's capillaries that serves to screen out certain substances. L-dopa can penetrate that barrier, although it diffuses so much that only a small amount actually gets to the brain. Combining L-dopa with another drug, carbidopa, makes it a bit more effective, and also reduces some of the unpleasant side effects, which include involuntary movements, hallucinations, a drop in blood pressure when standing, and nausea.)

Then there are physical and/or occupational therapies, to retard the degradation of motor function. Along with akinesia, there's also an intriguing counter-phenomenon called kinesia paradoxa, in which Parkinson's patients begin to walk normally when triggered by the placement of physical obstacles at their feet. Walking up regularly spaced stairs, for example, can "unfreeze" someone who otherwise has difficulty walking with a normal gait. The same effect can be achieved with black and white tiles spaced evenly on a floor. The black tiles appear as objects to avoid, or as guides, thereby triggering a reflexive landing of the feet between the black spaces.

In general, though, most "normal" or "natural" environments are sadly lacking in these critical visual cues. This is where a California podiatrist named Tom Reiss comes in. He was diagnosed with Parkinson's at the age of 33 -- like Fox, struck down in his prime -- and wasn't content to merely accept the associated reduced motor functions. Instead, he spent 16 years working in his garage, using himself as the main test subject, trying to figure out adaptive methods for countering his impaired walking abilities. He noticed the kinesia paradoxa effect while shopping at Safeway, because the floors were lined with evenly-spaced black and white tiles, and also noticed the same thing happened when walking up stairs. Reiss compares the visual cues to an array of ordinary playing cards, displayed like the rungs of a ladder, with the subject landing his feet between the rungs.

He built several prototype cueing devices over the years, some of them owing more than a little to quirky inventor Rube Goldberg's influence. He used coat hangers to attach playing cards to the tips of his tennis shoes, for example, because the cards appeared to be evenly spaced objects as he walked -- an approach he describes as "not too socially acceptable, but it worked!" Then he heard about an off-the-shelf Virtual Vision Sport Personal Viewing system, essentially a pair of "augmented reality" goggles that superimpose a virtual TV screen image onto the real world a few feet ahead of the wearer, in such a way that both were visible at the same time. So the wearer could watch TV while mowing the lawn, for example.

For Reiss, the system offered the perfect solution for his impaired walking abilities, and he ended up working with the University of Washington's Human Interface Technology lab to demonstrate the effectiveness of his glasses concept. The first time he donned the prototype goggles, he saw the real world augmented by a series of small, colored moving squares, projected into space at regular intervals in front of him. Wearing them, he could walk normally. So he founded his own tiny start-up company, HMD Therapeutics, to further develop his invention, which he called Central Field Cueing Device Glasses.

The glasses have an array of light-emitting diodes (LEDs) working in conjunction with a computer chip embedded in the sidebar of what would otherwise be a fairly typical pair of wraparound sunglasses. The LEDs produce a pattern of thin, horizontal virtual lines projected onto a transparent screen across the wearer's entire field of view. The lines appear to scroll towards the wearer in an even flow when s/he is stationary, but appear stationary when s/he is walking. Should the user look up, the lines disappear entirely. There are also built-in mercury switches to track head movements so the glasses can "choreograph" the appropriate visual cues.

The glasses are light, portable, relatively inexpensive, easily scaled up for commercial manufacture and offer hands-free control to the user. Most importantly, they're pretty stylin'. These are not the visual equivalent of orthopedic shoes, or some bizarre cyber-punk apparatus (although I suspect there might be a market for that, too); Reiss recognizes the need to make his product attractive. Parkinson's sufferers don't need to feel any more like outsiders than they already are.

Reiss's invention may soon be available to the public at large, thanks to a company called Enhanced Vision, which has licensed his patented technology to refine and perfect the prototypes into a viable commercial product. And he's continued making improvements to the original design, thanks to a Small Business Innovative Research (SBIR) grant. A more recent version, called Virtual Vection Glasses, uses LEDs to generate peripheral cues to not only promote a "normal" walking gait, but also to help suppress another complication of Parkinson's, dyskinesia -- the jerky, uncontrolled movements so painfully evident in Fox's televised plea on behalf of stem cell research, resulting from the body's attempts to process "apparently irrational peripheral movements," and from the excessive build-up of L-dopa in the body.

It's an inspiring story, and Reiss has deservedly received his proverbial 15 minutes of media exposure because of it. But ultimately, it's about reclaiming the life that Parkinson's Disease gradually takes away from sufferers. We fully expect one day soon to see Reiss, Fox and any other sufferers happily strolling along city streets and parks with fully normal gaits, outfitted with stylish sunglasses that only they know serve more purpose than merely blocking out the sun's blinding rays.

While the Spousal Unit and I were entertaining an out-of-town friend at Takami, (a great new downtown LA rooftop neo-sushi bar) on Friday night, one of our local pals was really getting into the spirit of things at a UCLA Physics Department bash. He allowed himself to be persuaded to walk across a bed of piping hot coals in a demonstration of firewalking. Now that's a committed physics professor! The upshot? As we were downing our second Lotus Blossom martini and savoring the spicy albacore tuna sashimi -- a specialite de la maison -- poor David was being escorted home with badly blistered feet, and spent the rest of the night soaking his very sore soles in cold water. His enterprising students caught the whole thing on video, and thoughtfully posted it on the Internets for your viewing pleasure. (Listen carefully, and you can hear David exclaim at the very end, "This actually hurts...")

I'm sympathetic to his suffering, unlike Jen-Luc Piquant, who revels in the misfortunes of others. (Schadenfreude is her middle name, or it would be, should she ever get it legally changed from the decidedly non-mellifluous "Marie-Evangeliste.") But some obvious questions arose in my mind. First, the UCLA party was supposed to be Tesla-themed, so what the heck does firewalking have to do with that? Tesla played with electricity, not actual fire, although both can burn, and Tesla did like showy demonstrations with a strong possibility of injury. Maybe that was the rationale. Second, and more to the point, shouldn't a guy smart enough to get a PhD in physics know better than to walk across hot coals?

Like many things in life, firewalking is a bit of a misnomer, since people are really walking barefoot over a bed of hot coals rather than actual flames. The earliest known reference to the practice can be found in an Indian story dating to around 1200 BC, but firewalking shows up in cultures all over the world, spanning thousands of years. Not surprisingly, it's often associated with religious rituals (e.g., in certain Eastern Orthodox communities in Greece and Bulgaria), or done to demonstrate the mystical powers of, say, Indian fakirs. You'll also find firewalking in Polynesia, among Japanese Taoists and Buddhists, and performed by certain bushmen in the Kalahari desert as part of their healing ceremonies.

In modern-day America, the practice is more crassly commercial. Sometime in the 1970s, an enterprising snake oil salesman (er, motivational author) named Tolly Burkan started offering evening firewalking courses to the public, selling it as a kind of New Age way of confronting one's fears and asserting mind over matter, emerging with a stronger sense of empowerment as a result. Or something. Think "Fear to Power" instead of "Will to Power." He's since founded The Firewalking Institute for Research and Education, and describes the practice as "a method of overcoming limiting beliefs, phobias, and fears." But he doesn't claim that anything supernatural or paranormal is necessarily going on, which is smart, because the science behind firewalking has by now been pretty well documented.

David isn't the first scientifically minded sort to engage in firewalking: noted skeptic Michael Shermer has done it, as has Jearl Walker, a former columnist for Scientific American who has performed firewalking and other insane feats in classroom demonstrations, memorably commenting, "There is no classroom demonstration so riveting as one in which the teacher may die." (No doubt David's physics students would agree.) A physics professor in Pittsburgh named David Willey does it all the time, and has arguably done the most in recent years to spread the word about the underlying physics behind safe firewalking.

There's even been the odd scientific study of the firewalking phenomenon, beginning with one performed in the mid-1930s by the University of London Council for Psychical Research. That study concluded that the secret of the successful firewalk is as simple as the low thermal conductivity of the burning wood-turned-to-coal, an insulating layer of ash, and the short time of contact between the hot coals and the soles of the feet. Per Willey: "What I believe happens when one walks on fire is that on each step the foot absorbs relatively little heat from the embers that are cooled, because they are poor conductors, that do not have much internal energy to transmit as heat, and further that the layer of cooled charcoal between the foot and the rest of the hot embers insulates them from the coals." (Check out some of the "firewalking" hyperlinks if you want more on the specifics of heat conduction, etc.)
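A quick back-of-the-envelope calculation shows why the conductivity argument works. When two bodies touch, the interface briefly settles at a "contact temperature" weighted by each material's thermal effusivity, e = sqrt(k·ρ·c). The material property values below are rough textbook figures of my choosing, not measurements from anyone's actual coal bed:

```python
import math

# Contact-temperature estimate for foot-on-embers (standard semi-infinite
# body approximation). The interface temperature between two touching bodies
# is weighted by thermal effusivity e = sqrt(k * rho * c): the material with
# the larger effusivity "wins." Skin's effusivity dwarfs that of glowing
# charcoal topped with ash, so the interface stays far closer to skin
# temperature than to ember temperature. All values are rough and assumed.

def effusivity(k, rho, c):
    """k: W/(m*K), rho: kg/m^3, c: J/(kg*K) -> effusivity in SI units."""
    return math.sqrt(k * rho * c)

e_skin = effusivity(0.37, 1100.0, 3500.0)  # human skin, roughly
e_coal = effusivity(0.06, 250.0, 1000.0)   # charcoal embers with ash, roughly

def contact_temp(t_skin, t_coal):
    """Instantaneous interface temperature when the two surfaces touch."""
    return (e_skin * t_skin + e_coal * t_coal) / (e_skin + e_coal)

t = contact_temp(37.0, 538.0)  # body temperature vs. a ~538 C coal bed
print(f"interface temperature: ~{t:.0f} C")  # ~84 C
```

Eighty-odd degrees Celsius would still blister skin under sustained contact, which is exactly why the short contact time per step (and not lingering!) is the other half of the recipe.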

Armed with that kind of background knowledge, it's not surprising David figured he'd try his hand at it. But if firewalking is supposed to be so safe, and un-magical, and rooted in sound science, why did he get blisters all over the soles of his feet? Well, as with most scientific experiments, you have to set things up and perform them correctly to replicate successful results; there's not much margin for error. People do get hurt in such stunts; in 2002, about 20 managers with the KFC fast food chain in Australia were treated for firewalking-related burns. (Insert your own lame "fried chickens" joke here.) Guess that whole "mind over matter, confronting your fears, blah, blah, blah" mantra didn't work so well for them.

I wasn't there to witness David's firewalking, so I can only surmise about what might have gone wrong. The most obvious explanation is that David lingered a bit too long in one place while walking over the bed of hot coals. Except I did see the video, and he seemed to be moving across it pretty quickly. (I, personally, would have bolted across. While wearing protective, flame-retardant shoes.) So maybe there was something not quite right with the set-up. It's critical that the coals be allowed to burn down sufficiently so that they are at a relatively comfy 538 degrees Celsius or so (1000 degrees Fahrenheit), preferably with a thin layer of ash over them providing a bit of extra insulation. This process also burns off any excess water content in the coals; any remaining water would increase both the heat capacity and thermal conductivity of the coals. It's equally critical to make sure no bits of metal have found their way into the coals, because metal has very high thermal conductivity.

Perhaps David could have further lessened his risk of injury by dampening his feet beforehand (the so-called "Leidenfrost" effect, in which a thin layer of sweat or water instantly forms an insulating boundary layer of steam when exposed to intense heat). However, per Willey, this probably isn't a major factor. For one thing, it carries an added risk of coals sticking to your feet as you walk -- increasing exposure time and therefore causing the soles to burn more than if you just crossed with dry feet. (Willey prefers firewalking with dry feet, and also places a water-soaked carpet remnant at the end of the walk for immediate cooling.) Maybe it's something as simple as the fact that D. has very thin soles, and/or insufficiently calloused feet.

The upshot is that David put his faith in theory, trusting that it would be borne out by experiment, further bolstered by the knowledge that it had been borne out by experiment in the past (alas, conveniently neglecting to fully consider the numerous occasions when the experiment failed). It's always a sad thing when scientific experiments don't quite work. Consider the following exchange posted on Overheard in New York, which supposedly took place in a physics lab at City College of New York, after a less-than-satisfactory experimental result. A student points to the equipment and asks, "Um, is this broken?" And the professor (identified as being "Russian") sighs defeatedly, "No. Nothing is broken, except my heart."

For David, it's probably less the intangible pain of a broken heart over a failed experiment, and more the physical discomfort associated with "Yowza, these burn blisters hurt!" Nonetheless, I think he learned a lot from the experience, per his email after I told him I was planning a blog post on firewalking in his honor: "You can tell people that it really hurt, and no creams or sprays helped, not even 30% benzocaine. But ice water worked like a charm." His advice for any aspiring firewalkers? "It's a good idea to hoard Vicodin in advance, which I neglected to do." Heed his words, impressionable young people: David has suffered so you don't have to.

He also had a question of his own, namely, "Why does a burn feel hot even hours after the burn? It must have something to do with swelling, but why would that feel like a burn when other causes of swelling do not?" Good question. If pressed, I'd probably fall back on the first stage of wound healing: inflammation. Blood rushes to the wound site carrying new cells and other useful components for rebuilding tissue, then carries away dead cells, bacteria, and the like. Which in turn makes the wound site feel hot. But it's not the most satisfactory explanation, so commenters should feel free to weigh in with their own thoughts about why this is so. It's not like I've thoroughly consulted WebMD (a.k.a., "The Hypochondriac's Bible") on the subject. (I've been avoiding the site ever since it chastised me for being, like, the millionth person to search the terms "chronic headache" and "brain tumor." Quoth the site (in heavy underlined text): "Most headaches are not an indication of brain tumors." Implied message: "So stop asking us and take some Excedrin already!")

While David is waiting for his feet to heal, and contemplating his own folly while wondering where the firewalking experiment went so horribly wrong, we offer this funky YouTube video of a surfing rats experiment for his amusement. They're called The Radical Rodents, and we think they could give Tyson, the famous Skateboarding Bulldog, a run for his money in the YouTube "Most Downloaded" video awards category. If nothing else, they can take David's mind off the blistering pain. (UPDATE: He can also bask in the glory of being quoted in USA Today.) And once he's mobile again, we'll treat him to Lotus Blossom martinis at Takami, where he can regale the wait staff with tales of his derring-do.

It seems I moved away from Washington, DC, just in time: Earlier this week, the Washington Post announced the arrival of The Rumbler, a high-tech blaster now being used in conjunction with the traditional siren on a select few police cars in the District. Imagine: you're in the car, stopped at the corner of 16th and R during rush hour, waiting for the long line of cars (and one Metrobus) ahead of you to start moving now that the driver of the Hyundai has finally managed to make that left turn. Suddenly, the ground literally starts to shake. Then you hear the telltale siren. I don't know about you, but I'd be thinking "terrorist attack," sooner than I'd be thinking, "Hey, must be a police car -- I'd better try to move out of the way!" (Here in Los Angeles, we'd all be thinking "earthquake.")

There's nothing more high-tech about the Rumbler than some expensive woofers and an amplifier. It uses low-frequency sound waves to shake things up (even your rear-view mirror), and it's being implemented because police officers have complained that they've been having trouble getting drivers to move out of the way. (Personally, I often found it impossible to budge in any direction during rush hour in DC, however badly I wanted to move aside for a police car or ambulance.) What with the rise in iPods, cell phones, and blasting car stereos, a lot of drivers claim they can't hear the sirens. With the Rumbler, even if they can't hear the siren, they can damn well feel it, at least for 10 seconds, after which it automatically turns off. That's a boon for the deaf or hard-of-hearing; I'm guessing it'll be more of a nuisance for everyone else. But hey -- it's always fun to get new toys, and I'm sure the DC police are well deserving of some spiffy upgrades.

Vibrations have been in the news all week -- must be something in the air. A less annoying buzz, rather than a powerful vibration, might hold the key to reducing fat and building up bone without resorting to drugs, according to an October 30 article in the New York Times by Gina Kolata. A SUNY-Stony Brook researcher named Clinton Rubin put a bunch of mice on a tiny buzzing platform for 15 minutes a day, five days a week. At the end of the experiment, 15 weeks later, those mice had 27% less fat than the mice who didn't get a buzz, along with more bone. His explanation? The fat precursor cells are turning into bone instead of fat. It's a responsible article; Kolata's a careful reporter. Everyone quoted cautions the public not to jump to conclusions, even those who think the findings are "provocative" and worthy of further research. Heck, Rubin himself, while sticking to his guns about his results, admits it sounds implausible.

Scientists now know that a stem cell in bone marrow can turn into either fat or bone, depending on the signal it receives. People suffering from osteoporosis have bones that are not just thinning, but also becoming lacy in texture, and those holes in the bone marrow fill up with fat. And there does seem to be an "exercise effect" that increases bone density, most noticeable in professional athletes -- professional tennis players, for example, who apparently have 35% more bone in their playing arm.

The assumption is that it arises from all those forceful impacts. But over the years, Rubin has found that the predominant signals affecting bone were "high in frequency but low in magnitude" -- a buzzing instead of a pounding. Hence the experiments with mice on tiny buzzing platforms. He even co-founded a company to manufacture the vibrating plates. And his findings do seem to support that hypothesis, although skeptics argue that it's also possible the mice lost the fat because they were burning more calories from the effort of trying to stay on the vibrating platform. (What might be a buzzing to us could feel like major tremors to a mouse.)

The skeptics might very well have a point. After all, the latest rage in cutting-edge exercise regimes is the vibrating exercise platform, like the Power Plate, currently favored by several celebrities, most notably Madonna. The idea is that doing squats and such on a vibrating platform will strengthen muscles, increase flexibility and build bone. But those vibrations might not be all good, according to this October 29 article in the Los Angeles Times. The machines exceed occupational safety standards that apply to trucks and heavy machinery, according to researchers at the Johnson Space Center in Houston, who measured the direction and magnitude of the vibrations produced by the Power Plate and a second exercise device, the Galileo 2000.

Too much shaking, even in low doses over a long period of time, has a cumulative effect, leading to things like spinal injuries, osteoarthritis, and visual impairment, among other complications. So are the exercise machines doing more harm than good over the long haul? The jury's still out. It might not be a fair comparison. Advocates of vibrational training say that proper body alignment in a controlled environment isn't the same thing as randomly applying vibrations to a passive seated body, like someone sitting in a helicopter or operating a tractor. One assumes their famous clients would agree. Although personally, the way Madonna's arms have been looking since she started her Power Plate regime, the results are just as scary as the prospect of osteoarthritis...

Physics Cocktails

Heavy G

The perfect pick-me-up when gravity gets you down.
2 oz Tequila
2 oz Triple sec
2 oz Rose's sweetened lime juice
7-Up or Sprite
Mix tequila, triple sec and lime juice in a shaker and pour into a margarita glass. (Salted rim and ice are optional.) Top off with 7-Up/Sprite and let the weight of the world lift off your shoulders.

Any mad scientist will tell you that flames make drinking more fun. What good is science if no one gets hurt?
1 oz Midori melon liqueur
1-1/2 oz sour mix
1 splash soda water
151 proof rum
Mix melon liqueur, sour mix and soda water with ice in shaker. Shake and strain into martini glass. Top with rum and ignite. Try to take over the world.