"Takes 1 part pop culture, 1 part science, and mixes vigorously with a shakerful of passion."
-- Typepad (Featured Blog)

"In this elegantly written blog, stories about science and technology come to life as effortlessly as everyday chatter about politics, celebrities, and vacations."
-- Fast Company ("The Top 10 Websites You've Never Heard Of")

[NOTE: This post originally appeared at our new home at Scientific American.]

A few years ago, soon after moving to Los Angeles, an old grad school buddy of the Time Lord's, Brian Schmidt, came to town, and we took him to a nearby tapas eatery for nibbles and pisco sours. I remember they were shooting a scene from a Will Smith movie that night, so nearby storefronts were riddled with fake bullet holes, and the odd fake gunfire and explosion interrupted our conversation. Unfazed, Brian regaled us with tales of his life in Australia, where he juggles research with running his very own winery -- hence his Twitter handle, @CosmicPinot. After saying farewell, I commented to the Time Lord as we walked back home how much I liked Brian: "You have really nice friends." (It's true; pretty much all of the Time Lord's pals are delightful, but then, I'm partial to physicists.) He agreed, and added, "And you know what else? He will absolutely win the Nobel Prize some day."

It's a fitting coda to what turns out to be just the beginning of an epic saga: the quest to unlock the mysteries of the cosmos. Because the most likely explanation we have (so far) for the observed acceleration of the universe's expansion is a mysterious thing called dark energy, which makes up a whopping 73% of all the "stuff" in the universe.

Einstein's Fudge Factor

Once upon a time, physicists believed the cosmos was static and unchanging, a celestial clockwork mechanism that would run forever. When Albert Einstein applied his newly completed theory of general relativity to the cosmos in 1917, his calculations indicated that the universe should be either expanding or contracting, not static. But all the observations up to then showed a static universe. So he figured his calculations were incorrect, and introduced a mathematical “fudge factor” into his equations, known as the cosmological constant, or lambda. It implied the existence of a repulsive force pervading space that counteracts the gravitational attraction holding the galaxies together. This balanced out the “push” and “pull” so that the universe would indeed be static.

Einstein should have trusted his instincts. Twelve years later, Edwin Hubble was studying distant galaxies, and noticed an intriguing effect in the light they emitted: it had a pronounced “Doppler shift” toward the red end of the electromagnetic spectrum. Basically, when a light source is moving towards an observer, the wavelength of its emitted light compresses and shifts to the blue end of the spectrum. When moving away from the observer, the wavelength stretches, and the light shifts to the red end of the spectrum. Hubble reasoned that this could only be happening if the light were traveling across space that is expanding.
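The Doppler-shift bookkeeping Hubble relied on can be sketched in a few lines. This is just an illustrative back-of-envelope calculation, not Hubble's actual analysis; the hydrogen-alpha line and the observed wavelength are my example numbers.

```python
# Redshift z compares observed and emitted wavelengths; positive z means
# the wavelength has stretched, i.e. the source is receding.
def redshift(observed_nm, emitted_nm):
    return (observed_nm - emitted_nm) / emitted_nm

# Hydrogen-alpha is emitted at 656.3 nm; suppose a galaxy's spectrum shows
# that line at 662.0 nm instead.
z = redshift(662.0, 656.3)

# For small z, the recession velocity is roughly z times the speed of light,
# so this hypothetical galaxy is receding at a few thousand km/s.
velocity_km_s = z * 299_792.458
```

Blueshift works the same way: an approaching source gives a negative z.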

The conclusion was inescapable. Einstein’s original equations had been correct: the cosmos really was expanding, and there was no need for a cosmological constant. That's why Einstein famously denounced lambda as his “greatest blunder.”

That discovery turned cosmology on its head. If the universe was expanding, scientists reasoned, eventually the attractive force of gravity would slow down the rate of expansion. They spent the next 70 years trying to measure that rate. If they knew how the rate of expansion was changing over time, they could deduce the shape of the universe. And its shape was believed to determine its fate. Matter curves space and time around it and gives rise to what we recognize as gravity. The more matter there is, the stronger the pull of gravity, and the more space will curve – making it more likely that the current expansion would halt and the universe would collapse back in on itself in a “Big Crunch.” If there’s not enough matter, the pull of gravity would gradually weaken as galaxies and other celestial objects move farther apart, and the universe would expand forever with essentially no end.

A flat universe, with just the right balance of matter, would mean that the expansion would slow down indefinitely without ever recollapsing. A flat universe was the favored option; scientists just needed to measure the deceleration precisely to confirm the prediction.

Faster, Faster....

Once again, Einstein was a bit too hasty in dismissing his work. In 1998, two separate teams of physicists measured the change in the universe’s expansion rate, using distant supernovae as mileposts: one led by Saul Perlmutter, the other by Brian Schmidt. The Time Lord shared an office with Schmidt back in the early 1990s. As he tells it in his book, From Eternity to Here:

I was the idealistic theorist and he was the no-nonsense observer. In those days, when the technology of large-scale surveys in astronomy was just in its infancy, it was a commonplace belief that measuring the cosmological parameters was a fool's errand, doomed to be plagued by enormous uncertainties that would prevent us from determining the size and shape of the universe with anything like the precision we desired. Brian and I made a bet concerning whether we would be able to accurately measure the total matter density of the universe within 20 years. I said we would; Brian was sure we wouldn't. We were poor graduate students at the time, but purchased a small bottle of vintage port, to be secreted away for two decades before we knew who had won. Happily for both of us, we learned the answer long before then. I won the bet, due in large part to the efforts of Brian himself. We split the bottle of port on the roof of Harvard's Quincy House in 2005.

Why supernovae? They're the best "standard candles" we've got. Because they are among the brightest objects in the universe, these exploding stars can help astronomers determine distances in space. By matching up those distances with how much the light from a supernova has shifted, the two teams could calculate how the expansion rate has changed over time. Light that began its journey across space from a source 10 billion years ago would have a red shift markedly more pronounced than the light that was emitted from a source just 1 billion years ago.
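The "standard candle" trick can be made concrete with the distance modulus. This is a simplified sketch, not the teams' actual pipeline, and the peak absolute magnitude of -19.3 for a Type Ia supernova is an assumed round figure.

```python
import math

# A standard candle of known absolute magnitude M appears fainter (larger
# apparent magnitude m) the farther away it is, via the distance modulus:
#     m - M = 5 * log10(d / 10 parsecs)
def apparent_magnitude(absolute_mag, distance_parsecs):
    return absolute_mag + 5 * math.log10(distance_parsecs / 10)

# Type Ia supernovae all peak near absolute magnitude -19.3 (assumed here).
# At a billion parsecs, one would appear around apparent magnitude +20.7 --
# so measuring m tells you d, which you then compare against the redshift.
m = apparent_magnitude(-19.3, 1e9)
```

The 1998 surprise was that high-redshift supernovae came out *fainter* (larger m, hence farther away) than a decelerating universe predicts.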

When Hubble made his 1929 measurements, the farthest red-shifted galaxies were roughly 6 million light years away. If expansion was now slowing, supernovae in those distant galaxies should appear brighter and closer than their red shifts would suggest. Instead, just the opposite was true. At high red shifts, the most distant supernovae are dimmer than they would be if the universe were slowing down. The only plausible explanation for this is that instead of gradually slowing down, the expansion of the universe is speeding up.

It was bizarre and completely unexpected. Since 1998, cosmologists have been grappling with a whole new set of questions implied by that momentous discovery, the foremost of which is the makeup of the mysterious dark energy that appears to be winning the cosmic tug-of-war. And once again, the discovery turned cosmology on its head.

Now the story goes something like this: very early in the universe’s existence, dark matter dominated. Everything was closer together, so its density was higher than that of the dark energy, and its gravitational pull was stronger. This led to the clumping that formed early galaxies. But as the universe continued to expand, the dark matter density, and hence the gravitational pull, decreased until it was less than that of the dark energy. So instead of the expected slow-down in the expansion rate, the now-dominant dark energy began pushing the universe apart at ever-faster rates.

Where does this dark energy come from? That's the big question. But it’s a testament to Einstein’s genius that even his blunders prove to be significant. Remember his “fudge factor”? Lambda implied the existence of a repulsive form of gravity, and the simplest example of that is the vacuum energy. Quantum physics holds that even the emptiest vacuum is teeming with energy in the form of “virtual” particles that wink in and out of existence, flying apart and coming together in an intricate quantum dance. This roiling sea of virtual particles could give rise to dark energy, giving the universe a little extra push so that it can continue accelerating.

The problem is that the numbers don’t add up. The quantum vacuum contains too much energy: roughly 10^120 times too much. So the universe should be accelerating much faster than it is. An alternative theory proposes that the universe may be filled with an even more exotic, fluctuating form of dark energy dubbed “quintessence.” Yet all the observations to date indicate that the dark energy is constant, not fluctuating. So scientists must consider even more possibilities. The dark energy could be the result of the influence of unseen extra dimensions predicted by string theory. Alternatively, the dark energy could be due to neutrinos – the lightest particles of matter – interacting with hypothetical particles called “accelerons.” Some scientists have theorized that dark matter and dark energy emanate from the same source – they just don’t know what that source might be. Yet it’s just as likely that there is no connection, and the two are very different things. Or perhaps there is no such thing as dark energy, and we need to revise Einstein's general theory of relativity, and/or devise a theory of quantum gravity.

Scientists love to explore the unknown, so these are exciting times for cosmologists. Congratulations to Schmidt, Riess, and Perlmutter for a well-deserved honor -- and here's to the future Nobel-worthy discoveries yet to be made!


One day in 1969, the Congressional Joint Committee on Atomic Energy convened in Washington, DC, to hear testimony from a number of scientists concerning a proposed multimillion dollar particle accelerator to be built in Batavia, Illinois. Physics had enjoyed strong government support for two decades in the wake of the Manhattan Project, which helped bring an end to World War II. But many in Congress simply couldn't see the point of spending all that money on a big machine that didn't seem to benefit US national interests in quite the same way.

During the testimony of physicist Robert Rathbun Wilson -- a veteran of the Manhattan Project -- then-senator John Pastore bluntly asked, "Is there anything connected with the hopes of this accelerator that in any way involves the security of the country?"

Wilson, to his credit, answered just as bluntly: "No sir, I don't believe so."

"Nothing at all?" Pastore asked.

"Nothing at all."

Pastore pressed further: "It has no value in that respect?"

And then Wilson knocked it out of the park. "It has only to do with the respect with which we regard one another, the dignity of man, our love of culture. It has to do with: Are we good painters, good sculptors, great poets? I mean all the things we really venerate in our country and are patriotic about. It has nothing to do directly with defending our country except to make it worth defending."

Needless to say, the proposed accelerator got its funding, and Fermi National Accelerator Laboratory was born. Wilson took the lead on the design and construction of the facility, and proved more than up to the task: Fermilab, as it is known today, was completed on time, and under budget. And its scientists went on to make some of the most fundamental discoveries in particle physics, garnering quite a few Nobel Prizes along the way.

I've been thinking about Wilson's zinger of a response to Pastore a lot lately, as economic woes and corresponding budget cuts threaten some pretty major scientific projects. (*cough* James Webb Space Telescope *cough*) We seem to have lost our sense that science, just for the sake of science, adds something unique and valuable to society, beyond the technological advances that it enables. The emphasis these days is always on, "Well, what is it good for?"

It's a fair question, and I'm all for being practical. Those technological advances have been truly extraordinary and have revolutionized every aspect of our lives. But let's not, in the process, devalue the curiosity-driven pursuit of knowledge for its own sake. Science, Wilson realized, is part of what makes a country worth defending, and his life's work reflected that.

Pistol-Packin' Physicist

Wilson was born in Frontier, Wyoming, in 1914, and growing up in the wild west no doubt gave him his lifelong love of the great outdoors, not to mention that hint of a swagger that was among his many trademarks. "He always had big, wild tales about being a cowboy in Wyoming," Dale Corson, a physicist and longtime friend of Wilson, told the New York Times for Wilson's obituary in 2000. "Most of them turned out to be true." But he also loved to tinker with pumps and vacuum tubes, as a boy, and soon found himself fascinated by the fundamental building blocks of nature -- at least, those that were known at the time. "We only had electrons and protons, and you could put those together into atoms in various ways and make the whole universe," he later recalled. "It was a very simple theory that even a dope could understand. I decided then that I wanted to go into physics."

By 1932, he'd found a place in Ernest O. Lawrence's flagship cyclotron laboratory (the "Rad Lab") at the University of California, Berkeley, although he was infamously fired twice: once for losing a rubber seal right before a presentation to a potential donor, and once for accidentally melting a pair of pliers while welding. He was offered his job back both times, but the second time, he opted to leave the Rad Lab and go to Princeton instead.

That's where he was when Oppenheimer chose him to be part of the elite corps of scientists on the Manhattan Project at Los Alamos National Laboratory, which opened in 1943 under the greatest secrecy. Wilson found himself heading the Cyclotron Group -- the youngest group leader in the experimental division. He was reluctant to take the job at first; he wanted to do science, not get bogged down in administration. Oppenheimer asked Enrico Fermi to intervene and persuade Wilson to head the new division.

When Wilson pointed out that Fermi himself would never accept such a position, and he was merely following his mentor's example, Fermi shot back, "It's something you have to earn, and you're not Fermi yet." In the end, Fermi convinced him to take the job by promising to meet with Wilson every Friday to talk about the physics being done. In his own account, Wilson admitted, "Sure I sold out -- but then everyone has his price, and mine was a few moments each week with Fermi."

Wilson sometimes chafed at the tight security around Los Alamos, occasionally teasing the security guards charged with protecting the spheres of uranium-235. The scientists were conducting delicate experiments to measure the rate at which neutrons multiplied in those spheres, along with a control sphere of regular uranium. Wilson proposed that he be issued a pistol so that he could guard the spheres himself. "After all, I came from Wyoming, where every red-blooded boy learned to shoot before he could walk."

Oppenheimer agreed, but Wilson had to first be certified to ensure he really could handle a pistol. So he was taken to a firing range, given a Colt .38, and subjected to a detailed lecture on how to properly handle and fire the weapon. His instructor then fired six shots at a target before handing the gun to Wilson to try. As Wilson recalls, "I had learned in Wyoming to 'roll' a pistol in order to get a lot of shots off accurately and rapidly. That's just what I did. Most of my shots were closer to the bull's eye than were his."

For all the hijinks, nobody forgot that the work they were doing at Los Alamos was both vital to national defense, and highly dangerous due to the radioactive substances involved. Wilson recalled his own brush with death while assisting a physicist in the Critical Assemblies Group with another experiment to determine when criticality was reached as one stacked a series of enriched uranium hydride cubes. He was surprised, and a bit dismayed, to find that the group didn't rely on the usual elaborate safety devices commonly used at cyclotron facilities at the time. Instead, the physicist arrived with a simple set-up involving a wooden table, a single neutron counter to monitor criticality, and a whole bunch of cubes of enriched uranium hydride.

Wilson watched, rapt, as the physicist started stacking uranium cubes, and then noticed with alarm that the neutron counter wasn't, well, counting. Upon inspection, he discovered that the voltage supply was burnt out. When the counter was turned back on, it lit up immediately, to Wilson's horror. "A few more cubes and the stack would have exceeded criticality and could well have become lethal," he recalled. Furious, Wilson chewed out the physicist, his division leader, and even raged about it to Oppenheimer himself, but he had to leave for Trinity the very next day, so he let the incident drop. Had he stayed and pursued the matter, Wilson believed, "I might have saved the lives of two people. To this day, the incident is on my conscience."

Those two people were Harry K. Daghlian, Jr. and Louis Slotin, both of whom died of radiation sickness after accidents that occurred while conducting critical experiments with a plutonium core -- known as "tickling the dragon." Daghlian's death was dramatized to great effect in the 1989 film Fat Man and Little Boy using a fictional character based on him named Michael Merriman (played by John Cusack). In Daghlian's case, one of the tungsten carbide bricks being stacked around the plutonium sphere -- bricks that reflected neutrons back into the core -- was accidentally dropped onto it. The dose of radiation he received as a result was so high, he died within a mere 25 days of the accident. It's one of the dramatic high points of the film, along with the scene depicting the historic Trinity test itself. Wilson recalled the real thing vividly:

I was in a bunker 10,000 yards north of Ground Zero, and the wind was blowing in our direction. Minutes after the bomb went off, I began to get apprehensive because a section had peeled off from the mushroom cloud and was coming straight at us. Meanwhile, the doctor was reading that the radiation was much higher than he expected. We had about 10 trucks, so I ordered people to get in them and leave immediately. There were some soldiers stationed outside who told me they had to stay until they were relieved by a military officer, but using a vocabulary everyone could understand, I convinced them to get into a truck. As we left, that cloud of radioactive debris was right on top of us, and it was spooky. We were lucky though. About 25 miles later it came down on a bunch of cattle and turned their hair white.

Architect of Accelerators

After World War II ended, Wilson left Los Alamos to design accelerators at Cornell's Laboratory of Nuclear Studies, culminating in the university's flagship Cornell Electron Storage Ring (CESR). Based on his stellar reputation working with accelerators, in 1967, he took a leave of absence to become director of the National Accelerator Laboratory (renamed Fermilab in 1974), to oversee the construction of what would be the most powerful accelerator then in existence.

"Bob built accelerators because they were the best instruments for doing the physics he wanted to do," Wilson's Cornell colleague, Boyce McDaniel, recalled in 2000. "No one was more aware of the technical subtlety of accelerators, no one more ingenious in practical design, no one paid more attention to their aesthetic qualities. He thought of accelerator builders as the contemporary equivalent of the builders of the great cathedrals in France and Italy. But it was the physics potential that came first."

That aesthetic appreciation carried over into his design for Fermilab's main accelerator ring, although once again, its physics potential came first, with the most cutting-edge, forward-looking technology available. Yet it was intended from the start to be visible from the air, thanks to the construction of a 20-foot-high berm above the entire four-mile-long ring. There was no technical reason for that decision; Wilson just thought it would look nicer.

He also wanted Fermilab to be an aesthetically pleasing work environment; he didn't want it to look like a typically sterile government lab. To that end, he restored part of the surrounding prairie, complete with ponds and a herd of bison for good measure. Wilson designed the main building, now known as Wilson Hall in his honor, after being inspired by the medieval cathedral at Beauvais, France -- a kind of "cathedral of science," if you will. "When he created Fermilab, it certainly had style," Leon Lederman, who succeeded Wilson as director, recalled. "He was a showman in that sense. He took chances."

Wilson's style and personal creativity extended to abstract sculpture; his work is dotted all over the grounds of Fermilab, such as "Broken Symmetry," an orange-and-black three-span arch over one of the Pine Street entrances. It appears asymmetrical from any angle, except when viewed directly from below. He also sculpted "Mobius Strip" (self explanatory), "Tractricious" (made from scrap cryostat tubes from Tevatron magnets), and his most famous, the 32-foot-high "Hyperbolic Obelisk" at the foot of the reflecting pond in front of Wilson Hall. "If I wasn't being creative, I thought I was just wasting my time," Wilson once confessed.

Not that he wasn't first and foremost a pragmatist, mind you, despite his aesthetic sense. Wilson is also known as the "father of proton therapy," thanks to a 1946 paper he published, "Radiological Use of Fast Protons." He'd become interested in researching the effects of radiation damage on the human body as a result of his Los Alamos experiences -- especially the deaths of Daghlian and Slotin, which had affected him greatly. Most proton therapy facilities follow the tenets and techniques Wilson established in that groundbreaking paper in their treatment of cancer patients -- a peaceful use of a wartime technology, saving lives instead of taking them.

Wilson's name is not as well-known to the general public as that of Albert Einstein, Richard Feynman, Enrico Fermi, or J. Robert Oppenheimer, but he was very much a "physicist's physicist." Bring up his name in a gathering of physicists, and you'll be regaled with everyone's favorite Wilson anecdote -- that's how much love and respect he inspired in those who worked with him. And deservedly so: he embodied the perfect balance between aesthetics, curiosity, and pragmatism.

Science isn't just about winning wars, treating cancer, or devising revolutionary new technologies to boost economic markets -- although it can and does accomplish all of those things. It's also about the sheer joy of discovery, of pushing the boundaries of human knowledge, as essential a component of the human spirit as the greatest works of art, of music, of literature. And as such, it is worth defending.

Imagine, if you will, a secret community dwelling beneath the streets of New York City, its inhabitants never allowed to travel to the surface or to interact in any way with the dreaded "Topsiders." That's the premise of an award-winning 1999 YA novel by Neal Shusterman called Downsiders, exploring what happens when a 14-year-old Downsider named Talon defies the prohibition and ends up falling in love with a Topsider named Lindsay. Together, they uncover the mysterious origins of the Downsiders: a forgotten inventor named Alfred Ely Beach who created the array of tunnels over a century ago.

This is an instance where science fiction bumps up briefly against science fact, because Shusterman's inspiration for his subterranean world is based on an actual person. Alfred Ely Beach is best known for his invention of New York City's first concept for a subway: the Beach Pneumatic Transit, which would move people rapidly from one place to another in "cars" propelled along long tubes by compressed air. Beach was also the publisher of Scientific American starting in 1846, when he purchased it (at the ripe old age of 20) with a fellow investor, so it seems a fitting topic for my inaugural post on that magazine's fledgling blog network. (According to Wikipedia, inventor Rufus Porter actually founded the magazine in 1845, but sold it to Beach after a mere 10 months.)

Tunnels and pneumatic transportation systems are a staple of classic science fiction, starting with Jules Verne's Paris in the 20th Century, written in 1863 (though not published until 1994), in which the author envisions tube trains stretching across the ocean. In 1882, Albert Robida described not only tube trains, but pneumatic postal delivery systems in his novel, The Twentieth Century. Those authors were quite prescient: versions of such systems were actually built, and some still exist today.

When I was a kid, I remember my mom using the banking drive-through teller to deposit checks, withdraw cash, etc., through a pneumatic system employing metal canisters. Some of those systems still exist, despite the proliferation of ATMs. Hospitals, factories, and large stores use internal pneumatic transport systems to rapidly move physical objects (drugs, documents, cash, even spare parts) from one location to another. And it all emerged from a vacuum -- specifically, vacuum physics.

Nature Abhors a Vacuum

(Note: This section adapted from a 2007 post.) The first recorded experiments on the existence of vacuum were apparently conducted by an Arab philosopher named Al-Farabi in the 9th century AD, using handheld plungers in water; that's when he realized that a volume of air will expand to fill any available space. Later scientists figured out how to create better and better artificial vacuums, thanks to the principle he delineated. It's pretty simple: by expanding the volume of a given container, pressure is reduced and a partial vacuum is created. It's temporary -- atmospheric pressure soon pushes air back in -- but if you seal the container, pump some air out, expand the volume again, and repeat the cycle, it's possible to create a sealed vacuum chamber.
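That pump-out cycle has a neat mathematical consequence worth sketching. In this toy model (my round numbers, not any historical apparatus), each stroke dilutes the remaining air by a fixed ratio, so the pressure falls geometrically: a perfect vacuum is approached but never quite reached.

```python
# Each pump stroke lets the chamber's air expand into the piston volume,
# which is then expelled. By Boyle's law the pressure that remains is the
# old pressure times V_chamber / (V_chamber + V_piston).
def pressure_after_strokes(p0_torr, v_chamber, v_piston, strokes):
    ratio = v_chamber / (v_chamber + v_piston)
    return p0_torr * ratio ** strokes

# Assumed setup: a 10-liter chamber, a 2-liter piston, starting at one
# atmosphere (760 Torr). Twenty-five strokes gets us down to roughly 1%
# of atmospheric pressure -- a decent partial vacuum for the 17th century.
p = pressure_after_strokes(760, 10.0, 2.0, 25)
```

Real pumps eventually hit a floor set by leaks and valve dead volume, which is why better seals (and Hooke's instrumentation skills, below) mattered so much.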

Vacuum is measured in units of pressure. Technically, the standard unit of pressure is the Pascal, but scientists can't possibly let things be so simple, so they came up with a new unit for vacuum pressure, the Torr, named after 17th century Italian physicist Evangelista Torricelli, best known for inventing the barometer. He was trying to figure out how to raise water levels in a suction pump to more than 32 feet in height -- the limit pumpmakers had been able to reach using simple suction pumping.

It seemed that perhaps Nature truly did abhor a vacuum, but Galileo Galilei cheekily suggested that perhaps the abhorrence only extended to 32 feet. Galileo knew a little something about the weight of air versus other substances, and thought it might be possible to overcome the obstacle using something heavier than water.

Inspired by Galileo's insight, in 1643, Torricelli hit on the notion of using mercury, which is 14 times heavier than water, in a simple experiment: he filled a three-foot-long tube with mercury and sealed it on one end, then set it vertically into a basin of mercury with the open end submerged. The column of mercury fell about 28 inches, leaving an empty space above its level -- an early version of a sustained manmade vacuum. Torricelli further realized that (a) the mercury would rise to the same level regardless of how tilted the tube became, because the pressure of the mercury would balance the weight of the air, and (b) the height of the column of mercury rose and fell according to changing atmospheric pressure. Voila! The first barometer.
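A quick back-of-envelope check shows why mercury worked where water hit its limit. The column height is just the atmospheric pressure divided by (fluid density times gravity); the constants below are standard textbook values, not Torricelli's own measurements.

```python
# Height of a fluid column that balances atmospheric pressure:
#     h = P / (rho * g)
def column_height_m(pressure_pa, density_kg_m3, g=9.81):
    return pressure_pa / (density_kg_m3 * g)

P_ATM = 101_325  # one standard atmosphere, in pascals

mercury = column_height_m(P_ATM, 13_595)  # ~0.76 m, i.e. about 30 inches
water = column_height_m(P_ATM, 1_000)     # ~10.3 m, i.e. about 34 feet

# Mercury's density advantage (~13.6x) is exactly why a three-foot tube
# suffices, while a water barometer would have to be taller than a house.
ratio = water / mercury
```

The ~34-foot figure for water is also the theoretical ceiling on the suction pumps that frustrated the pumpmakers.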

Seven years later, a German scientist named Otto von Guericke built a contraption known as the Magdeburg hemispheres -- the world's first artificial vacuum. He took two large copper hemispheres with rims that fit tightly together, sealed the rims with grease, and pumped out all the air. To do so, he had to invent a vacuum pump; his version used a piston and cylinder with flap valves, powered by people turning a crank arm that was connected to the pump. Once all the air was removed from within the hemispheres, they were still held together by the air pressure of the surrounding atmosphere because the artificial vacuum inside provided no opposing pressure to balance things out.

It was a pretty powerful hold, too: von Guericke harnessed a team of eight horses to one hemisphere of the big coppery globe, and another eight horses to the other hemisphere, and then set the horses to pulling the two hemispheres apart by moving in opposite directions -- to no avail.
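Why sixteen horses lost to thin air is easy to estimate: with a vacuum inside, the atmosphere presses on each hemisphere with a force of pressure times cross-sectional area. The half-meter diameter below is my round figure for von Guericke's spheres, not a documented dimension.

```python
import math

# Net force holding evacuated hemispheres together: the full pressure
# difference acts over the circular cross-section of the sphere.
def holding_force_newtons(pressure_pa, diameter_m):
    radius = diameter_m / 2
    return pressure_pa * math.pi * radius ** 2

# At one atmosphere (101,325 Pa) on an assumed 0.5 m sphere, that comes to
# roughly 20,000 newtons -- on the order of two tonnes-force.
force = holding_force_newtons(101_325, 0.5)
```

Note that doubling the diameter quadruples the force, which is why the demonstration scales so dramatically with bigger spheres.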

News of the experiment quickly spread throughout Europe, eventually reaching the ears of Robert Boyle, founder of modern chemistry, in England. Few scientists were able to replicate von Guericke's feat because it was an expensive apparatus. But Boyle had the 17th century equivalent of a trust fund, being the son of the Earl of Cork, so he cheerfully set about building his own "pneumatic engine," cost be damned. To do so, he enlisted the aid of Robert Hooke of Micrographia fame, then Boyle's humble assistant. Hooke had a gift for instrumentation, which is a good thing, because Boyle's design was a clunky, difficult to operate device, and sometimes Hooke was the only one who could get the thing to work properly.

Boyle conducted many different experiments to determine the properties of air, specifically how "rarefied air" affected things like combustion, magnetism, sound, barometers, and various substances. He carefully detailed his observations for posterity in a very thick book ponderously titled, New Experiments Physico-Mechanicall, Touching the Spring of the Air, and its Effects (Made, for the Most Part, in a New Pneumatical Engine).

He clearly lacked the gift of catchy titles. Jen-Luc Piquant would have called it something more dramatic, like Asphyxiated! Staring Into the Void of the New Pneumatical Engine.

Suck and Blow

It was only a matter of time before scientists and engineers figured out how to exploit vacuum technology in their inventions, most notably pneumatic tube transport systems to deliver messages or small parcels to various linked hubs. A Scottish engineer named William Murdoch first conceived of the notion in the early 19th century.

As the 19th century drew to a close, most major cities used some kind of pneumatic tube transport system. One of the earliest, built in 1853, linked the London Stock Exchange to the city's main telegraph station, followed by the London Pneumatic Despatch Company linking the Euston railway station to the city's main post office. Berlin, Paris, Vienna, Prague, Chicago, and New York City all built similar networks, many of which remained in operation until the 1950s. The one in Paris was operational until 1984, and apparently the UK House of Commons still has a pneumatic tube system in place for its telephone and computer exchange. And you can find older office buildings in New York with the remains of internal pneumatic mail systems still in place.

Prague's pneumatic post is probably the last surviving such system in the world, housed in an annex to the city's Central Post Office. Completed in 1899, it's a complicated network of pneumatic pipes snaking out through the city's underground for roughly 34 miles. Initially it was used to forward telegrams from telegraph offices to postal offices, but the network was later extended to include government and other office buildings. This came in handy during the notorious Prague Uprising, when the city's pneumatic postal system helped bring supplies to a besieged Czech radio headquarters.

At its peak in the 1970s, the system made over one million deliveries a year, although that number eventually fell to a dismal 6000 or so -- hardly a profitable venture, but it endured as an unusual piece of Czech history. Alas, massive flooding in Europe in August 2002 damaged the system and shut it down; it has yet to come back online. Part of the problem is that because the mechanical system was never modernized, it's tough to find the component parts needed to repair it. (The Berlin factory that used to supply those parts closed down a good 60 years ago.)

Modern pneumatic transport systems can vary in their complexity, but fundamentally, the concept is quite simple. You have a "sending station" -- say, a cashier's checkout post -- linked to a receiving station -- perhaps a locked box in the store manager's office -- via a tube. There is an air compressor pump attached to the tube on the receiving end which has two basic modes of operation: "suck" and "blow."

If you want to send cash from the register to the receiving station, you'd simply load it into the metal canister, place the canister in the tube, and close the door, effectively sealing off the tube. The air compressor would be set to "suck" mode, acting just like your average vacuum cleaner, sucking the air along the tube to create a partial vacuum in front of the canister. The canister can then be emptied and returned to the sending station via the "blow" mode -- the air compressor literally pushes the canister through the tube by blowing air behind it.
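The physics here boils down to a pressure difference acting on the canister's cross-section. Here's a back-of-envelope sketch of the driving force -- the tube diameter and partial vacuum are purely illustrative assumptions, not the specs of any real system:

```python
# Back-of-envelope estimate of the force driving a pneumatic-tube canister.
# All numbers here are illustrative assumptions, not specs of any real system.
import math

tube_diameter_m = 0.075      # ~3-inch tube (assumed)
pressure_diff_pa = 10_000    # ~0.1 atm partial vacuum ahead of the canister (assumed)

area_m2 = math.pi * (tube_diameter_m / 2) ** 2
force_n = pressure_diff_pa * area_m2   # F = delta-P * A

print(f"cross-section: {area_m2:.4f} m^2, driving force: {force_n:.1f} N")
```

Even a modest partial vacuum yields tens of newtons of force on a small canister, which is why these systems could fling parcels along at a good clip.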

We might have more efficient means nowadays of delivering messages (email, twitter, text messaging, etc.) but some folks still think pneumatic tube systems could be useful for, say, delivering food via pipeline. That's the concept behind Foodtubes, a UK-based project that proposes the creation of high-speed pneumatic pipelines connecting every major city in the UK. Food items would be placed in canisters and sent zipping along the nearly 2000 miles of pneumatic tubes. It would be a major capital investment, to be sure, but would definitely cut down on the number of delivery trucks currently clogging up London's roadways. I'd bet the consortium members are fans of Edward Bellamy's 1888 novel Looking Backward, which predicted a vast interlinked system of delivering goods via pneumatic tubes by the year 2000.

This sort of thing is not unprecedented: until this year, there was a McDonald's in Edina, Minnesota, that prided itself on being "The World's Only Pneumatic Air Drive-Thru." Customers would place their orders in the drive-thru -- located in the middle of a parking lot -- and their Big Macs, fries, and Chicken McNuggets would be delivered via pneumatic tubes. (One assumes sodas and shakes were delivered this way, too, but the risk of spillage seems rather high.)

Sub-Rosa Subway

In 1812, a man named George Medhurst speculated that perhaps it might be possible to blow carriages laden with passengers through a tunnel, but he never got around to building anything. He lacked a pump with enough power to generate the requisite air pressure. In the mid-1850s, there were several rudimentary "atmospheric railways" -- in Ireland, London, and Paris -- and while the London Pneumatic Despatch system was intended to transport parcels, it was large enough to handle people. In fact, the Duke of Buckingham and several members of the company's board of directors were transported through the pneumatic system on October 10, 1865, to mark the opening of a new station. And a prototype pneumatic railway was exhibited at the Crystal Palace in 1864, with plans to build a version connecting Waterloo and Charing Cross by running under the Thames.

Those early efforts in London inspired Beach back in the US. He had first published an article in Scientific American in 1849 suggesting building an underground subway along Broadway in Manhattan, employing horse-drawn cars to carry passengers. Then he discovered pneumatics: "A tube, a car, a revolving fan! Little more is required!" he exclaimed in 1870. The idea was to put people in carriages underground and propel them through the tubes using air pressure generated by gigantic fans.

He first built a prototype above-ground model, which debuted at the 1867 American Institute Fair. It was little more than a large wooden tube (roughly six feet in diameter and 100 feet long) capable of holding a small vehicle with a ten-person capacity. That car was then pushed through the tube by air pressure created by a giant fan. But he couldn't get permission from the city to construct an underground system. (Accounts differ as to whether "Boss" Tweed or wealthy inhabitants of the neighborhood blocked his efforts.)

Was Beach at all daunted? He was not. He sneakily built the underground pneumatic subway anyway, pretending he was really building a pneumatic mail delivery system, and he did it right under the noses of City Hall: beneath a rented storefront across the street.

In February 1870, Beach unveiled his masterpiece, and it was an immediate novelty attraction for the public, especially given the luxury of the station: it boasted a grand piano, chandeliers, and a fully operational fountain stocked with goldfish. He charged 25 cents for a block-long ride, and fought for the next three years to get a construction permit to extend the line uptown all the way to Central Park. Alas, while he ultimately succeeded on that score, a stock market crash (the "Panic of 1873") crushed his dream for good.

Beach's failure didn't keep others from speculating on the potential value of so-called "vactrains" (vacuum tube trains). The US government considered the possibility in the 1960s of running a vactrain (combining pneumatic tubes with maglev technology) between Philadelphia and New York City, but the project was deemed prohibitively expensive, and was scrapped.

An engineer with Lockheed named L.K. Edwards proposed a Bay Area Gravity-Vacuum Transit system for California in 1967, designed to run in tandem with San Francisco's BART system, then under construction. It, too, was never built. Nor was the system of underground Very High Speed Transportation conceived by Robert M. Salter of RAND in the 1970s to run along what we now call the Northeast Corridor.

Beach might not have lived to see his pneumatic subway system built -- he caught pneumonia and died on January 1, 1896 -- but his vision is still influencing engineers looking for transportation solutions in the 21st century, most notably researchers at the Chinese Academy of Sciences and Chinese Academy of Engineering. The claim is that traveling through networks of these vacuum tubes would enable supersonic speeds without the sonic booms that plague supersonic jets, potentially making the trip from London to New York in less than an hour. (Those of us who are increasingly disgruntled with the airline industry, and long for teleportation, might welcome such an alternative.)

And Beach's dream has been immortalized in a song by a Canadian progressive rock band called Klaatu: "Sub-Rosa Subway" (lyrics are here). Nearly three minutes into the tune, you can hear a bit of Morse Code in the background, which one band member has since helpfully translated for their fans: "From Alfred, heed thy sharpened ear -- A message we do bring -- Starship appears upon our sphere -- Through London's sky comes spring."

We are madly traveling about the world at the moment, en route to Doha, Qatar, for the World Conference of Science Journalists. So there is very little time to blog, and rather than make myself nuts, I'm likely to just take a hiatus for the next week or so. But in the meantime, in honor of the recent observation of neutrino oscillations at Japan's T2K experiment, here's a bit of particle physics history for your reading pleasure: the discovery of the tau neutrino!

“Neutrinos, they are very small/ They have no charge and have no mass/ And do not interact at all,” John Updike wrote in his 1960 poem, “Cosmic Gall.” Neutrinos were a fairly recent discovery then, and within two years physicists would discover that they were only just beginning to understand this mysterious “ghost particle.” For instance, there was more than one kind of neutrino, and it would take physicists another 40 years to find them all.

Wolfgang Pauli first proposed the existence of neutrinos in 1930 while investigating the conundrum of radioactive beta decay, in which some of the original energy appeared to be missing after an electron was emitted from an atomic nucleus. He hypothesized that in order to abide by the laws of energy conservation, another, as-yet-undetected neutral particle might also be emitted, accounting for the missing energy.

Pauli was reluctant to publish a paper on this unusual hypothesis, but he penned a letter to a group of prominent nuclear physicists gathering for a conference in Tübingen, Germany, that December, asking for input regarding means of detecting such a particle experimentally. “I have done something very bad today by proposing a particle that cannot be detected; it is something no theorist should ever do,” he wrote, describing his idea as “a desperate remedy.”

Among the physicists who took Pauli’s idea seriously was Enrico Fermi, who developed the theory of beta decay further in 1934, coining the name “neutrino” (“little neutral one”) in the process. It became clear that if such a particle existed, it must be both very light -- less than 1% the mass of a proton -- and interact very weakly with matter, making it very difficult to detect. But in 1956, Frederick Reines and Clyde Cowan succeeded in doing just that, sending a telegram to Pauli informing him of their discovery. “Thanks for message,” Pauli telegrammed back. “Everything comes to him who knows how to wait.”

Pauli died two and a half years later, and thus missed the discovery in 1962 of a second type of neutrino, dubbed the muon neutrino, corresponding to the surprising discovery of the charged muon lepton. (The latter caused I.I. Rabi to famously exclaim, “Who ordered that?”) In 1975, a third charged lepton, tau, was discovered, and subsequent experiments hinted strongly that there should also be a third kind of neutrino. While scientists at CERN uncovered further proof in 1989 of the tau neutrino’s existence, it would take another 25 years before the technology was available to detect this elusive particle directly.

In the 1990s, Fermilab designed the DONUT (Direct Observation of the NU Tau) experiment to search specifically for tau neutrino interactions. The scientists used the Tevatron to produce an intense neutrino beam, predicting it would contain at least some tau neutrinos. After deploying an elaborate system of magnets and iron and concrete to eliminate as many background particles as possible, the beam was fired at a three-foot-long fixed target made of iron plates alternating with layers of a special emulsion.

Those emulsions captured the tracks of any electrically charged particles produced by the extremely rare (about one in one million million) tau neutrino interactions, which were then electronically recorded by scintillators. The emulsions were then photographically developed so that scientists could analyze the data, looking for the distinctive short track with a kink that indicates a tau lepton, the result of a tau neutrino interacting with an atomic nucleus. They were literally connecting the dots: small black dots left by particles passing through, which could then be connected to retrace the particles’ paths.

After the experimental run in 1997, it took three years of painstaking analysis to sift through all the data, winnowing some six million signatures down to 1000 candidate events. On July 21, 2000, scientists from the DONUT collaboration announced they had identified four tau neutrino signatures demonstrating an interaction with an atomic nucleus. The experiment also validated a number of new techniques for neutrino detection, most notably the emulsion cloud chamber, which significantly increased the number of neutrino interactions.

Leon Lederman, who shared the 1988 Nobel Prize in Physics with Jack Steinberger and Melvin Schwartz for the discovery of the muon neutrino, called the achievement “an important and long-awaited result. Important because there is a huge effort underway to study the connections among neutrinos, and long awaited because the tau lepton was discovered 25 years ago and it is high time the other shoe was dropped.”

Among the questions physicists were still pursuing was whether neutrinos might have a tiny bit of mass, which could dramatically alter scientists’ estimation of the overall mass of the universe, because they are so plentiful. This in turn has implications for estimating the rate of expansion of the universe. And if neutrinos do have mass, could they oscillate and change flavors over time as they traveled through space? For instance, would it be possible for a muon neutrino to change into a tau neutrino via oscillation?

That question was answered with a resounding yes in 2010. Scientists with the OPERA experiment at Gran Sasso National Laboratory reported that they had found the telltale signature of a tau neutrino among a stream of billions of muon neutrinos generated at nearby CERN -- the first direct observation of a neutrino transforming from one type into another. Experiments are ongoing to further explore this phenomenon and determine specific masses for neutrinos.
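For the curious, the probability of such a flavor change is captured by the standard two-flavor oscillation formula, which depends on the distance traveled and the neutrino's energy. Here's a quick sketch with illustrative parameters -- the CERN-to-Gran Sasso baseline really is about 730 km, but the mass splitting and beam energy below are round approximations, not OPERA's published values:

```python
# Two-flavor neutrino oscillation probability (standard textbook formula),
# evaluated with illustrative, approximate parameter values.
import math

def oscillation_probability(sin2_2theta, delta_m2_ev2, length_km, energy_gev):
    """P(nu_mu -> nu_tau) = sin^2(2*theta) * sin^2(1.267 * dm^2 * L / E)."""
    phase = 1.267 * delta_m2_ev2 * length_km / energy_gev
    return sin2_2theta * math.sin(phase) ** 2

# ~730 km CERN-to-Gran Sasso baseline; dm^2 ~ 2.5e-3 eV^2 (approximate)
p = oscillation_probability(sin2_2theta=1.0, delta_m2_ev2=2.5e-3,
                            length_km=730, energy_gev=17)
print(f"P(mu -> tau) is roughly {p:.4f}")
```

The tiny probability is one reason OPERA had to sift through billions of muon neutrinos to catch a single tau appearing.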

With the discovery of the tau neutrino, only one more particle remains to be found to complete the Standard Model of Particle Physics: the elusive Higgs boson. Fermilab’s soon-to-be-retired Tevatron is racing against the clock to make one more significant discovery before it exits the particle physics stage, competing with the Large Hadron Collider at CERN. It will herald the dawn of a new era in physics – and possibly yield a few more unexpected surprises.

There's a lot of celebration, a lot of sorrowful remembrance, a lot of analysis and political posturing, a lot of heated opinion, and far too much self-righteous judgment being tossed around today in light of last night's bombshell announcement. Emotions are running high all over, just like they did nearly ten years ago. I, too, have a lot of conflicting emotions. But mostly, I am overwhelmed with the surging onset of buried memories of one of the most truly horrifying days of my life -- a day spent weeping for hours on end as details slowly began to emerge. (I believe Method Actors refer to this phenomenon as "emotional memories." It's powerful stuff.)

I have never written about that day; it was just too painful. I write about it with great difficulty now, as snapshot memories keep replaying in my head despite my best efforts to block them. These are memories I share with countless others. There is the shock, horror, anxiety and dread over the safety of friends; the relief for those who survived -- and the grief for those who did not. There was the eerie quiet that descended on Washington DC as the entire city shut down -- a quiet broken only by the sound of military helicopters flying overhead. Then there are the memories of the aftermath: of exhausted, emotionally numbed friends pulling double (sometimes triple) shifts in the hospitals to sift through the gooey mess of mangled body parts in hopes of finding some clue to identification; of the thick cloud of smoke that hung over NYC, and the stench of decaying flesh that wafted from Manhattan deep into the outer boroughs for months after the tragedy. And of course, there was funeral after funeral after funeral. (One of my jujitsu instructors went to a funeral nearly every day for two solid months -- he had many close friends in the police and fire departments.)

So I am not feeling especially celebratory, or triumphant, nor do I feel "closure" -- although I totally understand why some people might justifiably have those feelings. Bin Laden was a symbol, the "face" of terrorism for many Americans, and fairly or not, whether we like it or not, in that respect, his death has a symbolic meaning. But as many others have said, it doesn't change the harsh reality of the last decade. It won't bring back those we lost, or wipe away the horror of that day, or undo the 10 years of war and accompanying limitations on civil rights that followed; we're still living in the same world as yesterday.

Sean very wisely writes about letting people have their moment, to react in the myriad ways they need to react, based on their own personal framework -- because naturally we can't help but view it through our own individual lens. And I agree. But I guess I'd rather put those ugly memories aside and celebrate human triumph, curiosity and exploration instead -- as much a part of our world today as terror -- because that's one of many reasons we persevere.

As it happens, today is also the birthday of Athanasius Kircher, a humble 17th century Jesuit scholar/priest who deserves to be rescued from relative obscurity. (I mentioned him in a prior post a few years ago, from which part of this has been adapted.) For a while he had his very own eponymous society, and in 2002, New York University sponsored an entire symposium in his honor. There's also a permanent exhibit on Kircher lurking somewhere in the archives of the Museum of Jurassic Technology here in Los Angeles.

Why do I love Kircher so much? His scientific reputation was a bit sketchy, in that he had a tendency to blend "traditional Biblical historicism and the emerging secular scientific theory of knowledge." While he published on magnetism, astronomy, optics, archaeology and linguistics (including Egyptian hieroglyphics), he also wrote treatises on the Tower of Babel and Noah's Ark. Sure, he conducted scientifically sound experiments, including one that essentially disproved Johannes Kepler's speculation that the sun was a giant magnet whose rotation around its axis caused the earth and planets to stay in their orbits. But he also took what he learned from that investigation and used it to invent a "magnetic oracle," a divination device that he dubbed "magnetic hydromancy."

Still, one certainly can't doubt the man's passion for scientific inquiry, nor his boundless curiosity about how the world works -- and that's where I find a kindred spirit. Even while tending to the sick when the bubonic plague hit Rome in 1656, he still took time to observe micro-organisms under a microscope in hopes of finding a cure. He didn't find one, but he did advance a germ theory of disease that was way ahead of the medical orthodoxy of his day.

Fans of Charles Babbage, take note: Kircher came up with his own machine for answering mathematical problems, although it was far from perfect, in that it required memorizing long poems in Latin in order to perform the most elementary functions, according to Michael John Gorman of Stanford University, one of the emerging scholars who are studying this fascinating personage. Fortunately there was also a cheat sheet for those with faulty memories: an 850-page instruction manual that makes the average Microsoft User's Manual seem like a model of concise clarity by comparison.

And what an adventurous life the man had -- a regular Indiana Jones of the 1600s. He skirted death on numerous occasions, beginning with a boyhood leg injury that turned gangrenous -- which, he claimed, was miraculously healed by the Virgin Mary when he visited one of her shrines while at seminary. (She also threw in healing of a herniated disc for no extra charge.) He was shipwrecked on an island while traveling to Austria, and was nearly hanged by overly-ardent Protestant cavalrymen on another of his many travels. Just before Gustavus Adolphus, the Protestant king of Sweden, invaded Franconia and Würzburg, Kircher fled his teaching post at a college in the latter town and eventually landed in Rome.

For his last great adventure, he traveled to southern Italy, Sicily and Malta, where he witnessed the eruptions of Etna and Stromboli, and even had himself lowered into the active crater at Vesuvius. Lowered... himself... into... an... active... volcano. The A-Man had some serious cojones. Naturally, the experience reminded him of eternal damnation: "The whole area was lit up by the fires, and the glowing sulphur and bitumen produced an intolerable vapor. It was just like hell, only lacking the demons to complete the picture."

Perhaps realizing he'd never top that experience, Kircher soon retired to a quieter life of scholarly contemplation, but he was just as prolific in his writing as in his wandering, producing 11 full-length books in 20 years. Oh yes, he also established his own museum of strange artifacts (including a stuffed aardvark and an automaton) in Rome, one of the earliest recorded cabinets of curiosities. As one historian describes him:

The objects Kircher made were another sign of his ever-active curiosity and imagination; he never tired of figuring out how things worked or of designing some practical application of what he learned. One of his designs was for a projector that used candlepower to cast images from glass plates onto a wall.

He was interested in sound and music. Statues in his museum seemed to talk as he devised horns and tubing to bring street noise through the walls and out of the statues' mouths. The porter who kept the front door to the Roman College was able to speak to Kircher through tubes to let him know when visitors were waiting to see his museum. He also devised instruments that used water or wind power to create music. In one fanciful design, a keyboard extended back to a series of boxes. The keys had pins at their tips, under which were tails of cats arranged according to the pitch of their meows. Hitting a key would produce harmonized howling. There is no evidence that Kircher ever actually made such an instrument.

He died in 1680. Athanasius Kircher -- a manly man of science, and just enough of a mystic to keep things interesting. He probably endured more hardship and suffering than most of us can even begin to imagine, but he let his science and his curiosity be the things that defined him.

Over at Wired, David Dobbs has a very nice post on famed pitcher Sandy Koufax -- excuse me, "the greatest pitcher ever" -- and the curve ball, a nasty (if you're the batter) trick whereby the baseball dives downward suddenly just before it reaches the plate, faking out the batter. It's fast, too, traveling at least 75 mph and spinning at around 1500 rpm. It only takes about 0.6 seconds to get from pitcher's hand to home plate -- not a lot of time to react. David's post focuses on the perceptual illusion created as the ball travels towards the batter, and it's that optical illusion that makes the curve ball so bloody hard to hit. Or, as the decidedly salty Mickey Mantle said after being struck out by one of Koufax's infamous curve balls in the 1963 World Series: "How the fuck is anybody supposed to hit that shit?" How indeed? I'll let David explain what's going on perceptually:

[T]he curveball kills you two ways: first, through actual movement; and second, through an extra perceived movement — illusory — that further complicates the task of getting the tiny strip of sweet spot on your bat onto the ball.

The extra perceived movement rises from a difference between the neural dynamics of central vision and those of peripheral vision. This effect of this difference is that a baseball that is rotating horizontally but falling straight down as it comes toward you will appear to fall vertically if you’re looking straight at it — but appear to move sideways if it’s in your peripheral vision. ... This in turn happens because your eyes simply can’t keep up with a pitch as it approaches you and effectively accelerates its path across your field of vision. The ball goes from moving at you to moving past you. At the crucial moment — the last few feet of the ball’s half-second, 60-foot trip to the plate — you must of necessity switch from seeing the ball with your central vision to seeing it with your peripheral vision.

To add to your troubles, it is in this tenth of a second or so that the curveball also moves the most in reality. ... So just as the ball’s real downward and sideways motion is greatest, the curve’s apparent break is exaggerated by visual dynamics.

But there's some terrific physics involved here, too, and Jen-Luc Piquant also uncovered a little-known slice of 20th century history where baseball and science came together for one shining moment to prove that the curve ball really does curve. (For links to everything you could possibly want to know about the physics of baseball, check out this site.)

The man who brought those worlds together was Lyman Briggs, who served as director of the National Bureau of Standards (today it's known as the National Institute of Standards and Technology) from 1933 to 1945. Born on a farm in Battle Creek, Michigan, Briggs was an esteemed physicist, despite never attending high school; he won admission to Michigan State College by examination, graduating second in his class four years later. He was also a lifelong baseball fan, having played outfield on the college baseball team during the 1890s.

This was a period when the question of whether the curve ball actually curved was hotly debated. Among the true believers was St. Louis Cardinals pitcher Dizzy Dean. "Ball can't curve?" he famously declared during the 1930s. "Shucks, get behind a tree and I'll hit you with an optical illusion." But anecdotes aren't a substitute for scientific data. So once Briggs officially retired, he decided to do the experiments to settle the matter. And he was well-connected enough to enlist the aid of the pitching staff of the Washington Senators and their manager, Cookie Lavagetto, to do so. It wasn't just a question of baseball, either: the work tied into the Bureau's ongoing research into ballistics and projectiles, since the rate of spin is related to how much a ball (or projectile) is deflected at different speeds. Apparently NBS (now NIST) conducted lots of experiments with golf balls and baseballs; one of Briggs' publications was a 1945 paper entitled, "Methods for Measuring the Coefficient of Restitution and the Spin of the Ball."

So really, Briggs was the perfect man to take on the question of the curveball. Based on his earlier research, he already knew that, in 1852, a German engineer named Heinrich Magnus had accounted for the curved path of a cannon ball by describing a kind of "whirlpool of air" created around the projectile. This is now known as the "Magnus effect." (That said, Magnus wasn't the first to notice this. In 1672, Isaac Newton correctly inferred the cause after observing tennis players in his Cambridge college, while a British artillery engineer named Benjamin Robins used a similar concept to explain deviations in the trajectories of musket balls in 1742. Isn't the Internet an amazing place?) Per Wikipedia:

Generally the Magnus effect describes the laws of physics that make a curveball curve. A fastball travels through the air with backspin, which creates a high-pressure zone in the air ahead of and under the baseball. The baseball's raised seams augment the ball's ability to churn the air and create high pressure zones. The effect of gravity is partially counteracted as the ball rides on and into energized air. Thus the fastball falls less than a ball thrown without spin (neglecting knuckleball effects) during the 60 feet 6 inches it travels to home plate. On the other hand, a curveball, thrown with topspin, creates a high-pressure zone on top of the ball, which deflects the ball downward in flight. Instead of counteracting gravity, the curveball adds additional downward force, thereby giving the ball an exaggerated drop in flight.
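To get a feel for the sizes involved, here's a back-of-envelope estimate of a curveball's break, using a standard lift-force model for the Magnus effect and a constant-acceleration approximation. Fair warning: the lift coefficient below is an assumed round number, not a measured value, so treat the answer strictly as a ballpark figure:

```python
# Rough Magnus-effect estimate of a curveball's break.
# C_L = 0.15 is an assumed round number, not a measured value.
import math

rho = 1.2            # air density, kg/m^3
radius = 0.0368      # baseball radius, m
mass = 0.145         # baseball mass, kg
v = 30.5             # pitch speed, ~100 ft/s in m/s
t_flight = 0.6       # time to the plate, s
c_lift = 0.15        # assumed lift coefficient for the spinning ball

area = math.pi * radius ** 2
magnus_force = 0.5 * c_lift * rho * area * v ** 2   # F = 1/2 C_L rho A v^2
accel = magnus_force / mass
break_m = 0.5 * accel * t_flight ** 2               # constant-acceleration deflection
print(f"estimated break: {break_m / 0.0254:.1f} inches")
```

Even with such crude assumptions, the estimate lands in the same ballpark as the break Briggs eventually measured, which is a nice sanity check on the Magnus picture.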

Briggs took things a few steps further and set out to explore how much the curve of a baseball depends on its spin and its speed. Ideally, he wanted to study balls thrown by actual pitchers but it proved too difficult to photograph the flight path, even with a strobe camera flashing 20 times per second. So he switched to spinning a baseball on a rubber tee, thereby giving it spin, and the ball was then struck by wood projectiles shot from a large mounted airgun. By doing so, he managed to measure the speed and the curve; he just couldn't figure out how to measure the spin as well. Per his own notes, the images captured were so small "that the marks put on the ball to measure the spin could not be seen." And spin was looking to be the critical variable when it came to determining how much a ball curved.

To measure the spin of a pitched ball, he enlisted the pitching staff of the Washington Senators at Griffith Stadium. (Historical documents give us the names of those who helped: pitchers included Pedro Ramos and Camilio Pascual; Ed FitzGerald was the catcher for the experiments.) He attached one end of a light flat tape to the ball and then laid the rest of the tape -- a very long piece of tape, 60 feet or so -- loosely along the ground between the mound and home plate, making sure there were no twists. After the pitched curveball was caught, he simply counted the number of twists that had appeared in the tape. The number ranged from 15 or 16 down to 7 or 8 twists in the tape. Since each twist corresponds to one revolution of the ball, and the 60-foot trip at a pitch speed of 100 feet per second takes 0.6 seconds, Briggs concluded that the maximum spin was about 1600 rpm.
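The arithmetic here is simple enough to check for yourself: each twist in the tape is one revolution of the ball, and the known speed fixes the flight time.

```python
# Reproducing Briggs' tape-twist arithmetic: twists = revolutions,
# and distance / speed gives the flight time.
twists = 16           # maximum twists counted in the tape
distance_ft = 60      # mound-to-plate distance laid out with tape
speed_ft_s = 100      # pitch speed Briggs used

flight_time_s = distance_ft / speed_ft_s    # 0.6 s to the plate
spin_rpm = twists / flight_time_s * 60      # revolutions per minute
print(f"spin: {spin_rpm:.0f} rpm")
```

Sixteen revolutions in 0.6 seconds works out to 1600 rpm, exactly the figure Briggs reported.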

But Briggs still wasn't satisfied; the dude was thorough. How could he mark the ball in such a way as to determine the spin most precisely? It just so happens that NBS had a wind tunnel -- maybe they still do -- specifically for the purpose of studying aerodynamics. This allowed him to precisely control most of the variables. He tossed baseballs into the wind tunnel, and let them free-fall against the horizontal wind streams, which naturally caused the ball to curve. When a baseball finally hit the ground after the requisite 0.6 seconds, it bounced off a sheet of cardboard treated with lampblack, which left a smear on the ball, indicating point of impact. His conclusion: "An increase in the speed of the pitch beyond 100 feet per second reduced the curve only slightly and the important thing was the spin." Spin rather than speed was the critical factor in causing a pitched ball to break. And a curveball can curve up to 17-1/2 inches as it travels from the pitcher's mound to home plate.

NBS announced the results on March 29, 1959, and Briggs subsequently published his results in the American Journal of Physics. What does it all mean? Well, it's probably not going to help star pitchers perfect their curveballs, unless they're really fast at doing calculations in their heads. That kind of skill comes with practice, practice, practice. But knowing the precise relationship between all those variables and the forces acting on the ball is extremely useful in its own right -- if not for baseball, then most definitely for ballistics. And hey -- baseball players got to contribute to the forward march of science.

One of the coolest things about physics is that the same basic concept can pop up in several seemingly unrelated things -- say, a compact disc (CD), the antennae of seed shrimp, certain fossils in the Burgess shale, and scientific instruments used for spectroscopy. I'm talking about a diffraction grating, which Wikipedia helpfully defines as "an optical component with a periodic structure which splits and diffracts light into several beams traveling in different directions." Those periodic structures are usually dark lines, ridges, or rulings scratched onto plates that either reflect light, or are transparent so light can pass through. In either case, the light splits and scatters into its component colors.

You can see this effect simply by holding up a CD and shifting its angle a few times, so the light scatters off it in rainbow-hued flashes. (Vinyl records -- remember vinyl? -- also show this rainbow effect because of the way light reflects off the grooves in the vinyl, but it's harder to see.) It happens because of how CDs are made: one surface has many pits arranged in a spiral pattern, with a thin layer of metal over it to make those grooves more visible. And "information" -- in the form of the latest tune by your favorite band, or film or TV series -- is then encoded into those pits, and can be played back by the laser in your CD or DVD player.

The effect of a diffraction grating is similar to that created by photonic crystal structures: the source of that gorgeous flash of iridescent color in peacock feathers, opals, and the wings of dragonflies and butterflies (and the subject of prior blog posts on opals and butterfly wings). The color we see in those cases doesn't come from pigment molecules, but from the precise lattice-like structure of the wings (or shells, or feathers), which forces light waves passing through to interfere with one another, so they can propagate only in certain directions and at certain frequencies. It's similar to a 3D honeycomb, or an egg carton. Depending on the spacing between those building blocks, this creates a "photonic bandgap": certain frequencies of light are blocked, while others are preferentially let through.

In essence, they act like naturally occurring diffraction gratings, except photonic crystals only produce certain colors, or wavelengths, of light, while a diffraction grating will produce the entire spectrum -- much like a prism. In fact, it was Isaac Newton's experiments with the prism in the 17th century that first inspired a scientist named James Gregory to look more closely at bird feathers and outline some basic principles for what would become the modern diffraction grating. By 1785, Philadelphia inventor David Rittenhouse had figured out how to build the first diffraction grating by stringing hairs between two threaded screws. (In 1821, German physicist Joseph von Fraunhofer built a very similar device.)

But these early attempts at diffraction gratings were rough and imprecise, which limited their usefulness in spectrometers -- instruments that split light into its component colors and analyze the resulting spectra. Spectrometers are commonly used in astronomy, for example, and in many scientific labs here on Earth, to identify the signatures of specific elements in a given sample. It all comes down to how accurate the parallel lines drawn on the plate turn out to be: the smaller the distance between those parallel lines, the higher the resolution of the gratings. The man who did the most to improve the precision of diffraction gratings was a 19th century American physicist named Henry Rowland.
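The physics underneath all of this is the grating equation, d·sin(θ) = m·λ: a grating with line spacing d throws light of wavelength λ into sharp beams at angles θ, one for each whole-number order m. A minimal sketch, using the roughly 1.6-micrometer track spacing of a CD as the grating:

```python
import math

# Grating equation: d * sin(theta) = m * lambda.
# Solve for the angle of the m-th order diffracted beam.

def diffraction_angle_deg(spacing_m, wavelength_m, order=1):
    s = order * wavelength_m / spacing_m
    if abs(s) > 1:
        return None  # that order doesn't exist for this spacing
    return math.degrees(math.asin(s))

track_pitch = 1.6e-6  # approximate CD track spacing, ~1.6 micrometers
for wavelength, name in [(450e-9, "blue"), (550e-9, "green"), (650e-9, "red")]:
    angle = diffraction_angle_deg(track_pitch, wavelength)
    print(f"{name}: first-order beam at {angle:.1f} degrees")
```

Note that red emerges at a larger angle than blue -- the opposite ordering from a prism -- and that finer line spacing (smaller d) spreads the colors farther apart, which is exactly why more closely ruled gratings give higher resolution.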

Born in Honesdale, Pennsylvania, he came from a long line of Protestant theologians and his family expected him to become a minister. But young Henry rejected the classics and had a passion from the start for science, particularly electrical and chemical experiments that he devised himself. When he was 17, his family relented and sent him to Rensselaer Polytechnic Institute, where he earned a degree in civil engineering in 1870. After bouncing around in a couple of different jobs for the next two years, he ended up back at RPI as an instructor in "natural philosophy."

The professional respect of his peers was not immediately forthcoming; he had trouble getting his early scientific papers published in US journals. Frustrated, he sent a paper on his work in magnetic permeability to the eminent British physicist James Clerk Maxwell, who published it in London's Philosophical Magazine. The response from US scientists was, well, silence. I'm not saying Rowland was bitter about this, but years later, at an AAAS meeting in 1883, he declared:

"I here assert that all can find time for scientific research if they desire it. But here, again, that curse of our country, mediocrity, is upon us. Our colleges and universities seldom call for first-class men of reputation, and I have even heard the trustee of a well-known college assert that no professor should engage in research because of the time wasted."

(My, how times have changed! Nowadays, the priorities are reversed in many of the major US research universities, where teaching and -- heaven preserve us! -- public outreach are frequently frowned upon as distracting brilliant scientists from their research.)

But Rowland's European reputation snagged him a job in the end: in 1875 Daniel Colt Gilman was putting together a faculty for the newly established Johns Hopkins University, the first true research institution in the US (with graduate students and everything). One name that kept popping up when he chatted with European scientists was Henry Rowland, so he offered Rowland a position, which Rowland was all too happy to accept.

As a bonus, he got to take a tour of European laboratories to check out their set-ups and purchase any necessary equipment to reproduce similar world-class laboratories back at JHU. Things were definitely looking up for Henry. A highlight of his trip had to be visiting Hermann von Helmholtz's lab in Berlin, where he had a chance to work with the great physicist and conduct an experiment on the magnetic effect of a charged rotating disc -- something he'd never had the means to attempt before. And the experiment was a smashing success: he demonstrated unequivocally that a charged body in motion produces a magnetic field.

Once he got settled in at JHU, he attacked his scientific pursuits with renewed vigor, conducting a series of experiments to re-calibrate the value for the ohm -- the standard unit for measuring electrical resistance -- and re-creating James Joule's paddle-wheel experiment for measuring the mechanical equivalent of heat (i.e., how much energy it takes to increase the temperature of water by one degree). He avoided teaching and administrative duties as much as possible, and when he did teach, his students were often devastated by his withering critiques. So it was probably a good thing he focused mostly on research.

And then he became intrigued by the problem of imprecise diffraction gratings. How did Rowland solve the problem? Why, by inventing a "ruling engine," of course: a machine in which a single precision screw advanced a diamond tip a tiny, uniform distance between each line it etched onto a concave surface. He went on to use his diffraction gratings in spectrometers to study the solar spectrum, producing an impressive photographic map of that spectrum in 1888. His gratings were so much better than any others available at the time that he sold hundreds to scientists all over the world -- at cost, because that's the kind of altruistic guy he was, at least when it came to science.

Ultimately, Rowland's name became so strongly associated with diffraction gratings that one is featured in his official 1897 portrait by artist Thomas Eakins. And when the American Physical Society was founded in 1899, Rowland became its very first president. Alas, Rowland's health failed at a relatively young age: he died in 1901, of complications from diabetes. Maybe his name isn't synonymous with that of Albert Einstein, but the next time you catch a glimpse of that rainbow on your DVD, take a moment to think about Henry Rowland and his diffraction gratings.

It's Halloween! Jen-Luc Piquant has donned her usual vampire costume for the occasion, although she was tempted to dress up as Lady Gaga this year, just to mix things up a bit. But a Gaga outfit would have clashed with her stylin' beret, and let's face it: Jen-Luc was never meant to be a bleached blonde. Just in time for the spooky festivities, we stumbled across an amazing twist on the pinhole camera, via The Daily What (one of our must-read feeds). Artsy photographer Wayne Martin Belger constructs his own pinhole cameras, which are works of art all by themselves -- and in this case, he built a pinhole camera out of, well, the 150-year-old skull of a 13-year-old girl.

It's called "The Third Eye," and his website claims he has used it "to study the beauty of decay." The camera is about 4 inches by inches, and has elements made of aluminum, titanium, brass and silver, with the occasional gemstone thrown in -- because accessorizing is so important. The light enters through the "third eye" (basically an aperture in the middle of the skull's forehead), and projects an inverted image of whatever scene is within the camera's field of view. (There is no lens.) I couldn't find any specific information on the exposure times Belger used, but with any pinhole camera, it can range from five seconds to several hours, and, in some cases, days.

Probably the earliest precursor to the pinhole camera was the camera obscura (Latin for “dark room”). In its simplest form, the camera obscura is little more than a small hole in a shade or a wall, through which light passes from a sunlit garden, for example, into a darkened room, projecting an inverted image of the scene onto the wall opposite the hole. An artist could tack a piece of sketch paper to the wall and trace the key outlines of the subject, then complete the painting.

The phenomenon results from the linear nature of light, evidence of the “particle” side of its dual personality. Light reflects off each point of an object and travels out in all directions, and the pinhole acts as a lens, letting in a narrow beam from each point in a scene. The beams travel in a straight line through the hole and then intersect, so that light from the bottom of the scene hits the top of the opposite wall, and vice versa, producing an upside-down image of the outside world on the wall.
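Because those beams travel in straight lines, the size of the projected image follows from nothing fancier than similar triangles: the image scales by the ratio of the wall distance to the object distance, and flips. A minimal sketch (the statue and distances are invented for illustration):

```python
# Similar-triangles sketch of pinhole imaging: rays travel straight
# through the hole, so the image height scales by
# (hole-to-wall distance / object-to-hole distance) and inverts.

def pinhole_image_height(object_height_m, object_dist_m, wall_dist_m):
    # The negative sign records the upside-down image.
    return -object_height_m * wall_dist_m / object_dist_m

# A 2 m garden statue, 10 m from the hole, projected onto a wall
# 2.5 m behind the hole:
h = pinhole_image_height(2.0, 10.0, 2.5)
print(f"image height: {h:.2f} m (negative = inverted)")  # → -0.50 m
```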

The visual effect can be quite striking. “When I first saw an image projected like this, I just thought I was seeing God,” New York City artist Vera Lutter told The New Yorker in March 2004. She actually creates life-sized silver gelatin prints using the camera obscura principle, tacking large photosensitive paper to the wall and recording the image, often over a long period of time. She got the idea while living in an old high-rise loft in the Garment District (an illegal sublet, in fine Big Apple tradition). As she told BOMB magazine in 2004:

I was overwhelmed and incredibly impressed by the city, the light, the sound, the busyness of the streets. It was fantastic. Through the windows, the outside world flooded the space inside and penetrated my body. It was really an impressive experience on all levels, and I decided to turn it into an art piece: the space, the room inside which I had this experience, would become the container to transform that very experience. The room would become a transfer station from outside to inside, the window itself the eye that sees from inside out. I placed a pinhole on the window surface and replaced my body with a sensitive material, and that was the photographic paper. This setup was meant to record my experience, in place of myself. My intention was not to make a photograph as such but to make a conceptual piece that in its own way repeated and transformed what I had observed. Conceptual art was the spirit in which I was trained in art school. At the same time, I wanted to keep the process as immediate and direct as possible. That’s why I decided to work with the pinhole and not a lens, and to project immediately onto photographic paper and not use the intermediary of the negative, which conventionally is printed and editioned in photography.

There are accounts of the camera obscura (or related phenomena) in writing from ancient Greece, China, and other places, but one of the most exhaustive studies of the science behind the effect took place in the 11th century, thanks to the Arab scholar Alhazen of Basra (a.k.a. Ibn al-Haytham). The son of a civil servant, living during what we now know as the Islamic "Golden Age," Alhazen devised a plan to construct a dam to control flooding of the Nile River in Egypt. When he pitched his idea to Egypt’s ruler, Caliph al-Hakim, he wound up with a commission to do just that. Alas, al-Hakim was known as “the Mad Caliph.” He didn’t provide sufficient funds, materials or labor to complete the project, and yet failure was not an option. To avoid being summarily executed, Alhazen did the only sensible thing: he pretended to be mad himself, and spent the years until the caliph's death in 1021 under house arrest.

Since he had a lot of time to kill, Alhazen figured out how to conduct simple experiments in optics. He darkened a room and made a hole in one wall, and hung five lanterns in the room next door. He noticed that five “lights” then appeared on the wall inside the darkened room. The lanterns and the hole were arranged in a straight line, so he concluded that light travels in straight lines.

And even though the light from all five lanterns traveled through the hole at the same time, it didn’t get mixed up in the process: there were still five separate “lights” on the wall. Since Aristotle, the common assumption had been that the eye sent out rays of light to scan objects. Alhazen determined that light was reflected into the eye from the things one observed. He also recorded the laws of reflection and refraction, correctly attributing the effects to the fact that light travels more slowly through denser media. His treatise on optics, Kitab al-Manazir, was translated into Latin in the Middle Ages -- one of only a handful of his more than 200 works that survived.
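Alhazen's insight about light slowing in denser media is the physical content of what we now write as Snell's law, n₁·sin(θ₁) = n₂·sin(θ₂), where each refractive index n measures how much light slows in that medium. A small sketch:

```python
import math

# Snell's law: n1 * sin(theta1) = n2 * sin(theta2).
# Solve for the refracted angle, given the two refractive indices.

def refraction_angle_deg(n1, n2, incidence_deg):
    s = n1 * math.sin(math.radians(incidence_deg)) / n2
    if abs(s) > 1:
        return None  # total internal reflection: no refracted ray
    return math.degrees(math.asin(s))

# Air (n ≈ 1.00) into water (n ≈ 1.33): the ray bends toward the normal.
angle = refraction_angle_deg(1.00, 1.33, 45.0)
print(f"refracted angle ≈ {angle:.1f} degrees")
```

Going the other way, from a dense medium into a thin one at a steep enough angle, the function returns `None` -- that's total internal reflection, the trick that makes optical fibers work.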

The 17th century Dutch painter Johannes Vermeer (1632-1675) is believed to have used the camera obscura. American etcher and lithographer Joseph Pennell first speculated on the possibility in 1891, citing as evidence the “photographic perspective” of certain paintings. Others have pointed to how Vermeer seems to reproduce idiosyncrasies of optical images and “out-of-focus” effects, such as the reflection of sources of light off shiny surfaces. More recently, Philip Steadman – a professor of architecture and town planning at University College London and author of Vermeer’s Camera – worked backwards from the conventional method of artists for setting up perspective views, reconstructing the geometry of the spaces depicted.

Steadman (somewhat controversially) concluded that as many as ten different paintings represent the very same room. Steadman maintains that his perspective reconstructions make it possible to plot the positions in space of the theoretical viewpoints of each of the ten paintings. He didn’t just rely on fancy mathematics; he tested his predictions, like any good scientist. Specifically, he built a one-sixth scale physical model of the room in question, outfitted it with a photographic plate camera in place of Vermeer’s camera obscura, then created photographic simulations of the paintings, testing light and shadow -- all of which Vermeer faithfully reproduced in paint, lending credence to Steadman’s theory. The BBC built a full-size reconstruction for a television film on the subject. It cast a full-size image of The Music Lesson onto a translucent screen, bright enough to show up on film, and certainly sufficient to have served as a drawing aid for Vermeer.

And now modern-day artists (like Lutter and Belger) are following in Vermeer's footsteps, giving rise to an entirely new field of photography commonly known as solargraphy. This is pretty much the most popular use of pinhole cameras these days, since they can be constructed out of very simple materials, enhancing their appeal to (frequently impoverished) artists. All you need is a box capable of shutting out all light, except for what enters through a small pinhole in one end. Need a shutter? Any old cardboard flap will do, taped over the pinhole in such a way that it can swing open or shut, just like a hinge. Or you can just adapt an old 35 mm camera, replacing the lens with a simple pinhole and keeping the shutter and film winding mechanisms.

But the world's largest pinhole camera, according to Wikipedia, was built in an old F-18 hangar at a defunct fighter base in Irvine, California. It took six photographers and a small army of assistants to block out all the light from the hangar, using black tape and black spray paint (40 gallons' worth!). Then they coated a large piece of muslin with gelatin silver halide to make it light-sensitive (Lutter would approve) and hung it from the ceiling. They had to develop the film in a gigantic tray the size of an Olympic swimming pool. The end result? A haunting image of the air station, with control tower and runways, with the San Joaquin Hills visible in the distance, preserved for posterity. It wasn't a small print, either, measuring 108 feet wide and 85 feet high. I'm sure it barely fit into the exhibit space at Pasadena's Art Center College of Design when it debuted in September 2007.

But size isn't everything, right? Maybe Belger's skull-adapted pinhole camera can't compete with an airplane hangar on the size scale, but he certainly wins mega-bonus points from the cocktail party for having his camera not just be a tool to produce a photographic work of art -- but to transcend its utilitarian functions to become a work of art in itself.

Co-blogger Lee might not have time to blog much these days -- hey, she's swamped doing double teaching duty, with uber-long commutes -- but she did find time in between grading stacks upon stacks of student papers to send me an awesome video link called "Millimeters Matter," showing various insects being smacked with itsy-bitsy tarts launched by a mini-trebuchet. Her comment: "Someone is a major geek with too much time on their hands... and an Easy Bake oven."

She has a point. Check out the video below. Whoever made it (ad agency?) had to (1) build a tiny trebuchet, (2) calibrate said device appropriately to fling a plethora of tiny tarts, (3) make those tiny pastries (which may indeed involve the equivalent of an Easy Bake oven), and (4) corral a host of insects to suffer the ignominy of being hit with various pie fillings. Apparently it's an ad for Samsung electronics, and this isn't its first foray around the blogosphere: Cynical C posted it back in 2007. Jen-Luc Piquant has no idea if actual insects were harmed while making this video, but her attitude is, hey, they're insects. Nature will probably make more. (Except for bees -- colony collapse disorder is actually a serious problem, and apparently there's a viral cause. Who knew?)

But here's a burning question, raised by one of the commenters on YouTube, who claims the device in the video isn't "really" a trebuchet as technically defined: i.e., it has no counterweight. Honestly, it's difficult to tell for sure, since the actual catapult is only featured briefly at the start of the video. I guess it depends on what one expects said counterweight to look like.

For those who don't know much about catapults, that term applies broadly to many different devices: trebuchets, ballistas (which look remarkably like giant crossbows), and mangonels, among them. They all are designed to fling heavy objects over long distances, and historically were used as weapons of war -- pretty darned effective ones, too, when the machines were well-designed. They are distinguished by the projectile mechanism. Ballistas and mangonels are so-called torsion devices: a cord is twisted tightly and then released, causing the projectile of choice to fly off into the wild blue yonder. (With any luck, it hits its target.) In contrast, the trebuchet gets its power from, well, gravity, in the form of a counterweight that is released, causing the lever to whip around like a sling to launch a projectile.

It's simple physics: a lever and fulcrum principle, combined with what amounts to an amped-up slingshot. Think about a typical seesaw (lever) in a playground, carefully balanced on a point (fulcrum) about which it rotates -- or rather, moves up and down. Now think about what happens when a much heavier child sits on one end while a small child sits on the other. If the weight difference is significant enough, and the heavier child sits down abruptly, it's entirely possible that the unfortunate smaller child could be thrown from the seesaw, thereby becoming a projectile.

That's pretty much what's happening with a trebuchet, except the fulcrum is positioned a bit closer to the much-heavier counterweight for the most efficient energy transfer. At the other end of the lever is the sling holding a projectile. Dropping the counterweight sets the lever in motion, the sling is pulled along with it, and when it reaches a given angle, the projectile is released. (Finding the optimum angle is one factor among many when it comes to building a successful trebuchet.) In fact, according to the folks at Real World Physics, it's akin to the physics of a golf swing, performed upside down.
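You can put rough numbers on that energy transfer with a simple budget: the potential energy given up by the falling counterweight, M·g·h, becomes the projectile's kinetic energy, ½·m·v². This is an idealization -- the efficiency factor and all the masses below are assumed for illustration, not taken from any historical machine:

```python
import math

# Energy-budget sketch: counterweight potential energy M*g*h goes
# (partly) into projectile kinetic energy (1/2)*m*v^2, so
# v = sqrt(2 * efficiency * M * g * h / m).

def launch_speed(counterweight_kg, drop_height_m, projectile_kg, efficiency=0.6):
    # efficiency < 1 because real trebuchets waste energy swinging the arm
    energy = efficiency * counterweight_kg * 9.81 * drop_height_m
    return math.sqrt(2 * energy / projectile_kg)

# A 1000 kg counterweight dropping 4 m, flinging a 15 kg stone:
v = launch_speed(1000, 4.0, 15.0)
print(f"launch speed ≈ {v:.0f} m/s")
```

Even with generous losses, a big counterweight and a modest stone get you well past highway speeds at release, which is why these machines were so effective against stone walls.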

There are countless hobby enthusiasts today who love trebuchets and other catapults, even designing and building their own. The popular Punkin Chunkin competition, featured every year on The Science Channel, draws many such people into this arcane world. You can find online classroom activities based on comparing a trebuchet and catapult, courtesy of NASA, and plenty of instructions and design tools, in case you want to take a stab at building your own.

In fact, USC string theorist Nick Warner is such an enthusiast: he once told me of building a trebuchet in the backyard with his daughter -- and with that kind of brain power brought to bear on the challenge, I'm sure the resulting machine was quite effective. And that pretty much makes him one of the Coolest Dads Ever in my book.

When it comes to assessing performance, it's really all about range, which in turn depends on things like mass, shape and size; wind speed; the elevation of whatever's doing the hurling; and lots of other factors. Competitors who enter Punkin Chunkin can launch their pumpkins using any mechanical means, usually slingshots, catapults, trebuchets, or pneumatic air cannons. (The pumpkin has to be very firm, because the launch doesn't count if the pumpkin splatters in mid-air, dubbed "pumpkin pie in the sky.") An air cannon holds the current world record: apparently it chunked a pumpkin 4,623.43 feet in 2009, although Wikipedia tells me there's an even longer result pending verification: the same machine achieved a range of 5,545.43 feet just last month.
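For a rough sense of the speeds involved, the drag-free range formula R = v²·sin(2θ)/g gives a lower bound on the launch speed needed to cover a given distance (a real pumpkin fights substantial air drag, so actual muzzle speeds run higher):

```python
import math

# Idealized, drag-free projectile range: R = v^2 * sin(2*theta) / g.
# At a 45-degree launch angle sin(2*theta) = 1, which minimizes the
# speed needed to reach a given range.

def range_m(speed, launch_angle_deg, g=9.81):
    return speed ** 2 * math.sin(math.radians(2 * launch_angle_deg)) / g

def min_speed_for_range(target_m, g=9.81):
    return math.sqrt(target_m * g)  # best case: 45-degree launch

record_ft = 4623.43
record_m = record_ft * 0.3048
print(f"speed needed (no drag): {min_speed_for_range(record_m):.0f} m/s")
```

That works out to well over 250 mph at the muzzle just to reach the record distance in a vacuum -- one reason air cannons, not trebuchets, hold the record.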

Small wonder that for several centuries, the trebuchet was the weapon of choice for laying siege to medieval castles. (The projectiles tended to be much larger than your average pumpkin, but the ranges achieved were still impressive.) The earliest version, called a traction trebuchet, was known in Ancient Greece and China around the 4th Century BC. It required many men to pull the lever instead of a counterweight, in order to launch a projectile. The counterweight version appeared in the 12th century, and was far more effective, flinging objects weighing as much as 350 pounds against medieval fortresses. Use two such machines in tandem, and it was possible to launch a projectile every 15 seconds. That kind of damage adds up over time. The trebuchet was a devastating weapon, and only fell out of favor with the advent of gunpowder-based weapons (like cannon).

For instance, in the summer of 1191, Richard the Lionheart used one of his favorite trebuchets against the city of Acre during the Crusades. He called it "Bad Neighbor" (Malvoisine), and the accumulated impact of all those projectiles breached the city wall and supposedly shattered the "Cursed Tower." His other favorite trebuchet was nicknamed "God's Stone-Thrower" -- consider them the Fat Man and Little Boy of the Crusades. Another English king, Edward Longshanks, had a trebuchet built called "Warwolf" that he used in the siege of Stirling Castle in 1304. Innovation has always been a hallmark of such machines: around 1187, an Islamic scholar named Mardi bin Ali al-Tarsusi described an unusual hybrid trebuchet: as it fired, it also cocked a separate crossbow, most likely as a means of protecting the men operating the machine during battle.

Stones were by far the most common projectiles, although attackers on occasion would launch diseased corpses and manure over castle walls in hopes of infecting the people huddled behind them -- an early form of biological warfare. And if you're a henchman of the French aristocrat Louis de Lombard in Monty Python's Holy Grail, you pitchez la vache -- hurl cows and other farm animals over the walls at your enemies. Because the trebuchet isn't just for laying siege to a castle -- it's also an awesome defense against an invading army, or just a random group of knights in search of the holy grail.

In 1990, cyberpunk authors William Gibson and Bruce Sterling collaborated on a historical thriller, The Difference Engine, in which the 19th century inventor Charles Babbage perfected his steam-powered Analytical Engine. It's an awesome, visionary read, detailing an alternate history in which the computer age arrives a century earlier, Lord Byron becomes prime minister instead of dying young, and a rebellious group of subversive, anti-technology Luddites -- the Insane Clown Posse of their day perhaps? -- conspires to overthrow the intellectual elite in power. We can only imagine how our daily lives, science, and the course of history may have been altered if Babbage’s ingenious engines had given us comparable computing power in the 19th century.

Except there is exciting news! Plans are afoot to raise funds to finally build Babbage's Analytical Engine, the last and most complex of his many designs for "thinking machines." This is not entirely unprecedented. In the 1990s, a team of scientists at London’s Science Museum, led by Doron Swade, built a working model of Babbage's second Difference Engine (the precursor to the Analytical Engine), using only the materials and tools that would have been available in Babbage’s day. And it worked. The machine is now prominently displayed in the museum, and Swade wrote a marvelous book about his experiences: The Difference Engine: Charles Babbage and the Quest to Build the First Computer. Here's Swade talking with Wired about his model Difference Engine:

As impressive a feat as building the Difference Engine was, the Analytical Engine is even more impressive. John Graham-Cumming, a programmer and science blogger, now hopes to realize Babbage’s vision. Various folks have assembled different elements of the engine over the last 173 years, but this would be the first complete working model. Graham-Cumming told the London Telegraph, "The big difference between it and machines which came 100 years later was that the programme was stored externally, in punch cards. It is basically a giant number-crunching machine–which is effectively what modern computers are today, it’s just that those numbers appear to us as words or images on a screen.”

Just who was this visionary whose designs were a century before their time? Well, Babbage's personal motto may have been "Born to Tinker." The son of a wealthy London banker, he was one of those kids who loved to take apart his toys to see how they worked -- probably with the usual mixed results when it came time to put the toys back together again. He had a knack for math, too, teaching himself algebra before attending Cambridge University’s Trinity College. He loved numbers and found minute details endlessly fascinating, even compiling a collection of “jest books” to scientifically analyze “the causes of wit.” I guess nobody had the heart to tell him that explaining a joke never works. He even cracked the supposedly unbreakable “Vigenère” cipher around 1854, considered by many historians to be the most significant breakthrough in cryptanalysis since the 9th century.

Babbage could be tiresome, even pompous, but he could also be quite charming. Once, as a “diversion,” he drew up a set of mortality tables, now a basic tool of the modern insurance industry. “A man with such a head for numbers and flair for flattery was bound to end up in life insurance,” historian Benjamin Woolley quipped in his biography of Ada Lovelace (a Babbage fan and collaborator), The Bride of Science.

He wasn't handsome, either: the writer Thomas Carlyle once described Babbage as “a cross between a frog and a viper.” But despite his physical shortcomings and pedantic tendencies, Babbage had a strong romantic streak, marrying the woman he loved without his father's permission and suffering the consequences: he was cut out of the will, forfeiting his chunk of the substantial family fortune.

So, he wasn't wealthy, but he loved his wife, and he was sufficiently well off financially to pursue his love of invention. Among other things, he invented a speedometer and a “cowcatcher,” a device that could be affixed to steam locomotives to clear cattle from the tracks.

Babbage first conceived of the idea of a calculating engine in 1821, when he was examining a set of mathematical tables with astronomer John Herschel (the son of William Herschel). Such tables were used to make calculations for astronomy, engineering and nautical navigation, but they were calculated by hand and were riddled with errors. So the answers were often wrong, no doubt causing any number of shipwrecks and engineering travesties. “I wish to God these calculations had been executed by steam!” Babbage exclaimed in exasperation after finding more than a thousand errors in one table. And so began his lifelong quest to mechanize the process.

Then Babbage learned of a novel scheme employed by the French mathematician Gaspard Riche de Prony. France had recently switched to the metric system of measurement. This gave scientists a much-needed standardized system to measure and compare results, but it also required a whole new set of calculating tables with which to carry out increasingly complex scientific calculations. So de Prony established calculating “factories” to manufacture logarithms the same way workers manufactured mercantile goods.

These human "computers" were mostly out-of-work hairdressers who had found their skills at constructing elaborate pompadours for aristocrats much less in demand after so many former clients lost their heads (literally) at the height of the French Revolution. De Prony devised a rote system of compiling results based on a set of given values and formulae, and the workers just cranked out the answers in what must have been the world’s first mathematical assembly line. Babbage figured that if an army of untrained hairdressers could make the calculations, so could a computing “engine.” In fact, a mechanical calculator for adding and subtracting numbers, called the arithmometer, had only just been invented.

Suitably inspired, Babbage designed his first “Difference Engine,” which created tables of values by finding the common difference between terms in a sequence. Powered by steam, it was limited only by the number of digits the machine had available. His demonstration model was the size of a steamer trunk, with two brass columns supporting two thick plates sandwiching a complicated array of gears and levers. Small wheels with numbers engraved in their rims displayed the terms and results of the calculations. Babbage proudly displayed his prototype to the many visitors he entertained in his Dorset Street home, although most did not share his love of numbers.
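That "common difference" trick is the method of finite differences: every polynomial has some level of differences that is constant, so once those starting values are dialed in, the whole table unrolls using nothing but repeated addition -- exactly the operation gears and carry levers can perform. A sketch for f(x) = x², whose second difference is always 2:

```python
# Difference Engine sketch: tabulate f(x) = x^2 with only addition.
# Differences of squares: 1, 3, 5, 7, ... and the second difference
# is constant (2), so two running sums generate the whole table.

def tabulate_squares(count):
    value, first_diff, second_diff = 0, 1, 2  # f(0)=0, f(1)-f(0)=1
    table = []
    for _ in range(count):
        table.append(value)
        value += first_diff        # next function value
        first_diff += second_diff  # next first difference
    return table

print(tabulate_squares(8))  # → [0, 1, 4, 9, 16, 25, 36, 49]
```

The same scheme handles any polynomial -- a cubic just needs one more column of differences -- which is why a machine of columns and carry mechanisms could print entire mathematical tables.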

In an effort to pique their interest, he put on a dog-and-pony show: he would announce that the machine would add numbers by two, then begin turning the crank. The wheels with the carved figures would begin displaying the predictable sequence: 0, 2, 4, 6, 8, and so on. The machine would do this for roughly 50 iterations. Just when the audience was becoming bored and restless, the number would suddenly leap to a new, seemingly random value and then continue the adding-by-two sequence from there. It seemed miraculous, but really, all Babbage had done was program the engine to perform one routine for a given number of turns of the crank, and then to jump to a “subroutine" for another few turns before returning to the original routine.
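That parlor trick amounts to a tiny stored program: count by twos, then at a preset turn jump to a "subroutine" value and resume counting. A sketch (the turn count and the surprise value below are arbitrary, chosen only for illustration):

```python
# Mimic Babbage's demonstration: display a predictable add-by-two
# sequence, then at a pre-programmed turn leap to a surprise value
# and carry on adding by two from there.

def babbage_party_trick(turns, jump_at=50, jump_to=2718):
    value, shown = 0, []
    for turn in range(1, turns + 1):
        shown.append(value)
        value = jump_to if turn == jump_at else value + 2
    return shown

seq = babbage_party_trick(53)
print(seq[47:])  # the moment the "miracle" happens
```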

Why was a full-scale version of this amazing machine never built? Well, Babbage was a perfectionist who was never satisfied with his work, and continually revised his blueprints. He was more of a tinkerer than, say, a "finisher." He spent thousands of pounds of government funding to rebuild the same parts over and over. Eventually the British government lost patience with his lack of progress and decided to suspend funding for his work in 1832, terminating the project altogether in the 1840s.

Nor was the government inclined to look kindly on the successor to the Difference Engine: the Analytical Engine. This new, improved machine wouldn’t just calculate a specific set of tables; it would solve a variety of math problems. Babbage based his design on the cotton mill. There was a memory function called the “store”, and a processing function called the “mill.”

But the Analytical Engine would use punched cards to control the cogs, based on the weaving cards developed to “program” looms to weave particular patterns -- essentially pre-Victorian floppy disks or CD-ROMs. When strung together, these cards would enable the machine to perform “loops,” whereby a sequence of instructions would be repeated over and over, or “conditional branching,” where one series of cards would be skipped and another read if certain conditions were met.
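Those two tricks -- loops and conditional branching -- are still the heart of every program today. Here's a minimal sketch of a "card chain" interpreter; the card names (ADD, LOOP_IF_LESS, SKIP_IF_GE) are invented for illustration and don't correspond to Babbage's actual notation, but they show how a strung-together sequence of cards could repeat itself or skip ahead depending on conditions.

```python
def run_cards(cards):
    """Interpret a chain of (operation, argument) 'punched cards'.
    Supports a loop card and a conditional-branch card (both hypothetical)."""
    acc = 0   # a single running total, standing in for the Engine's "store"
    pc = 0    # current position in the card chain
    while pc < len(cards):
        op, arg = cards[pc]
        if op == "ADD":
            acc += arg
        elif op == "LOOP_IF_LESS" and acc < arg:
            pc = 0                       # loop: go back to the start of the chain
            continue
        elif op == "SKIP_IF_GE" and acc >= arg:
            pc += 2                      # conditional branch: skip the next card
            continue
        pc += 1
    return acc

# A loop: keep adding 2 until the total reaches 10
run_cards([("ADD", 2), ("LOOP_IF_LESS", 10)])     # returns 10

# A branch: since 5 >= 3, the ADD 100 card is skipped
run_cards([("ADD", 5), ("SKIP_IF_GE", 3), ("ADD", 100), ("ADD", 1)])  # returns 6
```

Swap the gears and crank for transistors and you have, in essence, the fetch-execute loop of a modern CPU.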

Babbage's design provoked a heated reaction from those who just couldn't grasp what he was after. Robert Peel, the head of England’s Tory administration at the time, denounced the Analytical Engine as a “worthless device, its only conceivable use being to work out exactly how little benefit it would be to science.” It would have required tens of thousands of parts, intricately assembled into a frame the size of a small locomotive. Some historians have speculated that Babbage’s machines were never built because they demanded a level of engineering sophistication that simply didn’t exist in pre-Victorian England. But it turns out it was mostly a question of money.

Frankly, it's still a question of money, since Graham-Cumming estimates it will cost about $640,000 to complete the project -- assuming there aren't any overruns. He's sufficiently motivated that he's been soliciting donations from various sources. The idea is to raise the money by this January so he can start building his Analytical Engine. And he's off to a great start, with around 1,600 supporters so far who have pledged funds to the project. He will base his project on the original Babbage blueprints (and we don't envy him the task of sifting through Babbage's many revisions), which will be digitized, the better to decipher the inventor's many annotations. Then Graham-Cumming and his team will build a 3D simulation of the gadget on a computer before attempting to build the real thing. Once he's done, he'll donate the machine to a museum -- hopefully the same one that houses Swade's Difference Engine. Keep Babbage's rebuilt machines together, I say. That's how he would have wanted it.

Physics Cocktails

Heavy G

The perfect pick-me-up when gravity gets you down.
2 oz Tequila
2 oz Triple sec
2 oz Rose's sweetened lime juice
7-Up or Sprite
Mix tequila, triple sec and lime juice in a shaker and pour into a margarita glass. (Salted rim and ice are optional.) Top off with 7-Up/Sprite and let the weight of the world lift off your shoulders.

Any mad scientist will tell you that flames make drinking more fun. What good is science if no one gets hurt?
1 oz Midori melon liqueur
1-1/2 oz sour mix
1 splash soda water
151 proof rum
Mix melon liqueur, sour mix and soda water with ice in shaker. Shake and strain into martini glass. Top with rum and ignite. Try to take over the world.