"Takes 1 part pop culture, 1 part science, and mixes vigorously with a shakerful of passion."
-- Typepad (Featured Blog)

"In this elegantly written blog, stories about science and technology come to life as effortlessly as everyday chatter about politics, celebrities, and vacations."
-- Fast Company ("The Top 10 Websites You've Never Heard Of")

Happy Hour

Jen-Luc Piquant stumbled across an intriguing science news story this morning: it seems that engineers at Ohio State University "have invented a new kind of nano-particle that shines in different colors to tag molecules in biomedical tests." The secret ingredient? Quantum dots! We love quantum dots here at the cocktail party, yet they rarely make news headlines. This seems like a good time to indulge in a spot of self-plagiarism and adapt some information from my 2007 post on the subject.

Quantum dots are tiny bits of semiconductors -- sometimes called nanocrystals, which just doesn't carry the same panache -- just a few nanometers in diameter. It's like taking a wafer of silicon and cutting it in half over and over again until you have just one tiny piece with about a hundred to a thousand atoms. That's a quantum dot. Billions of them could fit on the head of a pin.

Size matters when it comes to semiconductors: smaller is usually better. Because they're so tiny, quantum dots have some unusual materials properties -- specifically, the all-important electrical and optical ones -- thanks to the quantum effects that kick in at smaller size scales, so they are of enormous interest to researchers. It's interesting physics fundamentally, and it offers an impressive sampling of potentially lucrative practical applications.

It helps to place semiconductors in general in the appropriate context, i.e., right smack between insulators and conductors. Insulator atoms hoard their electrons greedily, like misers or overprotective parents, and rarely part with them, while conductor atoms are like spendthrifts or exceedingly permissive parents, letting their electrons run amok all over the place (and a good thing, too, otherwise we'd never enjoy the benefits of electrical current).

Semiconductor atoms are juuuust riiiight. They don't fling their electrons around all willy-nilly, but neither do they hang onto them too tightly. It takes a bit of an energy boost to knock an electron loose in a semiconductor, and when the electron breaks free, it leaves behind a "hole" in the atom's electronic structure -- a vacancy, if you will, that another electron, sooner or later, will come along to fill. So a photon strikes a semiconductor atom and creates an electron-hole pair. This enables the electrons to flow as a current. And current = power.
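That energy boost has a specific threshold: the semiconductor's bandgap. As a rough sketch in Python (silicon's bandgap is about 1.12 eV at room temperature; treat the numbers as approximate), you can check whether a photon of a given wavelength carries enough energy to create an electron-hole pair:

```python
HC_EV_NM = 1239.84  # photon energy in eV for a wavelength in nm: E = hc / wavelength

SI_BANDGAP_EV = 1.12  # silicon's room-temperature bandgap (approximate textbook value)

def can_create_electron_hole_pair(wavelength_nm, bandgap_ev=SI_BANDGAP_EV):
    """A photon can knock an electron loose only if its energy exceeds the bandgap."""
    photon_energy_ev = HC_EV_NM / wavelength_nm
    return photon_energy_ev >= bandgap_ev

print(can_create_electron_hole_pair(650))   # visible red light, about 1.9 eV: enough
print(can_create_electron_hole_pair(1500))  # infrared, about 0.83 eV: not enough
```

Visible light clears silicon's bandgap easily; longer-wavelength infrared photons fall short and pass through without generating any current.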

Back in 1990, European researchers managed to get porous silicon to emit red light, and figured it came about because of "quantum confinement" related to the dots' small size. At 10 nanometers or less, the electrons and holes are squeezed into such small dimensions that this alters the electronic and optical properties; it's the critical feature of most nanoscale materials, frankly. Things snowballed from there, with scientists making more silicon dots (and, later, germanium dots) that emitted light in lots of bright, pretty colors, especially the highly desirable green and blue ranges. The bigger the dot, the redder the light; the emitted light becomes shorter and shorter in wavelength -- and higher in energy -- as the dots shrink in size. This is called "tunability" because you can pretty much tailor the dots to emit whatever frequency of visible light you happen to need for a given application, simply by altering the size of the dots.
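That size-color relationship can be sketched with a toy "particle in a sphere" (Brus-style) model. This is strictly a back-of-the-envelope illustration, not a design tool: the default bandgap and effective masses are rough literature values for CdSe, and the electron-hole Coulomb attraction is ignored entirely.

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron rest mass, kg
EV = 1.602176634e-19     # joules per electron-volt
HC_EV_NM = 1239.84       # converts photon energy (eV) to wavelength (nm)

def emission_wavelength_nm(diameter_nm, e_gap_ev=1.74, m_e_eff=0.13, m_h_eff=0.45):
    """Toy quantum-confinement model: emission energy = bulk bandgap plus a
    particle-in-a-sphere term that grows as the dot shrinks (Coulomb term dropped).
    Defaults are rough CdSe values."""
    radius_m = diameter_nm * 1e-9 / 2
    confinement_j = (HBAR * math.pi) ** 2 / (2 * radius_m ** 2) * (
        1 / (m_e_eff * M_E) + 1 / (m_h_eff * M_E))
    total_ev = e_gap_ev + confinement_j / EV
    return HC_EV_NM / total_ev

print(round(emission_wavelength_nm(4.0)))  # smaller dot: bluer light
print(round(emission_wavelength_nm(8.0)))  # bigger dot: redder light
```

Halving the diameter quadruples the confinement term, pushing the emission from the red end of the spectrum toward blue -- exactly the "tunability" described above.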

The most obvious application is using quantum dots as an alternative to the organic dyes used to tag reactive agents in fluorescence-based biosensors. You know, the dyes start to glow when, say, a harmful toxin is present. But the number of colors available using organic dyes is limited, and they tend to degrade rapidly. Quantum dots offer a broader spectrum of colors and show very little degradation over time. Having all those colors also means you can make light-emitting diodes (LEDs) from quantum dots, precisely tuned in the blue or green range. You can also build quantum dot LEDs that emit white light for laptop computers or interior lighting in cars. As for electronics, the possibilities are endless: all-optical switches and logic gates, for instance, with a millionfold increase in speed and lower power requirements, or, further in the future, quantum dots could be used to make teensy transistors for nanoelectronics.

This latest breakthrough -- described in the online edition of Nano Letters, in a paper by OSU's Jessica Winter and Gang Ruan -- involves stuffing tiny plastic nanoparticles with even tinier quantum dots for use in biomedical tagging applications. It's easier to see biological molecules under a microscope if they fluoresce, and quantum dots glow more brightly than other fluorescent molecules used for this purpose.

They also "twinkle," i.e., blink on and off, an effect that is less noticeable if there are many quantum dots congregated together. There are pros and cons to this behavior. Con: it "breaks up the trajectory of a moving particle or tagged molecule" that one is trying to track under the microscope. Pro: when the blinking stops, scientists know they've reached a critical threshold of aggregated quantum dots. What Winter and Ruan have done to address this is turn that "con" into another "pro" by stuffing quantum dots of different colors into the same micelle (a polymer-based (plastic) spherical container commonly used in lab experiments). Their tests showed that doing so caused the micelles to glow steadily. To wit:

"Those stuffed with only red quantum dots glowed red, and those stuffed with green glowed green. But those he stuffed with red and green dots alternated from red to green to yellow. The color change happens when one or another dot blinks inside the micelle. When a red dot blinks off and the green blinks on, the micelle glows green. When the green blinks off and the red blinks on, the micelle glows red. If both are lit up, the micelle glows yellow. The yellow color is due to our eyes' perception of light. The process is the same as when a red pixel and green pixel appear close together on a television or computer screen: our eyes see yellow."
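The additive mixing in that excerpt is simple enough to sketch: treat each dot as an on/off light source and map the combination to a perceived color. This is a hypothetical two-dot micelle with one red and one green dot, as in the quote above:

```python
def perceived_color(red_on, green_on):
    """Additive color mixing, as on a TV screen: red plus green light reads as yellow."""
    if red_on and green_on:
        return "yellow"
    if red_on:
        return "red"
    if green_on:
        return "green"
    return "dark"  # both dots happen to have blinked off at once

# The micelle cycles through colors as the two dots blink independently:
print(perceived_color(True, True))   # yellow
print(perceived_color(True, False))  # red
```

With more than two dots per micelle, the odds that every dot blinks off simultaneously drop fast, which is why the stuffed micelles appear to glow continuously.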

The continuous glowing makes it easier to track tagged molecules with no breaks, and scientists can also use the color changes to determine when said tagged molecules congregate. The new nanoparticles would be great for microfluidic devices, and could one day be combined with magnetic particles to enhance medical imaging for, say, cancer detection. So it's nice to see quantum dots getting a little love in the public sphere again.

I've been culling through my bulging fodder file, discarding all those things I was sure would make for awesome blog posts but never got around to writing -- and now, well, I probably never will. But there are also bits and pieces worth salvaging, such as the items related to the Casimir effect. Since, just last week, Matt over at Starts With a Bang graced the blogosphere with a fantastic post detailing the basics of this unusual quantum phenomenon, I figured the timing was right to highlight some recent findings that make use of it, in some way.

Just what is this "Casimir effect"? Basically, it refers to the attraction between two objects should they come within, say, 1/5000 of an inch of each other. It's related to the energy inherent in the quantum vacuum. Empty space isn’t really empty. It roils and boils with quantum fluctuations, occasionally spitting out pairs of “virtual” elementary particles and antiparticles. These virtual particles annihilate and disappear back into the quantum vacuum so quickly that the apparent violation of energy conservation incurred by their creation can’t be observed directly.

So how do we know they exist? There is indirect evidence in the Casimir effect, named after Hendrik Casimir, the Dutch physicist who predicted it in 1948. Normally two uncharged parallel metal plates would remain stationary because there is no electromagnetic charge to exert a force to pull them together (or push them apart). But Casimir found that if the plates are close enough, there is still a tiny attractive force between them.

Because the parallel plates are so close together, virtual particle pairs can’t easily come between the plates, so there are more pairs popping into existence around the exterior of the plates than there are between them. The imbalance creates an inward force from the outside that pushes the plates together slightly. The smaller the separation between the plates, the fewer virtual pairs can get between them, and the greater the force of the inward attraction. The Casimir effect is quite small, equal to the weight of 1/30,000 of an ant.
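Casimir's original result for ideal parallel plates has a tidy closed form: the attractive pressure is π²ħc/(240d⁴), so halving the gap boosts the force sixteen-fold. A quick Python sanity check of the magnitude:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s

def casimir_pressure(gap_m):
    """Attractive pressure (N/m^2) between ideal, perfectly conducting
    parallel plates separated by gap_m meters: pi^2 * hbar * c / (240 * d^4)."""
    return math.pi ** 2 * HBAR * C / (240 * gap_m ** 4)

# At a 1-micron separation the pressure is about 1.3 millipascals -- tiny --
# but it grows as 1/d^4, so at 10 nm it already exceeds atmospheric pressure.
print(casimir_pressure(1e-6))
```

That steep 1/d⁴ scaling is exactly why the effect is invisible at everyday distances yet dominant for nanoscale machinery.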

So the Casimir effect is pretty cool, but it's fair to ask whether it has any relevance on the macroscale -- where most of us live our daily lives. Back in 1996, a Dutch scientist named Sipko Boersma claimed one could see the Casimir effect between two ships moored close together in a strong swell. He based that upon a reading of French nautical writer P.C. Caussee's 1836 book, entitled The Album of the Mariner, which supposedly warned of this effect. It's an oft-repeated story that, alas, is probably not true, according to a 2006 Nature article reporting on the investigation by physicist Fabrizio Pinto.

"A former NASA scientist, [Pinto] is both a keen sailor and president of InterStellar Technologies, a company that researches practical applications of the Casimir force," Nature reported. And when he tracked down a copy of Caussee's book to verify the claim for himself, he found that Boersma misread the original text: "Caussee never claimed that two ships attract in a heavy swell. Rather, he said this happens when the sea is completely calm." Nor could Pinto find any experimental evidence for the Casimir effect between two closely moored ships.

But that doesn't mean the Casimir effect is worthless when it comes to practical applications! The Casimir force was finally measured accurately in 1997. And in 2009, MIT's Alejandro Rodriguez and some of his fellow physicists started playing around with combinations of different materials in different shapes, and hypothesized that some of those combinations should generate repulsive, rather than attractive, Casimir forces. Choose those combinations carefully enough, and you can devise a kind of stable "Casimir molecule," where the attractive and repulsive forces generated balance out.

These are complex calculations, so it's impressive that Rodriguez et al. managed to complete calculations for "combinations of infinite slabs made alternatively of silicon and silicon dioxide, for nanoparticles, and for alternating slabs and spheres." The team is most excited about their findings on the forces generated between Teflon and silicon nanospheres immersed in ethanol. Per an article in Technology Review:

"By choosing the radii of these spheres carefully they can be suspended against the force of gravity above an infinite slab. It turns out that the force between the particles is repulsive at separations closer than 100 nm but becomes attractive as the distance increases."

And voila! Under those conditions, you should get a stable Casimir molecule! Full Disclosure: I have no idea what is meant here by an "infinite slab," but right now, these calculations are purely theoretical, and infinities -- while mainstays of higher mathematics -- generally don't translate well into reality. That might be one reason why Rodriguez et al. have yet to measure experimentally the repulsive forces they claim should be generated by some of these combinations. The article dithers a bit on that score, claiming that doing the experiment should be easy, "provided the size of the nanoparticles can be controlled with the required accuracy," while backtracking in the very next sentence by claiming "these experiments will be fraught with difficulty." I'm guessing we won't be seeing practical applications for these structures any time soon.

But no worries, because Rodriguez was back last summer with another possible use for the new tool they devised for calculating the various forces that create the Casimir effect: as a kind of WD-40 (oil) to reduce "stiction" (a combination of stick and friction caused by the Casimir effect) in accelerometers, gyroscopes and other micro-electro-mechanical system (MEMS) chips. Rodriguez & Company discovered it should be possible to arrange all those tiny moving parts in such a way that the forces that normally attract can be made to repel, thereby greatly reducing "stiction." Per Smarter Technology:

The researchers proved their technique works by designing a prototype consisting of an ellipsoid plunger that gets inserted into a complementary hole in a flat plate. The shapes of the hole and the plunger ensure that the Casimir force is balanced until the plunger is moved, at which point the Casimir force causes the parts to repel, thus overcoming the forces of stiction.

MEMS are in everything these days, by the way. So don't say the Casimir effect never did anything for you.

A couple of weeks ago, an editor asked me to name my favorite science book from 2010 for a year-end round-up her magazine was putting together. My incredulous response: "You mean you want me to pick just one?" Because let's face it, 2010 has been a banner year for popular science books. Never mind the unstoppable juggernaut that is The Immortal Life of Henrietta Lacks (and kudos to Rebecca Skloot for bringing science back to the bestseller lists); 2010 also saw Maryn McKenna's Superbug; Deborah Blum's The Poisoner's Handbook; Misha Angrist's Here is a Human Being; Annie Paul's Origins; Jonathan Weiner's Long for This World; and Mary Roach's Packing for Mars. And that's just scratching the surface, based on a quick perusal of my groaning bookshelves. Did I mention the Spousal Unit's From Eternity to Here and my own humble offering, The Calculus Diaries? Consider them mentioned. Heck, Carl Zimmer even ventured into the world of e-publishing with a collection of his Discover columns on neuroscience, aptly titled Brain Cuttings.

The steady stream of science books hasn't stopped, either, so I thought I'd highlight just a few of the new offerings (mostly math and physics related) that came out this fall -- just in case you're looking for the perfect gift for the science enthusiast in the family. (Full disclosure: not only did I receive ARCs of most of the books below, I'm personally acquainted with several of the authors. The exception is Connie Willis; I'm a bona fide fangirl in that case.)

Written in Stone: Evolution, the Fossil Record, and Our Place in Nature, by Brian Switek. I get warm and fuzzy just thinking about this book, since I watched Brian struggle with it in the earliest stages of development. So yeah, there's some personal bias at play. But I'm delighted to say that it came together beautifully -- it's a commendable work by a promising young science writer with a bright future ahead of him. I write primarily about physics and math, so the subject matter of Brian's book -- evidence for evolution in the fossil record -- was largely new to me, making me the ideal reader for this insightful introduction to the topic.

He starts off with a bang, opening with the ruckus raised in May 2009 over the unveiling of the ancient and highly photogenic fossil affectionately known as "Ida" -- wrongly dubbed a "missing link" in the frenzied press coverage that introduced Ida to the world. That was actually as much the fault of those who discovered her as of the press -- the whole affair was carefully orchestrated for maximum exposure and, frankly, personal profit -- and Brian gives an excellent summation of the events leading to the media circus. (Fortunately for science, the general public probably remembers very little by now, save, "Hey, wasn't Ida that really cool fossil?" And gosh darn it, Ida is still pretty cute.)

But the real significance of Ida -- and the reason Brian chose to open Written in Stone with that story -- has to do with the "missing link" claims, and the public's misperceptions about evolution. The iconic image of evolution is the March of Progress, showing the progression from early primate to modern man -- a notion that Brian rightly points out has its roots in the Renaissance notion of the Great Chain of Being. And while Creationists love to spout off about how ridiculous it is to assume we came from apes, what evolution actually claims is that mankind and apes share a common ancestry. There is a difference between those two statements.

Evolution is far more complicated, and this forms the central thesis of the book. Our journey through the fossil record, and encounters with such fascinating historical figures as Nicholas Steno, the charlatan Albert Koch, and Athanasius Kircher (one of my all-time favorite historical figures), serve to illustrate one basic point: evolution is more of a branching process, often taking many different paths (even if the end result is similar), with one species evolving and another staying largely unchanged -- a constantly shifting dance. It's kind of messy, with progress occurring in fits and starts -- the furthest thing from the idealized March of Progress. I'll let Brian have the last word:

"For to ask 'What makes us human?' assumes that there was some glorious moment, hidden in the past, in which we transcended some boundary and left the ape part of ourselves behind. We forget that those are labels we have created to help organize and understand nature.... There was never an 'ascent of man,' no matter how desperately we might wish for there to be, just as there has not been a 'descent of man' into degeneracy from a noble ancestor. We are merely a shivering twig that is the last vestige of a richer family tree."

Proofiness: The Dark Art of Mathematical Deception, by Charles Seife. I used to hang out with Charles in the press room at American Physical Society meetings as a budding young science writer, and his classic book, Zero: The Biography of a Dangerous Idea (still in print!) made me realize that the world of numbers could be as fascinating as physics. With Proofiness -- and with that title, why has Charles not yet been on Colbert? Why? -- he tackles the myriad ways our cultural innumeracy blinds us to the many deceptions perpetrated by a misuse of numbers, particularly statistics and probability. There's a lot about election polling and census results, these being hot topics of the day, but even if you're not particularly interested in those, Charles has such an engaging style and wry wit that his prose is bound to draw you in. Also? The cover design is really cool. In this case, you really can judge the quality of the book by its cover.

The Amazing Story of Quantum Mechanics, by Jim Kakalios. The author of The Physics of Superheroes is back with another installment, this time exploring how quantum mechanics changed the world and ushered in a future very different from the one envisioned by the classic comics of the 1950s. We were promised jet packs and flying cars, dammit! And I'm still bitter about the lack of progress on human teleportation. I was struck by a comment Jim made this past summer when we were both on a science panel at CONVergence/Skepchicon in Minneapolis. Someone asked what he thought would be the technological breakthroughs of the next 50 years, and he replied that anything requiring huge breakthroughs in energy would probably not transpire -- but anything related to the explosion in information? Now that would be something capable of transforming the future.

That's kind of the underlying premise of The Amazing Story of Quantum Mechanics: we didn't get jet packs or flying cars, or unlimited supplies of free energy, but we got tons of amazing things we weren't expecting at all. We got atomic bombs, nuclear magnetic resonance (and MRI), lasers, death rays, MP3 and DVD players, spintronics, and the World Wide Web. This is a fantastic primer on the intricacies of the quantum world, using entertaining examples from -- yes -- classic comic books to illustrate his points. Along the way, we are treated to a broad overview of some of the coolest things quantum mechanics has given us, and a sneak peek at what might be in store.

Massive: The Missing Particle That Sparked the Greatest Hunt in Science, by Ian Sample. Good news for fans of objectivity! I don't know Ian Sample personally! So when I tell you that Massive turns the dry-sounding hunt for the Higgs boson into the equivalent of a scientific detective story that you can't put down, you know it's not coming from a biased perspective. Also? There's only one mention of the dreaded "god particle" -- a nickname, coined by Leon Lederman (who used it as the title of his popular book), that is universally loathed in physics circles and badly misunderstood by the general public as a claim that the particle holds the answer to spirituality. Of course, it has nothing to do with religion, or the existence (or lack thereof) of a god.

In an intriguing side anecdote -- one of many -- Sample writes that Lederman originally wanted to call his book The Goddamned Particle because it proved so difficult to find, but it was shortened to The God Particle. For Lederman, the name is apt because the Higgs (writes Sample), "is critical to our understanding of matter, yet deeply elusive." (More literal-minded sorts miss the subtlety.) That's the kind of vivid detail and backroom chatter that makes Massive such a compelling read: it's about science as that science is being done, and we don't yet have all the answers -- the Higgs continues to elude us. But for anyone curious about the story of the Higgs so far, you're not likely to find a better book than Sample's on the subject.

How I Killed Pluto and Why It Had It Coming, by Mike Brown. You might know Mike by his Twitter handle, @PlutoKiller (it's an entertaining feed; you should follow him). Clearly, he takes a certain amount of pleasure in his role demoting this smallest of planets -- or, in this case, former planet -- even though it means he gets a steady stream of hate mail and a surprising number of obscene phone calls. People have an unusually strong passion for Pluto. But Brown didn't actually set out to cause such a ruckus; he was just going about his business, hunting planets, and what he found was Eris, briefly touted as a "10th planet" before astronomers decided it didn't really meet the criteria -- and if Eris didn't qualify, neither did poor Pluto, or any of the large number of similar objects that have come to light in recent years.

Like Sample's Massive, Brown's book gives us that rare glimpse behind the curtain, a peek at how science is actually done. The guy can spin a yarn, that's for sure, and he's got some great material, and a great sense of humor (and perspective!). Even those who champion Pluto's eventual return to planetary status -- yes, the debate rages on -- will find it pretty difficult to continue hating Brown after reading this book; he's just too damned likeable. As James Kennedy wrote in his Wall Street Journal review, Brown's book presents "the scientist neither as madman nor mystic, but mensch."

Blackout and All Clear, by Connie Willis. Finally, what holiday book list would be complete without a spot of science fiction, specifically of the time travel/chaos theory variety? This is a sprawling, two-book epic, mostly set in London during World War II, when the residents suffered rationing and nightly air raids/bombings at the height of the Blitz, yet still managed to carry on some semblance of a normal life -- unsung heroes, every one, and Willis brings them vividly to life. I've been a fan of Willis' work since I first read The Doomsday Book many years ago. It was the first set in her futuristic world of time-traveling historians, following the invention of something called "The Net."

Any lover of history has fantasized about what it would be like to actually visit past eras, and in this world, they can do just that. But there are rules -- most notably, the historians can't affect the course of events -- or, as Lost's doomed physicist, Daniel Faraday, phrased it, "Whatever happened, happened." The spacetime continuum has a number of ways of protecting itself from such an occurrence, including something called "slippage": the Net won't send a historian to a time and place where s/he could affect the outcome, and will basically override the programming, sending the historian to the nearest time and place where s/he can have no impact. Oh, and you also can't take objects from the past through the Net into the future -- unless they were destroyed in the past, a twist in Willis' fictional world rules that showed up in her second novel set in this world, To Say Nothing of the Dog.

Blackout and All Clear give us another twist on Willis' rules of time travel, and it's a doozy: the slippage factor is getting progressively worse, and seems to be centered on the critical events in World War II London between 1940 and 1944. Temporal physicists are beginning to worry that perhaps their assumptions about time travel have been wrong, and it is possible to affect the course of historical events -- something that would be disastrous for a period like the one in question, where the outcome of the war literally balanced on a knife point at several junctures over that four-year period. Could one of their historians inadvertently have altered the outcome of World War II? When four historians find themselves trapped in the past, everyone's worst fears appear to be realized. And that's as much as I can say without spoiling the fun. Like all Willis' novels, there is humor, pathos, and gut-wrenching suspense, and at some point she will break your heart. There are a lot of disparate threads in these two books (actually one book split into two), but Willis is a master weaver and pulls it all together in the end.

Of course, even if we can't change the past, who can say what might happen if historical figures showed up in the future:

So, the Spousal Unit took off this morning for a conference somewhere in Wisconsin and left the Resident Feline and me alone with the brand new flat-screen TV. This is what happens when I ask the Spousal Unit to stop off at Circuit City on the way home from the office because I need a more advanced science-y calculator. Not that we're complaining, because the new TV is teh awesome! We played hooky from calculus, plopped ourselves on the couch and wasted the afternoon watching Witchblade on DVD. Anyone else remember that short-lived series on TNT ("We know drama!"), loosely based on the graphic novel series published by Top Cow?

Witchblade was one of my guilty pleasures -- guilty because, frankly, it was a very uneven production, with tacky symbolic imagery, major chewing of the scenery by the supporting cast, and some truly horrific dialogue at times. (There's an entire scene in the first non-pilot episode, "Parallax," where the characters literally speak in koans. While playing chess. It's cringe-inducing.) But the series also had a killer soundtrack, a genuinely compelling underlying "mythology," and Yancy Butler starring as Sara "Pez" Pezzini, a NYC cop who finds herself wielding a mysterious ancient bracelet that turns out to be pretty damn useful in a fight.

Butler made the series, frankly. She took a comic book character known more for her exaggerated pulchritude and skimpy outfits, and transformed her into a street-smart, tough, sexy, emotionally complex woman -- who just happened to play a mean game of pool in the bargain. Yancy Butler kicked butt, literally and figuratively.

My favorite scene in the two-hour pilot is Pez taking on every guy in the local bar in successive games of pool, and handily sinking every shot after the break in each game. She pockets a nice chunk of change, too. This, frankly, is a common fantasy among women. I am no exception: in my dreams, I can walk into any bar and wow the locals with my prowess.

Alas, far from being a skilled pool shark, I am utterly inept at the game. I'm not being modest. It's a thrill if I manage to hit the cue ball correctly, and if it also hits one of the object balls and gets it to move a tad, huzzah! Actually sinking one of the object balls pretty much makes my week.

Lots of people throughout the centuries have had a similar fascination with some form of pool, notably billiards. (For simplicity's sake, I'm not going to go into the many variations made popular all over the world. Follow the various links and you'll learn more than you ever wanted to know about cue-stick games.) The game has its roots in a lawn game resembling croquet, dating to 15th century Europe. Perhaps folks tired of having their games rained out or something, because eventually the game evolved into an indoor tabletop version, whereby balls were shoved (not struck) with wooden sticks called maces. Originally there were only two balls, as well as a wicket (hoop) and a stick as a target, but eventually people figured out that you really just needed the balls and cue sticks and a few pockets around the table to have a kick-ass game. There's even a reference to billiards in Shakespeare's Antony and Cleopatra.

The iconic image of pool or billiards (in the U.S., at least) is the 1961 movie The Hustler, starring Paul Newman. It's a dark, fairly gritty film, actually, but for some reason it inspired a billiards revival, even though pool was a game of ill-repute in many American communities in the 20th century. The game went highbrow again two decades later, when Newman played an aging pool shark mentoring Tom Cruise's ambitious young hustler in 1986's The Color of Money. And while the prevailing image is one of a boozy boy's club, women have always indulged in billiards, although they weren't officially organized until 1976, with the birth of the Women's Professional Billiards Association. Just a few years before, a grandmother named Dorothy Wise won five U.S. Open tournaments, proving once and for all that it wasn't just a "man's game."

There's a certain degree of practiced skill involved, even to become adept at the basics, even more so if one aspires to learn some of the more advanced shots, or tricks. And like most sports, there's a great deal of physics involved in the seemingly simple game of pool, as evidenced by the large number of online resources outlining the specifics in detail. It's standard classical Newtonian stuff, mostly: overcoming the cue ball's inertia, accounting for friction from the table's green felt surface, the transfer of momentum between the cue ball and the object ball when they collide (it's not a perfectly elastic collision, but close enough), and so forth.

The paths the balls take after colliding depend on the above factors, as well as the angle at which the cue ball hits -- which in turn depends on where the cue stick hits the cue ball, which depends solely on the player's skill and control (or the lack thereof, in my case). Draw and Follow shots, for example, involve (respectively) hitting the ball below center to put a backwards rotation on it, or hitting above the center to put a forward spin on it. If we can figure out how to measure the mass, position and velocity of each ball on the table at the time of collision, we should in principle be able to predict the path and outcome of the shot.
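That momentum transfer is easy to sketch for the idealized head-on case. These are the standard 1D elastic-collision formulas; they ignore spin and felt friction, which is why real stop shots take a well-struck cue ball rather than just textbook physics:

```python
def elastic_collision_1d(m1, v1, m2, v2):
    """Post-collision velocities for a perfectly elastic, head-on collision,
    derived from conservation of momentum and kinetic energy."""
    u1 = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    u2 = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return u1, u2

# Equal-mass billiard balls, object ball at rest: the cue ball stops dead
# and the object ball takes off with the cue ball's full speed -- a "stop shot."
print(elastic_collision_1d(0.17, 2.0, 0.17, 0.0))  # (0.0, 2.0)
```

The clean velocity swap only happens because the masses are equal; mismatched balls would leave the cue ball drifting forward or bouncing back.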

Ah, but that's just too easy for some people. I found this entertaining online tutorial via Google on Quantum Billiards: what might it be like to play pool at the subatomic level, with balls the size of protons? Things can change in an instant when an observation is made, you can't know both the position and momentum of any ball at the same time, and each event has many possible outcomes, not just one. You're pretty much just taking shots in the dark. And don't forget about quantum tunneling! Normally a bank shot lacks sufficient energy to hop over (or through) the cushioned barrier of the billiard table; instead, it is reflected off at predictable angles. Not so if the ball is the size of a proton. Because its tiny mass creates large uncertainties, there's a much higher probability it could go right through the cushioned barrier. Electrons do it all the time; why not subatomic billiard balls?
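The mass dependence is the whole story here, and a crude WKB-style estimate for a square barrier makes it vivid. The barrier height (1 eV) and width (0.1 nm) below are purely illustrative numbers, not the physics of an actual table cushion:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
EV = 1.602176634e-19    # joules per electron-volt

def tunneling_probability(mass_kg, barrier_ev, width_m):
    """Rough WKB estimate, T ~ exp(-2 * kappa * L), for a particle whose energy
    is well below a square barrier of the given height and width."""
    kappa = math.sqrt(2 * mass_kg * barrier_ev * EV) / HBAR
    return math.exp(-2 * kappa * width_m)

print(tunneling_probability(9.11e-31, 1.0, 1e-10))  # electron: decent odds
print(tunneling_probability(1.67e-27, 1.0, 1e-10))  # proton: roughly 1 in 10^19
print(tunneling_probability(0.17, 1.0, 1e-10))      # billiard ball: underflows to zero
```

The exponent scales with the square root of the mass, so going from electron to proton already wipes out the effect, and a 170-gram billiard ball doesn't stand a chance.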

Of course, if you really want to make things interesting, you need a spherical cow model for billiards, and a recent paper accepted by Physical Review Letters apparently offers just that. Physicists at Boston University studied what would happen during the initial "break shot" of a billiards game in an ideal setting: namely, with no dissipation of energy (I assume this means a perfectly elastic collision, with nothing lost to heat, noise, etc.) and an infinite billiard table. Heck, if we can have billiard balls the size of protons, why not infinite tables? (Or even quantum versions of Cruise and Newman?)

Basically, they created an ideal gas and then sent the particles careening all over the place from a central starting point. Their conclusion: "Just as in real billiards, progressively more particles become mobile as the collision cascade develops." But there was an interesting twist. The initial break is, naturally, asymmetric, with various balls flying off in different directions at different speeds. But in the idealized model, as the balls (or particles) spread outward, the occupied region became nearly spherically symmetric around the initial point of collision. In fact, it looked for all the world like a shock wave generated from an explosion. Now that is freaky.

Shock waves do form when the speed of a gas changes by more than the speed of sound. Wherever this happens, according to Wikipedia, "sound waves traveling against the flow reach a point where they cannot travel any further upstream and the pressure progressively builds in that region, and a high pressure shock wave rapidly forms." Something similar happens with supersonic jets: parts of the air around the plane travel at exactly the speed of sound, along with the aircraft, but the plane leaves a pile-up of these sound waves in its wake. The waves are forced together and compressed -- sort of an amplification effect -- ultimately merging into a shock wave that spreads out sideways.

Thunder is a naturally occurring sonic boom, and yet another example of a shock wave. And of course, explosions generate shock waves, such as when a bomb goes off. It just hadn't occurred to me that colliding billiard balls might also produce a shock-wave phenomenon. But when the collisions are viewed in slow motion, as in the YouTube video below, it does seem a bit more explosively violent than when observed at full speed:

Here's one last bit of trivia to relieve the Monday morning doldrums. Apparently the cracking sound of a bullwhip is a tiny sonic boom. The end of the whip has far less mass than the handle, so when the whip is swung sharply, energy is transferred down its length. The velocity of the moving section increases as its mass decreases, such that ultimately the end (called the "cracker") moves faster than the speed of sound -- making the whip one of the first human inventions to break the sound barrier. I'll bet Sara Pezzini swings a mean bullwhip, when she isn't shooting pool.
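The whip's taper can be caricatured with energy conservation: if the kinetic energy (1/2)mv² of the heavy launched section is handed down the lash with little loss, the speed must grow as the moving mass shrinks, v ∝ 1/√m. Real whip dynamics are far messier, and the masses below are invented, but a plausible mass ratio already pushes the cracker past the speed of sound:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in room-temperature air

def tip_speed(v0, m_launched, m_tip):
    """Cartoon whip model: kinetic energy (1/2)*m*v**2 is passed
    losslessly down the taper, so speed scales as sqrt(m_launched/m_tip).
    This ignores the actual wave mechanics of a whip; the scaling
    argument is the point, not the precise number."""
    return v0 * math.sqrt(m_launched / m_tip)

# Hypothetical figures: a 0.5 kg launched section at 12 m/s, 0.5 g cracker
v = tip_speed(12.0, 0.5, 0.0005)
print(round(v), v > SPEED_OF_SOUND)  # ~379 m/s: supersonic
```

A thousand-to-one mass ratio buys you a factor of √1000 ≈ 32 in speed, which is why even a leisurely arm swing can produce a crack.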

Let me clarify, for the benefit of any concerned readers, that my post over the weekend bidding a fond farewell was not a departure from the blogosphere, but from my tenure as Journalist in Residence at the Kavli Institute for Theoretical Physics (KITP), the pretty peach-colored building in the photograph. It was a terrific experience, although not 100% comfortable -- which I consider a good thing, because if one is not pushed beyond one's comfort zone once in a while, one never makes any significant developmental progress. Not only was much of the subject matter unfamiliar (and often incomprehensible) to me, but I was compelled to crystallize my various random thoughts and approaches to science communication into a workshop-type format that would appeal to theoretical physicists (or at least some of them). Did I succeed? Sometimes. The only flat-out failure was my attempt to use PowerPoint Karaoke to jump-start a discussion about communicating across disciplinary boundaries. Talk about a deflating experience. In retrospect, I think I "framed" it incorrectly for my target audience. Next time, it will take place in a local bar and feature copious amounts of alcohol. That seems to have worked very well for the PowerPoint Karaoke event organized by this group of Australians from McCann Sydney.

Anyway, after my final workshop (a post on that is forthcoming later this week), I jumped into my shiny red Prius and navigated my way one last time from Santa Barbara to Los Angeles, just like a homing pigeon seeking to reunite with its avian equivalent of a Spousal Unit. I relied on past experience and my trusty GPS display to find my way home, but apparently, birds use the earth's magnetic field to help them navigate. According to a recent entry on the physics arXiv blog maintained by the mysterious "KFC," "A growing body of evidence points to the possibility that a weak magnetic field can influence the outcome of a certain type of chemical reaction in bird retinas involving radical ion pairs." In fact, it's possible to confuse the navigational abilities of birds by zapping them with magnetic fields that, apparently, disrupt this reaction.

KFC explains that while this proposed mechanism has substantial experimental support, the theory to date has been a little incomplete. The ion recombination effect that gives rise to a preferred chemical reaction happens far too quickly to allow for any influence from earth's magnetic field -- and yet the experiments indicate that this field does play a vital role. Hmmm. In a recent paper posted to the arXiv, Iannis Kominis at the University of Crete has outlined an intriguing idea about how to resolve the paradox, namely, by invoking another one: arguably one of the most famous paradoxes in quantum physics, known as the quantum Zeno effect. Per KFC, "It states that the act of observing a quantum system can alter its evolution in a way that maintains the state for longer than expected." A more colloquial phrasing would be, "A watched quantum pot never boils."

Say what? There are quantum teapots? Well, no, not literally. But it's a useful analogy if one takes a bit of extra time to bone up on the broader context. And that means hopping into the Way-Back Machine for a brief visit to ancient Greece. Zeno was a Greek philosopher who logically constructed an argument to prove the (clearly) nonsensical assertion that motion is impossible. (Philosophers often like to play devil's advocate and argue for the impossible.) Zeno envisioned an archer shooting an arrow from his bow. Imagine Legolas Greenleaf from The Lord of the Rings doing just that. Assuming he shoots directly in front of him -- it's tradition in physics to hypothesize idealized situations -- the arrow will travel in a straight line indefinitely until it is stopped by an opposing force, ideally, by piercing the heart of an evil Orc.

Zeno asked what would happen if you divided the distance the arrow must travel to its target into an infinite number of increasingly smaller increments, halving the distance every step of the way. He argued that this would mean the arrow would get closer and closer to its Orc-target but would never be able to reach the creature's heart. All motion would seem to stop. This sort of thing doesn't happen in the macroscopic world of our daily experience, of course: eventually Legolas' arrow will find its mark, and the Orc will perish. (Good riddance!) Zeno's abstract argument rests on the notion that the progression will continue for infinity, but in physical reality there is always some kind of limit. An endless series can still have a finite sum. There are lots of ways to describe the notion of a limit -- it's a key concept in modern calculus -- but just from a practical standpoint, the arrow has a fixed length (at least over the distance it travels). The distance the arrow must travel would eventually be subdivided to the point where the increments would be smaller than the arrow itself. And at that point, the arrow would hit its mark.

But the quantum world is a much weirder place, governed not by exact absolutes but by probabilities and uncertainty. On the subatomic level, something akin to Zeno's paradox actually happens. Physicists have argued for decades over the nature of a measurement or observation and its implications for quantum mechanics, ever since Werner Heisenberg first proposed his Uncertainty Principle. That's the one that says we can know the precise momentum (or velocity) associated with a particle, or we can know its exact location, but we can't know both at the same time. The very act of making the measurement changes the state of the atom.

It sounds like magic, but it's really not; it's the result of an actual physical force. We measure and observe atoms via electromagnetism, i.e., light of varying wavelengths. But how much we can see depends on the wavelength (and energy) of the light -- a photon's energy is inversely proportional to its wavelength, so the shorter the wavelength of light, the higher the energy of its constituent photons. And the smaller the object we wish to observe, the higher the energy of light we must use in order to get the resolution we need to see that object. An atom is really, really tiny. To locate its precise position, we'd need to hit it with a photon of such high energy that significant amounts of that energy would be transferred to the atom itself, thereby altering it (changing its speed or direction). Basically, we know where the atom was, not where it now is, because our ham-fisted "observation" has knocked it out of its prior position.
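That inverse relationship is just E = hc/λ, and plugging in numbers shows the scale of the problem. A quick sketch (the wavelengths are chosen for illustration: green light versus a wavelength comparable to an atom's size):

```python
H = 6.62607015e-34    # Planck's constant, J*s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electron-volt

def photon_energy_ev(wavelength_m):
    """Photon energy E = h*c/lambda, expressed in electron-volts."""
    return H * C / wavelength_m / EV

green = photon_energy_ev(550e-9)  # visible green light
xray = photon_energy_ev(0.1e-9)   # wavelength on the scale of an atom
print(round(green, 2), round(xray))  # ~2.25 eV vs. ~12400 eV
```

A photon capable of resolving an atom carries thousands of times more energy than a visible-light photon -- more than enough of a kick to knock the atom somewhere else the instant you "see" it.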

Ergo, Heisenberg concluded that the mere act of observation can determine the outcome of a quantum experiment. But experimental measurements are made in single, fixed, brief moments in time. What if it were possible to continuously observe an experiment? And at what point does observation become continuous? Scientists actually know the answer to both questions. Back in 1977, theorists showed that, in principle, a radioactive atom would never decay if it were "observed continuously." And the critical transition point is one measurement every four-thousandths of a second.

We have that precise figure thanks to the work of scientists at the National Institute of Standards and Technology (NIST) in Colorado. In 1989, they trapped 5,000 charged beryllium atoms in a magnetic field and then tried to "boil" them by zapping them with a radio frequency field to raise their temperature. They expected the atoms to absorb the extra influx of energy and jump to higher ("hotter") energies. But this only happened if they didn't make any further measurements in the interim. The more often they tried to measure the energy state of the atoms, the fewer of those atoms would reach the higher energy level. And at the rate of one measurement every four-thousandths of a second, no atoms at all jumped to the higher energy state. They just wouldn't heat up. The effect persisted even when the scientists used an automated measuring device.
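The arithmetic behind "a watched quantum pot never boils" is surprisingly simple. In a textbook two-level model (my simplification, not NIST's actual analysis), a drive that would completely flip the atoms is interrupted by N equally spaced measurements; the survival probability per interval is a cosine squared, and the product rushes toward 1 as N grows:

```python
import math

def survival_probability(n_measurements, total_angle=math.pi):
    """Chance a two-level atom is still in its initial state after a
    drive that would flip it completely (total_angle = pi), when it is
    measured n times along the way. Each measurement collapses the
    state, restarting the slow quadratic onset of the transition."""
    per_interval = math.cos(total_angle / (2 * n_measurements)) ** 2
    return per_interval ** n_measurements

for n in (1, 4, 25, 100):
    print(n, round(survival_probability(n), 4))
# n=1 gives ~0 (the unwatched pot boils); large n pins the atom in place
```

With a single end-of-run measurement the atom has certainly flipped; with a hundred measurements it stays put about 98% of the time. Measure fast enough and "hot" simply never happens.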

Why does this happen? Blame it on uncertainty: the act of measurement interferes with the atoms' ability to absorb extra energy. The Spousal Unit once penned a classic blog post about this topic, employing quantum puppies to discuss the notion of quantum interrogation, which explains things beautifully even if the cuteness of the puppies tends to overpower all else.

I like to think of it in the more concrete terms of Legolas' arrow. Let's imagine that this arrow is imbued with some elfin magical property by which it can grow longer over short intervals of time. That's a pretty decent analogy for what's happening to the uncertainty associated with two atomic energy states. When the uncertainty becomes large enough to bridge the two energy states -- akin to lengthening Legolas' arrow to the point where it can reach an Orc's heart -- the atom shifts to the higher energy state (and the arrow downs the evil Orc). The "uncertain arrow" then collapses back down to its original length and the whole process starts over again.

But every time we make a measurement of an atom's energy, or the length of Legolas' magic arrow -- and no, that is not a euphemism! Get your minds out of the gutter and back onto the curb with the rest of us! -- we reduce uncertainty, so it can't increase. Every time someone tries to measure Legolas' magic arrow, it becomes just a little bit shorter (oh, stop it!), to the point where it's never long enough to reach the Orc's heart. That's what happens to the energy states of atoms in the quantum Zeno effect. Uncertainty gets smaller with every measurement, because each measurement yields new information about the atoms, reducing the "fuzziness" of their energy states. Make those measurements often enough, and uncertainty never becomes sufficiently large to enable the atom to heat up. So a "watched" quantum pot never boils.

I know -- it's really weird, and utterly counter-intuitive. That's quantum physics for you. By now you're probably wondering what the hell any of this has to do with birds and their navigation skills -- assuming folks have even read this far. But according to Kominis in Crete, it is indeed relevant! Let's recap: scientists think that a weak magnetic field (like that of the earth) influences "the outcome of a certain type of chemical reaction in bird retinas involving radical ion pairs," but the sticking point is that the ion recombination happens too quickly for earth's magnetic field to have an actual impact. And yet it really does seem to influence the avian navigational process.

Per KFC, Kominis knew that it's "possible to slow down the rate at which molecules convert from ortho to para isomers when they are constantly involved in collisions." Something similar, he believes, happens in birds, namely, "The presence of a geomagnetic field extends the lifetime" of that recombination process, thereby giving the magnetic field more time to influence the outcome of the recombination. This really could turn out to be an extraordinary insight, since it means that birds have a built-in quantum sensor -- roughly akin to a GPS chip, perhaps, or at least a compass -- that determines their macroscopic behavior (i.e., navigation). It would also explain why birds occasionally are afflicted by a 30-degree "heading error," and why these built-in "compasses" only seem to be sensitive to a certain type of magnetic field strength.

Kominis even speculates that a similar mechanism might play a role in photosynthesis. It could be a brave new world out there, indeed, if it turns out that quantum effects can impact macroscale behavior. As KFC rightly notes, in his trademark style: "The quantum consciousness people are going to be all over this like freshmen at a sorority party." Let the arguments begin!

Stanford physicists are apparently marching to the beat of a nanoscale drummer these days, according to a paper in the February 8 issue of Science. Yeah, it's a bad pun, but Jen-Luc Piquant couldn't resist. And I couldn't resist writing about research that combines acoustics, scanning tunneling microscopes (STMs), and resonance at the quantum/nanoscale -- particularly in light of a special session at the upcoming APS March Meeting in New Orleans celebrating 25 years of the STM, now a workhorse technology in all kinds of scientific fields, even beyond physics.

But first: quantum drums! The Stanford experiment arose out of an interesting acoustical question: do drums of different shapes always produce unique sound spectra (in terms of the properties of the acoustical wave)? It would be great if they did, because then it might be possible to develop an acoustical version of spectroscopy -- another workhorse physics-based technology that, say, analyzes the various elements that make up a distant star by studying the spectrum of light associated with that star. (It can also be useful for more terrestrial experiments, such as determining chemical composition of substances. They're always performing spectroscopic analysis in the lab on C.S.I.)

Sometime in the 1990s, alas, mathematicians proffered definitive proof that two differently shaped drums could produce the exact same sound, thereby dashing hopes of what I will call, for lack of a better term, "acoustical spectroscopy." This makes it impossible to work backwards from the sound spectrum to derive information about the physical properties of the drum that made that sound, because it does not have a truly unique signature -- rare, perhaps, but not truly unique. Spectroscopy works because there's only one answer to the question, "What is this stuff?"

Some people might deem this a failure, and relegate the topic to the dustbin. But this is science, people, where even null results can yield useful insights. Such was the case for Stanford physicist Hari Manoharan, who saw not a failure, but an opportunity, in part because, as he said in the official press release, "This revolutionized our conception of the fundamental connections between shape and sound." And it could even be relevant to spectroscopy, "because it introduced an ambiguity." As systems get smaller and smaller, and move into the nanoscale realm, quantum effects hold sway, and that tiny degree of ambiguity -- unimportant in the classical world -- suddenly could have a significant effect on, say, nano-electronic systems of the future.

So Manoharan and his Stanford colleagues brought the problem down into the quantum realm, building tiny nanoscale "quantum drums" out of carbon monoxide molecules on a copper surface.
They constructed "walls" only one molecule high and then shaped them into nine-sided enclosures capable of "resonating" like drums. (Apparently this ability is related to particle/wave duality, but specific details about this aspect were hard to come by during my weekend Web surfing.) About 100 carbon monoxide molecules make up the exterior "drums" and inside are around 30 electrons.

And just like macroscale drums, these nanoscale versions of different shapes nevertheless could resonate in the same way. This is called isospectrality. You can see nifty pictures and video, and listen to cool sound samples here, although of course, the sounds have been converted into the audible range for humans. In reality, the "sounds" are at frequencies far too high for humans (or even dogs) to hear.

By now, you might be figuring that this just means science failed twice at creating acoustical spectroscopy, both at the classical and quantum levels. And okay, that may be the case. For spectroscopy. But it turns out there is some practical value for being able to build two differently shaped nanostructures that nonetheless have identical properties, particularly as computer chip circuits continue to shrink into the nanoscale. Chip designers would have more than one way to get the same result, giving them extra flexibility, or, as Manoharan phrased it, "Now your design palette is twice as big."

That could turn out to be significant to the design of future quantum computers (assuming quantum computers ever become a reality). Based on their findings, Manoharan's team has also figured out a way to determine the quantum phase of the wave functions of the electrons inside the quantum drums, without directly observing them. The process is called quantum transplantation, and it involves taking measurements from two quantum drums and then mathematically combining that information, thereby enabling scientists to "cheat" the usual limitations of quantum mechanics "and obtain normally obscured quantum-mechanical phase information," according to Manoharan.

Manoharan's work builds on decades of scientific advancement in a wide variety of fields, but one of the most critical is the development of STMs. That's what enables nanoscientists to move around individual molecules on a substrate, for example -- you need a level of precision and control and imaging resolution that just can't be achieved using ordinary microscopy. That's because there are some fundamental physical limits to looking at tiny objects with light, as demonstrated in 1872 by the physicist Ernst Abbe. Basically, you can't see details smaller than the wavelength of the light you're using.

Creating a visual image of an object under a conventional light microscope requires light waves to pass through that object, where they are diffracted into what I guess you could call interference patterns. We get our information about the object from those patterns. Ergo, the more diffracted light waves actually reach the instrument's objective lens, the better the resolution. ("Resolution" describes the smallest distance between two details of an image that can just be distinguished by the viewer. For a conventional microscope using visible light, the best resolution is on the order of 4000 angstroms, and Abbe's calculations limited the magnification factor to 1000 or so.) Diffraction is what happens when light passes through the spaces in an object -- think of how water waves pass through spaces between reeds along the shore of a lake. If a gap is smaller than the wavelength of light, the waves can't pass through the gap. There's no diffraction, and that's bad for resolution.
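Abbe's limit is often written d ≈ λ/(2·NA), where NA is the objective's numerical aperture. With green light and a decent dry objective (NA ≈ 0.7 is my assumed figure, not anything from the original research), the numbers land right around that 4000-angstrom resolution:

```python
def abbe_limit(wavelength, numerical_aperture):
    """Smallest resolvable separation: d = lambda / (2 * NA)."""
    return wavelength / (2 * numerical_aperture)

d = abbe_limit(550e-9, 0.7)  # green light, assumed NA of 0.7
print(round(d * 1e10))  # resolution in angstroms, ~3900
```

An atom is a few angstroms across -- roughly a thousand times smaller than the best a visible-light microscope can resolve, which is why a fundamentally different trick was needed.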

What was needed was a probe with a shorter wavelength than light -- say, an electron. But an electron is a particle, you might say! True, but in 1924, Louis de Broglie proposed that not just light, but also electrons -- indeed, all particles -- exhibit the same particle/wave duality. So an electron can also behave like a particle or a wave, which means an electron beam should, in principle, be useful as a means of "seeing" smaller objects than light waves can. Enter German physicist Ernst Ruska, the son of a professor of the history of science, who grew up fascinated by his father's instruments, particularly a large microscope. The young Ernst went on to study electrical engineering, and while still a student at Berlin Technical University in the late 1920s, applied de Broglie's equations to the notion of an electron microscope, concluding that it should be able to see smaller objects than light could. He also figured out that a magnetic coil could serve as a "lens" for electrons, and that irradiating an object with an electron beam could produce a useful image. In 1933, he built the very first electron microscope, and subsequently helped commercialize the technology, which pretty much revolutionized science. That's why Ruska was honored in 1986 with the Nobel Prize in Physics, at the ripe old age of 80.
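De Broglie's relation λ = h/p makes the advantage concrete: for an electron accelerated through a potential V (non-relativistically, p = √(2meV)), even a modest beam beats visible light by more than three orders of magnitude. A sketch with an arbitrarily chosen 100-volt beam:

```python
import math

H = 6.62607015e-34      # Planck's constant, J*s
M_E = 9.1093837015e-31  # electron mass, kg
Q_E = 1.602176634e-19   # elementary charge, C

def electron_wavelength(volts):
    """Non-relativistic de Broglie wavelength lambda = h / sqrt(2*m*e*V)
    for an electron accelerated through the given potential."""
    return H / math.sqrt(2 * M_E * Q_E * volts)

lam = electron_wavelength(100)
print(lam)           # ~1.2e-10 m, roughly the size of an atom
print(550e-9 / lam)  # thousands of times shorter than green light
```

Crank the voltage higher and the wavelength shrinks further (as 1/√V), which is how electron microscopes reached atomic-scale resolution.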

Ruska shared the prize with Gerd Binnig and Heinrich Rohrer, co-inventors of the first scanning tunneling microscope while both were employed by IBM's research group in Zurich, Switzerland, in 1981. It's not entirely accurate, scientifically, to call this technique a form of microscopy, since we're not really looking at the sample via light at all anymore. We're feeling the surface with a mechanical probe, much like a blind person reads Braille by touch. An STM is far more sensitive than a fingertip.

It's a pretty basic set-up: you have a stylus with a very sharp tip (in some instruments, mounted on a flexible cantilever). As the tip moves across a surface -- without actually touching it, mind you -- there are going to be interactions between the tip and the surface, and these can be detected by piezoelectric sensors and turned into images. It is the sharpness of the stylus, and how faithfully it can trace the structure of the sample's surface, that determines the resulting image resolution.

It should probably be noted right about now that STMs are just one kind of scanning probe imaging technique; there's an entire family of such techniques, including atomic force microscopy (AFM), which measures the interaction force between the tip and the surface. (The STM, in contrast, measures a weak electrical current flowing between the tip and the sample as they are held a very short distance apart.) There are pros and cons for every type of technique, and for the various "modes" in which they operate, which is why the "family" is still growing. But they're all based on the same basic principle.

Ironically, even though it's meant to commemorate the history of the STM, the March Meeting session will focus on some of the latest innovations in research using STMs. For instance, Sergei Sheiko of the University of North Carolina at Chapel Hill will talk about his work using scanning probe microscopy to image flexible polymer molecules whose sizes are beyond the limits of standard optical resolution. He's been able to get very high resolution of the molecular structure of those polymers, and has used AFM to study them as they move and react on surfaces. Sheiko's work could lead to better control over surface-activated changes in coatings, lubrication, catalysis, and biochemical assays, by revealing how those changes impact molecular structure and properties. And that's just the tip of the iceberg, or the cantilever, in this instance. It's a solid bet that further innovations in STM and related technologies will yield even more insight and unprecedented atomic-level control over the next 25 years.

There was exciting news from the Lawrence Berkeley National Laboratory last week, as researchers announced that they had performed the world's smallest double-slit experiment and determined that quantum (subatomic) particles will start behaving in accordance with classical (macroscale) physics at the size scale of a single hydrogen molecule. Quantum physicists are no doubt excitedly discussing these marvelous results with a passion most people reserve for Super Bowl Sunday. But the average reader's eyes probably just glaze over with incomprehension, leaving him/her to wonder what all the fuss is about. Truthfully? It's tough to grasp the significance of this latest quantum wrinkle without a bit of background about Thomas Young's original 1802 experiment (now the poster child of the quantum concept of particle/wave duality), as well as the historical scientific debate that raged around the nature of light. Hence today's Monster Post.

Particle or wave? That was the question. It proved to be an especially contentious issue; the debate raged for millennia, in fact. Pythagoras, in 5th century BC Greece, was staunchly "pro-particle," while Aristotle (who lived a couple hundred years later) was ridiculed by contemporaries for daring to suggest that light travels as a wave. The confusion was understandable, because empirical observations of the behavior of light contradicted each other. On the one hand, light traveled in a straight line and would bounce off a reflective surface. That's how particles behave. But it also could diffuse outward, and different beams of light could cross paths and mix together. That's undeniably wave-like behavior. In short, light had a split-personality disorder.

By the 17th century, many scientists had generally accepted the wave nature of light, but there were still holdouts in the research community -- among them no less a luminary than Sir Isaac Newton, who argued vehemently that light was comprised of streams of particles that he dubbed "corpuscles." In 1672, colleagues persuaded Newton to publish his conclusions about the corpuscular nature of light in the Royal Society's Philosophical Transactions. He seemed to assume that his ideas would be greeted with unanimous cheers, and was rather put out when Robert Hooke and the Dutch physicist Christiaan Huygens were reluctant to jump on the Isaac Bandwagon. The result was an acrimonious, four-year debate. Huygens differed with Newton on such key points as how the speed of light changes as light goes from a less dense medium like air to a denser material like glass: Newton said it should increase; Huygens said it should decrease. The issue remained largely untested because at the time there was no good way to measure the changes in speed.

Ultimately, Newton's stature as one of the greatest physicists of all time ensured that his notion of streams of corpuscles won out over the wave theory of light -- until that cheeky over-achieving upstart, Thomas Young, appeared on the scene almost a century later. Young was the oldest of 10 children born to a Quaker family in Somerset, England, and proved to be alarmingly precocious. He could read by the age of 2, learned Latin by age 6, and by the time he was 14, he'd added Greek, French, Italian, Hebrew, Chaldean, Syriac, Samaritan, Arabic, Persian, Turkish, and Amharic to his linguistic repertoire. His facility with languages served him well later in life, when he became fascinated with Egyptian hieroglyphics and played a key role in cracking the code of the Rosetta Stone by deciphering several Egyptian cartouches.

Young first studied medicine at Cambridge, then earned a physics doctorate in Gottingen before setting up shop as a physician in London. By age 28, he'd been appointed a professor of natural philosophy at the Royal Institution, delivering lectures about his experiments in everything from optics, acoustics, climate, and the nature of heat, to electricity, hydrodynamics, astronomy, gravitation, and measurement techniques. The term "polymath" hardly seems to do him justice; his fellow students at Cambridge used to call him "Phenomenon Young." No wonder his epitaph at Westminster Abbey salutes him as "...a man alike eminent in almost every department of human learning."

Ah, but could this brilliant young phenomenon take on The Goliath of Physics and win? Young was actually a huge fan of Newton and based his early work on color and vision on the insights Newton gleaned from his experimentum crucis. But that didn't mean he accepted the Great Man's conclusions without question. His pivotal experiment didn't start out as the poster child for the quantum concept of wave/particle duality; like every other scientist of his day, Young found the notion that light might be both simply inconceivable. So he designed an experiment he believed would settle the matter once and for all.

Naturally, a darkened room was involved, along with a light source (probably a candle, or sunlight, this being the early 19th century). Young shone the light onto a barrier in which he'd cut two narrow, parallel slits, a fraction of an inch apart. On the other side was a white screen. He reasoned that if light were made of particles, as Newton claimed, the screen would show two bright parallel lines where the light particles had passed through one slit or the other. But if light were a wave, it would pass through both slits, separating into secondary waves that would then recombine on the other side -- i.e., they would interfere with each other.

It's a bit like water waves, which have crests and troughs. As the secondary light waves recombine on the other side, wherever two crests or troughs line up exactly, they produce a bright spot of light. Wherever a crest and a trough line up exactly, they cancel each other out, leaving a dark spot on the screen. The resulting "interference pattern" is thus a series of alternating dark and light bands. And that's exactly what Young observed, even making his own sketch of the interference pattern. Light, per his experiment, was undeniably a wave.
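For slits much closer together than the screen is distant, the bright bands land wherever the path difference is a whole number of wavelengths, giving an evenly spaced pattern with spacing Δy = λL/d. The numbers below are invented, but they're in the right ballpark for a tabletop version of Young's experiment:

```python
def fringe_spacing(wavelength, slit_separation, screen_distance):
    """Small-angle spacing between adjacent bright bands:
    delta_y = lambda * L / d."""
    return wavelength * screen_distance / slit_separation

# Hypothetical setup: green light, slits 0.5 mm apart, screen 1 m away
dy = fringe_spacing(550e-9, 0.5e-3, 1.0)
print(dy * 1000)  # spacing in millimeters, ~1.1
```

Note the inverse dependence on slit separation: the closer the slits, the wider the bands -- which is why Young needed his slits a mere fraction of an inch apart to see anything at all with the naked eye.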

Young was understandably pretty chuffed at the success of this experiment, which offered the strongest evidence to date in favor of the wave theory of light. He applied his findings to explain the shifting colors found in thin films, such as soap bubbles, and even tied the seven colors of Newton's rainbow to wavelength, calculating what each color's approximate wavelength would have to be to produce that particular color of light.

Alas, his euphoria was short-lived: the pro-Newton crowd lost no time in bashing Young's experimental findings. One simply didn't question the Great One, even 80 years after Newton's death. Online encyclopedist David Darling memorably described it as "the scientific equivalent of hari-kiri." Young was too, well, young to know better. Newton's place in the pantheon ensured that the scientific community largely ignored Young's pivotal experiment for a good 10 years, bolstered by a simply savage review of his work in the Edinburgh Review (published anonymously in 1803, later revealed to have been authored by one Lord Henry Brougham, a big-time Isaac acolyte).

Fortunately for the wave-friendly fans of light, French physicist Augustin Fresnel conducted a series of more comprehensive demonstrations of Young's basic experimental setup, succeeding (where Young had failed) in convincing the world's scientists that light really was a series of waves, rather than streams of tiny particles. And in the mid-19th century, another Frenchman, Leon Foucault, proved that Huygens had been correct -- and Newton mistaken -- in his assertion that light travels more slowly in water than in air. Given the acrimony Huygens experienced from Newton for sticking to his guns on this issue, one would understand if the Dutch scientist indulged in a little "Nyah, nyah, nyah" type of gloating from beyond the grave. (It probably helped that the French were a bit less worshipful of Newton than the Brits. Jen-Luc Piquant urges us to remember that even the greatest scientists are often wrong. Huygens, in fact, was partially responsible for advancing the notion that light waves travel via an invisible substance called the luminiferous ether, later disproved by the famed Michelson-Morley experiment in 1887.)

There were a bunch of other breakthroughs going on at the same time, of course, and taken together, everything added up to strong support for the "light is a wave" school of thought. Case closed. Or so physicists thought as the 19th century drew to a close. But light had a few more surprises in store for them with the birth of quantum mechanics. It's too long a story to go into here, but Max Planck, Albert Einstein, and Arthur Compton were among the luminaries whose work led to the realization that light was both particle and wave: specifically, light is made of photons that collectively behave as a wave.

Sounds simple enough. Except quantum mechanics is never that simple. The revolution didn't end there. Quantum theory predicted that even the individual photon could behave like a wave, and essentially interfere with itself. For a long time, there was no way to test this prediction. But eventually technology and scientific instrumentation advanced to the point where physicists could emit and detect single photons. The modern version of the experiment looks like this. First, we need a researcher -- let's say, Paris Hilton, just to stretch your powers of imagination a little. Paris sets up a simple light source in front of a barrier with two small slits cut into it, with a light-sensitive screen on the other side to record the pattern of incoming light. Paris turns on the light source and, after a moment spent hypnotized by the shiny beams, sends a series of photons, one photon at a time, toward the two slits in the barrier.

We're talking about single particles here, so the photons should only be able to go through one slit or the other, and just strike the screen like so many tiny ping-pong balls. Instead, Paris is stunned to find that the light forms that telltale interference pattern -- alternating bands of dark and light -- on the screen on the other side. What the heck? This means that those single photons are behaving like waves; each photon somehow travels through both slits and interferes with itself on the other side.

Now Paris wants to know more. This is a woman who reads Sun Tzu, after all; her natural curiosity drives her to repeat the experiment with an extra twist: she places particle detectors by each of the slits, so that she can verify that the photons do, in fact, each go through both slits at the same time. Except this time, she doesn't get the interference pattern; she gets the ping-pong ball effect, which means that the photon is now behaving like a single particle, passing through one slit or the other. Are the photons just messing with her? Unable to cope with the quantum conundrum, Paris Hilton's head explodes. Millions rejoice. Tabloids mourn. And those mischievous photons give an evil cackle of delight at having claimed another victim.

The good news is, the photons aren't deliberately messing with our heads. There is an explanation for the two results, but it's an explanation that defies common sense. Paris wasn't merely tweaking her first experiment: thanks to the addition of the particle detectors, she unwittingly performed a completely different experiment the second time around. In the first version, she's making a wave measurement; in the second, she's making a particle measurement. The kind of measurement she chooses to make determines the outcome of the experiment. Basically, if Paris just lets the photons travel from the light source to the screen undisturbed, they behave like waves and she sees the interference pattern. But if she observes them en route, she knows which path the photons took; this knowledge forces them to behave like particles, passing through one slit or the other. Paris can construct her experiment to produce an interference pattern, or to determine which way the single photons went. But she can't do both at the same time. Heisenberg's Uncertainty Principle rears its ugly head.

Hence the opening line of the UC-Berkeley press release: "The big world of classical physics mostly seems sensible: waves are waves and particles are particles, and the moon rises whether anyone watches or not. The tiny quantum world is different: particles are waves (and vice versa), and quantum systems remain in a state of multiple possibilities until they are measured -- which amounts to an intrusion by an observer [Paris Hilton!] from the big world -- and forced to choose: the exact position or momentum of an electron, say."

There are a lot of really big ideas contained in those two sentences, more than we can even attempt to discuss intelligently in a single blog post. Vast tomes have been written about this, and countless papers are published each year in academic journals -- including the one describing the latest version of the double-slit experiment that the Berkeley Lab group performed. (Actually, LBL collaborated with scientists at the University of Frankfurt in Germany, Kansas State University and Auburn University.) We were most impressed with the sheer ingenuity of how they constructed their experimental set-up. They used the two proton nuclei of a hydrogen molecule as the two "slits," separated by a mere ten-billionth of a meter.

The tricky part is to separate the component parts of the hydrogen molecules in the first place. How the heck did they manage that? It helps if you have access to a couple of x-ray beam lines at LBL's Advanced Light Source. All you need to do (are you taking notes, Paris?) is send a stream of hydrogen gas through the apparatus into an "interaction region" (the equivalent of an enclosed chamber, would be my guess), where some of the hydrogen molecules run afoul of that nasty x-ray beam, which has sufficient energy to knock off each hydrogen molecule's two negatively charged electrons. Without that negative charge to balance things out, the two positively charged protons that form the nucleus of the molecule blow apart from the powerful mutual repulsion. The LBL researchers then used an electric field to separate the particles according to charge, sending the protons to one detector and the electrons to a detector in the opposite direction. Genius!

LBL researcher Ali Belkacem calls this "a kinematically complete experiment," because it accounts for every single particle, enabling them to figure out all kinds of things, like "the momentum of the particles, the initial orientation and distance between the protons, and the momentum of the electrons." It's not just photons that exhibit wave/particle duality: electrons do, too. So even a single electron is capable of interfering with itself. Just like the classical version of the experiment, the scientists could study the electrons as particles, or as waves. For instance, they found that once the electrons were knocked off the hydrogen molecule, one was fast, and one was slow, giving them an assortment of both fast and slow electrons.

Mostly they were interested in the interference pattern, particularly at what point it disappeared. They essentially turned the slower electrons into teensy particle detectors by boosting their energy levels just a tad. For reasons that remain unclear to me -- I invite any quantum physicists to offer their explanation in the comments -- this turns the slow electrons into "observers." They are "big" enough to interact with the classical domain. So the interference pattern disappears and the electrons behave almost like a classical system. I say "almost," because apparently they still retain some signs of entanglement (what Einstein called "spooky action at a distance"). [UPDATE: Chad at Uncertain Principles goes into more of the technical details behind this new experiment, while the mysterious Statistical Deviations blogger offers a possible explanation of how boosting a slow electron's energy slightly makes it "big" enough to act as an "observer."]

So there you have it: the world's smallest double-slit experiment. And now we must go rest our poor aching head, perhaps by watching a couple of sitcoms or reading about Paris' latest tabloid exploits (aiding drunken elephants in Africa? I think not). She'd get into far less trouble if she'd just stick with her quantum physics research. Then again, when's the last time Larry King bothered to interview a quantum physicist?

We're a bit late with birthday greetings, but still wanted to weigh in with well wishes as the Bardeen-Cooper-Schrieffer (BCS) theory of superconductivity turns 50. It first appeared in a paper published in The Physical Review in July 1957, and is considered one of the most important milestones in 20th century physics. Small wonder that there have been so many honorary conferences organized this year to commemorate the occasion, most recently an APS-sponsored conference held October 10-13 at the University of Illinois, Urbana-Champaign. Eight Nobel Laureates were on hand to give talks, including both Leon Cooper (the "C") and Robert Schrieffer (the "S"). John Bardeen (the "B") missed the festivities; he died in 1991. The APS presented a bronze plaque marking the old Physics Building at UIUC as a "site of historic significance." And the university chose this occasion to announce its new Institute for Condensed Matter Theory, making it a truly golden anniversary in the field of condensed matter physics.

"What's all the fuss about?" the average non-scientist is probably wondering. Well, back in 1911, a Dutch physicist named Heike Kamerlingh Onnes was studying a variety of materials at ultra-low temperatures (i.e., close to absolute zero). He found that supercooled mercury lost its resistance completely to the flow of electricity and dubbed the phenomenon superconductivity. Later experiments revealed the same effect in tin, lead, and other pure metals. It was truly a momentous experimental discovery, but it lacked a theoretical underpinning. Try as they might, physicists couldn't explain the actual mechanism behind superconductivity.

Things got weirder the more they looked into this mysterious effect. For instance, in 1933, a physicist named Walter Meissner found that superconductors would expel a magnetic field, making it possible to levitate a magnet -- the "Meissner effect." And around 1950, physicists found that mercury isotopes with lower atomic weight became superconducting at a slightly higher temperature -- the "isotope effect." This seemed to suggest that the motion of atoms in a material, and not just the electrons, was involved in superconductivity.
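That isotope effect turned out to follow a simple scaling law -- Tc is proportional to one over the square root of the isotope's mass -- which BCS theory would later explain via lattice vibrations. A back-of-the-envelope sketch (the function and numbers are my illustration; mercury's Tc of roughly 4.15 K and the isotope masses are round figures, not precision measurements):

```python
def tc_for_isotope(mass, tc_ref, mass_ref, alpha=0.5):
    """Isotope effect: Tc * M**alpha = constant, with alpha ~ 1/2 in BCS.

    Lighter isotopes vibrate at higher frequencies, so the lattice-mediated
    pairing is stronger and the transition temperature creeps upward as
    the atomic mass drops.
    """
    return tc_ref * (mass_ref / mass) ** alpha

# Mercury superconducts around 4.15 K; compare two of its isotopes (masses in amu):
tc_light = tc_for_isotope(198, tc_ref=4.15, mass_ref=202)
print(f"Hg-198: about {tc_light:.3f} K, vs. 4.15 K for Hg-202")
```

The shift is tiny -- a few hundredths of a kelvin -- but it was measurable in 1950, and it was the smoking gun telling theorists that the lattice, not just the electrons, had to be part of the story.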

Felix Bloch became so frustrated with the knotty problem that he postulated his own eponymous "Bloch's Theorem: Superconductivity is impossible" -- even though it was clearly possible, since it had been experimentally confirmed again and again, in an ever-growing number of materials. Richard Feynman admitted that he'd "spent an awful lot of time in trying to understand it... I developed an emotional block against the problem of conductivity." In fact, when he first learned about the seminal BCS paper, "I could not bring myself to read it for a long time." It took a lot to stump a scientist of Feynman's caliber, and he wasn't the only big-brained physicist mulling over the problem.

Technically, Bardeen was an electrical engineer by training, at least early on in his career. He was born in Madison, Wisconsin; his father, Charles, was a professor of anatomy and helped found the medical school at the University of Wisconsin, Madison (UWM). His academic brilliance showed up early: in third grade, his parents moved him up into junior high, and he started college at age 15, majoring in engineering at UWM. A bit surprisingly, considering his low-key temperament, he was a frat boy, a member of Zeta Psi. (Wikipedia tells me that he played billiards to raise the membership fees.) Yet he was also a member of the Tau Beta Pi engineering honor society. Maybe fraternities were different in those days. He ended up earning both a BS and a master's degree in his five years at UWM.

Bardeen worked for a while at Gulf Research Laboratories in Pittsburgh, but quickly became bored with the work, and decided to earn his PhD in mathematical physics from Princeton University and embark on a research career. His thesis work was in solid-state physics, working with Eugene Wigner, among others, giving him experience that would come in handy years later when he found himself at Bell Labs, struggling to invent a working transistor with two colleagues, William Shockley and Walter Brattain. They finally achieved the first point-contact transistor on December 23, 1947. As most everyone knows by now, the transistor revolutionized the electronics industry. We owe our computers, our MP3 players, indeed, the entire online Information Age, to these three men toiling away in a Bell Labs laboratory during the holidays, when everyone else was drinking eggnog and singing Christmas carols.

Global recognition was not long in coming. The morning of November 1, 1956, Bardeen was scrambling eggs for breakfast while listening to the radio. That's how he learned that he'd just been awarded the Nobel Prize in Physics for inventing the transistor, along with Shockley and Brattain. Apparently he dropped the frying pan in his excitement to inform his wife. A few fun behind-the-scenes Nobel factoids: just before the ceremony, Bardeen found his white vest and white tie had turned green in the laundry, and had to borrow replacements from Brattain. The two men were so nervous before receiving their awards that they split a bottle of quinine to settle their stomachs.

By 1951, the University of Illinois had managed to lure Bardeen away from Bell Labs with the promise of letting him research whatever he wanted. When news of the isotope effect appeared, Bardeen turned his attention back to the problem of superconductivity. He didn't crack it right away, but he and his colleague, David Pines, did supply a critical missing piece. They showed that electrons -- which normally show a strong electrostatic repulsion for each other -- nevertheless could have a sort of indirect attraction, namely by creating vibrations among the lattice atoms, and those vibrations could in turn affect other electrons.

The breakthrough began in the mid-1950s, when Bardeen teamed up with Cooper (then a postdoctoral fellow) and Schrieffer, who was still a graduate student. Cooper supplied the "C" part when he figured out that electrons in a superconductor don't behave as if they were individual particles, but as pairs, now known as "Cooper pairs." Apply an electrical voltage to a superconductor, and you'll find that all those Cooper pairs move as a single entity, creating an electrical current. Cut off the voltage, and instead of gradually dissipating, the current will continue to flow indefinitely because the pairs encounter no resistance to their motion. It only works at ultra-low temperatures: the Cooper pairs separate into individual electrons as the material warms up.

Now for the "S" part: Schrieffer had his own breakthrough insight in early 1957 while riding on a NYC subway. (Based on my years in the Big Apple, most subway riders are probably too distracted by the advertisements for local celebrity dermatologist "Doctor Zee," or the presence of an incontinent homeless individual two seats away, to come up with revolutionary breakthroughs in physics, but Schrieffer beat the odds.) You could emulate Wikipedia and say he "figured out how to mathematically describe the enormous collection of Cooper pairs in a superconductor with one single wave function." Or -- if you're like me, and this makes your eyes glaze over in bewilderment -- you can think of it this way: Instead of crystallizing into a lattice like when water turns to ice, at those very low temperatures, the electrons were organizing and condensing into what amounted to a weird state of matter that permitted the free flow of electricity. Schrieffer himself later compared the concept to a popular dance of that time called the Frug, in which dance partners could be separated by other couples on the dance floor, yet still remained a pair. In the same way, the Cooper pairs in a superconducting material were oblivious to other electrons and the lattice, which meant they could move without hindrance.

Schrieffer's insight provided the final piece of the puzzle, causing Bardeen to observe, in his typically quiet manner, "Well, I think we've explained superconductivity" -- probably in much the same tone of voice as one would say, "Well, I guess it's time for lunch." Their theory explained both the isotope effect and the fact that magnetic fields below a certain strength couldn't quite penetrate superconductors. It also explained why the superconducting phenomenon could only be observed at very cold temperatures near absolute zero: any warmer, and the thermal jiggling would break up the Cooper pairs, disrupting their elegantly balanced quantum dance. In short, Bardeen later recalled, "All the hitherto puzzling features of superconductors fitted neatly together like the pieces of a jigsaw puzzle."

And thus it came to pass that Bardeen found himself the recipient of yet another Nobel Prize in physics -- at the time, he was the first person to win twice in the same field. (Marie Curie and Linus Pauling had also won two Nobel Prizes, just not in the same field; Frederick Sanger would later match Bardeen's feat with two prizes in chemistry.) Another fun behind-the-scenes anecdote: When he won the prize the first time, Bardeen only brought one of his three children to the ceremony in Stockholm because his sons were both at Harvard and he was reluctant to interrupt their studies. Sweden's King Gustaf VI Adolf scolded him for doing so, and Bardeen solemnly assured the king that the next time he won the Nobel Prize, he would bring his entire family. I'm sure Bardeen never expected to make good on that promise, but when lightning did indeed strike twice for him, he made sure all three of his children attended the second ceremony.

It's a bit astounding that the BCS theory hasn't really been refined that much over the ensuing 50 years. Apparently, they got it right the first time. High-temperature superconductivity, discovered in 1986, remains a bit of a puzzle: the effect still relies on electron pairing, but the BCS theory doesn't quite apply. Still, it's only been 20 years, compared to the 50-year lapse between the original observation of superconductivity in metals and the development of BCS theory to explain it. High-Tc theory still has some wiggle room.

There has been some innovation shedding further light on the inner workings of superconductivity. For instance, last year a University of Arizona physicist named Andrei Lebed caused a few ripples in the physics community with his discovery that strong magnetism changes the basic, intrinsic properties of the flowing electrons -- an "exotic" kind of superconductivity. He's interested in the physical nature of the Cooper pairs: in the past, they have been treated as behaving like elementary particles, with correspondingly fixed properties, but Lebed asserts that, in fact, "[S]uperconducting electron pairs are not unchanged elementary particles, but rather, complex objects with characteristics that depend on the strength of the magnetic field." And in the presence of super-strong magnetic fields, exotic Cooper pairs are created that follow the weird laws of quantum mechanics: the electron pairs are both rotating and non-rotating at the same time. Hmmm. Curiouser and curiouser.

Superfluidity is an extension of BCS theory, in that it describes a state in which a liquid, like the current in superconductors, can flow without resistance -- it literally has zero viscosity. Furthermore, BCS theory has provided a useful model for physicists working on everything from the behavior of subatomic particles to the inner workings of ultra-dense neutron stars. Too esoteric for you? Superconductivity, which the theory explains, is responsible for such life-altering technologies as MRI, radio telescopes, and superconducting quantum interference devices (SQUIDs) -- the latter used to make very sensitive geologic measurements, among other things. High-temperature superconductivity is especially promising in power transmission: its ability to send current over longer distances with fewer losses could result in major energy savings, although to date such a system has yet to be implemented.

No wonder Bardeen appeared on LIFE Magazine's list of "100 Most Influential Americans of the Century" in 1990, one year before he died. Yet for all the accolades he received over the course of his stellar career, Bardeen never let that sort of thing go to his head. Almost every colleague, friend and biographer describes Bardeen as a most ordinary man, who didn't behave like the stereotypical "genius" physicist. He liked to golf and go on picnics. He hosted cookouts for friends and family, some of whom weren't even aware of his remarkable scientific accomplishments. What made his impact on physics extraordinary was his gift of pinpointing interesting problems in physics, selecting the right collaborators -- making sure to bring both experimentalists and theorists to the table -- and keeping his eye focused on the ball, worrying away at the problem until he arrived at a likely solution.

Alas, his trademark humility and insistence on bucking the "crazy genius scientist" stereotype meant that "the public and the media often overlooked him," according to University of Illinois historian Lillian Hoddeson, who wrote a book about Bardeen. And that's a shame. So in addition to wishing BCS theory a well-earned golden anniversary, here's to men like John Bardeen -- truly the people's physicist. We reap the benefits of his work every day, even if few of us know his name.

Celebrity tabloids and similar gossip rags are filled with unnamed sources -- you know, "Sources close to [Insert favorite Celebrity (TM) here] report that...." I've always wondered who these loose-lipped people are: a source standing close to the Celebrity (TM) on line at Starbucks, perhaps? A casual diner at the next table in a stylin' Hollywood eatery? Or maybe a disgruntled former employee, or pathetic former hanger-on who's bitter because s/he didn't become the Celebrity's (TM) new BFF? Because anyone who was truly close to a Celebrity (TM), wouldn't maintain that position for very long by talking to unscrupulous tabloid reporters, now, would they?

Tabloids aside, we generally expect the average news story to cite its sources by name, with rare exceptions, like when Woodward and Bernstein broke the Watergate scandal. So imagine my surprise when I noticed this little news item on CNET a couple of days ago about a start-up company called Stion with the stated mission of developing thin-film solar cells, mentioning unnamed "sources" in the third paragraph down. They've received a whopping $15 million in venture capital, which means their technological approach must be pretty promising, but Stion's manager of business development was rather coy about what, exactly, that material might be. He had plenty to say about what it was not -- not silicon based, not cadmium telluride, and not a copper-indium-gallium-selenide compound, either -- but otherwise kept mum, promising to reveal all "in due time," i.e., around 2010, when the first products are likely to be announced.

Sheesh, what a media tease! Clearly, the frustrated CNET reporter, Michael Kanellos, had no choice save to turn to his own Deep Throat, or two. Which, BTW, is not a criticism: he did a commendable job (far better than the celebrity tabloids) of assembling various pieces of circumstantial evidence in favor of the new material being... (drum roll, please)... quantum dots! I'm not sure his evidence is solid enough to warrant announcing this in the headline ("Harnessing quantum dots for solar panels") without a question mark, unless his unnamed sources are very reliable indeed. But it is a matter of public record that Stion's Chief Technology Officer, Howard Lee, worked as a solar researcher (specifically using quantum dots) at Lawrence Livermore National Lab for several years, and accumulated numerous patents relating to quantum dots during his stint at another start-up called Ultradot. And Stion's CEO, Chet Farris, is a former president of Shell Solar. Hmmm. The plot thickens. Kanellos, for one, can connect the dots.

No doubt some of you are wondering what the heck these quantum dot thingies are, and why they're such a big fat deal. It just so happens that I wrote about quantum dots way back in 2003 for The Industrial Physicist magazine -- the closing of which I still mourn, because I got to cover so many cool cutting-edge topics as a contributing editor there. (I wasn't always about the pop culture physics; I used to work for the Dark Side, i.e., Very Serious Science Journalism, or VSSJ.) Quantum dots are essentially tiny bits of semiconductors -- sometimes called nanocrystals, which just doesn't carry the same panache -- just a few nanometers in diameter. It's like taking a wafer of silicon and cutting it in half over and over again (a semiconducting Zeno's Paradox?) until you have just one tiny piece with about a hundred to a thousand atoms. That's a quantum dot. (I think. It's not like you can actually see one with the naked eye. Billions of them could fit on the head of a pin.)

Size matters when it comes to semiconductors: smaller is usually better. Because they're so tiny, quantum dots have some unusual materials properties -- specifically, the all-important electrical and optical ones -- thanks to the quantum effects that kick in at smaller size scales, so they are of enormous interest to researchers. It's interesting physics fundamentally, and it offers an impressive sampling of potentially lucrative practical applications. Trust me, quantum dots are hot, even if they're currently simmering on the back burner in the news-hook-oriented media.

I must confess to finding it easier to write about applications of physics rather than the basic science. But when I started covering the quantum dots area, I learned some useful things about the "electrons and holes" effect that is critical not just to quantum dots, but also lasers and other semiconductor physics. This is not an easy thing for a lay person to visualize, although physicists toss those terms around like high school slang. So here's my attempt at the 411 on electrons and holes (scientific commenters, feel free to add your own take on the subject):

It helps to place semiconductors in general in the appropriate context, i.e., right smack between insulators and conductors. Insulator atoms hoard their electrons greedily, like misers or overprotective parents, and rarely part with them, while conductor atoms are like spendthrifts or exceedingly permissive parents, letting their electrons run amok all over the place (and a good thing, too, otherwise we'd never enjoy the benefits of electrical current). Semiconductor atoms are juuuust riiiight. They don't fling their electrons around all willy-nilly, but neither do they hang on to them too tightly. It takes a bit of an energy boost to knock an electron loose in a semiconductor, and when the electron breaks free, it leaves behind a "hole" in the atom's electronic structure -- a vacancy, if you will, that another electron, sooner or later, will come along to fill. So a photon strikes a semiconductor atom and creates an electron-hole pair, which physicists call an exciton -- because we need more confusing technical jargon in physics, don't we? Anyway, this enables the electrons to flow as a current. And current = power.

Much of the excitement over quantum dots stems from a decades-long quest to make silicon emit light efficiently in the visible spectrum. Back in 1990, European researchers managed to get porous silicon to emit red light, and figured it came about because of "quantum confinement" relating to the dot's small size. Basically, at 10 nanometers or less, the electrons and holes are being squeezed into such small dimensions that this alters the electronic and optical properties; it's the critical feature of most nanoscale materials, frankly. (Special bonus for Physics-Philes: a 2003 paper in Nature reported that shape might matter as much as size when it comes to quantum confinement.) Things snowballed from there, with scientists making more silicon dots (and, later, germanium dots) that emitted light in lots of bright, pretty colors, especially the highly desirable green and blue ranges. Basically, the bigger the dot, the redder the light, and the emitted light becomes shorter and shorter in wavelength -- and higher in energy -- as the dots shrink in size. This is called "tunability" because you can pretty much tailor the dots to emit whatever frequency of visible light you happen to need for a given application, simply by altering the size of the dots. Believe me, high-tech industries go nuts for anything with tunability. Plus, colors = pretty! Check out the pic! Doesn't it make you want to buy some quantum dots?
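For the quantitatively inclined, the size-to-color relationship can be roughed out with the textbook particle-in-a-box formula. This is a deliberately crude sketch of my own, not the real solid-state calculation -- a genuine quantum dot model would use the semiconductor's effective electron mass (much smaller than the free-electron mass used here) and add the bulk band gap on top -- but it captures the key scaling: squeeze the box and the energy climbs as one over the size squared.

```python
H = 6.626e-34    # Planck's constant, J*s
M_E = 9.109e-31  # free-electron mass, kg (a stand-in; real dots use effective mass)
EV = 1.602e-19   # joules per electron-volt

def confinement_energy(box_size_m, mass_kg=M_E):
    """Ground-state energy of a particle in a 1-D box: E = h^2 / (8 m L^2).

    A toy stand-in for quantum confinement in a dot: halving the box
    size quadruples the energy, pushing emission toward the blue.
    """
    return H ** 2 / (8 * mass_kg * box_size_m ** 2)

for nm in (10, 5, 2):
    e = confinement_energy(nm * 1e-9)
    print(f"{nm} nm box -> confinement energy {e / EV * 1000:.2f} meV")
```

The absolute numbers here are too small to be realistic (that's the free-electron-mass simplification talking), but the trend is the point: shrink the dot and the energy spacing grows, which is exactly the "tunability" knob the industry loves.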

The most obvious application is using quantum dots as an alternative to the organic dyes used to tag reactive agents in fluorescence-based biosensors. You know, the dyes start to glow when, say, a harmful toxin is present. But the number of colors available using organic dyes is limited, and they tend to degrade rapidly. Quantum dots offer a broader spectrum of colors and show very little degradation over time. Having all those colors also means you can make light-emitting diodes (LEDs) from quantum dots, precisely tuned in the blue or green range. You can also build quantum dot LEDs that emit white light for laptop computers or interior lighting in cars. As for electronics, the possibilities are endless: all-optical switches and logic gates, for instance, with a millionfold increase in speed and lower power requirements, or, further in the future, quantum dots could be used to make teensy transistors for nanoelectronics.

But the current news is about quantum dots and their potential application to solar cells. As I mentioned earlier, Stion's CTO, Lee, did a lot of work in this area during his stint at Livermore, as have many other researchers. (A more technical overview of this research area by Science News' Peter Weiss can be found here, for those who are interested.) As the CNET story reports, "Most solar cells on the market today extract electricity from sunlight with silicon and are integrated into glass substrates, which is relatively heavy." A company called First Solar uses a similar structure, but replaces the silicon with cadmium telluride, which is cheaper. As for the CIGS (copper, indium, gallium and selenide) version of the technology, there are several companies working on that, although products have yet to hit the market. It's expected that CIGS solar cells will be cheaper than silicon ones, but not quite as efficient: we're talking mid- to low teens, percentage-wise, compared to 22% efficiency (and sometimes as much as 29%) for silicon solar cells. (A fossil fuel like gasoline can show 30-40% efficiency, so even silicon solar cells need to show some improvement in this area for broad practical application.)

So what can quantum dots bring to the solar cell table? The CNET article doesn't go into much detail:

"Partly because of their small size, quantum dots can be highly sensitive to physical phenomena and can be used to trap electrons. Since solar panels work by wiggling electrons out of sunlight and transferring them to a wire, quantum dots in theory could work well in solar panels."

This doesn't really say anything meaningful, to a non-scientist, about the actual process that's taking place -- because the process is very complicated and hard to explain in a short news article. Remember those electron-hole pairs known as excitons that I mentioned earlier? Usually, a photon from sunlight that strikes the semiconductor material used in solar cells unleashes only one electron. In theory, it should be able to loosen more than one, thereby giving rise to several excitons, but for reasons relating to heat loss in atomic collisions, or some such thing (paging Chad Orzel for a better explanation!), there's usually only a 1-to-1 ratio. That's why solar cells are limited in their energy efficiency. But last year, researchers working with quantum dots made of lead selenide found they could produce as many as seven excitons (electron-hole pairs) from a single high-energy photon of sunlight. This could boost solar cell efficiencies to as much as 42% -- enough to be competitive with the more common fossil fuel energy sources.
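For the numerically inclined, here's a back-of-the-envelope sketch (emphatically not the researchers' actual model) of why a high-energy photon can, in principle, yield several excitons: energy conservation caps the count at the photon's energy divided by the band gap. The photon energy and band gap values below are illustrative guesses for a small lead selenide dot, not measured values from the experiment.

```python
# Upper bound on excitons per absorbed photon, by energy conservation:
# floor(E_photon / E_gap). Real devices fall well short of this bound
# because excess energy is usually lost as heat.
def max_excitons(photon_energy_ev, band_gap_ev):
    """Most excitons one photon could create, ignoring all loss channels."""
    return int(photon_energy_ev // band_gap_ev)

photon = 5.0  # eV, an illustrative high-energy (ultraviolet) solar photon
gap = 0.7     # eV, an assumed band gap for a small PbSe quantum dot

print(max_excitons(photon, gap))  # -> 7, matching the "seven excitons" result
print(max_excitons(photon, 1.1))  # -> 4, with bulk silicon's ~1.1 eV gap
```

The point of the toy calculation: the smaller band gap of a lead selenide dot leaves far more "spare" photon energy available for extra electron-hole pairs than bulk silicon does.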

So Stion is doing good work, and with any luck, they'll be rewarded with some healthy profit margins in due time. But all this hard-core quantum physics talk has made me long for the comparable simplicity of the celebrity tabloids. Intellectual stimulation is all very well and good, but sometimes my brain just needs a break. And the latest unscrupulous unnamed sources are telling me there's trouble in some Celebrity's (TM) fairytale paradise...

It's been a good year for confirming (and reconfirming) older theories in physics. Last week the scientists on the MiniBooNE experiment told us that the Standard Model, while imperfect, is holding up just fine as it approaches its dotage, thanks very much for asking. (Whether or not you're pleased by that news might depend on whether you find the prospect of operating a bit outside the theoretical box exhilarating or utterly terrifying.) Gravity Probe B just gave the thumbs up to Einstein for his 1917 prediction of the geodetic effect (and I can't believe my eagle-eyed commenters missed my inadvertent typos in Saturday's post). It's also thisclose to most likely (probably, maybe) affirming the frame-dragging effect as well. Last year saw more precise measurements of Lorentz invariance and E=mc², as well as an experiment by scientists at the University of Twente in the Netherlands that yielded a plausible physical explanation for the mysterious "Kaye Effect."

Many of you have probably never heard of the Kaye Effect, but trust me, it happens all the time, and nobody except physicists ever takes much notice. That's part of what makes physicists -- scientists in general, really -- so special, and I don't mean that in an ironic sense. Physicists are the ones who see something in nature that's not quite right, think, "Huh... that's funny," and -- here's what sets them apart from the Average Joe/Jane -- decide to try to figure out what's actually going on, rather than just forgetting about it and going on their merry way.

They're persistent, too. Sometimes it can take a good long while between the initial observation and a scientifically plausible (and experimentally confirmed) explanation. Case in point: back in 1963, a humble physicist named Arthur Kaye (he doesn't even have a stub on Wikipedia) was experimenting with complex mixtures of viscous fluids -- things with the consistency of honey, syrup or the like -- and noticed that when he poured these substances onto a surface, the downward stream would unexpectedly produce an upward-moving jet that then merged with the incoming stream.

Essentially, as the liquid flows, its viscosity decreases -- a property known as shear thinning. The same thing happens with shampoo, liquid hand soaps, ketchup, yogurt and certain paints; it just happens so fast -- on the order of 300 milliseconds -- that most of us aren't aware of it. Michel Versluis, the Twente scientist, figured out how to keep the effect stable long enough to study the underlying physical mechanism: by pouring the stream onto a sloping surface. I won't go into the full explanation he proposed for the Kaye Effect, because you can find the whole story here, and watch nifty videos of the Incredible Bouncing Viscous Liquid here and here. If you happen to run into Versluis during your next trip to the Netherlands, I'm sure he'd be happy to tell you how to perform a similar experiment in your own bathroom.
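If you like your fluid dynamics in code form, here's a minimal sketch of shear thinning using the textbook power-law (Ostwald-de Waele) fluid model: apparent viscosity drops as the shear rate rises. The parameters are made-up illustrative values, not measurements from the shampoo in the Twente experiments.

```python
# Power-law fluid model: eta = K * (shear rate)**(n - 1).
# For n < 1 the fluid is shear-thinning: the faster it's sheared,
# the runnier it gets -- the behavior behind the Kaye effect.
def apparent_viscosity(shear_rate, K=10.0, n=0.5):
    """Apparent viscosity of a power-law fluid (arbitrary units)."""
    return K * shear_rate ** (n - 1)

for rate in (1.0, 100.0, 10000.0):
    print(f"shear rate {rate:>8}: viscosity {apparent_viscosity(rate):.3f}")
```

Each hundredfold jump in shear rate cuts the apparent viscosity tenfold here, which is why a fast-moving stream of shampoo can behave so differently from a blob sitting still.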

Sometimes it works the other way around: a theorist does a bunch of calculations and concludes that, under very specific circumstances, something ought to happen. And then experimenters try to create the right conditions to prove (or disprove) the theory in the laboratory. Here at the APS April meeting in Jacksonville, the focus yesterday morning was on the recent experimental observation of the elusive "Efimov Effect." Atomically speaking, it's what happens when two atoms that normally repel each other become strongly attracted when a third atom is introduced. Three's company, two's a crowd, which flies in the face of conventional wisdom.

(Jen-Luc Piquant observes that many a straight male's favorite sexual fantasy is based on a very similar concept: "Really, honey, inviting Fergie -- of Black-Eyed Peas fame, not the former Duchess of York -- to bed with us can only bring us closer!" Thanks to physics, their partners now have a handy one-word rejoinder: shrinkage. The Efimov Effect is only observed in ultracold gases, like cesium, cooled way down to a billionth of a degree above absolute zero. That's colder than the farthest reaches of outer space, which hover around a comfy 3 kelvin. Those kinds of temperatures aren't likely to *cough* show a guy to his best advantage. She's just sayin'....)

The man behind the Efimov Effect is a Russian physicist named Vitaly Efimov. Back in 1969, he had a shiny new PhD in theoretical nuclear physics, along with sufficient youthful optimism to make a strange prediction: even though any two in a group of three atoms will normally repel each other, under just the right kind of conditions, it should be possible to create a state of matter in which they will experience an irresistible attraction, forming an infinite number of "bound states." This struck many of his colleagues as a bit preposterous, but the math bore young Vitaly out. Time and again over the years, theorists have tried to disprove the Efimov Effect, only to further verify it. But it still hadn't been seen in a laboratory.

Sometimes it takes so long for theories to find experimental verification because the technology just doesn't exist. That was certainly the case with Bose-Einstein condensates (BECs), a new state of matter first predicted by Albert Einstein and the Indian physicist Satyendra Bose in the 1920s. All matter exhibits wave/particle duality. At normal temperatures atoms behave a lot like billiard balls, bouncing off one another and any containing walls. Lowering the temperature reduces their speed. If the temperature gets low enough (billionths of a degree above absolute zero) and the atoms are densely packed enough, their wave nature kicks in. The different matter waves will be able to “sense” one another and coordinate themselves as if they were one big “superatom.”
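That "matter waves sense one another" business has a tidy quantitative version: condensation sets in roughly when the thermal de Broglie wavelength of the atoms becomes comparable to the spacing between them. Here's a quick sketch of that wavelength for rubidium-87; the temperature is an illustrative round number, not the one from the 1995 experiment.

```python
import math

H = 6.626e-34   # Planck's constant, J*s
KB = 1.381e-23  # Boltzmann's constant, J/K

def de_broglie_wavelength(mass_kg, temp_k):
    """Thermal de Broglie wavelength: lambda = h / sqrt(2*pi*m*kB*T)."""
    return H / math.sqrt(2 * math.pi * mass_kg * KB * temp_k)

m_rb87 = 87 * 1.66e-27  # approximate mass of a rubidium-87 atom, kg
wavelength = de_broglie_wavelength(m_rb87, 100e-9)  # at 100 nanokelvin
print(f"{wavelength * 1e6:.2f} micrometers")
```

At room temperature the same formula gives a wavelength thousands of times smaller than atomic spacing -- billiard-ball territory -- but at 100 nanokelvin it swells to roughly half a micrometer, large enough for the waves of densely packed atoms to overlap.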

Eric Cornell and Carl Wieman created the first BEC, using a combination of laser and magnetic cooling equipment. They created a laser trap by cooling about 10 million rubidium gas atoms; the cooled atoms were then held in place by a magnetic field. This can be done because most atoms act like tiny magnets; they contain spinning charged particles (electrons). But the atoms still weren't cold enough to form a BEC, so the two men added a second step, evaporative cooling, in which a web of magnetic fields conspires to kick out the hottest atoms so that the cooler atoms can move more closely together. It works in much the same way that evaporative cooling works on your morning cup of coffee: just as the hottest molecules escape as steam, the hottest atoms rise to the top of the magnetic trap and "jump out."

Wieman and Cornell made physics history at 10:54 AM on June 5, 1995, producing a BEC of about 2000 rubidium atoms that lasted 15-20 seconds. Shortly thereafter, an MIT physicist named Wolfgang Ketterle achieved a BEC in his laboratory. Wieman, Cornell and Ketterle shared the 2001 Nobel Prize in Physics for their achievement. And BECs turned out to be the key to experimentally verifying the Efimov Effect, since they spawned a huge new field of research into the properties of ultracold gases. Chris Greene of the University of Colorado was the first (with a co-author) to predict that ultracold gases were just the ticket for achieving such an odd state in the laboratory.

Enter Austrian physicist Rudolf Grimm, who met Efimov at a workshop in Seattle in 2005, and was inspired to try his own hand at verifying the Efimov effect. Grimm's group at the University of Innsbruck took a cloud of cesium atoms, placed it in a vacuum chamber, and then used a combination of laser cooling and evaporative cooling to bring the temperature down to -459.6 degrees F -- a whisker above absolute zero. The technique is almost identical to how a BEC is created, and had BECs not become almost commonplace in physics over the last decade, Efimov's odd theory might never have been verified. Within a year of that Seattle meeting, Grimm's team had created the Efimov effect in their lab. The trick is to get the gas to the very edge of condensation, without it ever turning into an actual BEC.

According to Grimm, the atoms in an Efimov state resemble something called a Borromean ring: three interlocking circles that are found on the coat of arms of a 15th-century Italian noble family called Borromeo (I know -- duh). Among its many interpretations, the ring can be said to represent the intermarriages that bound the Borromeos inseparably to two other noble families. In physics, one could say the three rings -- or atoms, or particles -- are entangled, such that if you pick up any one of them, the other two will follow, and if you cut one, the other two will fall apart.

Possibly the most exciting thing about this experimental result is that it should be pretty much universal: we should be able to create this state out of any set of three particles at ultracold temperatures, and it's a harbinger of an emerging new field devoted to studying the quantum mechanical behavior of just a few interacting particles.

The Efimov Effect may even make it possible to engineer the most fundamental properties of matter way down at the subatomic level, giving scientists unprecedented control and the ability to create all kinds of new exotic molecules that couldn't otherwise exist. The University of Chicago's Cheng Chin, one of Grimm's collaborators, has said as much in various press releases: "This so-called quantum control over the fundamental properties of matter now seems feasible. We're not limited to the properties of, say, aluminum, or the properties of the copper of these particles. We are really creating a new state in which we can control their properties."

Pretty cool, huh? Even if it is awfully esoteric.... But the best part, in Jen-Luc's opinion, is the pretty pictures Greene showed, modeling the Efimov state of matter:

Physics Cocktails

Heavy G

The perfect pick-me-up when gravity gets you down.
2 oz Tequila
2 oz Triple sec
2 oz Rose's sweetened lime juice
7-Up or Sprite
Mix tequila, triple sec and lime juice in a shaker and pour into a margarita glass. (Salted rim and ice are optional.) Top off with 7-Up/Sprite and let the weight of the world lift off your shoulders.

Any mad scientist will tell you that flames make drinking more fun. What good is science if no one gets hurt?
1 oz Midori melon liqueur
1-1/2 oz sour mix
1 splash soda water
151 proof rum
Mix melon liqueur, sour mix and soda water with ice in shaker. Shake and strain into martini glass. Top with rum and ignite. Try to take over the world.