"Takes 1 part pop culture, 1 part science, and mixes vigorously with a shakerful of passion."
-- Typepad (Featured Blog)

"In this elegantly written blog, stories about science and technology come to life as effortlessly as everyday chatter about politics, celebrities, and vacations."
-- Fast Company ("The Top 10 Websites You've Never Heard Of")

Happy Hour

In the pantheon of the greatest sci-fi movies of all time, Demolition Man isn't likely to rate very highly. But I enjoyed the rampant silliness of a future Los Angeles where a totalitarian peace-loving society has traded in everything that could possibly be bad for you for a "happy joy-joy" kind of existence guaranteed to grate like fingernails on a chalkboard -- not to mention a relentlessly perky Sandra Bullock in one of her earliest major roles, and Denis Leary as an underground revolutionary giving a classic rant on why he wants to smoke, drink, watch pornography, and eat artery-hardening bacon cheeseburgers and french fries if he damn well feels like it, okay?

The premise is pretty simple: Wesley Snipes plays a psychotic criminal named Simon Phoenix, who is apprehended by Sylvester Stallone's John Spartan, a rogue cop known as the "Demolition Man" because of the rampant property destruction he leaves in his path. (Best line: he's marching away from another epic disaster, all done in the line of duty to save an 8-year-old girl, and a reporter asks whether one little girl was really worth all the damage he'd caused. The little girl glares at her and says, "Fuck you, lady!" As well she should....) Unfortunately, while Phoenix gets put into long-term Cryo-prison, so does Spartan -- a number of civilians were supposedly killed this time when the inevitable explosion occurred. Fast forward to 2032 or so, and Phoenix is thawed out for his parole hearing... and escapes. The peace-loving society can't cope with his violence, so they thaw out Spartan, too, and futuristic wackiness ensues.

I just happened to catch a cable re-run of Demolition Man recently while traveling with the Time Lord (he fell asleep before it was over; it just didn't capture his interest). And then I stumbled upon an intriguing post over at io9 about "biological antifreeze": proteins used by the food industry in products such as low-fat ice creams and frozen yogurts, where water replaces the traditional fat. Mixing antifreeze proteins into the ice cream keeps these foods from turning into large blocks of ice. I decided now might be a good time to talk about the challenges of cryogenics when it comes to freezing and unfreezing a human being (or any other warm-blooded creature). Here's how Mother Nature tackles the problem:

Fish, insects, and bacteria in cold climates have antifreeze proteins (AFPs) in their bodies which keep their fluids from turning to ice. The AFPs in a living organism have to solve a tricky problem: they must bind to ice crystals immediately, before the ice spreads and freezes the area around it, yet they cannot bind to liquid water. If they do, they'll dry the organism out from the inside. Ice and water are made from the same recipe -- both are two hydrogen atoms and an oxygen atom -- so the proteins have to distinguish water from ice using something other than their atomic makeup.

Studies of the AFPs of Arctic fish found that antifreeze proteins masquerade as ice in order to trap it. Although AFPs don't bind to water in the animal, they do make use of it. AFPs are made up of hydrophilic amino acids, which hold on to water, and hydrophobic ones, which repel it. Their hydrophilic sites grab onto water molecules and arrange them into a cage: six rings, each made up of several water molecules, linked together by the hydrophilic parts of the AFP. The gaps between the water molecules are filled by the hydrophobic parts of the AFP, which repel water molecules but will hold on to ice. So the AFPs patrol the body of Arctic animals, caging up any tiny ice crystals before they do any damage, while leaving liquid water alone.

Scientists have been taking note of Nature's ingenuity. In late June 2005, tabloid headlines in England and Australia screamed about the creation of "zombie dogs" by a group of Pittsburgh scientists. (It's probably just coincidence that the classic zombie horror film Dawn of the Dead was filmed in the city's Monroeville Mall.) As with most tabloid stories, the moniker was an over-hyped misnomer, but the results described were surprisingly accurate. Scientists have made substantial progress in achieving something called suspended animation with delayed resuscitation. Suspended animation is an outgrowth of the field of cryogenics and its cousin, cryonics: the storage of human bodies at extremely low temperatures in hopes of one day reviving them. It's similar to how people who fall into icy water can survive for up to an hour despite the cold temperatures, because the body goes into a kind of suspended animation: metabolism and brain function slow down to the point where little to no oxygen is needed.

Researchers at Pittsburgh's Safar Center for Resuscitation Research have developed a new technique for suspended animation that could help save the lives of accident victims or soldiers on the battlefield -- or anyone who has suffered a lethal hemorrhage. (Roughly 50,000 Americans die every year from hemorrhage, which is also the leading cause of death among troops killed in action.) They tested their technique on dogs. First, they drained the animals' blood and replaced it with an ice-cold salt solution, mixed with small amounts of glucose and dissolved oxygen -- a kind of antifreeze. The dogs were clinically dead, with no heartbeat, respiratory function, or brain activity, but their tissues and organs were perfectly preserved because the procedure lowers the body temperature to about 50 degrees F. After three hours, fresh warm blood was pumped in and the animals were revived with electric shocks. Two-thirds of the "zombie dogs" in the study suffered no brain damage from the procedure.

In reality, three hours is the upper limit for how long a body can remain in suspended animation and still return to normal function. That’s usually long enough to transport trauma victims to a hospital, and enable surgeons to locate and repair internal bleeding. The trick is, how are you going to keep the cells of living tissue from shattering once they’ve been frozen? This is one of the biggest challenges facing the field, and a big part of why, to date, no one in long-term cryonic suspension has been successfully revived.

That's because of the physical changes that take place when something freezes. It's a well-established scientific fact that the same substance will behave differently at various temperatures and pressures. Water (H2O) is the most familiar example. It can be a solid (ice), a liquid (water), or a gas (steam), but it is still made up of molecules of H2O, so its chemical composition remains unchanged. At sea level, water freezes at 32 degrees F (0 degrees C) and boils at 212 degrees F (100 degrees C), but this behavior changes at different altitudes because the atmospheric pressure changes. In fact, get the pressure low enough and water will boil at room temperature. The change of H2O from one form to another, at a critical temperature and pressure, is called a phase transition.
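If you want to put numbers to that, here's a little back-of-the-envelope sketch of my own (nothing from the io9 post or the Pittsburgh work) that uses the Clausius-Clapeyron relation to estimate water's boiling point as the pressure drops. Treating the heat of vaporization as constant is an approximation, and the sample pressures are just illustrative:

```python
import math

# Approximate constants for water (assumed values, for illustration only)
L_VAP = 40700.0   # heat of vaporization, J/mol
R = 8.314         # gas constant, J/(mol K)
T_REF = 373.15    # boiling point at 1 atm, in kelvin (100 degrees C)
P_REF = 101325.0  # 1 atm, in pascals

def boiling_point_kelvin(pressure_pa):
    """Temperature where water's vapor pressure equals ambient pressure,
    from Clausius-Clapeyron: ln(P/P_ref) = -(L/R) * (1/T - 1/T_ref)."""
    inv_t = 1.0 / T_REF - (R / L_VAP) * math.log(pressure_pa / P_REF)
    return 1.0 / inv_t

# Sea level, roughly Denver's pressure, and a rough vacuum where
# water should boil near room temperature:
for atm in (1.0, 0.83, 0.023):
    t_c = boiling_point_kelvin(atm * P_REF) - 273.15
    print(f"{atm:5.3f} atm -> water boils at roughly {t_c:5.1f} degrees C")
```

Run it and Denver comes out around 95 degrees C, while the near-vacuum case lands around 17 degrees C -- room temperature, just as promised.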

Cells contain a lot of water. Water expands when it freezes into a crystalline solid, and this expansion causes the cell membranes to burst or shatter. So in order to cryogenically freeze a body with any hope of resuscitation, one must first remove all the water from the cells and replace it with the equivalent of human antifreeze, as the Pittsburgh researchers did with their canine subjects. This keeps the organ and tissue cells from forming ice crystals at extremely low temperatures and puts the body into suspended animation, so that it can then be cooled down to freezing levels gradually. And that's not the only difficulty: the warming-up process must also be done gradually, at very precise speeds -- again, to prevent cells from shattering.

Which brings us to Mark Roth, a University of Washington researcher whose work has made the pages not just of Science magazine, but also Ripley's Believe It or Not!, not to mention a nice profile in Esquire. Roth's specialty these days is developing techniques to achieve short-term suspended animation -- just long enough to transport trauma patients to the hospital, or to stabilize a patient during surgery. Roth has a personal stake in the issue: he lost his one-year-old daughter, born with Down Syndrome, during a surgical procedure that developed complications; she bled out and died on the operating table. He became obsessed with suspended animation. And that obsession eventually led him to a NOVA special on caves in Mexico filled with hydrogen sulfide, a gas ten times more toxic than carbon monoxide. Those caves should be devoid of life; instead, they are teeming with fascinating creatures.

So, how does Roth do it? He replaces the oxygen one would normally breathe with the highly toxic hydrogen sulfide. It's the kind of stuff that should kill a human being outright, but here's the thing: apply hydrogen sulfide in a very cold environment and it alters mammalian metabolism to the point where Roth can place lab animals into suspended animation and bring them back again hours later -- just like Pittsburgh's "zombie dogs." Apparently in such a suspended state, the body can better cope with the oxygen deprivation resulting from shock, massive blood loss and the like. It's the kind of thing that could save soldiers on the battlefield, civilian trauma victims -- and might have saved Roth's baby daughter. He's founded a biotech company called Ikaria to further develop and market the technique. Wanna know more? I'll let Roth tell you all about it:

Jen-Luc Piquant stumbled across an intriguing science news story this morning: it seems that engineers at Ohio State University "have invented a new kind of nano-particle that shines in different colors to tag molecules in biomedical tests." The secret ingredient? Quantum dots! We love quantum dots at the cocktail party, but they rarely make news headlines. This seems like a good time to indulge in a spot of self-plagiarism and adapt some information from my 2007 post on the subject.

Quantum dots are tiny bits of semiconductors -- sometimes called nanocrystals, which just doesn't carry the same panache -- just a few nanometers in diameter. It's like taking a wafer of silicon and cutting it in half over and over again until you have just one tiny piece with about a hundred to a thousand atoms. That's a quantum dot. Billions of them could fit on the head of a pin.

Size matters when it comes to semiconductors: smaller is usually better. Because they're so tiny, quantum dots have some unusual materials properties -- specifically, the all-important electrical and optical ones -- thanks to the quantum effects that kick in at smaller size scales, so they are of enormous interest to researchers. The physics is fundamentally interesting, and it offers an impressive sampling of potentially lucrative practical applications.

It helps to place semiconductors in general in the appropriate context, i.e., right smack between insulators and conductors. Insulator atoms hoard their electrons greedily, like misers or overprotective parents, and rarely part with them, while conductor atoms are like spendthrifts or exceedingly permissive parents, letting their electrons run amok all over the place (and a good thing, too, otherwise we'd never enjoy the benefits of electrical current).

Semiconductor atoms are juuuust riiiight. They don't fling their electrons around all willy-nilly, but neither do they hang on to them too tightly. It takes a bit of an energy boost to knock an electron loose in a semiconductor, and when the electron breaks free, it leaves behind a "hole" in the atom's electronic structure -- a vacancy, if you will, that another electron, sooner or later, will come along to fill. So a photon strikes a semiconductor atom and creates an electron-hole pair. This enables electrons to flow as a current. And current = power.

Back in 1990, European researchers managed to get porous silicon to emit red light, and figured it came about because of "quantum confinement" relating to the dot's small size. At 10 nanometers or less, the electrons and holes are squeezed into such small dimensions that the electronic and optical properties change; it's the critical feature of most nanoscale materials, frankly. Things snowballed from there, with scientists making more silicon dots (and, later, germanium dots) that emitted light in lots of bright, pretty colors, especially the highly desirable green and blue ranges. The bigger the dot, the redder the light; the emitted light becomes shorter and shorter in wavelength -- and higher in energy -- as the dots shrink in size. This is called "tunability," because you can pretty much tailor the dots to emit whatever frequency of visible light you happen to need for a given application, simply by altering the size of the dots.
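To make that tunability concrete, here's a rough sketch of my own using the Brus equation, which treats the dot as a quantum particle-in-a-box and adds a confinement term to the bulk band gap. The CdSe constants are approximate literature values, and I'm ignoring the smaller Coulomb correction, so take the exact wavelengths with a grain of salt:

```python
# Quantum dot "tunability," sketched with the Brus equation
# (confinement term only; the Coulomb correction is neglected).
H = 6.626e-34                 # Planck's constant, J s
C = 2.998e8                   # speed of light, m/s
M_E = 9.109e-31               # electron rest mass, kg
EV = 1.602e-19                # joules per electron-volt

E_GAP_BULK = 1.74             # bulk CdSe band gap, eV (approximate)
M_ELECTRON = 0.13 * M_E       # effective electron mass in CdSe
M_HOLE = 0.45 * M_E           # effective hole mass in CdSe

def emission_nm(radius_nm):
    """Estimated emission wavelength for a CdSe dot of a given radius."""
    r = radius_nm * 1e-9
    confinement = (H**2 / (8 * r**2)) * (1 / M_ELECTRON + 1 / M_HOLE)
    energy = E_GAP_BULK * EV + confinement   # bulk gap + confinement, J
    return H * C / energy * 1e9              # lambda = hc/E, in nm

for radius in (2.0, 2.5, 3.0, 4.0, 5.0):
    print(f"radius {radius:3.1f} nm -> emits near {emission_nm(radius):3.0f} nm")
```

Sure enough: the 2-nanometer dots come out blue (around 460 nm) and the 5-nanometer dots come out red (around 660 nm), with green and yellow in between. Bigger dot, redder light.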

The most obvious application is using quantum dots as an alternative to the organic dyes used to tag reactive agents in fluorescence-based biosensors. You know, the dyes start to glow when, say, a harmful toxin is present. But the number of colors available using organic dyes is limited, and they tend to degrade rapidly. Quantum dots offer a broader spectrum of colors and show very little degradation over time. Having all those colors also means you can make light-emitting diodes (LEDs) from quantum dots, precisely tuned in the blue or green range. You can also build quantum dot LEDs that emit white light for laptop computers or interior lighting in cars. As for electronics, the possibilities are endless: all-optical switches and logic gates, for instance, with a millionfold increase in speed and lower power requirements, or, further in the future, quantum dots could be used to make teensy transistors for nanoelectronics.

This latest breakthrough -- described in the online edition of Nano Letters, in a paper by OSU's Jessica Winter and Gang Ruan -- involves stuffing tiny plastic nanoparticles with even tinier quantum dots for use in biomedical tagging applications. It's easier to see biological molecules under a microscope if they fluoresce, and quantum dots glow more brightly than other fluorescent molecules used for this purpose.

They also "twinkle", i.e., blink on and off, an effect that is less noticeable if there are many quantum dots congregated together. There are pros and cons to this behavior. Con: it "breaks up the trajectory of a moving particle or tagged molecule" that one is trying to track under the microscope. Pro: when the blinking stops, scientists know they've reached a critical threshold of aggregated quantum dots. What Winter and Ruan have done to address this is to turn that "con" into another "pro" by stuffing quantum dots of different colors into the same micelle (a polymer (plastic) based spherical container commonly used in lan experiments). Their tests showed that doing show caused the micells to glow steadily. To wit:

"Those stuffed with only red quantum dots glowed red, and those stuffed with green glowed green. But those he stuffed with red and green dots alternated from red to green to yellow. The color change happens when one or another dot blinks inside the micelle. When a red dot blinks off and the green blinks on, the micelle glows green. When the green blinks off and the red blinks on, the micelle glows red. If both are lit up, the micelle glows yellow. The yellow color is due to our eyes' perception of light. The process is the same as when a red pixel and green pixel appear close together on a television or computer screen: our eyes see yellow."

The continuous glowing makes it easier to track tagged molecules with no breaks, and the researchers can also use the color changes to determine when said tagged molecules congregate. The new nanoparticles would be great for microfluidic devices, and could one day be combined with magnetic particles to enhance medical imaging for, say, cancer detection. So it's nice to see quantum dots getting a little love in the public sphere again.
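Before we move on: I couldn't resist cobbling together a toy simulation of the two-color trick (mine alone, not from the Nano Letters paper). The 70% "on" probability is a number I invented for illustration -- real blinking statistics are considerably messier -- but it shows why two independent blinkers are hardly ever dark at the same instant:

```python
import random

P_ON = 0.7         # assumed chance a single dot is "on" at any instant
STEPS = 100_000    # number of simulated snapshots

dark = red = green = yellow = 0
for _ in range(STEPS):
    red_on = random.random() < P_ON
    green_on = random.random() < P_ON
    if red_on and green_on:
        yellow += 1        # both lit: the eye blends red + green to yellow
    elif red_on:
        red += 1
    elif green_on:
        green += 1
    else:
        dark += 1          # both dots happened to blink off at once

print(f"dark:   {dark / STEPS:.1%}  (a lone dot is dark {1 - P_ON:.0%} of the time)")
print(f"red:    {red / STEPS:.1%}")
print(f"green:  {green / STEPS:.1%}")
print(f"yellow: {yellow / STEPS:.1%}")
```

With two dots, the dark fraction drops from 30 percent to about 9 percent, and it shrinks geometrically as you stuff in more dots -- hence the steady, color-shifting glow.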

Quick: what's the difference between an 'amu' (atomic mass unit) and a 'Da' (Dalton)? Answer: Nothing. They both represent one-twelfth of the rest mass of an unbound carbon-12 atom in its nuclear and electronic ground state, a.k.a. 1.66 × 10⁻²⁷ kg. This is very slightly less than the mass of a proton or a neutron (approximately 1.67 × 10⁻²⁷ kg). When first invented, the Dalton was intended to be a fundamental unit such that one hydrogen atom had a mass of one Dalton. Helium would be two Daltons, lithium would be three Daltons, etc. Of course, we then realized that atoms contain different numbers of protons, neutrons and electrons -- each with its own mass -- and that nuclear binding energy shaves a bit of mass off every nucleus, which meant that there was no simple universal mass. It would be so much easier to memorize if everything on the periodic table were a simple multiple of a fundamental quantity.

Happily, the universe is not that simple. Protons, neutrons and electrons make things just a little more complex. So regardless of whether you prefer the amu or the Dalton, neither is actually fundamental.
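Here's a quick back-of-the-envelope check of that (my own arithmetic, with rounded constants): add up the masses of carbon-12's parts and compare with 12 Daltons. The difference is the nuclear binding energy, via E = mc²:

```python
# Rounded masses, in kilograms
M_PROTON = 1.67262e-27
M_NEUTRON = 1.67493e-27
M_ELECTRON = 9.10938e-31
DALTON = 1.66054e-27     # defined as 1/12 the mass of a carbon-12 atom

parts = 6 * (M_PROTON + M_NEUTRON + M_ELECTRON)  # carbon-12's ingredients
whole = 12 * DALTON                              # what carbon-12 weighs

defect_kg = parts - whole
binding_mev = defect_kg * (2.998e8) ** 2 / 1.602e-13  # E = mc^2, in MeV

print(f"sum of the parts: {parts:.5e} kg")
print(f"actual carbon-12: {whole:.5e} kg")
print(f"missing mass:     {defect_kg:.2e} kg  (~{binding_mev:.0f} MeV of binding energy)")
```

The parts outweigh the whole by roughly 92 MeV worth of mass -- the energy gluing the nucleus together -- which is exactly why no atom's mass is a tidy multiple of anything.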

I had to look up the difference after attending a seminar last Friday by Kris Noel Dahl from our Neighbors to the North, CMU. Her topic was the interaction of single-walled carbon nanotubes (SWCNTs) with cells. The extremely high strength of carbon nanotubes makes them ideal for applications such as high-performance racing bike frames, tennis racquets, and space elevators -- to name just a few.

Nanomaterials surprised the materials and biological sciences communities in multiple ways. Yes, nanoscale materials have amazing properties that the exact same materials in bulk can only dream about; however, they also have different types and degrees of toxicity. A material that is harmless in a centimeter-sized chunk can become a killer when shrunk to the nanoscale.

Carbon nanomaterials, especially, have engendered a lot of concern, with early high-profile reports of buckyballs being toxic to fish brains, for example. There was a lot of backstepping when people realized that as-synthesized carbon nanomaterials contain a wide range of components, including graphene, graphite, metal impurities from the catalysts used in some growth methods, and even contamination from the residual solvents used to disperse the nanotubes in a fluid. Even though we now have better methods for purifying carbon nanotubes and removing impurities, there remains a wide variety of opinions on the toxicity of carbon nanotubes. Most of the research has moved from the 'is it or isn't it toxic' question to 'what specifically do carbon nanotubes change in a cell?' I learned one consequence on Friday, and it involves an interesting molecule called actin.

Actin is ubiquitous: if you're looking for a molecule that is fundamental to life, this is one to consider. It's a 42-kilodalton (meaning big) globular protein that varies in structure by less than 20% across species from algae to people. Actin is found in all eukaryotic cells, which are the types of cells that have a nucleus. Eukaryotic literally means 'good nut' or 'good kernel', so the defining feature is the presence of the nucleus.

The globular protein (called G-actin) is a monomer, which means that it can assemble with similar monomers to form long-chain polymers. Thin-filament actin is mostly found in muscle cells, forming a scaffold on which myosin motors move -- the mechanism by which muscles contract.

Microfilament actin (also called f-actin) is a major component of cellular cytoskeletons. Two long-chain polymers twist together, like two-ply yarn, to form f-actin (shown to the left). The result is a helix about 7 nm in diameter, with a repeat distance for the twist of about 37 nanometers.

Confession time. My model of the cell is way dated. The model I still had in my head was from the last biology course I ever took: the required 9th grade general biology. We had filled a plastic bag (the cell membrane) halfway with jello (the cytoplasm), let it set awhile, dropped in a maraschino cherry (to represent the nucleus), and then filled it up with more jello.

I knew cells were slightly more complicated than that, but I didn't appreciate how much. On an educational note, jello cell models have also increased in complexity. The picture at right is from a homeschooler's blog. Sugar-coated gummy worms represent the rough endoplasmic reticulum, while smooth gummy worms represent the smooth endoplasmic reticulum (which folks in the know call the 'ER'). Gumdrop centrosomes, Sixlet lysosomes, raisin mitochondria, Gobstopper vacuoles and sprinkle ribosomes complete the cell. Oops -- I almost left out the fruit roll-ups folded accordion-style to represent the Golgi bodies.

I'm getting a sugar buzz just describing this rather colorful model, which looks way too much like an eyeball for me to even think about eating it. Despite its color and ability to keep kids busy, this model -- like almost all models -- has a flaw: you have to make your cell in a mold. Mother Nature doesn't need a mold. And cytoplasm isn't really quite as structurally sound as gelatin, but Mom Nature has a secret ingredient: actin. Actin provides a cell's skeleton. Actin is why red blood cells are flat, and even why cells move.

The micrograph of the rat kidney cells below shows the actin cytoskeleton in green and the nuclei in blue. The image was taken by Christopher Turner's group at Upstate Medical University of New York using fluorescence microscopy. The actin filaments adhere to the membrane and provide structural support, but also provide the hard-wiring for cell functionality.

The cellular cytoskeleton is not permanent like our skeletons become: actin can polymerize and depolymerize, changing from long strands to shorter strands or even back to the original globular form. This joining and dissolving can even be used by the cell to move like a snail. The actin cytoskeleton provides mobility and preserves shape. If you change the actin cytoskeleton, you change not only the shape and structure of the cell, you can change the cell's function as well.

Since actin defines a cell's shape, you might infer that actin plays a very important role in cell division - and you'd be correct. The actin forms smaller fibers and distributes itself around the cytoplasm prior to and during cytokinesis (dividing). In the picture at left of dividing green urchin zygote cells (from the University of Washington Center for Cell Dynamics), the actin is in blue and the gold threads are microtubules. Cell division is the basis of life, of course, since it is how we (and most everything else on Earth) reproduce.

Changing the actin structure thus challenges the cell's ability to maintain its shape, to divide, and even to function, which returns us to the subject of the original seminar.

Dahl and her co-workers studied highly purified carbon nanotubes that had been length-selected to be 150 nm long, which is about the length of the f-actin in the cells they were studying. A cell, by contrast, is tens of microns in diameter. Normally, f-actin in HeLa cells concentrates in the cell's base. Dahl's group found that introducing carbon nanotubes changes the way actin organizes. Outside the cells, they found that carbon nanotubes make actin fibers bunch up into bundles like twigs tied up in a bunch. When they looked at the effect of the carbon nanotubes inside cells, the actin again formed clumps, but there was also more actin and the clumps weren't located only in the base of the cell - the clumps were distributed throughout the interior of the cell. The carbon nanotubes also impacted the ability of the cells to divide, producing defects like cells with multiple nuclei and cells that started the dividing process, but couldn't complete it.

This study reinforces a very important issue regarding toxicity. We tend to think of toxicity as something that kills cells in large numbers. In this case, the carbon nanotubes didn't kill large numbers of cells directly -- but they did hinder the cells from dividing. If we could target carbon nanotubes so that they only entered cancer cells, for example, we would have a technique to slow or stop the growth of cancer. Even slowing cancer cell growth would give us more time to treat it. By the same token, carbon nanotubes exposed to a dividing embryo would be bad news.

The more I learn, the more I realize that toxicity is a much more subtle phenomenon than I initially appreciated. It's vitally important for us to understand those subtleties so that we can determine not whether nanomaterials are dangerous, but the conditions under which nanomaterials -- or any materials -- could be hazardous. The first step to preventing a potential hazard is to understand it.

Indiana Jones might be a swashbuckling, thrill-seeking archaeologist who once loftily claimed, "I'm a scientist. Nothing shocks me." But there's one thing famously guaranteed to freak him out: snakes. Yes, Indiana Jones is an ophidiophobe (ophidiophobia = fear of snakes). This was established in the opening frames of Raiders of the Lost Ark, when he meets his pilot's pet, "Reggie," and used to comic effect later on, when he and his loyal guide finally locate the Ark's secret hiding place, only to look down into the dark and see the floor, well, moving. "Snakes. Why did it have to be snakes?" Indy moans. His friend isn't exactly helpful: "Asps. Very dangerous. You go first."

So just imagine how Indy would react to Chrysopelea paradisi, a tree-dwelling snake commonly found in Southeast and South Asia, with a penchant for flinging itself off its lofty perch in the trees, flattening its body, and gliding to the ground, or another tree -- or YOUR HEAD! That's right: these snakes can fly, and not because they've invaded a passenger jet with the aim of freaking out Samuel L. Jackson. (We are all fed up with those snakes on a mother-f%$#-in' plane, amiright?) They use this ability to get to a new location faster, to hunt for prey (!), and sometimes as a defense mechanism. File under the "Damn, Nature, You Freaky" category. This one will give Indy and his fellow ophidiophobes a few nightmares.

But this genus of flying snake fascinates scientists like Jake Socha, currently a biologist (biophysicist?) at Virginia Tech. He's been studying these creatures for 13 years now, and finds the biomechanics of their unusual ability quite complex. He published his initial findings in Nature back in 2002, when he was at the University of Chicago, outlining the basic aerodynamics. Prior to launch, the snake pushes its ridged scales against the rough surface of a tree trunk to make its way up to the branch of choice. It hangs off the end, and the angle of its inclination plays a role in determining its flight path. Then it contracts its body in an upward thrusting motion to launch itself into the air, and uses its body to create a kind of "pseudo-wing" for maximum gliding distance.

Specifically, the snake sucks in its stomach and can use its ribs to flatten out its body shape to create a "Frisbee" effect: a Frisbee is designed with a cross-sectional "concavity" (i.e., its shape curves inward on the bottom), and this increases air pressure under the Frisbee, providing lift. The flattened body of the flying snake has a similar concavity. While a Frisbee spins to increase air pressure, the animal undulates mid-air to create the same effect, in an S-shaped motion akin to swimming or snapping a whip. (That's a photo of Chrysopelea pelias mid-flight below; you can watch videos of all Socha's flying snakes here).

While it might be visually arresting, it turns out that those undulations aren't as important as one might think when it comes to optimal performance in flight. For a later study, published in 2005, Socha looked closely at such variables as glide angle and horizontal speed, and correlated those with snake size and behavior variables, including mass, body length, and wave amplitude and frequency (the "wave" refers to the undulations of the snake's body while in flight). He found that wave frequency really wasn't an important predictor of flight behavior.

What factors did matter? Well, size matters -- except in this case, smaller is definitely better. Body length, mass and wave amplitude were predictors, and Socha found that smaller snakes can glide much further horizontally than larger ones. The undulation seems to serve the purpose of stabilizing the snake mid-flight.

Most recently, Socha decided to study Chrysopelea paradisi as they leapt off a "branch" attached to a 15-meter-tall tower, with four cameras recording the snakes' movement as they glided. Based on that footage, his team was able to create 3D models of the snakes' body positions mid-flight, coupled with an analysis of the various forces acting on their serpentine bodies and basic gliding dynamics.

What did they learn this time around? Well, for starters, the snakes can travel as far as 24 meters from their launch pad, although they never quite achieve what Socha describes as an "equilibrium gliding state." This is defined as a perfect balance of forces: that generated by the contortions of the snakes' bodies to give them a bit of "lift," and the force of gravity pulling them down. If the balance were perfect, the snakes would glide with constant velocity and at a constant angle.

But they are gliding: they certainly don't just fall to the ground and go plop. "The snake is pushed upward -- even though it is moving downward -- because the upward component of the aerodynamic force is greater than the snake's weight," Socha explained. In fact, if this effect continued indefinitely, at some point the snake would start to rise, but as always, gravity wins in the end. There's only so far the snake can glide before hitting the ground. That, at least, should provide some comfort to Indiana Jones.
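For the curious, here's what that equilibrium state would look like if a snake ever reached it -- a minimal sketch of my own, not Socha's analysis. The mass, area, and lift/drag coefficients are invented illustrative numbers; the point is that at equilibrium the glide angle depends only on the drag-to-lift ratio, while the speed is set by how much "pseudo-wing" has to support how much snake:

```python
import math

RHO = 1.2        # air density, kg/m^3
G = 9.81         # gravitational acceleration, m/s^2
MASS = 0.030     # hypothetical snake mass, kg
AREA = 0.010     # hypothetical flattened-body planform area, m^2
CL, CD = 1.0, 0.6    # assumed lift and drag coefficients (illustrative)

# In equilibrium gliding, the total aerodynamic force exactly balances
# the weight: the glide angle below horizontal satisfies tan(angle) = CD/CL,
# and the airspeed follows from the magnitude of the force balance.
angle = math.degrees(math.atan2(CD, CL))
speed = math.sqrt(2 * MASS * G / (RHO * AREA * math.hypot(CL, CD)))

print(f"equilibrium glide angle: {angle:.1f} degrees below horizontal")
print(f"equilibrium glide speed: {speed:.1f} m/s")
print(f"ground covered from a 15 m perch: ~{15 * CL / CD:.0f} m")
```

With these made-up numbers, the hypothetical snake would cover about 25 meters from a 15-meter perch -- pleasingly close to the 24 meters Socha actually measured.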

Dragon*Con is nigh, and I'll be heading out to Atlanta next Thursday to participate in several panels over that weekend, including one on "The Science of the Whedonverse," wherein I will join fine folks like JPL's Kevin Grazier (tech consultant for Eureka) in analyzing the science behind Joss Whedon's most beloved series: Firefly/Serenity, Dollhouse, and of course, Buffy the Vampire Slayer and Angel. (I'll also be on a few panels on the Skeptic track, and talking about The Calculus Diaries, given that the book's release is -- yikes! -- this Tuesday.)

There's far too much to discuss in one hour (I wrote an entire book on Buffyverse physics, and you could easily do the same for Firefly and the neuroscience of Dollhouse), but I hope we'll touch on some of my favorite topics: the Gentlemen's exploding heads in the Emmy-nominated episode "Hush"; the time-tinkering physicist in "Happy Anniversary" (Angel, Season 2); the thermodynamics of magic in the Buffyverse; and some of the real-world biological counterparts to demons and monsters, like the Queller demon in "Listening to Fear" (Buffy, Season 5). As the episode opens, Willow and Tara are gazing at the constellations when they see a meteor streaking across the sky. It's not an ordinary meteor: this one has a soft chewy demon center, unleashing an overgrown, slimy lizard-like creature onto a community already overrun with demons. The Queller demon vomits a sticky, odoriferous substance onto its victim's face, which then hardens, suffocating said victim. Xander resents being forced to spend his weekend researching a killer snot monster, and an exasperated Giles upbraids him that it's important because "it's a killer snot monster from outer space!" I'm still waiting for the SyFy original movie on that one: Killer Snot Monsters from Outer Space would give Sharktopus a run for its money.

I was reminded of all things slimy and snot-like this past week, upon reading a short post by Brian of Laelaps about a new method for extracting DNA-rich tissue from dolphins. In the past, the standard technique has been dart biopsies: you shoot the creature with a small harpoon-like device, and when you pull it out, there's a bit of tissue attached, ideal for genetic analysis. Jen-Luc Piquant sniffs that if she were a dolphin, she would find this very irritating, if not outright painful. And indeed, dart biopsies can't be used on very young dolphins for fear of injury.

Get a dolphin to blow in a tube, however -- the dolphin equivalent of a breathalyzer -- and you collect a sample of "dolphin blow" (oh, stop sniggering!): air infused with a mix of proteins and liquids that makes up a sort of "lung surfactant." Per Brian, prior work showed that dolphin blow contains traces of reproductive hormones, so why not genetic material? To find out, the University of Queensland researchers held polypropylene tubes -- which look disturbingly akin to a solidified condom in the photo below -- over the dolphins' blowholes and collected enough genetic material to produce DNA profiles that closely matched those obtained by analyzing the dolphins' blood. One more challenge remains: the dolphins used in the study were from the National Aquarium in Baltimore, and are far more likely to cooperate with such procedures than dolphins reared in the wild. But I'm sure if we just explained to them that they can either be pinged with sharp darts and lose a bit of tissue, or breathe into a tube for a few minutes, they'll see reason.

From Brian's description, dolphin blow seems quite similar to what a materials scientist might term a "viscous colloid," a class of materials that includes mucus, a substance with which we all have had firsthand experience. When the Spousal Unit appeared on The Colbert Report this past March, I went with him to NYC, only to be felled by a nasty cold virus during our stay. For three days, my morning ritual included the ceremonial Clearing of the Mucus -- undertaken while the Spousal Unit was off getting his morning coffee, to spare him the horror of witnessing something akin to a scene from The Exorcist. (A good marriage needs a bit of mystery.) Having witnessed what was expelled from my nasal passages, I can readily believe Wikipedia's assertion that "the average human body produces about a liter of mucus per day."

Usually, though, mucus is beneficial, helping ward off infection by trapping nasty particles that enter through the nose (or mouth) before they can get down into the respiratory tract. It's a bit sensitive to temperature: in cold weather, for instance, our mucus can thicken, only to "melt" when we come in from the cold for a nice hot bowl of soup, thereby causing one's nose to run at the table in a most unappetizing fashion. I hate being "that person" at the dinner table. We have all been that person at some point.

Elsewhere in Nature, mucus and other slimy substances have some very desirable properties of great interest to scientists – not to mention celebrities of advancing years. Slug mucus is enjoying a renaissance of sorts as an anti-aging compound in high-end cosmetics. Beyond the pursuit of vanity, it also provides a useful model in the development of new synthetic lubricants, which could one day be used to combat friction in molecular-scale nanomachines. And then there's the lowly hagfish, a pretty darn ugly eel-like creature that excretes copious amounts of slime from pores all along its body when it feels threatened in any way. That slime mixes with saltwater to transform into a sticky goo.

The hagfish puts the human mucus production system to shame: it can churn out a liter of mucus in less than a second, according to hagfish guru Douglas Fudge, a marine biologist at the University of Guelph in Canada. Hence the creature's Latin name, Myxine glutinosa, from the Greek myxa ("mucus") and the Latin gluten ("glue"). The resulting slime bonds to the gills of an attacking fish and blocks respiratory flow: the victim perishes by choking on snot. Should the victim attempt to chew its way through the slime to escape, the stuff will just expand further, and the victim will suffocate that much faster. The hagfish gets out of its own mess by tying itself into a knot, then pushing the knot down the length of its body to scrape off the slime.

And that brings me to another gratuitous Buffyverse reference! The long-suffering Giles turns into a Fyarl demon in "A New Man" (Buffy, Season 4), gaining the ability to shoot a sticky mucus through his nostrils that hardens into a solid and immobilizes an opponent. Excreting large globules of snot is not as showy as, say, shooting searing laser beams from his eyes, but Spike declares the ability dead useful in a fight. Just ask the hagfish. Or Peter Venkman in Ghostbusters.

Mucus is what's known as a "phase-change" material because it moves between liquid and solid states. The change is usually triggered by temperature (hot to cold, or vice versa) or environmental factors (wet to dry, dry to wet). Mucus is made up of protein-and-sugar molecules (mucins), as well as lots of water, which gives the material its slippery texture. As the substance loses moisture, it becomes more rigid, undergoing a sort of phase transition, although scientists who study these strange materials prefer to describe the process in vaguer terms: the substance goes from a "fluid-like" to a "solid-like" state. Once ejected, the substance rapidly cools down and begins losing moisture. As it dries out, it forms a hard shell.

Unlike other forms of mucus, hagfish slime doesn't harden. It stays slimy even in very chilly water, in part because both the hagfish and its victim are immersed in salty seawater, so it never has a chance to dry out. But hagfish slime has a secret ingredient: the usual protein-and-sugar concoction also contains long threadlike fibers. The technical term is "intermediate filaments," and these fibers are finer than spider silk, and just as strong. The fibers form protein strands that expand rapidly once the mucins come into contact with seawater, causing the substance to "blow up" into a sticky gel. The consistency is a bit like half-solidified Jell-O, or watered-down hair gel. The fibers are so stretchy, they can elongate like taffy to three times their length before finally snapping. Fudge designed his own apparatus to stretch the filaments: something akin to a ping pong paddle, except with a filament where the paddle part should be. (Diagnostic electronics are embedded in the handle.)

Intermediate filaments can be found in most animal cells, creating a kind of scaffolding so that the cells are rigid enough to maintain their shape, yet still flexible enough to have a bit of give and take. That's an interesting finding, because until quite recently, most biologists had assumed cell structure was rigidly inflexible. So they were initially skeptical of Fudge's model, until French researchers traced a 3D contour of the fibers using an atomic force microscope, and also found them to be stretchy rather than inflexible.

Fudge is one of the leading experts on hagfish, which might be a dubious distinction if the creature weren't so fascinating... and if its slime weren't so complex. There's still a lot to learn about hagfish slime. For instance, the goo is ejected as a mix of disc-shaped vesicles and wound-up protein fibers (just like balls of yarn); the vesicles burst when they come into contact with sea water, and the fibers unwind. The resulting mixture traps sea water, and that's what causes it to swell. But what keeps those vesicles from bursting prematurely? There has to be a stabilizing compound among the ingredients.

In 2003, Fudge thought he'd found the answer when an analysis revealed very high concentrations of methylamines, notably trimethylamine oxide. That's a compound often found in shark tissue, for instance, to keep salt water from leaching bodily fluids out of the shark through osmosis. But it turned out to be something of a red herring. His team actually "milked" the glands of drugged-up hagfish, releasing the substance into air instead of salt water -- and still there was an explosion of slime. The hagfish is full of surprises. Fudge surmises that the gland might be pressurized -- kind of like how Reddi-Wip doesn't foam up until it's released from the can.

All promising materials have potential applications, and hagfish slime is no exception. Its unique properties could help save human lives by curtailing bleeding in an accident victim during surgery, for example. The mucus would expand upon contact with the blood (which is mostly water and salt), staunching blood flow. That stretchy property is another bonus for potential applications. Fudge compares the fibers to the plastic rings that hold together a six-pack of beer: pull them apart and they start to loosen and deform; in the case of the fibers, they actually rearrange into new molecular formations, eerily similar to spider silk. So those amazing fibers could be used -- or synthesized -- to make ultra-light yet super-strong textiles ("bio-steel"), as well as biomedical devices, tissue-engineering scaffolds and biosensors. And as any hagfish could attest, mucus is a terrific defense mechanism, which is one reason the U.S. military is investigating its properties.

More frivolously, a group of students in British Columbia figured out how to use hagfish slime as an egg substitute in scones; they believe it could also serve as a thickening agent in eggnog. They failed to include the recipe in their report, but an intrepid blogger at the Museum of Awful Food adapted a recipe for just that purpose, which we reproduce here (duly credited) for those in need of some fresh-baked hagfish slime scones for Sunday morning. If you make them, be sure to let us know how they taste; we share the blogger's skepticism that hagfish slime will be an effective substitute, given that egg yolk plays a big role in emulsification and texture....

In a food processor, blend flour, baking powder, sugar, and salt. Cut in the butter using quick pulses until the mixture resembles coarse meal. Add cheese and cut in using quick pulses. In a small bowl, whisk together the cream and hagfish slime. With the food processor running, add cream mixture through feed tube. Process until dough just holds together – don't overmix!

Turn dough out onto a lightly floured work surface. Gather the dough together and divide into quarters. Pat each quarter into a round just short of 1 inch high (it should be about 6-7 inches in diameter). Using a clean, sharp knife, cut each round into six wedges. Transfer half the wedges to ungreased baking sheets lined with parchment paper, spacing them about 2 inches apart.

Bake the first batch of scones until the edges just start to brown and a toothpick comes out clean, about 20 minutes. Transfer them, still on their parchment paper, to a wire rack to cool at least 10 minutes, during which time you can put in the second batch of scones.

Serve warm or at room temperature. The scones will stand for about 8 hours. Do not refrigerate. If you want to reheat them, warm them in a 350F oven for about 5 minutes.

NOTE: Among those who left the SEED Science Blogs fold in the wake of PepsiGate is Eric Michael Johnson, proprietor of the excellent Primate Diaries blog. While he's casting about for a new home, he hit upon a novel idea: a Primate Diaries in Exile blog tour! We at the cocktail party are delighted to serve as one of many stops on the tour; Eric has written a fantastic historical account of Huxley, science and anarchy. Good times! You can follow other stops on this tour through his RSS feed or at the #PDEx hashtag on Twitter. In the meantime, welcome to Cocktail Party Physics, Eric! And if this is your first time visiting, feel free to browse our archives.

How East London defined "Darwin's Bulldog" and brought him into conflict with the world's most dangerous anarchist.

Applicants for Admission to a Casual Ward by Luke Fildes (1874) shows a crowd of East London poor waiting in the snow, trying to gain access to a homeless shelter.

The first thing you noticed was the smell. It was an oppressive, suffocating odor. It assaulted your senses day and night, at work, at rest, preparing a meal, or enjoying children's games. It pervaded every aspect of your life and soiled the very experience of living, and dying. It was the birth of modern civilization. East London in 1841 was a society on the brink of collapse. Charles Dickens used the words "pestiferous and obscene" to describe what he experienced. However, a poor resident of Soho put it much more elegantly, "We live in muck and filth . . . all great and powerfool men, take no notice wasomedever of our complaints." Open sewers, garbage-littered streets, contaminated water, and overflowing cemeteries had transformed the detritus of overpopulation into a veritable miasma, and the result was simply repugnant. [1]

Ignored by politicians and abandoned by those able to escape its slums, East London during the latter half of the 19th century represents one of the most profound failures of urban planning the world has ever seen. Starting in the late 1700s, modern industry and agrarian capitalism had made the open-field farming system of feudal lords and their laboring peasants obsolete. Over six million acres, or a quarter of the country’s cultivated area, were enclosed under parliamentary acts between 1750 and 1850 (and most occurred during the Napoleonic war years from 1793 to 1815). What had previously been communal lands were now off limits. Without a means of subsistence people migrated to the cities en masse, nearly tripling the size of London in a single generation (from 675,000 to 1,945,000). [2]

This was a social reengineering project of massive proportions. In much the same way that privatization of land today (backed by free trade policies) has pushed the landless poor of Latin America into the great northern cities, so the Industrial Revolution sent millions of English tenant farmers flooding to urban centers at the dawn of the 19th century. Most of them ended up in the slums of East London. Steven Johnson cites one report from the time that estimated a densely packed 432 people per acre (even with our modern skyscrapers, Manhattan only houses about 110 per acre). In many slum tenements, large families or groups of laborers would crowd into a single room. Without any resources for public health or sanitation – aspects of social life that had yet to be invented – and with wages depressed by the legions of poor workers, these slum dwellers were forced to survive in any way they could. In this way, London in the mid-19th century parallels many parts of our world today: teeming cities of the impoverished, lacking resources and meaningful employment, left to suffocate in their own filth. [3]
It was into this environment that Thomas Henry Huxley emerged. If the city is an ecosystem, Huxley embodies the phrase “survival of the fittest.” Lanky and high-strung, estranged from his father at an early age, and the youngest of six children, Huxley was primed from birth to view life as a struggle. Born on May 4, 1825 above a butcher’s shop on London’s outskirts, Huxley was the son of a poor schoolteacher and a member of England’s newly emerging middle class (in culture though not in wealth). As such he was determined to separate himself from the ranks of the working poor. In the years to come he would claw his way out of obscurity and establish himself as a celebrated anatomist, President of the Royal Society, and an evolutionary theorist widely hailed as “Darwin’s bulldog.” He would forge a path of his own and create a revolution in the way science was practiced. As his biographer, Adrian Desmond, would later put it:

The young hothead scrambled to the top of his profession; indeed he made a profession of science. With him the ‘scientist’ was born. [4]

An important theme that is found throughout Huxley’s life and work is one that can only be understood from his early experiences in East London: the brutal conditions of the poor. At the age of sixteen Huxley was apprenticed to a "lowlife doctor" named Thomas Chandler whose patients lived exclusively in the East London slums. In later years Huxley would describe in vivid detail the depravity he experienced:

Men, women, and children are forced to crowd into dens wherein decency is abolished and the most ordinary conditions of healthful existence are impossible of attainment; in which the pleasures within reach are reduced to bestiality and drunkenness; in which the pains accumulate at compound interest, in the shape of starvation, disease, stunted development, and moral degradation; in which the prospect of even steady and honest industry is a life of unsuccessful battling with hunger, rounded by a pauper's grave.

In one incident during a house call Huxley encountered a deformed girl nursing her ill sister. There was little he or his mentor could do for her and, out of compassion, Huxley suggested that the sick child needed a better diet than simply "bread and bad tea." In response, according to Huxley:

[The girl] turned upon me with a kind of choking passion. Pulling out of her pocket a few pence and halfpence, and holding them out, "That is all I get for six-and-thirty hours’ work, and you talk about giving her proper food."

Surrounded by such crushing poverty, Huxley anguished over the conditions that seemed to afflict both good people and bad without remorse. “I see no fault committed that I have not committed myself,” he wrote at the time, quoting Goethe. Already moved towards religious skepticism because of his voracious appetite for knowledge and his pursuit of science, Huxley now moved closer to the agnosticism that would define his life. After all, what sort of loving God could allow such horrors to persist? Where was the justice in a divine plan that forced more righteous men than he into a life of squalor? “I confess to my shame,” he wrote, “that few men have drunk deeper of all kinds of sin than I.”
Late at night, cackles emanated from the busy pubs along Paradise Street where Chandler's slum-row surgery was located. Prostitutes offered their wares to the drunken and downtrodden while knife-wielding gangs clashed in the dark alleys, sometimes leaving a fresh corpse for the “bone-pickers” to scavenge a few pennies worth of clothing from.

With so many of his days and nights spent in the ramshackle surgery, just a hundred paces from the festering Thames, Huxley experienced “gloom with every breath” and felt his ambitions stifled.
He had only one hope of advancement: a university degree. However, with only two years of formal education he was greatly outclassed by his social betters. And so, by the thin light of his lantern the young man sat grinding drugs late into the night, and reading. Hume’s History of Great Britain, Müller’s Elements of Physiology, Hutton’s Theory of the Earth. Whenever he could fit in time for personal study, Huxley maintained a punishing schedule: on Tuesdays and Thursdays he studied physiology; on other days of the week he focused on “a chronological abstract of reigns”; evenings were devoted to mathematics; Saturdays were for chemistry and physics, with an hour of German daily. “I must get on faster than this”, he chided himself, “and let me remember this – that it is better to read a little & thoroughly than cram a crude undigested mass into my head.” He studied Latin and Greek and wrote his mother to ask for a copy of Euclid’s Geometry. The university entrance exams required a solid background in the classics and he had a great deal of catching up to do.

Interestingly, there is one title on Huxley’s reading list during this time that doesn’t seem necessary for college admission: Thomas Carlyle’s Chartism. Published a year earlier in 1840, the book was a passionate manifesto of the struggles that the poor experienced, explaining the backdrop of what would become the first major labor struggle of the Industrial Revolution. “To me,” Huxley reflected, “this advocacy of the cause of the poor appealed very strongly.” That August, as Huxley ground drugs and studied anatomy, factory workers took to the streets outside demanding the right to vote, decent wages, and a ten-hour workday. By 1841 the Chartist movement was already several years old but was still a profound mystery and a source of great anxiety to Victorian England. In 1837 six sympathetic members of Parliament and six working men wrote the first draft of The People’s Charter, a document that advocated universal male suffrage, annual elections, and an end to property qualifications for membership in Parliament. The Charter was then taken around the country to eventually be signed by 1.3 million people (nearly twice the number of propertied voters) before being presented to the House of Commons in 1839.

The Great Chartist Meeting on Kennington Common, April 10, 1848, photograph taken by William Kilburn. Crowd estimates range from 50,000 to 100,000 people.

The aristocracy viewed this movement as dangerous, if not outright seditious. In the year prior to Carlyle’s book, Lord John Russell (a liberal Whig in the House of Commons who would later serve as Prime Minister) had a letter sent to The Times of London requesting that reporters cover “any meeting convened by persons calling themselves Chartists” so that their “illegal transaction” could be prosecuted. Carlyle, however, referred to the democracy movement among the poor as "the bitter discontent grown fierce and mad . . . of the Working Classes of England. It is a new name for a thing which has had many names, which will yet have many." [5]
For young Huxley this struggle, and the conditions which gave rise to it, became pivotal in his development and can be seen to have influenced his thinking as well as his scientific theories many years later. “I had had the opportunity of seeing for myself,” he wrote of the time, “something of the way the poor live. Not much, indeed, but still enough to give a terrible foundation of real knowledge to my speculations.” However, while it’s clear that Huxley sympathized with the plight of the poor and found the conditions of East London both shocking and unacceptable, he was already developing a distinctly middle-class sensibility. The destitute of East London were as strange to him as “the savages of Australia,” he would later write. Even so, no Aborigine was “half so savage, so unclean, so irreclaimable as the tenant of a tenement in an East London slum.” He went on to write:

I used to wonder sometimes why these people did not sally forth in mass and get a few hours’ eating and drinking and plunder to their hearts’ content, before the police could stop and hang a few of them. But the poor wretches had not the heart even for that. As a slight, wiry Liverpool detective once said to me when I asked him how it was he managed to deal with such hulking ruffians as we were among, "Lord bless you, sir, drink and disease leave nothing in them."

Morally offended by many of the vices that people turned to in an environment that offered little hope of social betterment, Huxley found inspiration in Carlyle’s missionary solutions. As Carlyle wrote in Chartism:

Light has come into the world, but to this poor peasant it has come in vain. . . Education is not only an eternal duty, but has at length become even a temporary and ephemeral one, which the necessities of the hour will oblige us to look after.

To teach the moral qualities that he viewed as central to his own future success, Huxley would emulate Carlyle in his own policy recommendations for the poor:

[W]hat dweller in the slough of want, dwarfed in body and soul, demoralized, hopeless, can reasonably be expected to possess these qualities?...[I]n a densely populated manufacturing country, struggling for existence with competitors, every ignorant person tends to become a burden upon, and, so far, an infringer of the liberty of, his fellows, and an obstacle to their success. Under such circumstances an education rate is, in fact, a war tax, levied for purposes of defence.

For Huxley then, as it was for Carlyle, the crisis of poverty was one of proper training. The reality of their economic condition may have made the poor wanting in material goods, but it was their poverty of mind that made them truly destitute and unable to rise in the world. It was only through proper education that the poor would be able to pull themselves up by their own bootstraps. The calls from the radicals for economic and political change were ultimately addressing the wrong problem. As Huxley read and dreamed of escape into university life and upper-class respectability, Carlyle's sermon brought a gleam to the young agnostic's eye.

Intellect is like light; the Chaos becomes a World under it, the discernment of order in disorder; it is the discovery of the will of nature, of God’s will.

Huxley had no use for God, but what did nature say on the question of order and disorder? It was something he would spend much of his life contemplating and his final years obsessing over. It would also bring him into direct conflict with one of the most influential political radicals in the world, a fugitive already wanted in three countries who was now intent on bringing anarchy to the UK.
The Scientist and the Anarchist - Part II will be published next week at Skulls in the Stars.

References:

[1] Roy Porter (1995). London: A Social History. Cambridge: Harvard University Press; Liza Picard (2006). Victorian London: The Life of a City, 1840-1870. New York: St. Martin’s Press.

[2] John E. Archer (2000). Social Unrest and Popular Protest in England, 1780-1840, Cambridge: Cambridge University Press.

It's Christmas-time, and guest blogger Danna Staaf is back with a nifty explanation of why Stephenie Meyer's emo-vampires in the Twilight series sparkle. Makes sense to Jen-Luc Piquant!

"Gran didn't know that I was in love with a
vampire--nobody knew that--so how was I supposed to explain the fact that the
brilliant sunbeams were shattering off his skin into a thousand rainbow shards
like he was made of crystal or diamond?"

--Chapter One, New Moon, Stephenie Meyer

Sparkling vampire skin both introduces the novel New Moon and serves as a central plot point, when Edward tries to reveal himself in bright sunlight in a crowded Italian plaza. Leaving aside the question of why Edward decided on this drastic course of action -- which in my opinion has to do with the unhealthy view of high school relationships that comes from taking Romeo and Juliet too seriously -- let's finally tackle the scientific question: Why do vampires sparkle?

"Why" questions can be answered with either proximate
or ultimate causes, even a simple "why" like "Why are you on
that motorcycle?" "Because I picked it up and climbed on" is the
proximate -- also known as the smart-aleck -- cause. "Because I'm mentally
unstable and looking for an adrenaline rush," is the rarely-confessed
ultimate cause.

In that case, Charlie Swan would probably be more interested in his daughter's ultimate causes, but for scientific questions, both answers are well worth exploring. For example: why are peacock tails so spectacularly colored? The proximate cause is a mechanistic explanation: because each feather bears rows of tiny barbs, and each barb bears even tinier barbules, and still tinier structures on the surface of the barbules reflect certain wavelengths of light to produce brilliant iridescence. The ultimate cause is an evolutionary explanation (not uncontested): because peahens prefer to mate with healthy peacocks, and tail ornamentation is one proxy for health, so over many, many generations sexual selection resulted in peacocks with fancier and fancier tails.

This is all relevant to sparkly vampires, I promise! You see, the most obvious proximate explanation for the brilliant sunbeams shattering off his skin into a thousand rainbow shards is, of course, not overwrought writing but iridescence. This shiny phenomenon is found throughout the natural world, not just in peacock tails but in beetle backs, lizard bellies, and (you knew I had to mention a cephalopod) squid skin, to name only a few.

Iridescent colors are structural colors, produced not chemically, but mechanically. (Aha, a justification for putting this blatantly biological post in a physics blog!) Usually, when we think of colors, we think of chemical colors, or pigments -- the melanin in our hair, for example, or the ultramarine blue in paint. The color of a pigment is determined by selective absorption. When a full spectrum of white light hits the pigment, it absorbs some of that spectrum and reflects the rest. The reflected part is what hits our eyes, and we name the pigment accordingly. Chlorophyll, that great workhorse of photosynthesis, absorbs everything except green wavelengths, which is why plants look green.

Structural colors work very differently, based on the fact that visible colors have an actual size, at a nanometer scale -- blue light has a wavelength of about 475 nanometers, red light about 650. At this tiny scale, an object can interfere with white light, separating it into its component colors and scattering them in different directions. All kinds of nanostructures can cause interference effects -- nanogrooves, nanolayers, nanopits -- and different animals use different structures. To return to our friends the peacocks for an example: the barbules on their feather barbs contain lattices of tiny rods, each rod about 700 nanometers long. Different barbules produce different iridescent colors by varying the spacing between the rods and the number of layers of lattices.
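To get a feel for how spacing controls color, here's a minimal sketch (in Python) that treats an iridescent lattice as the simplest possible one-dimensional multilayer, or Bragg, reflector. Real barbule optics are more complicated, and the spacing and refractive-index numbers below are illustrative guesses, not measured peacock values:

```python
# Minimal sketch: treat an iridescent nanostructure as a 1-D multilayer
# (Bragg) reflector. Constructive interference at normal incidence:
# m * wavelength = 2 * n * d, for order m, refractive index n, spacing d.

def bragg_peak_nm(spacing_nm, refractive_index=1.5, order=1):
    """Wavelength (nm) of the first-order reflection peak."""
    return 2 * refractive_index * spacing_nm / order

for spacing in (140, 160, 190):  # hypothetical lattice spacings, in nm
    print(f"spacing {spacing} nm -> reflected peak ~{bragg_peak_nm(spacing):.0f} nm")
# spacing 140 nm -> ~420 nm (violet-blue)
# spacing 160 nm -> ~480 nm (blue-green)
# spacing 190 nm -> ~570 nm (yellow-green)
```

Nudge the spacing wider and the reflected peak slides toward the red end of the spectrum -- which is essentially the dial the peacock's lattice is turning.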

In peacocks, as in most other iridescent animals, these structural features are fixed. Blue barbules are always blue, and red barbules are always red. But some fish and squid can actively change their structural colors by altering the nano-structure of their skin with electrical signals! The maiden episode of Creaturecast does a beautiful job explaining this remarkable phenomenon, and I've rambled away over at Cephalopodiatrist about how intrepid entrepreneurs are already looking for ways to turn biology into technology. However, there has been no indication yet of vampires with a similar ability.

Iridescence seems like a satisfactory explanation for vampire sparkle, but it's necessary to consider alternative theories. For example, a deep-sea biologist friend of mine suggested that vampire skin may contain symbiotic bioluminescent bacteria. (Abundant precedents for such symbioses can be found in the animal kingdom.) He further hypothesized that the bacteria are chemosynthetic, which means that instead of making food from light (that's what green plants are doing with all that chlorophyll) they make their food from chemicals. In this case, probably pheromones and angst molecules.

It would be quite straightforward to distinguish between these two possibilities with a simple laboratory analysis of vampire skin. Light microscopy should be sufficient for detecting the presence of symbionts, and electron microscopy would be helpful for visualizing fine nanoscale structures if it does turn out to be iridescence after all.

At this point, I figured it was time to do a little research to see if anyone else has started experimental work on the subject. Surprisingly, they haven't.

Almost as surprisingly, I found that Stephenie Meyer herself offered a semi-scientific explanation of sparkling in the FAQ on her website. It's under the question "Vampires and pregnancy: when did that idea occur to you? How does that work?" about two-thirds of the way down the page.

According to Stephenie, "the cells that make up [vampire] skin are not pliant like our cells, they are hard and reflective like crystal. A fluid similar to the venom in their mouths works as a lubricant between the cells, which makes movement possible (note: this fluid is very flammable)."

For those who haven't been following along, yup, vampires are venomous. Not poisonous -- an important distinction I'm glad she got right. Cell lubricant, strange as it sounds, is another thing she got right, although it's hardly unique to vampires. In fact, all multicellular animals, including humans, have cell lubricant -- it's called interstitial fluid. Wikipedia tells us that "On average, a person has about 11 liters of interstitial fluid, providing the cells of the body with nutrients and a means of waste removal." To put this in context, the average adult human contains only about 5 liters of blood. Said another way: your body has more than twice as much cell lube as blood!

Back to the point, which is sparkle. According to Stephenie, vampire sparkle is neither iridescence born of nanoscale structure nor the luminescent glow of a compliant symbiont. It's a large-scale property of the skin cells themselves. Maybe this whole post was unnecessarily complicated. Maybe vampires are just made of little diamonds squooshing around in a matrix of venom.

To which I say: ewww, but if that still sounds sexy, try looking at it this way:

Hey, speaking of sex, next time I'll discuss ultimate causes of vampire sparkle and delve into an evolutionary exploration of the vampire/human relationship!

I love my three-year-old MacBook Pro, but it does run through the batteries, no matter how regularly I calibrate the darn things. I'm on the third replacement battery, and it's barely holding a charge anymore, lasting for, oh, 30 minutes at most. I expect it will give up the ghost any day now. The question now becomes, do I replace the battery one more time, or upgrade to a new MacBook Pro? The new version supposedly comes with batteries that last up to 8 hours, which would be pretty darned handy on a long flight.

The explosion of portable computers (laptops, smart phones, etc) has brought the problem of battery power to the forefront of technological concerns for those in the business of selling such devices. Computers keep getting smaller thanks to continued shrinking of chips and other microcomponents, but batteries necessary to operate them remain pretty clunky in comparison, and thus they add considerable weight to any product -- the largest portion of my laptop's weight is due to the battery. It's just the latest chapter in mankind's quest for the perfect power source.

Some historians believe primitive batteries were used in Iraq and Egypt as early as 200 B.C. for electroplating and precious metal gilding. In the 1790s, through numerous observations and experiments, Luigi Galvani, an Italian professor of anatomy, caused muscular contraction in a frog by touching its nerves with electrostatically charged metal. Later, he was able to cause muscular contraction by touching the nerve with different metals without any source of electrostatic charge. He concluded that animal tissue contained an innate vital force, which he termed "animal electricity."

One person didn't think Galvani was right. Count Alessandro Giuseppe Antonio Anastasio Volta was born in Como, Italy, in 1745, and eventually grew up to be a professor at the Royal School, where he studied the chemistry of gases. In 1776 he discovered methane by collecting the gas from marshes, and experimented with igniting the gas in a closed container. And he devised an intriguing method for remotely operating a pistol: he used a Leyden jar to send an electric current all the way from Como to Milan via a wire insulated from the ground by wooden boards. Once the current reached its destination, it set off the pistol. One might think this was just an exercise in futility -- unless Volta had plans to become Dr. Evil and this was his way of killing a captive Austin Powers from a distance -- but in fact the experiment was one of many that laid the groundwork for the invention of the telegraph.

Volta also studied electrical capacitance, although at the time that term wasn't known. He set out to prove that Galvani had drawn the wrong conclusions from his frog experiments. Specifically, he wanted to prove that electricity did not come from the animal tissue but was generated by the contact of different metals in a moist environment. So in 1800 he replaced the frog's legs with alternating layers of brine-soaked paper and two metals, zinc and silver, and also detected the flow of electricity. This was the first voltaic pile, and the first electrochemical cell. It inspired a slew of similar devices, including the so-called "wet cell" or Daniell cell, which became the workhorse for operating telegraphs and doorbells. It's called a wet cell because it relies on liquids rather than dry solids for electrolytes.

To make a Daniell cell, a copper plate is placed on the bottom of a glass jar and then covered with a copper sulfate solution until the jar is half full. Then you hang a zinc plate and add a zinc sulfate solution to the jar. Since copper sulfate is denser than the zinc sulfate, the latter "floats" on top -- much like certain specialty cocktails require "layering" of liqueurs of varying density. The downside to the Daniell cell is that it has to be kept stationary: it's not good for powering portable devices, such as flashlights.
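If you're curious where a cell like that gets its oomph, the voltage falls out of textbook electrochemistry. A quick back-of-the-envelope in Python, using standard reduction potentials for copper and zinc (standard handbook values; nothing here is specific to any particular jar):

```python
# Back-of-the-envelope Daniell cell voltage from standard reduction
# potentials (volts vs. the standard hydrogen electrode, at 25 C).
E_COPPER = +0.34   # Cu2+ + 2e- -> Cu   (reduction at the cathode)
E_ZINC = -0.76     # Zn2+ + 2e- -> Zn   (runs in reverse at the anode)

cell_voltage = E_COPPER - E_ZINC   # E_cell = E_cathode - E_anode
print(f"Ideal Daniell cell voltage: {cell_voltage:.2f} V")  # ~1.10 V
```

Real cells sag below that ideal 1.1 volts as the electrolytes deplete, but it's a decent first approximation.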

Nowadays, we use the common household battery for our portable devices. It's still the same fundamental concept. There is a positive and negative terminal; electrons are produced by chemical reactions inside the battery, and collect on the negative terminal because they are negatively charged. Connect a wire between the two terminals, and the electrons will flow to the positive terminal. This wouldn't be helpful all by itself, but the wire usually also connects a "load" -- a light bulb, a motor, a radio circuit -- and the energy is used to power said device(s).
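To put toy numbers on that circuit, here's a hypothetical sketch; the bulb resistance and battery capacity are made-up but plausible figures, and real capacity varies with discharge rate:

```python
# Hypothetical example: a 1.5 V cell driving a small bulb.
voltage = 1.5       # volts
resistance = 5.0    # ohms -- made-up bulb resistance

current = voltage / resistance   # Ohm's law: I = V / R
power = voltage * current        # P = V * I

capacity_mah = 2000              # roughly AA-sized capacity, in milliamp-hours
runtime_hours = capacity_mah / (current * 1000)  # convert amps to milliamps

print(f"current: {current:.2f} A, power: {power:.2f} W")
print(f"rough runtime: {runtime_hours:.1f} hours")
# current: 0.30 A, power: 0.45 W, runtime ~6.7 hours
```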

So that's the old-school technology, which hasn't changed all that much since Volta's day. There are all kinds of interesting new twists on conventional batteries, mostly based on unusual fuel sources. Back in 2006, a team of MIT researchers led by Angela Belcher created a new battery technology based on a genetically engineered M13 virus -- fortunately harmless to humans. Their battery is flexible and small enough that it could be used to power tiny sensors, useful as detectors for cancer or heart disease, among other implantable devices, not to mention existing lab-on-a-chip technology. For instance, implant a small device under the skin, powered by the virus, and it could light up a small visible LED if cancer proteins were present.

Among the many challenges is devising a cheap means of mass production for these microscopic batteries. Last year Belcher and her colleagues published a refined version of their viral battery, which involves stamping silicon film in such a way that the negatively charged M13 virus and a positively charged cobalt strip will self-assemble, based on their charges and how the stamp is patterned. Now they've got a device that could potentially be woven into fabrics, for instance -- turning almost any surface into an energy-storing device. "We provide the surface and the ions, and the batteries build themselves," Belcher told Discovery News.

If viruses don't float your boat, Sony has a prototype biotech battery (technically a fuel cell) with a sweet tooth: it needs a sugar fix to operate the four-cell array, which is capable of producing up to 50 mW of power -- sufficient to power a desk fan, or speakers, or other small devices. And where does that glucose come from? Why, sugary sweet fruit drinks of course. That includes an unusual Japanese beverage called Pocari Sweat, which seems to bear a close resemblance to Gatorade: its ingredients include water, sugar, citric acid, sodium citrate, sodium chloride, potassium chloride, calcium lactate, magnesium carbonate, and "flavor." (According to Wikipedia, the name derives "from the notion of what it is intended to supply to the drinker: all of the nutrients and electrolytes lost when sweating." In other words, Gatorade.)

Know what else contains glucose? Blood! So it was only a matter of time before scientists started investigating blood as a possible fuel source. A few years ago, other Japanese researchers built a fuel cell that runs on blood, drawing electrons from glucose (blood sugar) to generate about 0.2 milliwatts of electricity. The feat has also been accomplished by scientists at Rensselaer Polytechnic Institute, drawing on the naturally occurring electrolytes in bodily fluids -- not just blood, but tears or even urine. The RPI version is thin as paper -- in fact, it pretty much is paper, being made of 90% cellulose and 10% carbon nanotubes to make it conductive. You can imprint the nanotubes directly onto nanocomposite paper, and like the virus-based battery, the RPI device is flexible, thin enough to fit under the skin, with the potential for cheap mass production. It even has similar potential applications: it could be used to power medical implants such as pacemakers, artificial hearts, or prosthetics.

Most recently, a team of researchers at the University of British Columbia in Vancouver created yet another tiny battery that runs on human blood -- also useful for things like pacemakers. (Hey, it's an important energy application, and one for which we need biologically compatible and renewable battery sources -- hence the strong academic interest.) The core of this particular version of the blood battery relies on a small colony of yeast -- Saccharomyces cerevisiae, commonly used in brewing and baking -- that sets up shop inside, drawing energy from the glucose in blood flowing around it.

Energy is produced as the cells start to break down food, and a chemical called methylene blue (used to stain biological samples) serves as an electron mediator, stealing some of the electrons produced during metabolism and delivering them to the anode, thereby creating a small current. So now we've got an actual living source of power that can regenerate itself -- although it also produces waste products that must be removed before they leach into the bloodstream. So we won't be seeing this device hit the clinical market any time soon.

What might blood-based fuel cells be good for? How about a novel nightlight? An English designer named Mike Thompson was studying for his master's degree in the Netherlands, researching chemical energy, and was intrigued by luminol -- the chemical forensic scientists use to detect traces of blood at crime scenes. Basically, luminol reacts with the iron in red blood cells, producing a bright blue glow. That set Thompson to thinking that perhaps folks would appreciate the energy they consume a bit more if it cost them their own life blood.

And so he built a simple lamp that uses blood to create light as it reacts with luminol. For the strong of stomach, there is a video showing how to use the lamp. You mix in an activating powder, then break the glass, cut your finger on the edge, and let the blood drip into the opening. The result is a soft blue glow -- and it can only be used once. "You have to really decide when to use this lamp because it's only going to work once," Thompson told LiveScience, adding his project was intended "to challenge people's preconceived notions about where our energy comes from," forcing users "to rethink how wasteful they are with energy, and how precious it is."

All this cutting-edge research on glucose and blood-based fuel cells inspired a couple of other British designers -- those Brits are a morbid bunch -- to create a prototype flesh-eating clock. I kid you not. Fortunately, it eats the flesh of insects, but it's only a matter of time before it starts craving human flesh (fresh braaiiins). It's pretty ingenious, actually. James Auger and Jimmy Loizeau stretched some flypaper across a roller system; as flies are caught, the roller dumps them into a vat of bacteria that "digest" the bugs, and the resulting chemical reactions are used to power an LCD clock. There's another version that feeds on mice, and also an insect-powered lamp -- the creatures are lured to their doom by ultraviolet LEDs.

*Le sigh* Whatever happened to conventional alternative energy sources, like wind, solar, or even nuclear power? Heck, even Mother Nature has built her own nuclear reactors; there are about 16 of them, two billion or more years old, buried in the rocks beneath Gabon, according to Australia's Curtin University of Technology. (h/t: Geoff Manaugh of BldgBlog) The phenomenon was first predicted in 1956 by physicist Paul Kazuo Kuroda, who argued that a chain reaction could be set off in natural uranium deposits, thereby generating heat in much the same manner as a nuclear power plant.

In 1972, scientists discovered exactly that type of natural nuclear reactor in the middle of the Oklo uranium mines in Gabon. One even pulsed in a three-hour regular cycle, "running" for 30 minutes, then shutting down for two-and-a-half hours before running another 30 minutes, and so on for over 100,000 years. Who needs batteries with that kind of energy source? Well, (a) the "reactors" no longer operate, and (b) the conditions necessary to produce them turn out to be pretty rare. According to Wikipedia, these are "the only known sites in which natural nuclear reactions existed. Other rich uranium ore bodies would also have had sufficient uranium to support nuclear reactions at that time, but the combination of uranium, water and physical conditions needed to support the chain reaction was unique to the Oklo ore bodies."

And yes, there would have been nuclear waste products, including plutonium. Fortunately, the radioactivity has long since decayed away. Even more interesting, the deposits of (formerly radioactive) waste products haven't shifted much in location -- the plutonium hasn't even moved 10 feet from the spot where it was first formed almost two billion years ago. So naturally the Department of Energy is studying the rocks at Oklo to figure out how Nature managed to contain her nuclear waste. It should help us figure out how to better contain our own nuclear waste products.

Jen-Luc Piquant, meanwhile, is far more intrigued by the notion of hooking up hamsters and other small rodents to power small generators. It's the sort of thing we used to joke about in college (my car was notoriously slow to accelerate, especially uphill), but Jen-Luc found the following YouTube video (via io9) by scientists at Georgia Tech showing a hamster running on its little wheel while connected to a generator via tiny nanowires. Okay, you'd need four nanowires to generate a measly 200 millivolts; that won't solve the global energy crisis. But it can power a tiny nanobot of the future. I say, let the Rodent Green Energy Nano-Revolution begin!

So, I was all set to blog last weekend and then came down with a nasty cold. But I recovered just in time to take the Spousal Unit to dinner and a movie for his birthday. The film: Ricky Gervais' The Invention of Lying. It didn't have the strongest opening weekend, but it's an excellent satirical film that is very funny yet also poses some interesting questions for viewers inclined to ponder the implications a bit more deeply. Jen-Luc Piquant is just relieved to see a film that gets its humor from actual ideas, rather than broad farce in questionable taste, and encourages everyone to support the film's flagging box office by going to see it -- twice. The Invention of Lying has its farcical moments, mind you -- Jen-Luc Piquant is not a prude -- but as anyone familiar with his standup routines (or the British version of The Office) well knows, Gervais is a master of understatement; he doesn't feel the need to try too hard to make us laugh. (Personally, I'd pay to see Gervais read aloud and comment upon just about anything, including Proust; the man is that clever and funny.)

The film's premise is simplicity itself. What would the world be like if nobody could lie -- not even a harmless little white lie? In the world envisioned by Gervais, brutal honesty is the order of the day. Nobody is capable of hiding disdain, dismay, insecurity, or outright hatred. "Movies" are dry, boring documentaries of great moments in history, narrated by "actors" incapable of pretense. Poor Mark Bellison (Gervais) is about to lose his job as a screenwriter; he was assigned one of the least interesting and most depressing historical eras: medieval Europe decimated by the Black Plague. And he's about to be evicted from his apartment, on the eve of a blind date with the girl of his dreams (Jennifer Garner), who enjoys his company but is frankly a bit out of his league, as everyone seems compelled to tell him -- including their waiter, who also announces how unhappy he is with his job, and that he sampled their drinks before serving them. But then the hapless Mark suddenly develops the ability to lie, or in his words, "I said something... that wasn't!" We are treated to an image of neurons in his brain firing in new ways at that pivotal evolutionary moment (see clip below).

In reality, lying is probably as old as humankind, and the elusive ability to tell when someone is lying has consumed a great deal of brainpower over the ages. Or, as David Thoreson Lykken phrases it in his classic book, A Tremor in the Blood: "If man learned to lie not long after he acquired language, we may assume that the first attempts at lie detection soon made their appearance....We are all human lie detectors; we must be to survive in our mendacious society."

So it's not surprising, then, that lie detection has a long and colorful history. It has its roots in instruments of torture, most notably during the European Middle Ages, when it was believed that subjecting the body to extreme physical agony would force the victim to blurt out the truth. (We now know that this is far from the case. An Italian Enlightenment thinker, Cesare Beccaria, wrote in 1764, “By this method, the robust will escape, and the feeble be condemned. These are the inconveniences of this pretended test of truth.”) In 1730, Daniel Defoe suggested it might be possible to measure someone's heart rate to detect deception.

The evolution of the modern lie detector, or polygraph machine, began with the first tests to determine the physical responses of the body during the act of deception. In 1895, the so-called “Father of Modern Criminology,” Cesare Lombroso, used a device called a plethysmograph to monitor changes in the blood flow of a subject during interrogation; two years later, in 1897, B. Sticker developed a method of measuring the galvanic responses of an individual under interrogation: i.e., the amount of sweat they produced, as determined by the electrical conductivity of their skin. Finally, in 1914, Vittorio Benussi began to study the breathing rates of individuals, using pneumatic tubing wrapped around the subject’s chest to measure depth and rate of breath. He found that the “ratio of inspiration and expiration was generally greater before truth telling than that before lying.” So not only could blood pressure, pulse rate, and sweat production be linked to the act of lying, but breathing rates as well.

All these components are combined in the modern polygraph machine, which measures physical responses such as respiration, heart rate, pulse, and electrical skin conductance to determine if a subject is lying. Its invention is largely credited to William Moulton Marston, an American psychologist who in 1915 began to demonstrate a lie detection test that used a blood pressure cuff, or sphygmomanometer, to take measurements of systolic blood pressure during interrogation.

(Interestingly, he also created the comic book character Wonder Woman under the alias Charles Moulton. Wonder Woman was known for her Lasso of Truth, which compelled people to tell the truth when wrapped in its coils -- clearly, Moulton had serious trust issues. Maybe it had something to do with his polyamorous lifestyle: he and his wife lived with a third woman, Olive Byrne, for many years.)

John Larson, an American medical student and an employee of the Berkeley police department, is credited with the first "polygraph" to be used in forensic science: he took the procedure Marston had developed in the Harvard Psychological Laboratory and adapted it to police work beginning in 1921. Like Marston, Larson recognized that asking questions in the correct order, and wording them in specific ways, was critical to lie detection -- the apparatus was just the supporting device. Larson called his invention a "cardio-pneumo-psychogram," because it documented blood pressure, pulse rate, and respiratory rate, all on a drum of paper.

The problem with polygraph tests is that they are notoriously inaccurate and people can train themselves to beat the machine. Most notably, they only measure physiological responses; determining whether those responses indicate a lie is the job of the person administering the test -- which makes the results highly subject to interpretation. Or as the American Civil Liberties Union puts it: "The lie detector does not measure truth-telling; it measures changes in blood pressure, breath rate, and perspiration, but those physiological changes can be triggered by a wide range of emotions."

The most common countermeasures to beat the polygraph include taking sedatives, putting antiperspirant on the fingertips, biting the tongue, lips, or cheek, or placing tacks in one's shoe. In Ocean's 13, for instance, a character beats a polygraph test by stepping on a tack whenever he answers a question truthfully, skewing the machine's readings and making it harder to distinguish lies from truth. But it's not a simple matter; it requires a bit of skill to beat the polygraph. The Mythbusters notoriously attempted to fool a polygraph in one of their episodes, and failed miserably.

Anyone still defending the accuracy of the polygraph is going to have to come up with some seriously convincing evidence to win over the scientific community at this point. In 2003, the National Academy of Sciences released a report called The Polygraph and Lie Detection, concluding that most such research was "unreliable, unscientific and biased," based on the group's analysis of the 57 research studies on which the American Polygraph Association bases its reliability claims.

Okay, those studies didn't completely bomb: the report concluded that polygraphs can detect a lie "at a level slightly greater than chance, yet short of perfection"; however, reported accuracies were habitually over-stated, "almost certainly higher than actual polygraph accuracy of specific-incident testing in the field." Lots of devices work wonderfully in the carefully controlled conditions of the laboratory, but "slightly greater than chance" accuracy doesn't inspire a great deal of confidence in the technique. (A common misconception is that, when properly conducted, a polygraph is accurate 80-99% of the time. The NAS report contradicts that widespread belief.)

As recently as this past April, researchers at the University of Florida published a study in the Journal of Forensic Sciences demonstrating the inaccuracy of standard lie detection techniques. The researchers hooked up 78 test subjects (men and women of all ages) to voice stress analyzers, devices that analyze the vocal frequency of speakers to determine when they are lying. The volunteers were instructed to lie while undergoing small electric shocks to simulate stress. The result? "[T]he 'true positive' (or hit) rates for all examiners averaged near chance (42-56%) for all conditions," the researchers concluded. "Most importantly, the false positive rate was very high, ranging from 40% to 65%." That was true even when representatives from the device manufacturers conducted the tests, as opposed to the scientists.
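Numbers like those are even worse than they sound once you factor in base rates. Here's a toy Bayesian sketch, using hit and false-positive rates in the range the Florida study reported; the one-percent liar base rate is my own made-up assumption, purely for illustration:

```python
# Why a high false-positive rate sinks a screening tool: a toy
# base-rate calculation. Hit and false-positive rates are in the range
# reported by the Florida study; the 1% base rate is an assumption.
population = 1000
liars = 10                      # assume 1% of subjects are actually lying
honest = population - liars

hit_rate = 0.56                 # liars correctly flagged
false_positive_rate = 0.40      # honest subjects wrongly flagged

flagged_liars = liars * hit_rate                 # 5.6 people
flagged_honest = honest * false_positive_rate    # 396 people

ppv = flagged_liars / (flagged_liars + flagged_honest)
print(f"Of everyone the device flags, only {ppv:.1%} are actually lying")
# ~1.4%
```

In a mostly honest room, nearly every person the device fingers is innocent.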

The shortcomings of traditional polygraph techniques were succinctly demonstrated in a scene from Lie To Me's first season, when the fictional Cal Lightman debunks a new handheld lie detector device under demonstration. There actually is such a device in use by the US Department of Defense, called the Preliminary Credibility Assessment Screening System (PCASS); apparently it relies less on the judgment of a polygraph examiner and more on a special algorithm to determine whether a subject is lying, based on the measured physiological responses. But we've just seen that those responses can be misleading, and are not always indicators of deception.

In the episode, the (male) test subject performs quite well when being interrogated by a bland male examiner, but then Lightman sends in a sexy young woman to ask the same questions in a more flirtatious, suggestive manner -- and the subject exhibits a physiological response similar to "lying" when he is really feeling self-conscious about his sexual arousal. Lightman likens the handheld device to a West African tribal custom in which a bird's egg is passed to someone suspected of a crime. If the suspect broke the egg, s/he was found guilty, because obviously they broke it out of nervousness, and if they were nervous -- well, then they must be guilty!

Lightman's character is based on real-life scientist Paul Ekman, who pioneered the use of so-called "microexpressions" to determine whether or not someone is lying; he calls his approach the Facial Action Coding System, and it classifies every human expression, including the unconscious body mechanics of deception. For instance, there are telltale arm and hand movements, eye contact patterns, and verbal contexts, all of which combined can reveal whether or not someone is being truthful (in theory, anyway). A liar won't make eye contact, and may compulsively touch his/her face, throat or mouth, or touch or scratch the nose or behind the ear. S/he will not be likely to touch the chest or heart area with an open hand. If someone says "I love you" while frowning, s/he is likely lying -- the gestures or expressions don't match the verbal statement. The timing and duration of gestures and expressions are also useful determining factors. And we've all met that person whose smile never quite reaches their eyes, making us feel like their cheerfulness is insincere.

It's an admittedly inexact science, something the show makes clear: Lightman and his team are not infallible. For instance, he mistakenly concludes a mother is not sufficiently grieving for her child and hiding some guilty truth because of the lack of the telltale microexpressions accompanying such emotion. Then Lightman realizes that she is hiding something: her age. The mother has received Botox treatments, which numb the tiny facial muscles that give rise to microexpressions in the first place.

Further complicating matters is the fact that people lie for various reasons and motivations -- not just because they are guilty of some crime. In the case of the aforementioned episode, "He Said, She Said," Lightman determines that a female soldier has made a false accusation of rape against a male colleague. But she is lying on behalf of another woman who is too terrified to come forward -- a noble impulse, even if the ends don't justify the means. Mark, the hapless screenwriter in The Invention of Lying, lies for all kinds of reasons: initially out of desperation to avoid being evicted, then to advance his career and romantic prospects -- although he can't bring himself to tell a lie to convince the woman of his dreams to be with him, even though it means she will marry his arch-rival. And, in the most heart-breaking scene, he lies to comfort his dying mother, who is terrified of the Void -- a truly altruistic lie that quickly spirals out of control.

Nonetheless, researchers haven't given up looking for the equivalent of Wonder Woman's magical Lasso of Truth. Scientists have been looking into using functional magnetic resonance imaging (fMRI) to achieve a kind of "brain fingerprinting" as a means of lie detection. In fMRI, when certain parts of the brain are engaged during a specific cognitive activity, those areas light up in the brain scan -- and if a person happens to be "dissembling," it should be possible to tell that they are lying just by looking at the scan.

Brain fingerprinting seems to offer something closer to an objective analysis of whether or not someone is lying. How can a brain scan lie, after all? Well, maybe the scan doesn't lie, but how we interpret those images is prone to human error, particularly since we don't fully understand how this complicated organ called the brain actually works. Chief among the naysayers of this new "mind reading" technology is Melissa Littlefield of the University of Illinois, who argues that the technique is based on fundamentally wrong assumptions, most notably that "truth" is the baseline, the natural state of being, and lying is adding "a story on top of the truth." That might be true in Gervais-Land, but the real world is far more complicated.

An fMRI scan might reveal a lie if the person knew he or she was lying -- if it were a conscious decision. But "some people don't actually know that they're lying, or have told a lie for so long that it becomes their subjective interpretation of reality," Littlefield explains. And just as with the polygraph test, it's possible to cheat and beat the machine: just clench your teeth or move your head slightly. fMRI requires the subject to hold perfectly still to get a usable image.

There are defenders of the technique's potential for lie detection as well. The most recent fMRI work on truthfulness comes to us via Joshua Greene of Harvard University, who published his results recently in the Proceedings of the National Academy of Sciences. He found that honest subjects showed almost no additional brain activity when telling the truth, as might be expected -- you're not inventing a lie, after all. But dishonest subjects did show extra brain activity... even when they were telling the truth. Greene's conclusion: "Being honest is not so much a matter of exercising willpower as it is being disposed to behave honestly in a more effortless kind of way."

Then again, compulsive honesty in all situations has its drawbacks. Would any of us really want to live in the fictional world envisioned by Gervais, where nobody has any kind of filter and hurtful truths are uttered on a daily basis? Lie To Me has its own similar character in Eli Loker (Brendan Hines), who has taken a vow of radical honesty in response to his work -- which includes admitting to his female love interest, Ria, that he's only slightly above average in terms of sexual performance. Ria gets in her own zinger when Lightman asks if she has any specific training in spotting deception: "Well, I've dated a lot of men." Little white lies are an integral part of our social fabric. As the Mitchell & Webb sketch below illustrates, sometimes one can be too much of a stickler for the truth.

I really could have used co-blogger Allyson's advice about skeptic etiquette earlier this year. At a recent party, a bunch of my non-science-geek friends and I got to moaning about all the time we spend in front of our computers. I've always thought one of the ironies of my life was that when I was growing up, my parents yelled at me to not sit so close to the TV and I now earn my living . . . sitting too close to what is, essentially, a TV. Lots of us older folks have eye fatigue from it, especially from the old CRT screens that really were pretty much the same as a TV. But one of my fellow partiers loudly declaimed that we were all being irradiated by our computers too, and that she'd bought this special amulet to turn back those rays and all the other irradiation modern life subjects us to and "other negative energies." The best part was that the woman she bought the amulet from is able to recharge it remotely and is always adding new protections to it. Best money she'd ever spent, she said.

Moments like this just make my brain hurt.

We'd already had the LHC-black hole discussion in which I'd successfully quashed that panic, but my bullshit meter was almost off the dial with this. I had to really bite my tongue not to say something scathingly sarcastic to someone I didn't know, and it was all I could do not to roll my eyes like a Kewpie doll. It was odd (and kind of heartening) how the room went really silent though. And then I was chagrined to realize I didn't really have enough hard data at my fingertips to rebut everything she said, and that was embarrassing. I know about the scientific studies refuting the dangers of overhead high voltage wires, but I only have a vague grasp of how CRTs and LCDs work. So I thought I'd use my ignorance (shameful, I know) to offer up a quick lesson in how to refute this particular species of New Age Quack, diplomacy not included. (Check with Allyson about that.)

Part of the problem is that people get freaked out by the idea of radiation in general, not realizing there are many types of it and that we are, indeed, being irradiated all the time, not just by our electronics. That darn visible light bombards us all the time! The electromagnetic (EM) spectrum is huge, and only the higher end, beyond the visible spectrum, is energetic enough to be truly dangerous. The harmfulness of radiation depends on its energy: whether it's powerful enough to knock electrons off an atom's orbital shell, turning it into an ion -- ionizing radiation.
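You can check this for yourself: a photon's energy is Planck's constant times its frequency, or hc divided by its wavelength. Here's a minimal sketch using representative wavelengths and a rough ~10 eV threshold for ionizing typical atoms (the exact threshold varies by atom; 10 eV is a convenient round number):

```python
# Photon energy E = h * c / wavelength, compared against a rough ~10 eV
# ionization threshold. Wavelengths are representative values.
H = 6.626e-34    # Planck's constant, joule-seconds
C = 2.998e8      # speed of light, meters/second
EV = 1.602e-19   # joules per electron-volt

bands = {
    "FM radio (3 m)": 3.0,
    "microwave oven (12 cm)": 0.12,
    "visible green (550 nm)": 550e-9,
    "UV-B (300 nm)": 300e-9,
    "medical x-ray (0.02 nm)": 0.02e-9,
}

for name, wavelength_m in bands.items():
    energy_ev = H * C / wavelength_m / EV
    verdict = "ionizing" if energy_ev > 10 else "non-ionizing"
    print(f"{name}: {energy_ev:.3g} eV -- {verdict}")
```

Note that even UV comes in just under the ionization threshold here; as described next, it does its damage through chemical changes, while an x-ray photon carries thousands of times the energy needed to strip an electron.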

For example, the wavelength in sunlight that tans you or, in excess, gives you skin cancer is not the visible spectrum, or we'd all have fried eyeballs (which is what happens when you look directly at the sun, because the light is concentrated through the lens of your eye onto a small spot on your retina, like a bug under a magnifying glass). It's UV, or ultraviolet radiation, just next door to the violet end of the visible spectrum (hence the ultra prefix), that makes you sizzle. And that's the least of its effects. Over the long term, UV light can cause cellular and molecular changes that result in cataracts, permanent changes to skin and fibrous tissues, and skin cancer. Because it causes chemical changes in the body, the effects can be exacerbated by "birth control pills, tetracycline, sulphathizole, cyclamates, antidepressants, coal tar distillates found in anti-dandruff shampoos, lime oil, and some cosmetics."

The most common sources of ionizing radiation are nuclear reactions, natural and man-made (fusion and fission), and natural radioisotopes. Don't know an isotope from an antelope? An isotope is a version of an element that has a different than usual number of neutrons (a different mass number) but the same chemical properties. A radioisotope emits enough energy to strip the normally tightly bound electrons from an atom, leaving it a charged particle -- just as x-rays do. Only the shortwave end of the EM spectrum has enough energy to do this -- not the electrons coming out of your CRT TV.

While it's true that every appliance in your house that uses an electric current is surrounded by an extremely low frequency (ELF) electromagnetic field, that radiation is of the non-ionizing sort -- even the microwave. The health hazards of this type of radiation are probably negligible in the low concentrations we're exposed to. There are some exceptions, of course. Stick your head in the microwave and you'll cook yourself, but that's because microwaves excite the water molecules until they give off heat. You won't be radioactive afterwards, just cooked. Likewise infrared radiation, which makes some beautiful photographs! IR filters allow viewers to see differences in ambient temperature, which is why they're used in night scopes, but the radiation itself is not up to much more in everyday concentrations than heating you up a little.

Much like the radiation that's coming off your computer and TV screens. CRTs are not so popular now that flatscreens have gotten cheaper, but the initials stand for Cathode Ray Tube, which is the source of the "ray" part that seems to freak people out. That ray is a stream of electrons produced by the heated cathode filament and steered by electromagnetic coils. Those electrons pass through a fine-mesh mask and strike a screen coated with phosphors, temporarily goosing them into a more excited state, making them glow. So that electron energy beam is mostly absorbed by the phosphors on the screen. Some of it does escape, but it's mostly hanging around the glass screen making it staticky and attracting dust motes. Because the first thing a free electron (not to be confused with a free radical) wants to do is bind to something. It's lonesome, just a little negatively charged particle looking for its love match in the orbital shell of some other atom. If there are enough atoms with odd-numbered electrons around (like copper's 29), they pass the electrons around like a swingers party and you get a current. That's the crackle you hear when the screen turns off, and why you sometimes get zapped when you touch a CRT display. The free electrons find you more attractive than the glass.

So, those pesky free electrons -- how harmful are they? Um, depends on the source energy (see comments below). There is a type of radiation that involves free electrons (beta decay), but these are high-energy ionizing particles that come out of the nuclei of radioactive materials like plutonium, not out of hot filaments in a cathode ray tube (which is essentially a lightbulb). The truth is, you're exposed to more radiation risk in a routine x-ray or in the radon-choked basement of your house than from a cathode ray tube. That's because not all radiation is alike.

Because radiation exists as both particles and waves, its potential for damage depends on how much energy it has: the higher the frequency, the more energy, and the more potential for damage. Some radiation -- radio waves, for instance -- either passes harmlessly through you or is reflected off your skin, like some wavelengths of light; it's low energy, long wavelength. High levels of exposure to even low-frequency/low-energy waves can have an effect on biological systems, but that seems mostly negligible at everyday exposure levels. Some EM radiation, like x-rays, passes through the softer tissues of your muscles and organs but is stopped by the denser bones, leaving those blank, unexposed spaces you see on x-ray film that show your skeleton. When x-rays strike the atoms in your tissue, including those in your DNA molecules, they have enough energy to knock the electrons off, creating charged ions that alter the usual chemical reactions in your body, or even break the DNA up. The cells die or work in weird ways. If this happens in enough cells, mutations or other reactions that we read as radiation sickness or cancer can occur. In small doses, though, the body is self-correcting enough to heal whatever damage is done in the name of seeing that fracture in your tibia or the denser mass of cancer tumors.
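That bone-versus-tissue contrast is easy to put numbers on with the exponential (Beer-Lambert) attenuation law. A minimal sketch -- the attenuation coefficients are rough illustrative values for diagnostic-energy x-rays, not reference data:

```python
# Why bones show up on x-ray film: Beer-Lambert attenuation,
# I = I0 * exp(-mu * thickness). Coefficients below are rough
# illustrative values, not reference data.
import math

MU_TISSUE = 0.2   # per cm, soft tissue (approximate)
MU_BONE = 0.5     # per cm, bone (approximate)

def transmitted_fraction(mu_per_cm, thickness_cm):
    return math.exp(-mu_per_cm * thickness_cm)

soft_path = transmitted_fraction(MU_TISSUE, 10)   # 10 cm of soft tissue
bony_path = (transmitted_fraction(MU_TISSUE, 8)   # 8 cm of tissue...
             * transmitted_fraction(MU_BONE, 2))  # ...plus 2 cm of bone

print(f"tissue-only path: {soft_path:.1%} of x-rays transmitted")
print(f"tissue-plus-bone path: {bony_path:.1%} transmitted")
# the bony path passes roughly half as much -- hence the pale skeleton
```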

LCDs are an entirely different fish in some ways, though the basic sandwich idea is similar. These displays are built with the liquid crystals in the middle, between layers of glass, filters, and the electrodes that supply the current. The material that gives LCDs their acronym is a class of crystals that have lost some of their positional rigidity. Nematic liquid crystal molecules, which are usually rod-shaped, are still more organized than isotropic liquids, but both are semi-liquid. Application of a current causes them to realign or "untwist" themselves into various configurations that block or reveal the light, which also passes through vertically and horizontally aligned polarizing filters. The pixel electrodes in this case are a substrate of indium-tin oxide. If you've got a liquid crystal display (LCD) monitor, you may have noticed that it attracts significantly less dust to its display surface than your old CRT. This tells you that there are fewer free electrons floating around looking for other atoms to bind to (or that your house is a lot cleaner than mine is). That's because instead of spraying electrons gatling-gun style, in LCDs the current is channeled in a grid to different parts of the screen in a closed circuit.

Plasma displays have a similar structure to LCDs, but the medium that's excited by the current is, obviously, plasma: charged gas confined in "cells" sandwiched between several layers of glass and other materials. Since plasma displays ultimately use UV radiation to excite their pixel phosphors, they contain a protective layer of magnesium oxide (a close chemical cousin of the magnesium hydroxide in Milk of Magnesia, and also found in some electrical cables -- versatile stuff!) that blocks the ionizing UV radiation but is transparent to the visible spectrum. So you're still probably getting zapped less by your plasma TV than by an old CRT, though the CRT "leakage" is the standard by which safe radiation is measured for electronics.

The upshot is that none of the radiation emanating from your appliances is the kind emitted by nuclear materials, so you're not going to get cancer or radiation sickness from it. None of it is of sufficient energy to fry so much as a mosquito, let alone cook your brain. In fact, one thing most people overlook is that we are ourselves walking electrical fields. Our brains are electrochemical dynamos, zipping signals along our synapses millions of times a second, the proverbial telephone switchboard. Our own electrical field created by the movement of current between cells is slight, but the fields inside our cells are as powerful as lightning bolts. One day, we may actually become our own local area network. That little shock of electricity between people that we talk about is not only a metaphor.
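That lightning-bolt claim sounds hyperbolic, but the arithmetic backs it up. A neuron's resting potential, roughly 70 millivolts, falls across a cell membrane only about 5 nanometers thick (both are textbook-ish approximations):

```python
# Rough check on "fields inside our cells are as powerful as lightning":
# a ~70 mV resting potential across a ~5 nm membrane.
membrane_voltage = 0.07      # volts (approximate resting potential)
membrane_thickness = 5e-9    # meters (approximate membrane thickness)

field = membrane_voltage / membrane_thickness
print(f"membrane field: {field:.1e} V/m")   # ~1.4e7 V/m

AIR_BREAKDOWN = 3e6  # V/m -- roughly the field needed to spark air
print(f"about {field / AIR_BREAKDOWN:.0f}x air's breakdown field")
```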

Okay, now that you know you're not being bathed in deadly ionizing radiation or high levels of non-ionizing radiation from your computer or TV or microwave, what about that "charging remotely" thing? How's that work? Wifi? Radio waves? Light? Microwaves? Hey, maybe that's frying our brains! Slap a radiation hazard sign on all those psychics' doors! But thanks to Allyson, at least you can be polite about it.

Physics Cocktails

Heavy G

The perfect pick-me-up when gravity gets you down.
2 oz Tequila
2 oz Triple sec
2 oz Rose's sweetened lime juice
7-Up or Sprite
Mix tequila, triple sec and lime juice in a shaker and pour into a margarita glass. (Salted rim and ice are optional.) Top off with 7-Up/Sprite and let the weight of the world lift off your shoulders.

Any mad scientist will tell you that flames make drinking more fun. What good is science if no one gets hurt?
1 oz Midori melon liqueur
1-1/2 oz sour mix
1 splash soda water
151 proof rum
Mix melon liqueur, sour mix and soda water with ice in shaker. Shake and strain into martini glass. Top with rum and ignite. Try to take over the world.