Monday, May 28, 2012

The following post is from a series about the annual Ig Nobel Prizes in science, which
honor “achievements that first make people laugh and then make them think.” They were awarded in September in Cambridge, Mass.

Now we come to the Ig Nobel Physiology Prize. Yawns are notoriously contagious in humans and in other social animals, especially primates. In humans, yawning has been thought to do various things, including cooling the brain, increasing arousal when you’re sleepy and, possibly, helping to synchronize group behavior.

Could yawning be a form of unconscious empathy? If so, contagious yawning would require that the animals involved be capable of empathy, of fellow feeling. Dogs, nonhuman primates, and humans probably are, which means they can’t tell us whether empathy is the driver. We need a species that is social but probably can’t feel for its compatriots.

That’s where tortoises come in. To test whether yawning requires empathy and thus get at the real purpose that yawning might serve, Anna Wilkinson of the University of Lincoln in England and her colleagues took a group of red-footed tortoises that lived together and trained one of them to yawn when exposed to a red square. Then they had tortoises watch the trained tortoise in action and checked them for yawns. The researchers also checked for yawns when no other tortoise was present and when the trained tortoise had no red square and so wasn’t yawning.

What they got was a big, fat negative. The test tortoises took no notice of the trained tortoise’s huge yawns. This suggests that contagious yawning is not just the result of a fixed-action pattern triggered when you see someone else yawn. If it were, the tortoises would have yawned right along with their compatriots. Contagious social yawning may require something more: a social sense, or a sense of empathy arising from complex social interactions. Of course, it could also mean that tortoises are just a really bad choice for studying contagious yawning. But the social explanation seems somewhat better supported.

Saturday, May 26, 2012

In 1977, 22-year-old Steve Jobs introduced the world to one of the first self-contained personal computers, the Apple II. The machine was a bold departure from previous products built to perform specific tasks: turn it on, and there was only a blinking cursor awaiting further instruction. Some owners were inspired to program the machines themselves; the rest could load software written and shared, or sold, by those more skilled or inspired.

Later, when Apple’s early lead in the industry gave way to IBM, Jobs and company fought back with the now classic Super Bowl advertisement promising a break from the alleged Orwellian ubiquity of Big Blue. “Unless Apple does it, no one will be able to innovate except IBM,” said Jobs’s handpicked CEO John Sculley.

In 1984 Jobs delivered the Macintosh. The blinking cursor was gone. Unlike prior PCs, the Mac was useful even without adding software. Turn it on, and the first thing it did, literally, was smile.

Under this friendly exterior, the Mac retained the essence of the Apple II and the IBM PCs: outside developers could write software and share it directly with users.

The rise of the Internet brought a new dimension to this openness. Users could run new code within seconds of encountering it online. This was deeply empowering but also profoundly dangerous. The cacophony of available code began to include viruses and spyware that can ruin a PC—or make the experience of using one so miserable that alternatives seem attractive. Jobs’s third big new product introduction came 30 years after his first. It paid homage to both fashion and fear. The iPhone, unveiled in 2007, did for mobile phones what the Mac did for PCs and the iPod did for MP3 players, setting a new standard for ease of use, elegance and cool. But the iPhone dropped the fundamental feature of openness.

Outsiders could not program it. “We define everything that is on the phone,” Jobs said. “You don’t want your phone to be like a PC. The last thing you want is to have loaded three apps on your phone, and then you go to make a call and it doesn’t work anymore.” Being closed to outsiders made the iPhone reliable and predictable. In that first year those who dared hack the phone to add features or to make it compatible with providers other than AT&T risked having it “bricked”—completely and permanently disabled—on the next automatic update from Apple. It was a far cry from the Apple II’s ethos, and it raised objections.

Jobs answered his critics with the App Store in 2008. Outside coders were welcomed
back, and thousands of apps followed. But new software has to go through Apple, which takes a 30 percent cut, along with 30 percent of new content sales such as magazine subscriptions. Apple reserves the right to kill any app or content it doesn’t like. No more surprises.

As goes the iPhone, so perhaps goes the world. The nerds of today are coding for cool but tethered gizmos, like the iPhone, and Web 2.0 platforms, like Facebook and Google Apps—attractive all, but controlled by their makers in a way even the famously proprietary Bill Gates never achieved with Windows. Thanks to iCloud and other services, the choice of a phone or tablet today may lock a consumer into a branded silo, making it hard for him or her to do what Apple long importuned potential customers to do: switch. Such walled gardens can eliminate what we now take for granted and what Jobs originally represented: a world in which mainstream technology can be influenced, even revolutionized, out of left field and without intermediation.

Today control increasingly rests with the legislators and judges who discipline platform makers. Enterprising law-enforcement officers with a warrant can flick a distant switch and turn a standard mobile phone into a roving mic or eavesdrop on occupants of cars
equipped with travel assistance systems. These opportunities are arising not only in places under the rule of law but also in authoritarian states. Curtailing abuse will require borrowing and adapting some of the tools of the hidebound, consumer-centric culture that many who love the Internet seek to supplant. A free Net may depend on some wisely developed and implemented locks and a community ethos that secures the keys to those locks among groups with shared norms and a sense of public purpose rather than in the hands of one gatekeeper.

In time, the brand names may change; Android may tighten up its control of outside code, and Apple could ease up a little. Yet the core battle between the freedom of openness and the safety of the walled garden will remain. It will be fought through information appliances that are not just products but also services, updated through a network by the constant dictates of their makers. Jobs, it seems, left his mark on both sides of the tug-of-war over Internet openness.

Wednesday, May 23, 2012

At a recent math conference, Rouslan Krechetnikov watched his colleagues gingerly carry cups of coffee. Why, he wondered, did the coffee sometimes spill and sometimes not? A research project was born.

Although the problem of why coffee spills might seem trivial, it actually brings together a variety of fundamental scientific issues. These include fluid mechanics, the stability of fluid surfaces, interactions between fluids and structures, and the complex biology of walking, explains Krechetnikov, a fluid dynamicist at the University of California, Santa Barbara.

In experiments, he and a graduate student monitored high-speed video of the complex motions of coffee-filled cups people carried, investigating the effects of walking speed and variability among those individuals. Using a frame-by-frame analysis, the researchers found that after people reached their desired walking speed, motions of the cup consisted of large, regular oscillations caused by walking, as well as smaller, irregular and more frequent motions caused by fluctuations from stride to stride, and environmental factors such as uneven floors and distractions.

Coffee spilling depends in large part on the natural oscillation frequency of the beverage—that is, the rate at which it prefers to oscillate, much as every pendulum swings at a precise frequency given its length and the gravitational pull it experiences. When the frequency of the large, regular motions that a cuppa joe experiences is comparable to this natural oscillation frequency, a state of resonance develops: the oscillations reinforce one another, much as pushing on a playground swing at the right point makes it go higher and higher, and the chances of coffee sloshing its way over the edge rise. The small, irregular movements a cup sees can also amplify liquid motion and thus spilling. These findings were to be detailed at a November meeting of the American Physical Society in Baltimore.
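The paper’s full analysis is more involved, but the natural oscillation frequency itself can be estimated from standard linear sloshing theory for liquid in an upright cylindrical cup. A minimal sketch, with assumed cup dimensions (this is an illustration, not the authors’ calculation):

```python
import math

def slosh_frequency(radius_m, depth_m, g=9.81):
    """Lowest natural sloshing frequency (Hz) of liquid in an
    upright cylindrical cup, from linear potential-flow theory.
    The wavenumber is k = e11 / R, where e11 ~ 1.841 is the
    first zero of the Bessel-function derivative J1'."""
    k = 1.8412 / radius_m
    omega_squared = g * k * math.tanh(k * depth_m)
    return math.sqrt(omega_squared) / (2 * math.pi)

# An assumed, typical mug: 4 cm inner radius, coffee to 10 cm.
f = slosh_frequency(0.04, 0.10)
print(f"natural sloshing frequency ~ {f:.1f} Hz")
```

The answer comes out at a few hertz—on the same order as the back-and-forth motions of a carried cup, which is why resonance is so easy to excite.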

Once the key relations between coffee motion and human behavior are understood, it might be possible to develop strategies to control spilling, “such as using a flexible container to act as a sloshing absorber,” Krechetnikov says. A series of rings arranged up and down the inner wall of a container might also impede the liquid oscillations.

Friday, May 18, 2012

“You are what you eat.” The old adage has for decades weighed on the minds of consumers who fret over responsible food choices. Yet what if it were literally true? What if material from our food actually made its way into the innermost control centers of our cells, taking charge of fundamental gene expression?

That is in fact what happens, according to a recent study of plant-animal microRNA transfer led by Chen-Yu Zhang of Nanjing University in China. MicroRNAs are short sequences of nucleotides—the building blocks of genetic material. Although microRNAs do not code for proteins, they prevent specific genes from giving rise to the proteins they encode. Blood samples from 21 volunteers were tested for the presence of microRNAs from crop plants, such as rice, wheat, potatoes and cabbage.

The results, published in the journal Cell Research, showed that the subjects’ bloodstream contained approximately 30 different microRNAs from commonly eaten plants. It appears that they can also alter cell function: a specific rice microRNA was shown to bind to and inhibit the activity of receptors controlling the removal of LDL—“bad” cholesterol—from the bloodstream. Like vitamins and minerals, microRNA may represent a previously unrecognized type of functional molecule obtained from food.

The revelation that plant microRNAs play a role in controlling human physiology highlights the fact that our bodies are highly integrated ecosystems. Zhang says the findings may also illuminate our understanding of co-evolution, a process in which genetic changes in one species trigger changes in another. For example, our ability to digest the lactose in milk after infancy arose after we domesticated cattle. Could the plants we cultivated have altered us as well? Zhang’s study is another reminder that nothing in nature exists in isolation.

Tuesday, May 15, 2012

You can find a microwave oven in nearly any American kitchen—indeed, it is the one truly modern cooking tool that is commonly at hand—yet these versatile gadgets are woefully underestimated. Few see any culinary action more sophisticated than reheating leftovers or popping popcorn. That is a shame because a microwave oven, when used properly, can cook certain kinds of food perfectly, every time. You can even use it to calculate a fundamental physical constant of the universe. Try that with a gas burner.

To get the most out of your microwave, it helps to understand that it cooks with light waves, much like a grill does, except that the light waves are almost five inches (12.2 centimeters) from peak to peak—a good bit longer in wavelength than the infrared rays that coals put out. The microwaves are tuned to a frequency (2.45 gigahertz, usually) to which molecules of water and, to a lesser extent, fat resonate.

The water and oil in the exterior inch or so of food soaks up the microwave energy and turns it into heat; the surrounding air, dishes and walls of the oven do not. The rays do not penetrate far, so trying to cook a whole roast in a microwave is a recipe for disaster. But a thin fish is another story. The cooks in our research kitchen found a fantastic way to make tilapia in the microwave. Sprinkle some sliced scallions and ginger, with a splash of rice wine, over a whole fish, cover it tightly with plastic wrap and microwave it for six minutes at a power of 600 watts. (Finish it off with a drizzle of hot peanut oil, soy sauce and sesame oil.)

The cooking at 600 W is what throws many chefs. To heat at a given wattage, check the power rating on the back of the oven (800 W is typical) and then multiply that figure by the power setting (which is given either as a percentage or in numbers from one to 10 representing 10 percent steps). A 1,000-W oven, for example, produces 600 W at a power setting of 60 percent (or “6”). To “fry” parsley brushed with oil, cook it at 600 W for about four minutes. To dry strips of marinated beef into jerky, cook at 400 W for five minutes, flipping the strips once a minute.
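That rated-power arithmetic is easy to get backward at the keypad, so here it is written out as a small sketch (the 10 percent steps mirror the “1” through “10” settings described above):

```python
def power_setting(rated_watts, target_watts):
    """Return the percent setting needed to cook at target_watts
    in an oven rated at rated_watts, snapped to the 10 percent
    steps that the "1" through "10" settings represent."""
    percent = 100 * target_watts / rated_watts
    return round(percent / 10) * 10

# A 1,000-W oven cooking the tilapia recipe at 600 W:
print(power_setting(1000, 600))  # 60, i.e., setting "6"
```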

If you are up for slightly more math, you can perform a kitchen experiment that Albert Einstein would have loved: prove that light really does zip along at almost 300 million meters per second. Cover a cardboard disk from a frozen pizza with slices of Velveeta and microwave it at low power until several melted spots appear. (You don’t want it rotating, so if your oven has a carousel, prop the cardboard above it.) Measure the distance (in meters) between the centers of the spots. That distance is half the wavelength of the light, so if you double it and multiply by 2.45 billion (the frequency in cycles per second), the result is the velocity of the rays bouncing about in your oven.
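The same arithmetic, spelled out in a few lines (assuming the usual 2.45-GHz oven frequency and a hypothetical measured spacing of about 6.1 centimeters between spot centers):

```python
FREQUENCY_HZ = 2.45e9  # typical magnetron frequency

def speed_of_light(spot_spacing_m, frequency_hz=FREQUENCY_HZ):
    """Melted spots sit at the antinodes of the standing wave,
    half a wavelength apart, so c = wavelength * frequency
                                  = (2 * spacing) * frequency."""
    return 2 * spot_spacing_m * frequency_hz

# Assumed measurement: ~6.1 cm between melted spots.
c = speed_of_light(0.061)
print(f"c ~ {c:.3g} m/s")
```

Anything within a few percent of 300 million meters per second counts as a kitchen triumph.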

Friday, May 11, 2012

Primates can now move and sense the textures of objects using only their thoughts

When real brains operate in the real world, it’s a two-way street. Electrical activity in the brain’s motor cortex speeds down the spinal cord to the part of the body to be moved; tactile sensations from the skin simultaneously zip through the spinal cord and into the brain’s somatosensory cortex. The two actions are virtually inseparable: absent the feel of a floor under your feet, it’s awfully difficult to walk properly, and lacking the tactile sensation of a coffee mug, your brain cannot sense how tightly your fingers should grasp it. Until now, attempts to help paralyzed patients move a prosthetic have addressed only half of our interaction with the world. A new study offers hope of expanding that capacity.

Scientists led by Miguel Nicolelis, professor of neurobiology at Duke University Medical Center, have reported the first-ever demonstration in which a primate brain not only moved a “virtual body” (an avatar hand on a computer screen) but also received electric signals encoding the feel of virtual objects the avatar touched—and did so clearly enough to texturally distinguish the objects. If the technology, detailed in the journal Nature, works in people, it would change the lives of paralyzed patients. (Scientific American is part of Nature Publishing Group.) They would not only be able to walk and move their arms and hands, Nicolelis says, but also to feel the texture of objects they hold or touch and to sense the terrain they tread on.

Other research groups are working on similar advances. At the University of Pittsburgh, neuroscientists led by Andrew Schwartz have begun recruiting patients paralyzed by spinal cord injury into a similar trial that would allow them to “feel” the environment around them thanks to electrodes in the somatosensory cortex that receive information from a robot arm.

Nicolelis hopes to bring his research to fruition by 2014, when he plans to unveil the first “wearable robot” at the opening game of soccer’s World Cup in his home country of Brazil. Think Iron Man, a full-body, exoskeletonlike prosthetic. Its interface will be controlled by neural implants that capture signals from the motor cortex to move legs, hands, fingers and everything else. And it will be studded with sensors that relay tactile information about the outside world to the somatosensory cortex. Buoyed by the advances so far, Nicolelis predicts that the device will be ready in time. “It’s our moon shot,” he says.

Tuesday, May 8, 2012

The job of saving humanity from extinction currently falls to no one. NASA and other organizations should take it on

Over the past couple of years the U.S. space program has gone through a huge shake-up, leaving the nation’s goals in space unclear. I have a suggestion. NASA, working with other national space agencies and private organizations, should take on the job of ensuring that no destructive asteroid ever hits Earth on our watch. What project is more worthwhile in the long term or awe-inspiring in the short term than protecting humanity from ruin?

At first glance, asteroids may seem like a distant threat. But the hazard is well documented, and the consequences could not be more severe. The history of life on Earth has been shaped by asteroid impacts. By some estimates, a million asteroids wider than 40 meters orbit the sun in our vicinity. An asteroid of that size struck Earth over Siberia in 1908 and laid waste to an area 150 times larger than the Hiroshima atomic bomb did. The odds of a repeat in this century are about 50 percent. On the larger end, asteroids greater than about one kilometer across would have global effects that threaten human civilization.
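As a sanity check on that 50 percent figure, a simple Poisson model of random impacts converts it into an implied annual rate and an average interval between Tunguska-class strikes (an illustration, not a number from the essay):

```python
import math

# If the chance of at least one 40-meter-class impact in a
# century is 50 percent, and impacts arrive as a Poisson
# process, then P(at least one in T years) = 1 - exp(-rate * T).
p_per_century = 0.5
rate_per_year = -math.log(1 - p_per_century) / 100

mean_interval_years = 1 / rate_per_year
print(f"implied rate: {rate_per_year:.4f}/yr "
      f"(one impact roughly every {mean_interval_years:.0f} years)")
```

That works out to roughly one such strike every century and a half—consistent with 1908 not being a fluke.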

The first step in prevention is prediction. We must find, track and predict the future trajectory of those million near-Earth objects. Astronomers have already catalogued the orbits of most of the kilometer-scale objects they think are out there, and none are known that will hit Earth in the next 100 years. Yet the great majority of smaller ones, those big enough to destroy a country or unleash a tsunami that devastates coastal cities, remain untracked. This unfinished business should be tackled next.

Asteroids are warmer than the background sky and therefore stand out in the infrared. Telescopes have blind spots, however: they cannot look in the direction of the sun, which limits the effectiveness of telescopes stationed on or near Earth. The National Research Council recommended in 2009 that NASA place an infrared survey spacecraft in a Venuslike orbit around the sun. As it looked outward, away from the sun, the observatory would spot asteroids that go unseen from Earth. Once completed, such a survey would remain valid for about a century—the timescale on which the measured orbits begin to change because of gravitational interactions with planets—before we would have to do it again. The cost of such a mission would be several hundred million dollars—expensive, to be sure, but a bargain compared with NASA’s current budget, let alone the damage of an asteroid strike.

Should astronomers find an asteroid on a collision course, our task would be to reach out and alter its orbit to prevent that impact. If we find the asteroid early enough (decades ahead of its projected impact), several existing technologies might work: tow it, ram it, nuke it or employ some combination. (My colleagues and I used to advocate pushing on the asteroid with a rocket, but recent results on asteroid properties and orbits have made us reconsider.)

Yet no one is really sure whether these options would actually work. Surely the time to test them is before they are needed for real. NASA and other organizations should build and try out a system to deflect a nonthreatening asteroid in a controllable way. Given that astronomers have not even begun a complete asteroid survey, there is a real risk they will find an incoming asteroid before we have time to do a dry run. So this work must begin now. It would not take large increases to NASA’s budget.

All civilizations that inhabit planetary systems must eventually deal with the asteroid threat, or they will go the way of the dinosaurs. We need to predict in advance when impacts are going to occur and, if necessary, shift the orbits of threatening asteroids. In effect, we must change the evolution of the solar system.

Saturday, May 5, 2012

A watershed is an area of land surrounding a riparian habitat that supplies all of the habitat’s water. Environmental damage to a watershed in the form of pollution or erosion directly affects its riparian waters. Conversely, a healthy environment and watershed give rise to healthy riparian habitat. For instance, undisturbed watersheds containing trees and plant life have riparian areas with clean, clear water. Vegetation, ground cover, and extensive root systems in these places hold soil in place and keep sediment out of runoff; without them, heavy rainstorms rush water into streams and dirty it with soil. A slow leaching of soils and vegetation, by contrast, adds nutrients to the riparian system rather than polluting it.

Clean inflow from healthy watersheds replenishes riparian habitat for a diverse collection of microbial, plant, and animal life. The banks and sediments of streams contain bacteria and fungi that decompose organic matter in the water and soil. Life on the water’s bottom, on rocks and pebbles, is usually composed of microbial communities called biofilms, made up of bacteria and algae, and small plant life called phytoplankton. Invertebrates and dissolved minerals in riparian water feed insect larvae and small fish, and many riparian habitats contain freshwater fish upon which large and small mammals prey. Riparian sites also provide shelter for animals, migration routes, and a shady resting place in hot climates.

Riparian vegetation prefers moist, shady conditions; some species have root systems adapted to a shallow water table that tolerate seasonal flooding. The deep roots of riparian trees prevent erosion and the undercutting of banks, in which flowing water wears the bank away from the bottom up. Native plants along waterways provide shelter for insects, amphibians, reptiles, mammals, and birds. Vegetation that overhangs flowing waters also helps keep the waters cool for fish such as trout and salmon, which are discussed in the sidebar “Salmon.”

Tuesday, May 1, 2012

Desalination (also desalinization) converts salt-containing waters such as seawater into freshwater. It has the potential to be particularly valuable in places suffering drought, in countries under severe water stress, or in areas of expanding desertification.

Two common methods for removing salts from water are distillation and reverse osmosis. Distillation is a straightforward process in which freshwater evaporates out of heated salt water. Reverse osmosis (RO) requires more expensive filtration equipment than distillation. In RO, pressure forces seawater through a filter, called a membrane, containing pores of very small diameter. The pores let water pass through but remove about half of the dissolved salts. The concentrated salt water can be returned to the ocean and the freshwater used for irrigation. More advanced RO systems contain membrane pores in the range of 1 micrometer (μm) that make the treated water safe to drink. Microfiltration uses smaller pores of 0.05 to 0.5 μm diameter, and ultrafiltration uses pores of 0.001 to 0.01 μm diameter. Both of these filtration techniques ensure that water is safe to drink because they remove even very tiny contaminants such as viruses. RO plants usually employ a pre-RO filtration step called coarse screening, which catches large insoluble materials on a screen to make the membrane filtration more efficient.

At least 7,500 desalination plants operate worldwide with about 60 percent of them in the
Middle East. Saudi Arabia, Kuwait, and Israel depend on desalination for a major portion of their clean freshwater. Saudi Arabia owns the world’s largest plant, which produces 130 million gallons (492 million l) of freshwater daily. North Africa, the Caribbean, and countries in the Mediterranean region have also explored desalination; Mexico and the United States use it on a small scale.

Though desalination technology can supply water to thirsty areas of the world, it currently meets less than 1 percent of the world’s water needs. Three disadvantages contribute to desalination’s slow acceptance. First, the treatment plants, especially RO, are expensive to build. RO requires costly equipment, and both distillation and RO consume large amounts of energy, so desalination’s costs create too great a burden for drought-stricken developing countries. Even in developed nations, desalination costs more than other water treatment methods. Second, the desalination process creates a large quantity of salt, which must be cleaned from equipment on a regular maintenance schedule. Third, the excess salt and high-salt wastewater must be returned to the environment. Dumping high-salt wastes into the ocean harms aquatic ecosystems in the area; dumping them on land has the potential to contaminate surface waters and groundwaters.

In order for desalination to lessen world water shortages, engineers will need to design more efficient, inexpensive filters. The Canadian author and water treatment expert Maude Barlow said in 2008, “Even with current plans to triple global production, including nuclear-powered desalination plants, this technology cannot meet the demand for freshwater in the world.” Desalination adds to the world’s water supply, but it does not appear that it will be part of sustainable water use in the near future.

About Me

In its broadest sense, science (from the Latin scientia, meaning "knowledge") refers to any systematic knowledge or practice. In its more usual restricted sense, science refers to a system of acquiring knowledge based on scientific method, as well as to the organized body of knowledge gained through such research.

Fields of science are commonly classified along two major lines: natural sciences, which study natural phenomena (including biological life), and social sciences, which study human behavior and societies. These groupings are empirical sciences, which means the knowledge must be based on observable phenomena and capable of being tested for validity by other researchers working under the same conditions.