One hundred and fifty-one years after the publication of On the Origin of Species, digital creatures have evolved to communicate like fireflies in a computer program that blurs the boundaries of life.

Recorded in line-by-line detail, their development in a software platform called Avida may provide insight into biological behavior and inspiration for the design of distributed computer networks.

“Evolutionary programs have been around for a while, but we haven’t seen them applied to distributed computing,” said computer scientist Philip McKinley of Michigan State University. Synchronized communication can be “seen in the natural world. But in Avida, we can go back to how and why it evolved. We can see the key points that allowed this relatively complex behavior to emerge.”

The new synchronization findings, made by McKinley and fellow MSU computer scientist David Knoester, were published November 18 in Artificial Life.

Inside the program, developed in the early 1990s at the California Institute of Technology and refined at MSU’s Digital Evolution Laboratory, digital organisms called Avidians take the form of self-replicating code. Their genomes are written in assembly language and stored in separate regions of memory, executed again and again at electronic speeds. Programmers set the parameters of mutation and natural selection, and evolutionary principles manifest themselves in silico.

“We like to say ‘it’s not a simulation of evolution, it’s evolution.’ The difference is that these are computer programs,” McKinley said.

In a previous and well-known study, researchers supported a key tenet of evolutionary theory by demonstrating how easily complexity could emerge in Avidians through incremental changes in simple, existing functions.

McKinley and Knoester specialize in organismal interactions: How complexity emerges not only in individuals, but also in groups.

Their earlier work examined the evolution of collective perception, cooperation and decision making. In the new study, however, they emphasized communication and selected for groups of Avidians that best synchronized their flashing with others.

Fireflies, which coordinate their blinking across distances spanning miles, are the best-known synchronized communicators of the biological world. How they do it isn’t fully understood, but Knoester said “it was literally a three- or four-line change” in Avida.

Crucial to Avidian synchronization was the handling of the computational version of “junk DNA,” genetic code with no apparent purpose. In biology, junk DNA is now appreciated as having crucial regulatory functions. In the Avidians, individuals evolved to change their flash timing by adjusting the speed at which “junk” instructions were executed.

McKinley and Knoester don’t think that fireflies necessarily synchronize the same way, as Avida provided a computational and likely different route to the same outcome. More importantly, it gave the researchers algorithms they would not have otherwise imagined.
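The paper doesn’t spell out the evolved algorithms, but the textbook model of firefly-style synchronization, a population of coupled phase oscillators, gives a feel for how small local timing adjustments produce global synchrony. The sketch below is illustrative only: it is the standard Kuramoto model, not Avida’s evolved mechanism, and all names are mine.

```python
import math
import random

def kuramoto_sync(n=10, coupling=1.0, dt=0.05, steps=2000, seed=42):
    """Evolve n identical phase oscillators ("flashers"), each nudged
    toward its neighbors' phases via the standard Kuramoto coupling."""
    rng = random.Random(seed)
    phases = [rng.uniform(0, 2 * math.pi) for _ in range(n)]
    for _ in range(steps):
        phases = [
            th + dt * (1.0 + coupling / n * sum(math.sin(tj - th) for tj in phases))
            for th in phases
        ]
    return phases

def order_parameter(phases):
    """r = 1 means everyone flashes in lockstep; r near 0 means incoherence."""
    n = len(phases)
    re = sum(math.cos(t) for t in phases) / n
    im = sum(math.sin(t) for t in phases) / n
    return math.hypot(re, im)
```

Starting from random phases, the order parameter climbs toward 1 as the oscillators pull each other into step, the kind of collective behavior the Avidians rediscovered by mutation and selection.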

The algorithms could inspire functional code beyond Avida’s confines.

“Avidians build network topologies. What sort of topologies do they come up with that are robust to damage, if the routing nodes fail?” Knoester said. “We’re also collaborating with a professor in the electrical engineering department who works on robotic fish. We’re not really interested in schooling; we want robots to track oil slicks, to monitor water quality. To do those things, you need to stay connected.”

As for the upper limit on Avidian complexity, “I’m not sure we know yet,” Knoester said.

Video: Organisms in Avida, a software platform for artificial life, running their genomic instructions. Eventually they evolve to flash in synchrony, like fireflies. Credit: Philip McKinley and David Knoester.

In F. Scott Fitzgerald’s short story, “The Curious Case of Benjamin Button,” an old man gets younger with each passing day, a fantastic concept recently brought to life on film by Brad Pitt. In a lab in Boston, a research team has used genetic engineering to accomplish something similarly curious, turning frail-looking mice into younger versions of themselves by stimulating the regeneration of certain tissues. The study helps explain why certain organs and tissues break down with age and, researchers say, offers hope that one day such age-related deterioration can be thwarted and even reversed.

As we age, many of our cells stop dividing. Our organs, no longer able to rejuvenate themselves, slowly fail. Scientists don’t fully understand what triggers this, but many researchers suspect the gradual shrinking of telomeres, the protective DNA caps on the ends of chromosomes. A little bit of telomere is lost each time a cell divides, and telomerase, the enzyme that maintains the caps, isn’t typically active in adult tissues. Another piece of evidence: People with longer telomeres tend to live longer, healthier lives, whereas those with shorter telomeres suffer more from age-related diseases, such as diabetes, Alzheimer’s, and heart disease.

Telomerase Activity

Several years ago, Ronald DePinho, molecular biologist and director of the Belfer Institute of the Dana-Farber Cancer Institute, and colleagues at Harvard Medical School in Boston genetically engineered mice to lack a working copy of the telomerase gene. The animals died at about 6 months—that’s young for mice, which usually live until they are about 3 years old—and seemed to age prematurely. At an early age, their livers and spleens withered, their brains shrank, and they became infertile. By early adulthood, these mice exhibited many of the maladies seen in 80-year-old humans.

DePinho says he wondered what would happen to the aging process in these mice if they suddenly began making telomerase again. “Would [we] slow it, stabilize it, or would we reverse it?” He and his colleagues genetically engineered a new batch of mice with the same infirmity, but this time they added back a telomerase gene that became active only when the mice received a certain drug. The researchers kept the gene off during development and let these mice prematurely age, as the previous ones had. But then at 6 months, the team switched on the telomerase gene.

The burst of telomerase production spurred almost total recovery. The rodents became fertile, their livers and spleens increased in size, and new neurons appeared in their brains, the researchers reported online yesterday in Nature.

The ability to reverse age deterioration in the mutant mice indicates that the cells that divide to replenish tissues don’t simply die when their telomere clock expires, says DePinho. They apparently persist in a dormant state from which they can be revived. “One could imagine applying this approach to humans,” he says, focusing the therapy on specific tissue types such as the liver, where telomerase is thought to play an important role in regeneration after damage by hepatitis, parasitic infection, and alcoholism.

K. Lenhard Rudolph, who studies stem cell aging at the University of Ulm in Germany, says that the results are encouraging for people with diseases that cause accelerated aging, like progeria, because the mice in this study were rescued despite already suffering from the effects of chronic disease. “It is a proof of principle that telomeres are at work here.”

Drug companies and researchers are seeking ways to restore, protect, or extend a person’s telomeres, but the jury is still out on whether such interventions can slow the symptoms of aging, let alone reverse them. Telomere investigator Maria Blasco of the Spanish National Cancer Research Center in Madrid cautions that DePinho’s experiment shouldn’t raise people’s expectations of antiaging therapies just yet. “This study uses genetically modified mice,” she says. “What remains a very important question in the field is can you delay aging in a normal mouse?”

DePinho agrees with those concerns. He also warns that his approach has potential drawbacks, as increasing telomerase activity beyond its natural levels can cause cancer. Still, that may not be an insurmountable problem if telomerase levels can be carefully controlled. DePinho notes that the mice in his study, whose telomerase activity was returned to a natural level, didn’t develop tumors.

The laws of physics say that you can’t get energy for nothing — worse still, you will always get out of a system less energy than you put in. But a nanoscale experiment inspired by a nineteenth-century paradox that seemed to break those laws now shows that you can generate energy from information.

Masaki Sano, a physicist at the University of Tokyo, and his colleagues have demonstrated that a bead can be coaxed up a ‘spiral staircase’ without any energy being directly transferred to the bead to push it upwards. Instead, it is persuaded along its route by a series of judiciously timed decisions to change the height of the ‘steps’ around it, based on information about the bead’s position. In this sense, “information is being converted to energy”, says Sano. The work is published by Nature Physics today1.

The team’s set-up was inspired by a nineteenth-century thought experiment proposed by Scottish physicist James Clerk Maxwell, which — controversially, at the time — suggested that information could be converted into energy. In the thought experiment, a demon guards a door between two rooms, each filled with gas molecules. The demon allows only fast-moving gas particles to pass out of the room on the left and into the room on the right, and only slow-moving particles to pass in the opposite direction.

As a result, the room on the right gradually gets warmer as the average speed of particles in that room increases, and the room on the left gets colder. The demon thus creates a difference in temperature without ever imparting any energy directly to the gas molecules — simply by knowing information about their speeds. This seems to violate the second law of thermodynamics, which states that you cannot make a system more ordered without any energy input.

A paradox put into practice

To create a real-life version of the demon experiment, Sano and his colleagues placed an elongated nanoscale polystyrene bead, which could rotate either clockwise or anticlockwise, into a bath of buffer solution. The team applied a varying voltage around the bead, making it progressively harder for the bead to rotate a full 360 degrees in the anticlockwise direction. This effectively created a “spiral staircase” that was harder to “climb up” in the anticlockwise direction than to “fall down” in the clockwise direction, says Sano.

“This is a beautiful experimental demonstration that information has a thermodynamic content.”

Christopher Jarzynski
University of Maryland

When left alone, the bead was randomly jostled by the surrounding molecules, sometimes being given enough of a push to turn anticlockwise against the voltage — or jump up the stairs — but more often turning clockwise — or going “downstairs”. But then the team introduced their version of Maxwell’s demon.

They watched the motion of the bead, and when it randomly turned anticlockwise they quickly adjusted the voltage — the equivalent of Maxwell’s demon slamming the door shut on a gas molecule — making it tougher for the bead to turn back clockwise. The bead is thus encouraged to keep climbing “upstairs”, without any energy being directly imparted to the bead, says Sano.
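The feedback loop can be caricatured in a few lines of code: a bead kicked randomly up and down a staircase, plus a demon that watches each kick and “slams the door” on downward ones. This is a deliberately crude model of my own, not the team’s analysis; the real experiment only makes backward steps harder rather than forbidding them.

```python
import random

def demon_ratchet(steps=10_000, seed=1):
    """Compare a freely diffusing bead with one watched by a Maxwell demon
    that vetoes every downward kick. Neither bead is ever pushed upward;
    the demon uses only information about which way each kick points."""
    rng = random.Random(seed)
    free = 0      # position with no feedback: an unbiased random walk
    ratchet = 0   # position with feedback: downward kicks are blocked
    for _ in range(steps):
        kick = rng.choice([-1, +1])   # thermal jostling from the buffer
        free += kick
        if kick == +1:
            ratchet += 1              # the demon lets only up-kicks through
    return free, ratchet
```

After 10,000 kicks the free bead has on average wandered nowhere, while the ratcheted bead has climbed roughly 5,000 steps, with all of the directed motion paid for by the measurement-and-feedback machinery rather than by any force on the bead.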

The experiment does not actually violate the second law of thermodynamics, because in the system as a whole, energy must be consumed by the equipment — and the experimenters — to monitor the bead and switch the voltage as needed. But it does show that information can be used as a medium to transfer energy, says Sano. The bead is driven as a mini-rotor, with an information-to-energy conversion efficiency of 28%.

“This is a beautiful experimental demonstration that information has a thermodynamic content,” says Christopher Jarzynski, a statistical chemist at the University of Maryland in College Park. In 1997, Jarzynski formulated an equation to define the amount of energy that could theoretically be converted from a unit of information2; the work by Sano and his team has now confirmed this equation. “This tells us something new about how the laws of thermodynamics work on the microscopic scale,” says Jarzynski.

Vlatko Vedral, a quantum physicist at the University of Oxford, UK, says that it will be interesting to see whether the technique can be used to drive nanomotors and artificial molecular machines. “I would also be excited to see whether something like this is already at work in nature,” he says. “After all, you could say that all living systems are ‘Maxwell’s demons’, trying to defy the tendency for order to turn back into randomness.”

By Eugenie Samuel Reich in Nature (doi:10.1038/news.2010.620), published online 18 November 2010

Errors lead to accusations that committee did not do its homework before making the 2010 award for physics.

Simple structure. Complex debate. PASIEKA / SCIENCE PHOTO LIBRARY

A high-profile graphene researcher has written to the Nobel prize committee for physics, objecting to errors in its explanation of this year’s prize. The award was given to Andre Geim and Konstantin Novoselov of Manchester University, UK, for their work on graphene, a two-dimensional carbon structure that has huge potential in the field of electronics.

Due to the Nobels’ prominence, it is not unheard of for disgruntled researchers to criticize a prize committee’s decision. But this complaint focuses instead on the quality of the scientific background document issued by the committee to explain why it awarded the prize. “The Nobel Prize committee did not do its homework,” says Walt de Heer of Georgia Institute of Technology in Atlanta. He sent his letter to the committee on 17 November.

After enquiries made by Nature in advance of De Heer’s letter, the committee is making at least one correction to its online information, says chairman Ingemar Lundström. “Some of the things we also think are mistakes.”

De Heer holds patents on the use of graphene in electronics, and made some of the earliest measurements of electronic effects in the material. Geim accuses de Heer of trying to boost his own reputation. “If he complains about Stockholm, some people might start thinking that he contributed something important,” says Geim.

“The motive is simply to have the record straight on a document this important.”

Walt de Heer
Georgia Institute of Technology, Atlanta

De Heer does disagree with the award of the physics prize to Geim and Novoselov, calling it “extremely fast”, but he insists that his objections to the prize committee’s document are not motivated by sour grapes. “The motive is simply to have the record straight on a document this important,” he says. “Its standards have to be higher than for any other prize and they’re not.”

According to the background document as downloaded by Nature on 17 November, Geim and Novoselov won the 2010 prize for “decisive contributions” to the development of graphene. It explains that the field was ignited by a 2004 paper1 that the group published in Science.

Series of errors

But de Heer sees a series of errors that he believes overplay the significance of Geim and Novoselov’s work at the expense of other researchers. One example is Figure 3 of the document, which is taken from Geim and Novoselov’s 2004 paper. The caption says the data were obtained using graphene, which the document defines as “a single atomic layer of carbon”. But the result was actually obtained in few-layer graphene (FLG), a multilayer form of carbon also known as graphite. Graphene and graphite have different electronic properties — part of the reason for the tremendous interest in studying single atomic layers.

Geim says the error is not a big deal because he and his colleagues later reported similar data on single-layer graphene in 20052. But De Heer says in his letter to the committee that his own 2004 paper3 included measurements on a single layer of graphene, even though he did not realize it at the time.

“Kim made an important contribution and I would gladly have shared the prize with him.”

Andre Geim
Manchester University

Other mistakes downplay the work of Philip Kim of Columbia University in New York, whom many researchers think should have shared the prize. When the Manchester group published crucial electronic measurements on graphene in Nature in 2005, the paper4 appeared back-to-back with one5 from Kim’s group. “He made an important contribution and I would gladly have shared the prize with him,” says Geim.

Kim says he is honoured by the suggestion. “Personally I wish it but it’s not working that way,” he says. “I respect the decision.”

Figure 4 of the Nobel document shows two panels of data, but its caption refers twice to the left panel, which shows data obtained by Novoselov and Geim, and not at all to the right panel, which shows Kim’s data. In addition, a citation to Kim’s 2005 paper could be read as referring to an inset of data in the figure, leaving the main panel uncited. Kim downplays the errors, calling them editorial in nature.

Complete surprise

De Heer also complains that the main text of the background document exaggerates the importance of Novoselov and Geim’s 2004 paper. It states that the study came as a complete surprise to the physics community and that, before their work, graphene had never been isolated and was not thought to be stable. “It is a complete straw man”, says De Heer.

That claim by the Nobel committee has irked other graphene researchers as well. “That statement is not accurate,” says Paul McEuen at Cornell University in Ithaca, New York. McEuen says that graphene had been made before the 2004 paper, and that several groups were working towards making electrical measurements on it. McEuen says that in his view, the most important contributions were those published in 2005 by Geim and Novoselov and separately by Kim.

Novoselov and Geim argue that the accuracy of the committee’s statement is a matter of opinion. Only “a tiny minority” of researchers thought that graphene would be stable, says Geim. He points out that thousands have since voted with their feet by switching fields to work on graphene. Novoselov and Geim’s 2004 Science paper has received 3,357 citations according to the Web of Knowledge citation index.

Lundström says that the committee is now correcting several points in its document raised by Nature and by De Heer’s letter, but says it is unlikely to change its general statement on the significance of the Manchester group’s 2004 work. He adds that as a “popular” document, the backgrounder does not necessarily reflect all the information that the committee relied on when awarding the prize.

De Heer also speculates that information from nomination material has been used in the backgrounder. “The document reads like a nomination letter,” he says. Per Delsing at Chalmers University of Technology in Göteborg, an adjunct member of the Nobel committee, responds that the committee did extensive research into the field before awarding the prize, but wouldn’t comment on the suggestion that material submitted by a nominator might have been used in the preparation of the document. “I cannot reveal that. Many of these things are secret,” he says.

The committee has apparently already made one correction to the document. In a version downloaded in October 2010 by Rodney Ruoff at the University of Texas at Austin, the names of Ruoff and five other authors were omitted from a reference to a 2009 paper describing the scaling-up of sheets of graphene6. These names are now included. Ruoff says he’d like the Nobel Prize committee to investigate how its document was generated and whether material from nomination letters was incorporated. “This whole experience has left me wondering how the Nobel-prize process works,” he says.

However, Klaus von Klitzing of the Max Planck Institute for Solid State Research in Stuttgart, Germany, winner of the 1985 Nobel prize for physics, says he sees no need to criticize this year’s committee. He points out that a symposium on graphene was held earlier this year in Stockholm, during which committee members heard from leading graphene researchers including de Heer and Kim. “I believe that members of the Nobel Prize Committee had a good overview about the scientific situation,” he says.

I’ve been playing around with the Text CAPTCHA demo page and wondered how good WolframAlpha is at logic questions. As it turns out, Wolfram is pretty smart! Although, since a CAPTCHA requires an exact answer, some of the results from WolframAlpha are logically correct but not exactly correct. If someone wanted to use WolframAlpha to crack the text CAPTCHA technology, they could build in filters and such to narrow down answers to what the CAPTCHA is likely looking for.

Out of 10 demo questions, 3 failed and 7 were correct (although 4 had the correct answer but would fail a CAPTCHA if the exact answer was not parsed out). Here are the results:

Text CAPTCHA: What is seven hundred and forty four as a number?
WolframAlpha: NumberQ[744]
Result: ALMOST

Text CAPTCHA: The 7th letter in the word “central” is?
WolframAlpha: the word
Result: FAILED

Text CAPTCHA: Which word in this sentence is all IN capitals?
WolframAlpha: capitals IN
Result: ALMOST
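Building the kind of filter that turns a logically correct answer into an exactly correct one is straightforward for the number questions. The parser below is a hypothetical sketch of my own, not part of WolframAlpha or Text CAPTCHA:

```python
# English number words mapped to values, enough for questions like
# "What is seven hundred and forty four as a number?"
UNITS = {
    "zero": 0, "one": 1, "two": 2, "three": 3, "four": 4, "five": 5,
    "six": 6, "seven": 7, "eight": 8, "nine": 9, "ten": 10, "eleven": 11,
    "twelve": 12, "thirteen": 13, "fourteen": 14, "fifteen": 15,
    "sixteen": 16, "seventeen": 17, "eighteen": 18, "nineteen": 19,
}
TENS = {
    "twenty": 20, "thirty": 30, "forty": 40, "fifty": 50,
    "sixty": 60, "seventy": 70, "eighty": 80, "ninety": 90,
}

def words_to_number(text):
    """Parse English number words into an integer."""
    total, current = 0, 0
    for word in text.lower().replace("-", " ").split():
        if word == "and":
            continue                      # "hundred and forty" filler word
        if word in UNITS:
            current += UNITS[word]
        elif word in TENS:
            current += TENS[word]
        elif word == "hundred":
            current *= 100
        elif word == "thousand":
            total += current * 1000
            current = 0
    return total + current
```

With this filter in front, the first demo question would go from ALMOST to a clean pass: `words_to_number("seven hundred and forty four")` gives 744.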

Another quantum weirdness: Light can behave like a wave or a particle, depending on how you observe it. Credit: flickr/Ethan Hein

The more one probes the universe at smaller and smaller scales, the weirder matter and energy seem to behave.

But this strangeness may limit its own extent in quantum mechanics, the theory describing the behavior of matter at an infinitesimal level, according to a new study by an ex-hacker and a physicist.

“We’re interested in this question of why quantum theory is as weird as it is, but not weirder,” said physicist Jonathan Oppenheim of the University of Cambridge. “It was an unnatural question for people to have asked even 20 years ago. The reason we’re able to get these results is that we’re thinking of things in the way a hacker might think of things.”

A lot of eerie things happen in the quantum world. According to the Heisenberg uncertainty principle, for instance, it’s impossible to know everything about a quantum particle. The more precisely you know an electron’s position, the less precisely you know its momentum. Stranger still, the electron doesn’t even have properties like position and momentum until an observer measures them. It’s as if the particle exists in a plurality of worlds, and only by making a measurement can we force it to choose one.

In another weirdness, two particles can be bound together such that observing one causes changes in the other, even when they’re physically far apart. This quantum embrace, called entanglement (or more generally, nonlocality), made Einstein nervous. He famously called the phenomenon “spooky action at a distance.”

But there’s a limit to how useful nonlocality can be. Two separated people can’t send messages faster than the speed of light.

“It’s surprising that that happens,” said Stephanie Wehner, an ex-hacker and quantum-information theorist at the National University of Singapore. “Quantum mechanics is so much more powerful than the classical world, it should surely go up to the limits. But no, it turns out that there is some other limitation.”

As strange as quantum mechanics is, it could be stranger.

“The question is, can quantum mechanics be spookier?” Oppenheim said. “Researchers started asking why quantum theory doesn’t have more nonlocality, and if there’s another theory that could.”

It turns out that the amount of nonlocality you can have — that is, how much you can rely on two entangled particles to coordinate their changes — is limited by the uncertainty principle. Oppenheim and Wehner describe how they came to this conclusion in the Nov. 19 issue of the journal Science.

To see the link between uncertainty and nonlocality, Wehner suggests thinking of a game played by two people, Alice and Bob, who are far apart and not allowed to talk to each other.

On her desk, Alice has two boxes and two coffee cups. A referee flips a coin and tells her to put either an even or odd number of cups into the boxes. She has four choices: one cup in the left box, one in the right box, one cup in each box, or no cups at all. This is equivalent to Alice encoding two bits of information, Wehner says. If a cup in a box represents a 1 and no cup represents a 0, Alice can write 00, 01, 10 or 11.

Then the referee asks Bob to guess if there is a cup in either the left or the right box. If he guesses correctly, Alice and Bob both win. This is the same as Bob trying to retrieve one of the bits that Alice encoded.

In the normal, non-quantum world, the best strategy for this (admittedly really boring) game lets the duo win just 75 percent of the time. If they each have one of a pair of entangled particles, they can do better. Alice can influence the state of Bob’s particle by observing her own. Bob can then look at his particle and have some idea of what Alice’s looks like, and use that information to make a more educated guess about which box has a cup.

But this strategy only improves the pair’s odds of winning to 85 percent. Bob can’t always guess perfectly because the uncertainty principle says he can’t know both bits of information at the same time, Oppenheim and Wehner explained. The stronger the uncertainty principle is, the harder it will be for Bob to retrieve the bit.

“The reason we can’t win this game better than 85 percent is because quantum mechanics respects the uncertainty principle,” Oppenheim said.
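The 75 percent classical ceiling is easy to check by brute force. In one optimal deterministic strategy (my reconstruction of the game as described, not code from the paper), Alice always leaves the left box empty and uses the right box to satisfy the referee’s parity demand, while Bob always guesses “no cup”:

```python
from itertools import product

def alice(parity):
    """Left box always empty; right box holds a cup only when the referee
    demands an odd number of cups. (0 = no cup, 1 = cup.)"""
    return (0, parity)

def bob(box):
    """Bob always guesses 'no cup', whichever box he is asked about."""
    return 0

def classical_win_rate():
    """Average over the referee's two coin flips: the parity demand given
    to Alice and the left/right question put to Bob."""
    wins = 0
    for parity, box in product([0, 1], [0, 1]):
        cups = alice(parity)
        wins += int(bob(box) == cups[box])
    return wins / 4
```

Bob is wrong only when the referee demands odd parity and then asks about the right box, one case in four, so this strategy wins 75 percent of the time; the entangled strategy improves that to roughly 85 percent, the cos²(π/8) bound mentioned in the article.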

Given the history of these two concepts, linking uncertainty to nonlocality is a little ironic, he noted. In 1935, Albert Einstein tried to tear down the uncertainty principle using entanglement, and wrote in a famous paper with Boris Podolsky and Nathan Rosen that “no reasonable definition of reality could be expected to permit this.”

“When people first discovered nonlocality, they hated it,” Oppenheim said. “It was just too weird. People tried to eradicate it and undermine it.”

As the century wore on, however, physicists realized that creating a near-psychic link between two particles could be useful in cryptography and enable ultra-fast quantum computers.

“Now we’ve gotten used to it, and we even like it,” Oppenheim said. “Then you start wishing there could be more of it.”

Although there aren’t any immediate practical applications of this link, the finding does reveal some mysteries about the fundamental nature of physics. The discovery could also inform future theories that go beyond quantum mechanics, such as a unified theory of everything.

“We know that our present theories are not consistent, and that there’s some underlying theory,” Oppenheim said. Physicists don’t know what the uncertainty principle or nonlocality will look like in this new theory, “but we at least know that these two things will be locked together.”

The IEEE Computer Society is presenting the 2011 Simulator Design competition for students worldwide, with a top prize of 8,000 USD and a second-place prize of 2,000 USD. Student teams will be invited to design a CPU simulator, a program used in many architecture courses to illustrate how computers work.

“This is an exciting competition because it cuts across traditional boundaries by combining architecture with program design and software engineering – just like real life,” said Alan Clements, chair of the competition and an emeritus professor of computer science. “All you have to do is to write a program. Well, that’s not quite all. You have to write an excellent program using professional design techniques.”

The competition requires that students have taken a course in architecture and have both programming and software engineering skills. Student teams will submit both a report and a working program at the end of the competition.
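A CPU simulator of the kind the brief describes is, at its core, a fetch-decode-execute loop over an instruction set. The toy accumulator machine below is purely illustrative; the instruction set and names are mine, not the competition’s specification:

```python
def run(program, memory=None):
    """Simulate a tiny accumulator machine: fetch the next instruction,
    decode its opcode, execute it, and repeat until HALT."""
    mem = dict(memory or {})
    acc = 0           # single accumulator register
    pc = 0            # program counter
    while True:
        op, arg = program[pc]       # fetch
        pc += 1
        if op == "LOAD":            # decode + execute
            acc = mem.get(arg, 0)
        elif op == "ADD":
            acc += mem.get(arg, 0)
        elif op == "STORE":
            mem[arg] = acc
        elif op == "JMPZ" and acc == 0:
            pc = arg                # jump if accumulator is zero
        elif op == "HALT":
            return acc, mem

# A three-instruction program computing c = a + b.
program = [("LOAD", "a"), ("ADD", "b"), ("STORE", "c"), ("HALT", None)]
```

Running `run(program, {"a": 2, "b": 3})` leaves 5 in the accumulator and in memory cell `c`; a competition entry would wrap a loop like this in a richer instruction set and a user interface.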

Who can compete?

The competition is open to student members of the IEEE Computer Society organized into teams consisting of three to five students enrolled at the same institution of higher learning.

As part of their member benefits, all student members receive access to the Computer Society Digital Library (CSDL).

The competition is conducted through online submission of reports and simulators to the panel of international judges (chosen by the IEEE Computer Society). This year’s judges include Bob Colwell, one of the world’s leading experts on computer design and Intel’s former chief architect on the Pentium 4 processor.
To register and for more information, visit the competition web site at http://www.computer.org/portal/web/competition (registration deadline: 18 January 2011).

Whoever figures out how to predict the stock market will get rich quick. Unfortunately, the market’s ups and downs ultimately depend on the choices of a massive number of people—and you don’t know what they’re thinking about before they decide to buy or sell a stock. Then again, maybe Google knows. A team of scientists has shown a strong correlation between queries submitted to the Internet search giant and the weekly fluctuations in stock trading. But it’s unlikely to make anyone wealthy.

Queries and crashes. By studying the terms that people search for on the Internet, economists may be able to predict financial crises. Credit: Tobias Preis

The stock market is a famously complex and jittery system. In any given week of trading, the price of shares in companies might stay the same, rise steadily, or suddenly crash. The causes of these patterns have evaded researchers, though not for lack of trying. An army of “quants”—many of them poached from academic math and physics departments—has studied data from stock indices such as the S&P 500 for decades. But within any given week, the time scale that matters to traders, the movement of the market seems random.

The reason, says Tobias Preis, a physicist at Johannes Gutenberg University Mainz in Germany, is that people decide to buy or sell stocks based not only on personal motivations but on the collective decisions of others. This “herding behavior” makes the stock market so chaotic that the pattern of trading in one week is nearly useless for predicting what will happen the following week. To predict the market, you need data on what is going through people’s minds before they make their financial decisions. One such source of data is the total weekly volume of Internet search queries, now available to researchers through Google Trends.

Researchers led by Preis compared the week-by-week fluctuations in two sets of data: The number of times that the name of a company in the S&P 500 was included in a Google search query, and the price and trading volume of that company’s stock. They focused on the 6 years from 2004 to 2010.
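The comparison boils down to a lagged correlation: how well search volume in week t tracks trading activity in week t + 1. The sketch below is a plain Pearson correlation on made-up weekly series, not the authors’ data or code:

```python
def lagged_correlation(search_volume, trade_volume, lag=1):
    """Pearson correlation between search volume in week t and trade
    volume in week t + lag."""
    x = search_volume[:-lag]
    y = trade_volume[lag:]
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)

# Toy example in which trading volume echoes last week's searches exactly.
searches = [1, 3, 2, 5, 4, 6, 2, 7]
trades = [0] + searches[:-1]
```

On real Google Trends and S&P 500 series, the study found a correlation of this lagged kind for trade volume but not for price.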

The findings, to be published 15 November in Philosophical Transactions of the Royal Society A, aren’t going to make anybody rich. The Google data could not predict the weekly fluctuations in stock prices. However, the team found a strong correlation between Internet searches for a company’s name and its trade volume, the total number of times the stock changed hands over a given week. So, for example, if lots of people were searching for computer manufacturer IBM one week, there would be a lot of trading of IBM stock the following week. But the Google data couldn’t predict its price, which is determined by the ratio of shares that are bought and sold.

At least not yet. Neil Johnson, a physicist at the University of Miami in Florida, says that if researchers could drill down even farther into the Google Trends data—so that they could view changes in search terms on a daily or even an hourly basis—they might be able to predict a rise or fall in stock prices. They might even be able to forecast financial crises. It would be an opportunity for Google “to really collaborate with an academic group in a new area,” he says. Then again, if the hourly stream of search queries really can predict stock price changes, Google might want to keep those data to itself.

Abstract

The “Monty Hall Dilemma” (MHD) is a well-known probability puzzle in which a player tries to guess which of three doors conceals a desirable prize. After an initial choice is made, one of the remaining doors is opened, revealing no prize. The player is then given the option of staying with their initial guess or switching to the other unopened door. Most people opt to stay with their initial guess, despite the fact that switching doubles the probability of winning. A series of experiments investigated whether pigeons (Columba livia), like most humans, would fail to maximize their expected winnings in a version of the MHD. Birds completed multiple trials of a standard MHD, with the three response keys in an operant chamber serving as the three doors and access to mixed grain as the prize. Across experiments, the probability of gaining reinforcement for switching and staying was manipulated, and birds adjusted their probability of switching and staying to approximate the optimal strategy. Replication of the procedure with human participants showed that humans failed to adopt optimal strategies, even with extensive training.
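The abstract’s claim that switching doubles the probability of winning is easy to verify with a short simulation. This sketch (the function names are my own, not from the study) plays many rounds of the standard three-door game under each strategy:

```python
import random

def play_monty_hall(switch, rng=random):
    """Simulate one round of the standard Monty Hall game.

    The prize sits behind one of three doors. The player picks a door,
    the host opens a different door that hides no prize, and the player
    either stays or switches to the remaining closed door.
    Returns True if the player wins.
    """
    doors = [0, 1, 2]
    prize = rng.choice(doors)
    choice = rng.choice(doors)
    # The host opens a door that is neither the player's pick nor the prize.
    opened = rng.choice([d for d in doors if d != choice and d != prize])
    if switch:
        # Switch to the one door that is neither the original pick
        # nor the opened door.
        choice = next(d for d in doors if d != choice and d != opened)
    return choice == prize

def win_rate(switch, trials=100_000):
    wins = sum(play_monty_hall(switch) for _ in range(trials))
    return wins / trials
```

Running `win_rate(False)` and `win_rate(True)` gives win rates of roughly 1/3 for staying and 2/3 for switching, the payoff structure the pigeons learned to exploit and most human participants did not.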

The Foundational Questions Institute announced this week its latest essay contest, “Is Reality Digital or Analog?”, and if it’s anything like the past two contests, we’re in for a real treat: the contest should draw entrants from some of the deepest thinkers of our time. This time around, Scientific American has joined the institute as a co-sponsor of the contest.

The article we published in June on the nature of time, written by philosopher of physics Craig Callender, grew out of FQXi’s first essay contest. The contest, which has a first-place prize of $10,000, is one of the ways that FQXi—a smallish, newish organization that gets money from the Templeton Foundation and other donors—supports cutting-edge research that tends to fall between the cracks at larger, risk-averse funding agencies.

An essay contest might sound like something you’d have done in high school, but such competitions have a distinguished history in science. Cash is always welcome, but the main benefit for most participants is the opportunity to play with an idea in a way they can’t in a formal journal paper. The Gravity Research Foundation, for example, has run one since 1949, and practically everyone who’s anyone in gravitational theory, from Stephen Hawking to Roger Penrose to John Wheeler, has entered. Scientific American itself ran a famous essay contest in 1921 to explain Einstein’s theories of relativity.
FQXi’s contest on the nature of time and a second one on the limits of physics drew a huge variety of fascinating contributions from a veritable Who’s Who of physics. They also gave a hearing to new voices that might not otherwise be heard. This is one of the few cases in which an institution of science is willing to run the risk of psychoceramics in order not to exclude potentially interesting ideas. All the entries were posted to the FQXi website, where anyone was free to comment on them and vote for a winner, in addition to the selection made by a panel of judges.

As every essay-writer knows, half the fun is to interpret the question. The latest, about digital vs. analog reality, could go in a lot of different directions. The obvious one is to ask whether spacetime is discrete and what that would mean, but I imagine that entrants will come up with even more interesting interpretations. In our November 1999 issue, cosmologists Lawrence Krauss and Glenn Starkman posed the digital-vs.-analog question and discussed what it meant for life in the very far future of our universe.

The FQXi scientific directors, Max Tegmark and Anthony Aguirre, and I have been trying to get our two organizations to work together for several years, but it only came together this year. We brainstormed enough essay questions for the next dozen contests, we’ll work together on the judging, and our hope is that the prize-winning essay(s) will appear in some form in the magazine. In the meantime, bookmark the contest site and check back periodically to read what entries people have submitted!