According to a news feature on Foldit, the project arose from an earlier distributed computing effort called Rosetta@home. That project used what has become the standard approach for home-based scientific work: a screensaver that provides a graphical frontend to a program that uses spare processor time to work on weighty scientific problems. For Rosetta, that problem was the task of figuring out how proteins, which are composed of a chain of chemicals called amino acids, adopt their final, three-dimensional shape.

This is typically an energy minimization problem. Proteins tend to form structures that keep hydrophobic parts buried internally, away from the water they're dissolved in. They also form bridges between neighboring sections through hydrogen bonds and charge interactions. Maximize these sorts of interactions and you minimize the energy involved.
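
The bookkeeping can be sketched as a toy scoring function. Everything below (the residue names, the weights, the energy units) is invented for illustration; real scoring functions like Rosetta's are vastly more detailed.

```python
# Toy protein "energy": penalize exposed hydrophobic residues and reward
# hydrogen bonds. All names and weights here are invented for illustration.

HYDROPHOBIC = {"ALA", "VAL", "LEU", "ILE", "PHE", "MET"}

def toy_energy(residues, h_bonds):
    """residues: list of (amino_acid, is_buried) pairs; h_bonds: list of contacts."""
    energy = 0.0
    for amino_acid, is_buried in residues:
        if amino_acid in HYDROPHOBIC and not is_buried:
            energy += 1.0          # exposed hydrophobic: costly
    energy -= 0.5 * len(h_bonds)   # each hydrogen bond lowers the energy
    return energy

# Tucking the exposed leucine inside, and forming a bond, lowers the score:
print(toy_energy([("LEU", False), ("SER", True)], []))        # 1.0
print(toy_energy([("LEU", True), ("SER", True)], [(0, 1)]))   # -0.5
```

Lower is better throughout: a configuration that buries its hydrophobics and forms more bonds scores a lower energy, which is the quantity both Rosetta and Foldit players try to minimize.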

It sounds simple, but with anything more than a short chain of amino acids, there are a tremendous number of potential configurations to be sampled in 3D space, which can bring powerful computers to their knees.
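
A back-of-envelope count shows why. Even under the classroom simplification that each residue picks from just three coarse backbone conformations (the real conformational space is continuous), the possibilities multiply brutally:

```python
# If each of 100 amino acids could take only 3 coarse conformations, the
# chain would still have 3**100 possible configurations -- roughly 5e47,
# far beyond exhaustive search. (The "3 conformations per residue" figure
# is a teaching simplification, not a real count.)
print(f"{3 ** 100:.2e}")
```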

The Rosetta algorithm handles the huge energy landscape it needs to scan by taking big leaps between different configurations, then attempting to minimize the energy by making smaller tweaks. This lets it sample large portions of the structural landscape, but sometimes leaves it stuck: the path between its current location and an energy minimum may take it through a high energy state, which would keep Rosetta from finding the solution.
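
The jump-then-minimize loop can be sketched as basin hopping on a one-dimensional toy landscape. The landscape, step sizes, and hop counts below are all invented for illustration; Rosetta's real moves operate on torsion angles and structural fragments, not a single coordinate.

```python
import random

def energy(x):
    # Invented double-well landscape: a shallow minimum near x = +2 and a
    # deeper one near x = -2, separated by a high barrier around x = 0.
    return (x * x - 4) ** 2 + x

def local_minimize(x, step=0.01, iters=600):
    # Crude downhill walk: the "small tweaks" phase.
    for _ in range(iters):
        for trial in (x - step, x + step):
            if energy(trial) < energy(x):
                x = trial
    return x

def basin_hop(x, hops=200, jump=3.0, seed=0):
    # The "big leaps" phase: jump far, settle downhill, keep improvements.
    rng = random.Random(seed)
    best = local_minimize(x)
    for _ in range(hops):
        candidate = local_minimize(best + rng.uniform(-jump, jump))
        if energy(candidate) < energy(best):
            best = candidate
    return best

# Local minimization alone stays trapped in the shallow right-hand well;
# only the big jumps can carry the search over the barrier.
print(local_minimize(2.5))
print(basin_hop(2.5))
```

The failure mode the article describes is visible here: if `jump` is too small to clear the barrier, the search converges on the shallow well and stays there, exactly the kind of near-miss that Foldit players spotted on their screensavers.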

Apparently, the program's home users noticed that the screensaver would often show the program stuck close to a much better structure. One of Foldit's developers is quoted as saying, "People started writing in, saying, 'I can see where it would fit better this way.'"

The Rosetta team decided to give them a chance to see if they really could.

Starting with algorithms, ending with brains

Foldit takes a hybrid approach. The Rosetta algorithm is used to create some potential starting structures, but users are then given a set of controls that let them poke and prod the protein's structure in three dimensions; displays provide live feedback on the energy of a configuration.

Foldit uses some of the same conventions typical of other computer games, like a few simple structural problems to give new users a smooth learning curve. It also borrows from other online gaming communities; there are leaderboards, team and individual challenges, user forums, and so on.

Though very few of those who played Foldit had any significant background in biochemistry, the gamers tended to beat Rosetta when it came to solving structures. In a series of ten challenges, they outperformed the algorithms on five and drew even on another three.

By tracing the actions of the best players, the authors were able to figure out how the humans' excellent pattern recognition abilities gave them an edge over the computer. For example, people were very good at detecting a hydrophobic amino acid when it stuck out from the protein's surface, instead of being buried internally, and they were willing to rearrange the structure's internals in order to tuck the offending amino acid back inside. Those sorts of extensive rearrangements were beyond Rosetta's abilities, since the energy changes involved in the transitions are so large.

Similarly, Rosetta was good at linking up stretches of protein through charge interactions and hydrogen bonds, but it would often get things slightly off (think of a zipper that's off by a single tooth). Shifting every bond by a single partner was beyond Rosetta's abilities, but it's something a human can do trivially.

That's not to say the Rosetta algorithm didn't play a valuable role in Foldit. Humans turn out to be really bad at starting from a simple linear chain of amino acids; they need a rough idea of what the protein might look like before they can recognize patterns to optimize. Given a set of 10 potential structures produced by Rosetta, however, the best players were very adept at picking the one closest to the optimal configuration.

The authors also note that different players tended to have different strengths. Some were better at making the big adjustments needed to get near an energy minimum, while others enjoyed the fine-scale tweaking needed to fully optimize the structure. That's where Foldit's ability to enable team competitions, where different team members could handle the parts of the task most suited to their interests and abilities, really paid off.

The Nature article makes it clear that researchers in other fields, including astronomy, are starting to try similar approaches to getting the public to contribute something other than spare processor time to scientific research. As long as the human brain continues to outperform computers on some tasks, researchers who can harness these differences should get a big jump in performance.

I have seen Adrien Treuille (3rd author on the paper) talk about this project. It's a pretty slick-looking game for a research project. He talked, in a semi-joking manner, about the ultimate version of this game being "Level 278: Cure Cancer". That would be much cooler than getting your 4th WoW character to lvl 80.

It is no great surprise that people can do better than computer programs at recognizing complex patterns. But it is a surprise that untrained people can make a substantial contribution to a difficult scientific problem. Those trying to improve our educational system should be looking carefully at these results. We need automated systems that can teach and exercise students' basic skills in reading, writing, and mathematics. We need systems that can provide feedback on which approaches work and which do not. We also need more experimentation and feedback on the characteristics of the educational environment, like gaming, that can help hold students' interest in the activity.

Tried this game about half a year or so ago when I was heavily into F@H. It's interesting, but it can get amazingly difficult very fast. I think, much like learning pointers in C, some folks will "just get it" and others will have a harder time.

Are computers and programmers really that bad at pattern recognition? Are people really that good at it? On our pharmaceutical packing line, we use a vision system for quality control. While a human would do a better job, humans get tired and computers don't. A problem we have is the vision system finding a large number of false positives, such as thinking a zero is an O.


Short answer: yes. Long answer: humans have billions of years of evolution to enhance their pattern recognition skills. Computers... not so long.


Humans are really, really, really good at pattern recognition. So much so that we'll detect patterns and make up fictitious explanations for them simply to set our minds at ease.

You're right that they get tired, though. I have a feeling the advantage is much more useful in some problem spaces than others.

As for your "zero" problem, I'm surprised that it isn't a standard for packing labels to use the zero with a slash through it in order to distinguish it from "o".

I guess another question would be the speed of the task at hand. Is the brute-force computer program able to churn out more solutions in the time it takes the humans to come up with the same number of better or matching ones?


This article doesn't need to be fair towards computers. They won't be offended.


It's not just a matter of trying a lot of solutions--you also need to evaluate them, and you need to figure out if there's a way to do better. Often, a better way will be possible but there won't be a clear path from A to B--that's the part that a computer cannot easily figure out, and the search space is far too large for it to brute force its way there.

Great article, but I'm not sure why Rosetta is written about in the past tense - the project is bigger than it's ever been and would still do much better with orders of magnitude more compute power. Protein folding is a massively compute-intensive problem: the search space is huge and grows exponentially with each additional amino acid in the protein...

I played with Foldit a bit about a year ago and thought it was well executed. What I really wanted was some sort of 3D input device, however - something like a tennis ball suspended inside a box frame, so I could twist and phase-shift it. Use taut rubber bands from the eight corners of the box, and you'd have a bit of force feedback too.

Anyway, I wonder if the Foldit play-data can provide a learning set for Rosetta to gain experience from, so it gets better at the things that it currently lags humans on.

If the algorithm can't beat the gamers, then the algorithm is weak. I write this stuff for a living. In my free time I wrote an algorithm for Bubble Breaker that can blow away anybody, no matter how good they think they are at it. I would bet my life that I could improve that algorithm until it can make the gamers' heads spin.

Is the Rosetta project interface open to writing your own methods? It sounds like they are using simulated annealing, based on the "high energy state" confounding it. Simulated annealing is a weak approach to solving NP hard problems.
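
For what it's worth, the simulated annealing guess is easy to illustrate, though whether Rosetta actually uses it is this comment's speculation, not something stated in the article. A minimal Metropolis-style sketch on an invented one-dimensional landscape:

```python
import math
import random

def energy(x):
    # Invented double well: shallow minimum near x = +2, deeper near x = -2.
    return (x * x - 4) ** 2 + x

def anneal(x, t=10.0, cooling=0.999, steps=20000, seed=0):
    # Metropolis rule: uphill moves are accepted with probability
    # exp(-delta / t), so early on the walk can climb over energy barriers;
    # as t cools, it settles into whichever basin it currently occupies.
    rng = random.Random(seed)
    best = x
    for _ in range(steps):
        candidate = x + rng.uniform(-0.5, 0.5)
        delta = energy(candidate) - energy(x)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = candidate
            if energy(x) < energy(best):
                best = x
        t *= cooling
    return best
```

With a slow enough cooling schedule the walk usually ends up in the deeper well, but any single run is stochastic, which is consistent with the comment's point: annealing can escape the "high energy state" trap in principle, yet offers no guarantee on a hard instance.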

I tried the game and didn't like it. The interface is totally a WoW clone, and the cooldown on the peptide switching is way too long. Plus if you die, all the ATP you had stored up takes a 10% reduction hit, and you have to fight your way back through all the respawned ribosomes to retrieve your molecular corpse. So annoying.



You should contact them. These aren't computer scientists, they are molecular biologists who happen to have some programming experience, or know someone who does.



Nah, you put enough minds to a problem, no matter how hard, it can get solved. Get over yourself.


I'd also like to point out that the approach taken here is many individuals performing bite-sized work, whereas you're referring to a few people performing long periods of monotonous and repetitive work.

If only Adobe or Microsoft would start using valid proteins as a base for registration keys. Then all you'd have to do is wait for a keygen to come along (maybe 3 days or so). Bonus points for calling it "un-hackable."

I wonder if they could write the pirated copies off as a donation to cancer research...