In January 2009, [mathematician Tim] Gowers decided to use his blog to run a very unusual social experiment. He picked out an important and difficult unsolved mathematical problem, a problem he said he’d “love to solve.” But instead of attacking the problem on his own, or with a few close colleagues, he decided to attack the problem completely in the open, using his blog to post ideas and partial progress. What’s more, he issued an open invitation asking other people to help out. Anyone could follow along and, if they had an idea, explain it in the comments section of the blog. Gowers hoped that many minds would be more powerful than one, that they would stimulate each other with different expertise and perspectives, and collectively make easy work of his hard mathematical problem. He dubbed the experiment the Polymath Project.

The Polymath Project got off to a slow start. Seven hours after Gowers opened up his blog for mathematical discussion, not a single person had commented. Then a mathematician named Jozsef Solymosi from the University of British Columbia posted a comment suggesting a variation on Gowers’s problem, a variation which was easier, but which Solymosi thought might throw light on the original problem. Fifteen minutes later, an Arizona high-school teacher named Jason Dyer chimed in with a thought of his own. And just three minutes after that, UCLA mathematician Terence Tao—like Gowers, a Fields medalist—added a comment. The comments erupted: over the next 37 days, 27 people wrote 800 mathematical comments, containing more than 170,000 words. Reading through the comments, you see ideas proposed, refined, and discarded, all with incredible speed. You see top mathematicians making mistakes, going down wrong paths, getting their hands dirty following up the most mundane of details, relentlessly pursuing a solution. And through all the false starts and wrong turns, you see a gradual dawning of insight. Gowers described the Polymath process as being “to normal research as driving is to pushing a car.” Just 37 days after the project began, Gowers announced that he was confident the polymaths had solved not just his original problem, but a harder problem that included the original as a special case.

This episode is a microcosm of how intellectual progress happens.

Humanity's intellectual history is not the story of a Few Great Men who had a burst of insight, cried "Eureka!" and jumped 10 paces ahead of everyone else. More often, an intellectual breakthrough is the story of dozens of people building on the ideas of others before them, making wrong turns, proposing and discarding ideas, combining insights from multiple subfields, slamming into brick walls and getting back up again. Very slowly, the space around the solution is crowded in by dozens of investigators until finally one of them hits the payload.

The problem you're trying to solve may look impossible. It may look like a wrong question, and you don't know what the right question to ask is. The problem may have stymied investigators for decades, or centuries.

If so, take heart: we've been in your situation many times before. Almost every problem we've ever solved was once phrased as a wrong question, and looked impossible. Remember the persistence required for science; "what thousands of disinterested moral lives of men lie buried in its mere foundations; what patience and postponement... are wrought into its very stones and mortar."

"Genius is 1 percent inspiration, 99 percent perspiration," said Thomas Edison, and he should've known: It took him hundreds of tweaks to get his incandescent light bulb to work well, and he was already building on the work of 22 earlier inventors of incandescent lights.

Pick any piece of progress you think of as a "sudden breakthrough," read a history book about just that one breakthrough, and you will find that the breakthrough was the result of messy progress like the Polymath Project, but slower: multiple investigators, wrong turns, ideas proposed and combined and discarded, the space around the final breakthrough slowly encroached upon from many angles.

I doubt the problem will be solved by getting smart people to sit in silence and think real hard about decision theory and metaethics. If the problem can be solved, it will be solved by dozens or hundreds of people hacking away at the tractable edges of Friendly AI subproblems, drawing novel connections, inching toward new insights, drawing from others' knowledge and intuitions, and doing lots of tedious, boring work.

...This isn't the only way to solve hard problems, but when problems are sufficiently hard, then hacking away at their edges may be just about all you can do. And as you do, you start to see where the problem is more and less tractable. Your intuitions about how to solve the problem become more and more informed by regular encounters with it from all angles. You learn things from one domain that end up helping in a different domain. And, inch by inch, you make progress.

So: Are you facing an impossible problem? Don't let that stop you, if the problem is important enough. Hack away at the edges. Look to similar problems in other fields for insight. Poke here and there and everywhere, and put extra pressure where the problem seems to give a little. Ask for help. Try different tools. Don't give up; keep hacking away at the edges.

Humanity's intellectual history is not the story of a Few Great Men who had a burst of insight, cried "Eureka!" and jumped 10 paces ahead of everyone else.

While I agree with this statement, the preceding example doesn't support it. I participated in the polymath project, and while it is true that there were anonymous or pseudonymous contributors, the project was mostly sustained by the fame and communal pull of Gowers and Tao. The retelling of the story you chose made it seem like Tao appeared out of the blue, but in fact Tao and Gowers work in the same field and certainly knew each other beforehand.

Therefore I feel it's not impossible to read the polymath project through the lens of Few Great Men.

I don't know how to quantify how much "math discovery" they did relative to the other participants. You can still read through the comments, so if you have some particular metric you're interested in, that will help clarify the issue. The roughest possible estimate would be in terms of numbered items (the de facto unit of polymath development), and it's clear that Gowers has more of these than anyone else.

There's another story that compromises between the "Genius" story (which is almost certainly false in general; I can't say whether it's true in this case) and the Bazaar / "everyone and nobody did it" story, which neglects the crucial importance of people like Tao and Gowers.

How do trade hubs arise? There's a network effect: if NYSE or Jita is a deep, liquid market, then NYSE or Jita becomes a more desirable trade location, and therefore becomes even more liquid. How do we standardize on currencies? There's a network effect: a dollar or a cowrie is valuable because it is widely accepted in trade, and so becomes more valuable.

However, if there is an initially symmetric situation, then there's a coordination problem of "where will the hub be?" The geometric centrality of Jita or New York (where "geometry" includes things like velocity of travel and political risks) is crucial for them to win the initial struggle with other candidate hubs. The value-density (lightness, ease of transport), fungibility, and difficulty of forging paper money, and perhaps its association with national identity and taxation, were crucial for paper money winning out as the standard for trade.

Gowers and Tao's prestige might have acted more as a symmetry-breaker, identifying THIS as the hub that is likely going to eventually be the place to be.
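The symmetry-breaking dynamic described above can be sketched as a toy positive-feedback model. Everything here is an illustrative assumption of mine (the `simulate` function, the `alpha` exponent, and the starting numbers are not from the thread): newcomers each round join a hub in proportion to its liquidity raised to a power greater than one, so deep markets attract disproportionately many traders.

```python
# Toy positive-feedback model of hub formation. All numbers are
# illustrative assumptions, not claims about real markets.

def simulate(a, b, rounds=50, newcomers=100, alpha=1.5):
    """Return final liquidity of hubs A and B after `rounds` rounds.

    Each round, `newcomers` units of liquidity split between the hubs
    with weight liquidity**alpha; alpha > 1 means deeper markets
    attract disproportionately many traders.
    """
    for _ in range(rounds):
        wa, wb = a ** alpha, b ** alpha
        a += newcomers * wa / (wa + wb)  # A's share of this round's newcomers
        b += newcomers * wb / (wa + wb)
    return a, b

# Nearly symmetric start: hub A has a tiny "prestige" edge (51 vs. 49).
a, b = simulate(51, 49)
print(a > b)               # A keeps its lead
print(a / (a + b) > 0.51)  # and its share grows beyond the initial edge
```

The point of the sketch: with any superlinear attraction (alpha > 1), an arbitrarily small initial asymmetry, such as a famous mathematician's endorsement, compounds round after round rather than washing out.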

I should clarify that I'm not saying the "Genius" story completely applies here. I disagree with the quote's presentation of the project (particularly the cherry-picking of Jason Dyer; most of the non-anonymous/pseudonymous contributors had some advanced training, as far as I can tell). While I agree with lukeprog's conclusion, I definitely think there are better illustrations of this process in action.

Gowers and Tao's prestige might have acted more as a symmetry-breaker, identifying THIS as the hub that is likely going to eventually be the place to be.

Certainly, but they also substantially contributed to the project as well.

For comparison, subsequent polymath projects have had a hard time coming to completion without the presence of a particularly strong and/or famous central mathematician. (The mini-polymaths were advertised by Tao, IIRC.)

Do you think that Tao or Gowers' mathematical talent was critical for the polymath project that you personally participated in? (Focusing on it because you probably know more about it than other ones.)

"Genius is 1 percent inspiration, 99 percent perspiration," said Thomas Edison, and he should've known: It took him hundreds of tweaks to get his incandescent light bulb to work well, and he was already building on the work of 22 earlier inventors of incandescent lights.

It should be noted that Edison was (unconsciously) describing himself, not every genius. Tesla, a contemporary who worked with him, remarked about Edison's inability to see answers to problems the way Tesla often could (if I remember correctly, Tesla claimed that he could have worked out the necessary material for a filament from first principles). Edison, as the manager of an innovation factory, saw more returns from wringing perspiration out of his employees than focusing on inspiration.

Perhaps a better way to put it is that inspiration doesn't come from nowhere; it comes from perspiration. So sit down and get to work.

"Genius is 1 percent inspiration, 99 percent perspiration," said Thomas Edison, and he should've known: It took him hundreds of tweaks to get his incandescent light bulb to work well, and he was already building on the work of 22 earlier inventors of incandescent lights.

If Edison had a needle to find in a haystack, he would proceed at once with the diligence of the bee to examine straw after straw until he found the object of his search. [...] His method was inefficient in the extreme, for an immense ground had to be covered to get anything at all unless blind chance intervened... [...] I was almost a sorry witness of such doings, knowing that a little theory and calculation would have saved him ninety per cent of his labor.

Even allowing for a significant bias against Edison on Tesla's part, it does seem that Edison relied on perspiration to an extraordinary degree even among high achievers. Of course, even that diligence wouldn't have been of much use if it hadn't come together with a very considerable talent.

More generally, there are two problems with the general message of this article:

It is delusional for most people to believe that they can contribute usefully to really hard problems. (Except in trivial ways, like helping those who are capable of it with mundane tasks in order to free up more of their time and energy.) There is such a thing as innate talent, and doing useful work on some things requires an extraordinary degree of it.

There is also a nasty failure mode for organized scientific effort when manpower and money are thrown at problems that seem impossibly hard, hoping that "hacking away at the edges" will eventually lead to major breakthroughs. Instead of progress, or even an honest pessimistic assessment of the situation, this may easily create perverse incentives for cargo-cult work that will turn the entire field into a vast heap of nonsense.

It is delusional for most people to believe that they can contribute usefully to really hard problems.

It's damaging to repeat this, though, since most bright people who are 1 in 10,000+ think they are 1 in 10 due to Dunning-Kruger effects.

Except in trivial ways, like helping those who are capable of it with mundane tasks in order to free up more of their time and energy.

Mundane work is not trivial. For instance, I've watched lukeprog spend more of his days moving furniture at Singularity Institute in the past 6 months than anyone else in Berkeley... including dozens of volunteers and community members in the area, all of whom could have done it, none of whom considered trying. For most tasks, hours really are fungible. If otherwise smart people didn't think mundane work was trivial, we'd get so much more done. Nothing is harder for me to get done at Singularity Institute than work that "anybody could do".

As another example, I've had 200 volunteers offer to do work for Singularity Institute. Many have claimed they would do "anything" or "whatever helped the most". SEO is clearly the most valuable work. Unfortunately, it's something "so mundane" that anybody could do it... therefore, 0 out of 200 volunteers are currently working on it. This is even after I've personally asked over 100 people to help with it.

SEO is clearly the most valuable work. Unfortunately, it's something "so mundane" that anybody could do it.

I actually think you have it backwards there. The reason people aren't engaging in this activity is because it is the opposite of mundane. It is confusing, difficult, and requires previous skills.

General Evidence: There are lots of postings for Search Engine Optimizers, and they all want applicants to already have experience doing SEO. If it were so mundane that anyone could do it with a couple hours of training, what you'd see instead are "no experience necessary" job postings for SEO, where the company is willing to spend an hour or two training a schlub they can then pay minimum wage.

(Speaking of minimum wage, if you guys are spending a significant amount of your time doing menial tasks like moving furniture, it might be time to get a schlub of your own. You can pay someone $8/hr to do menial tasks 20 hrs/week, for a total of about $8,000/year.)

Personal Anecdotal Supporting Evidence: I clicked on your link, and the thought in my head wasn't "oh, this is too mundane," but rather "wtf?? This looks super-complex and confusing. It must be the type of thing that 'computer people' know how to do. Not something for me. I don't have the knowledge or skill-set."

The reason people aren't engaging in [SEO] is because it is the opposite of mundane. It is confusing, difficult, and requires previous skills.

Not really. The link-building tutorial page Louie links to at the Singularity Volunteers site contains several examples of link-building tasks that require little experience:

Comment on blogs and in forums. Although some blogs still utilize “nofollow” tags on outbound comment links, it is not a trend that I foresee continuing as long as comment spam protection keeps improving. Therefore, I recommend leaving high-quality insightful comments on other blogs, which will create a backlink and could entice blog owners to link back to your site in the future. Also, you have a far better chance of acquiring a back link if you’ve contributed something to someone else’s blog first.

[Submit] your website to various niche, local, and general directories...

The other pages linked at the bottom of that page provide lots of other examples.

Also, Louie is entirely right about this:

Mundane work is not trivial. For instance, I've watched lukeprog spend more of his days moving furniture at Singularity Institute in the past 6 months than anyone else in Berkeley... including dozens of volunteers and community members in the area, all of whom could have done it, none of whom considered trying. For most tasks, hours really are fungible... Nothing is harder for me to get done at Singularity Institute than work that "anybody could do".

I've spent enough time cleaning rooms and moving boxes and furniture and so on at Singularity Institute (including an entire day just last week) that I could have written and published 1-3 more papers by now if I hadn't done any of that.

And that, ladies and gentlemen, is what happens when people get the idea that mundane work is "trivial."

If you want to do mundane tasks for me so I can write more papers on Friendly AI like this one, please contact me: luke [at] singularity.org.

Props to John Maxwell for being the latest person to actually do something mundane and high value for me, freeing up my time so I can work on an intelligence explosion book chapter tonight.

In principle, “good” SEO is not entirely zero-sum: it improves the quality of search results by making sites and pages that are relevant to the user's query more likely to show up than irrelevant ones, and by making the results for those pages clearer about what they're about.

Successful SEO is zero-sum to the degree that it is done by sites competing against each other which are fungible to the searcher, as TheOtherDave hints. There's also a lot of advice and offers for doing this sort of SEO because that's where the perceived money is.

There's making your site look good (to the search engine), and then there's making your site be good.

Ok, I've added you to the joeant and "rateitall" directories. You were already in dmoz, and IPL2 is no longer taking submissions. The other ones don't seem as appropriate: thegoodwebguide is UK-only, craigslist requires you to post something that's more like an ad, and the others are blogrolls and "local business directories", which singinst is not (neither local nor a business).

Let me know if there are other, better lists of directories to which you should be submitted.

joeant submission has been approved. It'll appear in the directory when the weekly update occurs. I also added a link in the sidebar of my blog which occasionally gets a surprising amount of linkjuice... might as well spread the love around :)

I agree that intelligence is not needed to make useful contributions. However...

It's damaging to repeat this though, since most bright people who are 1 in 10,000+ think they are 1 in 10 due to Dunning-Kruger effects.

I doubt this. Standardized tests are common (tests for CTY, SATs, etc.), and usually include percentiles. If you see "99.9+%" enough times, you'll notice. And 1 in 10,000 is a lot: (400 college friends) ÷ (top 5% of people smart enough to go to your college) = the equivalent of only 8,000 people from the general population, not enough for a 1-in-10,000 person to know anyone brighter than they are.
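The back-of-envelope above can be checked directly. This is just a sketch using the commenter's own assumptions (a college that admits roughly the top 5%, and about 400 friends there); neither number is a fact.

```python
# Check the back-of-envelope estimate from the comment above.
# Assumptions (the commenter's, not established facts): your college
# admits roughly the top 5%, and you know about 400 people there.

friends = 400
selectivity = 0.05          # college draws from the top 5%
threshold = 1 / 10_000      # "1 in 10,000" brightness level

# 400 people drawn from the top 5% represent roughly this much
# general population:
population_equivalent = friends / selectivity
print(population_equivalent)    # ~8000, below the 10,000 threshold

# Chance a given friend is above the 1-in-10,000 level, given that
# they already cleared the top-5% cut:
p_brighter = threshold / selectivity    # 1/500

# Expected number of friends brighter than a 1-in-10,000 person:
print(friends * p_brighter)     # ~0.8, fewer than one on average
```

So under these assumptions the claim roughly holds: a 1-in-10,000 person's college circle covers only about 8,000 people's worth of the general population, and they should expect to meet fewer than one friend brighter than themselves.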

Have you thought about studying the persuasive and motivational arts in an attempt to increase this batting average? I'm always fascinated how watching videos by Internet marketer Eben Pagan makes me want to buy his stuff, forgetting the fact that I could probably get comparable knowledge from much cheaper library books. Sometimes I wish all of the advice I read came in an Eben Pagan format.

For what it's worth, I just submitted Less Wrong and its articles to 2 subreddits, 3 link directories, and Hacker News... and now I'm resting easy in the knowledge that I've saved trillions of future lives in expected value. (Only mentioning this in case anyone else is interested in saving trillions of lives, of course. Reddit is probably a good place to start, especially if you already have an account; my reddit submission rate is throttled severely.)

See what I tried to do there? ;)

I suspect the first step is to transition from blame-oriented language to opportunity-oriented language.

It is delusional for most people to believe that they can contribute usefully to really hard problems.

This seems more and more like the most damaging meme ever created on LessWrong. It persistently leads to people that could have made useful contributions (to AI safety) making no such contribution. Would it be a better world in which lots more people tried to contribute usefully to FAI and a small percentage succeeded? Yes, it would, even taking into account whatever cost the unsuccessful people pay.

There are many ways for everyone to make contributions, even small ones. The easiest is giving money (to someone you believe is trying to address the "really hard problems"). But there are many others. I'll give two examples of things I do (or plan to do in the near future): I'm helping with the French translation of HP:MoR, and I'll try to help SIAI migrate their publications to their new LaTeX template (see http://lesswrong.com/r/discussion/lw/9d3/new_si_publications_design/ ), though nothing serious is done yet. Both are tiny contributions, but they can in the end help SIAI tackle the really hard problems in various ways. A lot of people doing these small things can allow the great things to happen much faster.

Of course, you can replace SIAI with anyone you think could solve the hard problems: other kinds of research, charity, a political party if you believe a given one is doing more good than harm, ...

The hardest part is probably identifying who is most likely to actually help in solving the really hard problems. I tend to "invest" my energy and money in different kinds of entities, hoping at least one of them will do something good enough in the long run.

I agree. But compared to where we are right now, I think more people should actually go work directly on the core FAI problem. If the smartest half of each LW meetup earnestly and persistently worked on the most promising open problem they could identify, I'd give 50% chance that at least one would make valuable progress somewhere.

I sometimes start group solution brainstorming sessions by announcing "So, here's a Very Bad Idea for how we might go about solving this problem. [Explanation] This is, as I say, a Very Bad Idea, for several reasons. Does someone have a better one?" It no longer startles me how often people then proceed to build on the VBI and turn it into something viable.

It's clear, emphatic, fairly precise, and doesn't dance around the point it's trying to make - but without presenting too many ideas in too short a space. It's how you write if you're being careful to be understood and interesting. Most writing I encounter isn't like this at all -- computer scientists largely can't write -- and so it does have a family resemblance to Eliezer's writing.

I have a good feeling that this article will inspire people to work on an important problem, because I will send it around.

I also have a good feeling that this article will help people who do not get inspired easily to just get started and avoid akrasia.

Both of these often come from Eliezer's articles, but the main thing that jumped out at me was the seamless interweaving of different parts of history (Edison/Polymath) to draw conclusions about needed future action, which we see very often in the Sequences, especially when past scientists are used as examples.

Maybe. But it is actually a criticism of Eliezer, whose approach to solving the FAI problem, as far as I can make it out, does seem to be "getting smart people to sit in silence and think real hard about decision theory and metaethics."

Very nice story about "polymaths". To a lesser extent, it's what I love about teamwork (even small-scale, like working in a team of 2-3 people on a project instead of working alone) and, at a much greater scale, about Free Software.

Does this have implications for the risks associated with AI? Tao is a lot smarter than we are, but he doesn't seem to be plotting to harvest us for our phosphorus, or anything.

This example and others mentioned also suggest that interactions among intelligent agents may be at least as important as intelligence per se. If we can learn to work together more effectively, I think we'll be able to out-think computers for a long time (where "a long time" is defined as long enough for over-population, climate change, nuclear war, etc. to be serious risks).

That may be a faster route to AI. But my point was that making an AI that's smarter than the combined intelligence of humans will be much harder (even for an AI that's already fairly smart and well-endowed with resources) than making one that's smarter than an individual human. That moves this risk even further into the future. I'm more worried about the many risks that are more imminent.

You miss my point. Once we have a GAI, we can have many GAIs, and if things scale amazingly in the number of humans, I see no reason they shouldn't scale similarly in the number of AIs. From "we have a GAI capable of recursive self-improvement that is significantly better at GAI design than any individual human" to "we have a GAI capable of recursive self-improvement that is significantly better at GAI design than all humans collectively" involves the passage of non-zero time, but I don't expect it to be significant compared to the time to get there in the first place, absent other significant considerations.

Would the first AI want more AIs around? Wouldn't it compete more with AIs than with humans for resources? Or do you assume that humans, having made an AI smarter than an individual human, would work to network AIs into something even smarter?

Either way, the scaling issue is interesting. I would expect the gain from networking AIs to differ from the gain from networking humans, but I'm not sure which would work better. Differences among individual humans are a potential source of conflict, but can also make the whole greater than the sum of the parts. I wouldn't expect complementarity among a bunch of identical AIs. Generating useful differences would be an interesting problem.

If there is more to be gained by adding an additional AI than there is by scaling up the individual AI, then the best strategy for the AI is to create more AIs with the same utility function.

Edited to add: Unless, perhaps, the AI had an explicit dislike of creating others, in which case it would be a matter of which effect was stronger.

This might work well for mathematics and programming, but I don't know if it would work so well for things like lightbulbs, where you have to build real stuff, or, say, neuroscience, where you have to run experiments on real brains.

I don't know if it would work so well for things like lightbulbs [or] neuroscience.

It doesn't matter whether your piece of progress is in mathematics or technology or basic science, I still always find this to be true:

Pick any piece of progress you think of as a "sudden breakthrough," read a history book about just that one breakthrough, and you will find that the breakthrough was the result of messy progress like the Polymath Project, but slower: multiple investigators, wrong turns, ideas proposed and combined and discarded, the space around the final breakthrough slowly encroached upon from many angles.

Ironically, lightbulbs are the paradigmatic example of invention being messy and of multiple discovery; multiple discovery in general is covered by Kevin Kelly in ch. 7 of What Technology Wants (online draft):

An incandescent light bulb based on a coil of carbonized bamboo filament heated within a vacuum bulb is not inevitable, but "the electric incandescent light bulb" is. The concept of "the electric incandescent light bulb" abstracted from all the details that can vary while still producing the result — luminance from electricity, for instance — is ordained by the technium's trajectory. We know this because "the electric incandescent light bulb" was invented, re-invented, co-invented, or "first invented" dozens of times. In their book Edison's Electric Light: Biography of an Invention, Robert Friedel and Paul Israel list 23 inventors of incandescent bulbs prior to Edison. It might be fairer to say that Edison was the very last "first" inventor of the electric light.