Good luck with that third bullet. Big ideas can’t be planned like growing tomatoes in one’s garden. We stumble upon ideas, and although we can sometimes recall how we got there, we could not have anticipated the discovery in advance. That’s why grant proposals never wrap up with, “And by following this four-part plan, I will arrive at a ground-breaking new hypothesis by year three.”

Three impossible thoughts before breakfast we can manage, but one great idea before dinner we cannot.

Unplanned ideas are often best illustrated by “Eureka!” or “Aha!” moments, like Einstein’s clock-tower moment that sparked his special relativity, or Archimedes’ bathtub water-displacement insight.

Why are great ideas so unanticipatable?

Perhaps ideas cannot be planned because of some peculiarity of our psychology. Had our brains evolved differently, perhaps we would never have Eureka moments.

On the other hand, what if it is much deeper than that? What if the unplannability of ideas is due to the nature of ideas, not our brains at all? What if the computer brain, HAL, from 2001: A Space Odyssey were to say, “Something really cool just occurred to me, Dave!”

In the late 1990s I began work on a new notion of computing which I called “self-monitoring” computation. Rather than having a machine simply follow an algorithm, I required that a machine also “monitor itself.” What this meant was that the machine must at all stages report how close it is to finishing its work. And I demanded that the machine’s report not be merely a probabilistic guess, but a number that gets lower on each computation step.
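As a concrete, entirely illustrative sketch of this contract, here is a toy version in Python. The names `run_monitored` and `add_seven` are my own, and plain integers stand in for the more general “numbers” the article introduces later; the point is only the rule that the report must strictly drop on every step.

```python
# Toy sketch of the self-monitoring contract (my encoding, not the
# author's formal definition): a machine yields, after every step,
# its current value together with a progress report, and the harness
# rejects any run whose report fails to strictly decrease.

def run_monitored(machine, x):
    """Run a step-by-step machine on input x, checking that its
    self-report strictly decreases on every computation step."""
    value, last_report = None, None
    for value, report in machine(x):
        if last_report is not None and not (report < last_report):
            raise RuntimeError("monitoring violated: report did not drop")
        last_report = report
    return value

# Example: y = x + 7 needs only a plain natural-number countdown.
def add_seven(x):
    y, steps_left = x, 7
    yield y, steps_left
    while steps_left > 0:
        y, steps_left = y + 1, steps_left - 1
        yield y, steps_left

assert run_monitored(add_seven, 10) == 17
```

For a fixed number of steps an ordinary integer countdown suffices; the subtlety arrives when the number of steps cannot be known in advance.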

What was the point of these machines? I was hoping to get a handle on the unanticipatability of ideas, and to understand the extent to which Eureka moments are unavoidable for any sophisticated machine.

If a problem could be solved via a self-monitoring machine, then that machine would come to a solution without a Eureka moment. But, I wondered, perhaps I would be able to prove that some problems are more difficult to monitor than others. And perhaps I would be able to show that some problems are not monitorable at all, and thus that their solutions necessitate Eureka moments.

On the basis of my description of self-monitoring machines above, one might suspect that I demanded that the machine's “self-monitoring report” be the number of steps left in the algorithm. But that would require machines to know exactly how many steps they need to finish an algorithm, and that wouldn’t allow machines to compute much.

Instead, the notion of “number” in the self-monitoring report is more subtle (concerning something called “transfinite ordinal numbers”), and can be best understood by your and my favorite thing...

Committee meetings.

Imagine you have been placed on a committee, and must meet weekly until some task is completed. If the task is easy, you may be able to announce at the first meeting that there will be exactly, say, 13 meetings. Usually, however, it will not be possible to know how many meetings will be needed.

Instead, you might announce at the first meeting that there will be three initial meetings, and that at the third meeting the committee will decide how many more meetings will be needed. That one decision about how many more meetings to allow gives the committee greater computational power.

Now the committee is not stuck doing some fixed number of meetings, but can, instead, have three meetings to decide how many meetings it needs. This decision about how many more meetings to have is a “first-order decision.”

And committees can be much more powerful than that.

Rather than deciding after three meetings how many more meetings there will be, you can announce that at the end of that decided-upon number of meetings, you will allow yourself one more first-order decision about how many meetings there will be. The decision in this case is to allow two first-order decisions about meetings (the first occurring after three initial meetings).

You are now beginning to see how you as the committee head could allow the committee any number of first-order decisions about more meetings. And the more first-order decisions allowed, the more complicated the task the committee can handle.

Even with all these first-order decisions, committees can get themselves yet more computational power by allowing themselves second-order decisions, which concern how many first-order decisions the committee will be allowed to have. So, you could decide that on the seventh meeting the committee will undertake a second-order decision, i.e., a decision about how many first-order decisions it will allow itself.

And once you realize you are allowed second-order decisions, why not use third-order decisions (about the number of second-order decisions to allow yourself), or fourth-order decisions, and so on?

Committees that follow a protocol of this kind will always be able to report how close they are to finishing their work. Not “close” in the sense of the exact number of meetings, but “close” in the sense of the number of decisions left at all the different levels. And, after each meeting, the report of how close they are to finishing always gets lower.
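The bookkeeping in this committee protocol can be sketched as code. In this illustrative Python toy (my encoding, not from the article), the report is a tuple of remaining decisions at each level, highest order first; Python compares tuples lexicographically, which matches the required “always gets lower” order.

```python
# Toy committee report (my illustration): a tuple of remaining
# decisions at each level, highest order first. Every legal move
# strictly lowers the report in lexicographic (ordinal) order.

def hold_meeting(report):
    """A meeting spends one unit at the lowest level."""
    d = list(report)
    assert d[-1] > 0, "no meetings left without a decision"
    d[-1] -= 1
    return tuple(d)

def decide(report, order, allowance):
    """An order-k decision spends one unit at level k and grants
    `allowance` units at level k-1 (plain meetings when k == 1)."""
    d = list(report)
    assert d[-1 - order] > 0, "no decisions left at this order"
    d[-1 - order] -= 1
    d[-order] = allowance
    return tuple(d)

# The article's example: three initial meetings, then one
# first-order decision about how many more meetings to hold.
r = (1, 3)              # (first-order decisions left, meetings left)
r = hold_meeting(r)     # (1, 2)
r = hold_meeting(r)     # (1, 1)
r = hold_meeting(r)     # (1, 0)
r = decide(r, 1, 13)    # commit to 13 more meetings: (0, 13)
assert (0, 13) < (1, 0)  # lexicographically, the report still dropped
```

Note that (0, 13) counts as “lower” than (1, 0) even though more meetings remain: spending the higher-order decision is what guarantees convergence.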

And when such a committee does finish, the fact that it finished (and solved whatever problem it was tasked with) will not have come as a surprise to itself. Instead, you as committee chair will say, “We’re done, as we foresaw from our previous meetings.”

My self-monitoring machines carry out their self-monitoring in the same fashion as in the committee examples I just gave. (See the little appendix at the end for some examples.)

What does this have to do with the Eureka moment!?

Some problems are harder to self-monitor than others, in the sense of requiring a higher tier in the self-monitoring hierarchy just mentioned. Such problems are possible to solve while self-monitoring – and thus possible to solve without a Eureka moment – but may simply be too difficult to monitor.

Thus, one potential reason a machine has an 'Aha!' moment is that it simply fails to monitor itself, perhaps because doing so at the required level is too taxing (even though the problem was in principle monitorable). Such discoveries could, in principle, have been made without Eureka moments.

Here, though, is the surprising bit that I proved...

Of all the problems that machines can solve, only a fraction of them are monitorable at all.

The class of problems that are monitorable turns out to be a computationally meager class compared to the entire set of problems within the power of machines. Therefore, most of the interesting problems that exist cannot be solved without a Eureka moment!

What does this mean for our creative efforts?

It means you have to be patient.

When you are carrying out idea-creation efforts, you are implementing some kind of program, and odds are good it may not be monitorable even in principle. And even if it is monitorable, you are likely to have little or no idea at which level to monitor it. (A problem being monitorable doesn’t mean it is obvious how to do so.)

The scary part of idea-mongering is that you don’t know if you will ever get another idea. And even if an oracle told you that there will be one, you have no way of knowing how long it will take.

It takes a sort of inner faith to allow yourself to work months or years on idea generation, with no assurance there will be a pay-off!

But what is the alternative? The space of problems for which you can gauge how close you are to a solution is meager.

I’d rather leave the door open to a great idea that comes with no assurance than be assured I will have a meager idea. You can keep your nilla wafer – I’m rolling the dice for raspberry cheesecake!

Appendix

For example, suppose the machine can add 1 on each step. Then a self-monitoring machine can compute the function “y=x+7” by allowing itself only seven steps, or “meetings”. No matter the input x, it just adds 1 at each step, and it will be done.

To handle “y=2x”, a machine must allow itself one (first-order) decision, which will be to allow itself x steps, and add 1, x many times, starting from x. (This corresponds to having a self-monitoring level of omega, the first transfinite ordinal. For “y=kx”, the level would be omega * (k-1).)

In order to monitor “y=x^2” (i.e., “x squared”) it no longer suffices to allow oneself some fixed number of first-order decisions. One needs x many first-order decisions, and what x is changes depending on the input. So now the machine needs one second-order decision about how many first-order decisions it needs. Upon receiving x=17 as input, the machine will decide that it needs 16 more first-order decisions, and its first first-order decision will be to allow itself 17 steps (to add one) before making its next first-order decision. (This corresponds to transfinite ordinal omega squared. If the equation were “y=x^2 + k”, for example, the ordinal would be omega^2 + k.)

This hierarchy keeps going, to omega^omega, to omega^omega^omega, and so on.
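A toy rendering of the omega-squared case may make this concrete. In this sketch (my own encoding, not the author’s actual machine model), an ordinal below omega^3 is written as a triple (a, b, c) standing for omega^2 * a + omega * b + c, and triples are compared lexicographically; the report strictly drops on every decision and every step.

```python
# Toy self-monitoring machine for y = x^2, x >= 1 (my illustration).
# The monitor is a triple (a, b, c) encoding omega^2*a + omega*b + c;
# Python's lexicographic tuple comparison matches ordinal order.

def square_with_monitor(x):
    """Compute x*x by repeated +1, recording a strictly
    decreasing ordinal report after every move."""
    y = x                          # start from the input, as in the text
    monitor = (1, 0, 0)            # omega^2: one second-order decision pending
    reports = [monitor]

    # Second-order decision: we will need x-1 first-order decisions.
    monitor = (0, x - 1, 0)        # omega * (x - 1)
    reports.append(monitor)

    for b in range(x - 1, 0, -1):
        # First-order decision: allow ourselves x single steps.
        monitor = (0, b - 1, x)
        reports.append(monitor)
        for c in range(x, 0, -1):
            y += 1                 # one computation step: add 1
            monitor = (0, b - 1, c - 1)
            reports.append(monitor)

    return y, reports

y, reports = square_with_monitor(17)
assert y == 289
# Every report is strictly lower than the one before it:
assert all(hi > lo for hi, lo in zip(reports, reports[1:]))
```

For x = 17 the run traces exactly the decisions described above: one second-order decision, then sixteen first-order decisions, each granting seventeen +1 steps.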

Comments

“And committees can be much more powerful than that. Rather than deciding after three meetings how many more meetings there will be, you can announce that at the end of that decided-upon number of meetings, you will allow yourself one more first-order decision about how many meetings there will be. The decision in this case is to allow two first-order decisions about meetings (the first occurring after three initial meetings).”

Wow, this sounds like the worst possible way to get anything done, but maybe not. The bulk of my career has been with Maxwell's Equations, for example, and you cannot solve them exactly, of course; you can only converge on the best answer given more time and resources, at a diminishing return on investment.

But the framework for solving them - a company that makes a tool that physicists and engineers use to optimize fields to optimize designs - is entirely quantifiable in the beginning. Basically, if we were going into a product planning meeting and anyone announced we would have meetings to decide how many meetings we would need, I would fire the whole room.

Why? Because I don't need people to accomplish what they did, I only need Bayes. In trying to anticipate a World Cup victor, for example, it is easy to set up a Bayes utility because as long as the information is self-correcting as it converges, it will be right.

I guess, in summary for a comment nearly as long as your article (and thanks for your patience reading all this), those eureka moments can certainly be planned. Einstein did not have a Eureka moment so much as he had a problem that gnawed at him for years, and he kept at it until, in true Sherlock Holmes fashion, he eliminated the impossible and found something new. No different than when you write a book - an idea strikes you, which may or may not be a eureka moment, but the true evidence is that you make yourself write every day and, as you go, you get miracle insights because you have kept at it, including changing books altogether. But it is the framework and the mental discipline that makes it happen.

Of course, the "committee meeting" metaphor is just my way of explaining the way I quantified "how close am I to a solution" without having to discuss transfinite ordinal numbers. For the casual passerby, I'm not actually suggesting committees! :) I am suggesting a computational mechanism that not only computes, but does so in such a manner so that it isn't surprised when it finishes the problem.

I agree that there are measures one can take to enhance the odds that one will get that big idea. That's the whole "aloof" business I've been yammering about... not to mention stick-to-it-iveness, etc.

But the space of problems brains like ours have to deal with is huge, and there's no reason to believe the problems lie only within the realm of monitorable functions (nor reason to believe that we have the resources or mechanisms to monitor them even when they are, in principle, monitorable). ...and that necessarily means inherently Eurekalicious problems, no matter how powerful one's brain.

I believe that creative, educated people can extrapolate, or get ideas from existing breakthroughs and intuit unexpected connections between them. But it takes genius to have a completely new breakthrough - a real idea - and how often does that happen? If you're an Einstein, an Edison, a Galileo, a Michelangelo or a Picasso, the breakthroughs keep coming, but for the rest of us? Also remember that Einsteins don't just work until they have an idea that can make their reputation - they are who they are, doing what they do, regardless. Sometimes patient, sometimes impatient, they just keep going. Even Leeuwenhoek, the non-scientist who discovered bacteria, said "my work, which I've done for a long time, was not pursued in order to gain the praise I now enjoy, but chiefly from a craving after knowledge, which I notice resides in me more than in most other men. And therewithal, whenever I found out anything remarkable, I have thought it my duty to put down my discovery on paper, so that all ingenious people might be informed thereof."

Makes sense that the extrapolations would speed up as education improves, and as we hear more about each other's worlds via today's media soup - fertilization is important. Does that exposure also increase the likelihood of completely new breakthroughs, or are completely new breakthroughs a function of individual genius, and not of exposure to the extrapolations of others?

You know, I could have used a word like "immersed" in place of "an Einstein, and Edison, a Galileo, a Michelangelo or a Picasso," and said it better.

I don't think you have to "be" a genius to have genius, any more than you have to be a great artist to have talent. I do think there needs to be a willingness to swim in the pool, and a certain lust for ideas comes in handy. There's a saying that the first hundred drawings are throwaways. After that you've developed the hand-eye coordination to steer at will, and hopefully also the confidence to let go and let art happen.

And there I went, saying "lust for ideas" to the "value of aloof" guy. :-)

I like to think I had a eureka moment (for me anyway) a couple of days ago, regarding quantum mechanics, when I suddenly realised that it is the way that we detect these energies and particles that is completely dictating our perceptions as to what they are doing. Why is there so much talk about these observations and perceptions and yet so little talk about the validity of the detection processes and how this information is being conveyed to us?
The detectors are only detecting what they are expecting to find, or rather what the scientists who instructed the programmer who programmed the machine are expecting to find. On top of that there is then the selection process itself, which again could be very biased and relies upon what the programmer has set as the selection criteria, and so on. It is therefore possible that we are not even detecting some of the most interesting phenomena that may be occurring in particle colliders at CERN.
Quantum field theory (from the Penguin Dictionary)
Quote "A quantum mechanical theory in which particles are represented by fields whose normal modes of oscillation are quantized. Elementary particle interactions are described by relativistically invariant theories of quantized fields (i.e. by relativistic quantum field theories). In quantum electrodynamics, for example, charged particles can emit or absorb a photon, the quantum of the electromagnetic field. Quantum field theories naturally predict the existence of antiparticles and both particles and antiparticles can be created or destroyed; a photon, for example, can be converted into an electron plus its antiparticle, the positron. Quantum field theories provide a proof of the connection between spin and statistics underlying the Pauli exclusion principle. See also electroweak theory; gauge theory; quantum chromodynamics".
As Lubos said in the Garrett Lisi paper blog: "the experimenters wanted to observe something that is even crazier: having the same type of particles, photons, that suddenly acquire antisymmetric wave functions. That's really silly. The statistics is such a defining feature of the electromagnetic field that photons that wouldn't follow it would really have to be 'different particle species'. So even if they observed the 'crazy, shocking' result they wanted to observe, the interpretation wouldn't be in terms of photons that 'play by different rules for a while'. It would be about a discovery of a new particle species that is sometimes emitted together with photons." Luboš Motl | 07/01/10 | 09:48 AM
It is also therefore possible that these experiments are not as safe as the scientists at CERN believe them to be.
I think that the particles and forces that are not being detected are the ones to worry about, if you're a worrier like me.

I've had so many Eureka moments in my life, I actually count on them coming when I need them. I can't know exactly when, but I have sufficient empirical evidence to trust they will arrive.

A Eureka moment is not only for geniuses (or people that had breakthrough scientific ideas like Einstein etc), but for all of us encountering various problems that seem unsolvable.

To me it is no mystery; it is merely a coupling of myriads of information pieces put together in a new way. Thus, the more you work on a problem, the easier it is for you to know where additional pieces might fit. But, what amuses me the most is that you actually need a lot of seemingly useless or irrelevant input or information in order to find a solution to your initial problem (scientific and non-scientific).

Looking at the same 'thing' from multiple angles helps you put information together in new ways. I believe that is all Eureka is.

Mark, I think there's a point that's often missed in thinking about ideas and problems, which is that often the real difficulty is in having a poorly defined or articulated problem. In other words, we often go in circles because it turns out we don't have a good handle on what the problem actually is that we're trying to solve.

In cases where there is a specific solution, then we can often solve such problems when we take the time to actually define them and the parameters that affect them. In addition to this, many Eureka moments are actually the realization that we've been "over-thinking" the problem and become preoccupied with irrelevancies.

I also agree with you regarding the idea of "genius", since ideas don't occur in a vacuum. The prerequisite condition is that the knowledge and experience must exist before a Eureka moment can manifest. It would be truly remarkable if a physicist were working on a problem and suddenly got an idea about a new biological species. Unless there is some connection between the ideas being worked on and the knowledge an individual possesses, such moments simply don't occur.

What I find interesting, is how many such Eureka moments may have occurred in individuals that are untrained in a particular discipline and never realized what they had.

On not knowing the right problem to even solve, I agree. As far as my self-monitoring machines are concerned, one could say that the function they're asked to compute concerns finding the right problem to solve, and then also solving it. Luckily, my idea above is so abstract I can remain mute about exactly what function the brain is trying to compute when it is aiming for a discovery. :)

As I read through your post, it reminded me so much of an Edge Master Class, A Short Course in Thinking About Thinking (http://www.edge.org/3rd_culture/kahneman07/kahneman07_index.html ), in which Daniel Kahneman talks about estimating the amount of time he and his academic group imagined it would take to complete writing a textbook. The estimates varied between eighteen months and two and a half years, and so they felt excited and optimistic about the judgment and decision-making book they were attempting to write.

But here's the kicker: the Dean of Education, who was kept in the loop, was aware that 40% of the teaching groups that began writing a book never finished it and for those who did finish, it took them between seven and ten years to complete it. What's amazing here is that the Dean had never thought about the importance of this data to writing teams before being asked for it by Kahneman. The Dean had gathered this data through knowledge accretion but ignored its significance. And fascinatingly enough, it did turn out to be relevant in this instance: Kahneman left the country and didn't finish the book after all.

I recommend the entire Edge article for a variety of informative and amusing insights into the human mind, in particular, the value of viewing a problem from the inside as well as the outside. Despite our Eureka moments, even the most intelligent among us are beleaguered by logical fallacies.