Sunday, December 31, 2006

A giant once told me some unfavorable things about George Lakoff's immense ambition: "Lakoff thinks he has discovered the fundamentals of all things". With books about Cognitive Science, Philosophy, Mathematics, and Morals at last count, all claiming to be radical departures from the establishment, it is hard to argue against that claim. Yet Lakoff has to be taken seriously. His basic tenet (in math, philosophy, cognitive science, etcetera) is that people, contrary to the Enlightenment philosophers' view, do not think in literal terms but through analogies and metaphors. This is a point with which we of course agree wholeheartedly.

We have written previously about mechanisms for creating subtle associations. How do these almost invisible associations and connotations affect policy and the public? For one, policies with strongly loaded connotations such as "tax relief" become, after strong reinforcement, "common sense". And at this point, the avenue becomes wide open for policy-makers; for even if you argue against "tax relief", the mere activation of the concept is an argument for it.

Interestingly, Lakoff also points out contradictions in the mindsets of people on both sides of the political spectrum. How can conservatives be "pro-life" and for the death penalty? How can liberals be "pro-choice" and against the death penalty?

Now for Thomas Schelling's turn: the 2005 Nobel lecture (RealPlayer format). If you'd like to watch it in a window, click here. And here is the text.

In the aftermath of 9/11, I was invited to a conference on global terrorism in Japan, where one of the unforgettable moments was an ominous talk, held in Hiroshima, by a survivor of the atomic bomb. That was, of course, after a visit to the peace memorial and the museum, which displays, among other things, dozens of watches, all stopped at the same instant by the blast, and concrete structures bearing a "human atomic shadow".

Why haven't countries dropped atomic bombs on each other in the last 60 years? There have been plenty of bitter, grotesque wars, plenty of opportunities for such a decision to be made. Yet it never materialized. As one of the prominent game theoreticians involved in the arms race, Schelling offers a particularly rich perspective on this question.

What I find particularly interesting about his talk is that his analysis strongly supports our view that decisions are embedded in perception. Perception molds one's options, throwing some options to the foreground and others to the background. How a power perceives atomic bombs shapes that power's actual propensity to use them. If your analogy is "the a-bomb is like a bullet", you just might have more propensity to pull that trigger than you would if your analogy were to chemical or biological weapons.

Here are some excerpts:

...in 1953, the US National Security Council brings out a statement mentioning that they should resist "the moral problem in the inhibitions to use the a-bomb";...shortly followed by "the US will consider nuclear weapons to be as available for use as other munitions";..."Nuclear weapons must now be treated as in fact having become conventional"

President Johnson, on the other hand, would later proclaim that "there is no such thing as a conventional nuclear weapon"; "to deploy it is a political decision of the highest order".

Perception of nuclear weapons has changed a lot since Hiroshima; much of the credit should go to the scientists who fought to raise consciousness and to deliver unwelcome news to the generals, such as the prospect of nuclear winter. Take, for instance, slides 69 to 75 of this class for some sobering views of the arsenal circa 1981. We have had a good 60 years of restraint from that button. Let's hope, and work, for more.

Notes: (i) you need RealPlayer to watch this stream; (ii) your browser might have a cached copy of another video, so if a different video starts, click on this post's title to watch it.

I'd guess that it has more to do with public choice theory than with ardent lobbying. Since national food producers are unlikely to reformulate their entire line for the benefit of a few million New Yorkers, a trans-fat ban would sweep large categories of food off the supermarket shelves, in a way that would be directly and obviously attributable to the ban (since they would disappear from every supermarket shelf at once). Banning them in restaurants, on the other hand, will merely make some of the food taste worse, some of it more expensive, and so forth, in a thoroughly idiosyncratic way. Consumers are unlikely to connect thousands of subtle shifts in their local restaurant fare to the ban, as they surely would if glazed donut holes suddenly vanished from the shelves of the city.

True, a sudden disappearance of such poisonous, delicious food from the shelves would at first send New Yorkers onto the road craving their finger foods, with a potential subsequent backlash against the policy.

However, it seems that a proper reading of "Fluid concepts and creative analogies" might be of value here. Psychologically, it surely feels as if we have static concepts, such as restaurant or supermarket, both used and contrasted against each other in this case. In real life, however, things get much messier than that. So what is perhaps missing from this debate is that there is a concept which, in this particular context, fits right in between supermarkets and restaurants: restaurant chains.

Restaurant chains fall prey to the ban, and also to the supermarket-style consequences.

Consider how McDonald's or Papa John's might respond. Their whole supply chain breathes trans fats, so changing that supply chain carries unbearable costs. On the other hand, pulling out of a market such as New York also seems unbearable (not to mention that the decision might be distributed across a large number of franchise owners, wikipedia-style). Another possibility is that New York Big Macs will become steeply more expensive than those 40 km away.

Either way, the point is that concepts are fluid, all-singing, all-dancing entities that swing to the tune of the moment. The ease with which we ignore this makes dichotomies such as "why restaurants and not supermarkets?" sound closer to reality than they really are. This is ignored in many economic and policy discussions, and it is a point to which we will return in due course.

Friday, December 29, 2006

This is a blog about science. All sciences related to decision-making welcome. Science, not politics. Having said that, we cannot refrain from that occasional indulgence...

The Economist just ran a piece (subscription required) on Brazil's civil aviation collapse:

The underlying problems are that the air force runs air-traffic control and it and the government have failed to keep up with booming traffic. This has grown at 15% a year or more since 2004, notes Andre Castellini of Bain, a consultancy. The government ignored repeated calls for more air-traffic controllers and investment. While control towers lacked essential equipment, airports received expensive facelifts.

After a bumbling beginning, the government of President Luiz Inácio Lula da Silva is now taking steps to avoid chaos during the peak holiday season. Air-traffic controllers are being drafted out of retirement and backup equipment is being installed. Some responsibility is to be decentralised from the overburdened control centre in Brasília. If that is not enough to ensure a trouble-free Christmas and Carnival, the defence minister offered another solution: prayer.

Of course, more panic, crimes, and general confusion ensued during the holiday season (and it's not even over yet, so, ladies and gentlemen, please hold your breath). Airlines are either deceiving customers or totally unaware of what's going on. And the consequences are bitterly severe: a child in need of an organ transplant was held up at the airports too long to make it.

The fault does not lie with the airlines. It lies with a collapsing air-traffic-control system. And of course the blame goes all the way up to the presidential team, which held back the resources needed to keep up with the surge in civil aviation. The government's position, of course, is to deny all accusations and to entertain the public with its amusing internal fights.

There is no solution to this problem, because the solution was thrown away during our recent elections. Lula, the Messiah turned president (the foreign minister calls him "our guide"), accused his challenger, Alckmin, a former governor of São Paulo state, of planning to privatize large state-owned companies, "destroying the Brazilian people's hard-earned assets". He won the election, but painted himself into a corner.

This whole thing comes to me as an "I told you so" moment of relief. A couple of years ago I was asked to review the state of Brazilian logistics ("Por uma nova logística nacional"), and I argued the obvious: money is needed, and the government has none, so the time has come to privatize Brazil's airports and aviation infrastructure (as prosperous countries have done). This was published in a two-book series from the Getulio Vargas Foundation. That option, however, is now a closed avenue, for the re-elected president shut it with his campaign accusations.

This administration is a textbook example of biases departing from rationality. From overconfidence to groupthink, it's all in there. Historians of Brazil had better start studying their social psychology and behavioural economics.

In another context, Nobel prize winner Thomas Schelling argues, through his concept of 'strategic commitment', that sometimes it is to your advantage to close off some decision avenues... perhaps that's what Lula the Messiah has been reading. If only he could read it while sober, we might have somewhere to go during vacation time.

Disclaimer: I have briefly met Mr. Alckmin for activities of the Club of Rome. Yet, the paper was written long before such collaborations ever took place.

Ok. If you took a look at those chess masters, you might be interested in analyzing, scientifically, what goes on in the minds of people. Since science grows by comparing stuff, perhaps it is high time to compare intuition to the alternative: the lack of it.

What is life like if you don't have intuition? There are two possibilities: either you (i) cannot learn all those subtle things we do unconsciously, and are bound to respond to your impulses, or (ii) you suffer badly from analysis paralysis. In both cases what is lacking is perhaps the ability to create invisible, subcognitive associations, the ability to acquire preferences, the basis of all judgement--more on this in another post; for now, let's consider the condition Hofstadter refers to as Sphexiness:

When the time comes for egg-laying, the wasp Sphex builds a burrow for the purpose and seeks out a cricket which she stings in such a way as to paralyze but not kill it. She drags the cricket into the burrow, lays her eggs alongside, closes the burrow, then flies away, never to return. In due course, the eggs hatch and the wasp grubs feed off the paralyzed cricket, which has not decayed, having been kept in the wasp equivalent of a deepfreeze.

Here is our hero at work with the delicious, paralyzed cricket, checking how the newly dug home looks.

Hofstadter continues:

To the human mind, such an elaborately organized and seemingly purposeful routine conveys a convincing flavor of logic and thoughtfulness, until more details are examined. For example, the wasp's routine is to bring the paralyzed cricket to the burrow, leave it on the threshold, go inside to see that all is well, emerge, and then drag the cricket in. If the cricket is moved a few inches away while the wasp is inside making the preliminary inspection, the wasp, on emerging from the burrow, will bring the cricket back to the threshold, but not inside, and will then repeat the preparatory procedure of entering the burrow to see that everything is all right. If again the cricket is removed a few inches while the wasp is inside, once again she will move the cricket up to the threshold and reenter the burrow for a final check. The wasp never thinks of pulling the cricket straight in. On one occasion, this procedure was repeated forty times, with the same result.

Talk about a behaviorist dream. The poor Sphex is condemned to obey its impulses, without a shred of understanding, and without the ability to recognize that "something is not ok here... but what, exactly?" That funny feeling is the subcognitive basis for intuition, and not all species are privileged to have it. The elaborate behavior, of course, has a name: instinct.

Setting aside the unstoppable gambler, the alcoholic, and other compulsive or obsessed people, humans do display their powers of intuition most of the time. This raises two questions: what are the mechanisms that enable human intuition--how do those funny feelings arise? And why do some species have intuition, while others do not?

Wednesday, December 27, 2006

Confession of the day: as a pre-teenager, I was into "Asteroids", which I think is from Atari. Nice game (if you're from the Half-Life 2 age, you'll know how old I am and how sad this sounds). A bunch of large rocks in a 2D space, and some UFOs thrown in to shoot your spaceship.

Anyway, one day I was dazzled by this guy. Instead of finishing off the rocks and moving on to a harder stage, like real men do, the guy destroyed almost all the rocks and then sat and waited for the UFOs to come.

The point being? The points, of course: one large rock shot would give you 20 points, while a small UFO would bring 1000 points. Imperial powers don't have as many lives to spare as that guy did--it was awesome. This is, perhaps, a game that game theorists should have been playing all along.

There is huge momentum behind "behavioral game theory", or "cognitive game theory", or whatever you want to call it. For a review, see Colin Camerer's book of the former title. The basic idea is that, rather strangely (for economists only), people do not "play" their best options. Real people do not play "rationally". They stubbornly deviate from the rational choice, even in simple settings.

Perhaps the best example is that of the problem known as 'ultimatum bargaining'. Suppose that an agent A makes a proposal and another agent B decides to accept or reject the offer.

A: Do you want ice cream?
B: Yes.

B's reply should always be yes, as long as the offer holds "utility" (we'll only use that term in quotes, ladies and gents, for reasons yet to be brought up). But here's the catch: A has some amount of money, say $100, and proposes a split between A and B.

A: I get $60 and you get $40?
B: (having no option but to accept or reject the $40) Ok...

A: I get $99 and you get $1?
B: (having no option but to accept or reject the $1) Rot in hell.

This scenario is an established psychological fact, and a departure from "economic rationality". After all, $1 is more than $0, and $0 is what our friend decided to get. So behavioral game theorists (a psychological type of game theoretician (a mathematical type of theoretician)) have invented a label: humans have, quote, "negative reciprocity". But here lies the problem with inventing yet one more heuristic or bias:
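To make the departure concrete, here is a minimal sketch contrasting the textbook responder with the responder people actually turn out to be. The function names and the 30% rejection threshold are illustrative assumptions of mine, not data from the literature:

```python
# Ultimatum game: a proposer splits a pie; the responder accepts or rejects.
# If the responder rejects, BOTH players get nothing.

PIE = 100  # total amount to split, in dollars

def rational_responder(offer: int) -> bool:
    """Textbook 'economic rationality': any positive offer beats zero."""
    return offer > 0

def observed_responder(offer: int, threshold: float = 0.3) -> bool:
    """Stylized empirical behavior: lowball offers get rejected even at a
    cost to the responder (the 30% cutoff here is purely illustrative)."""
    return offer >= threshold * PIE

for offer in (40, 1):
    print(f"offer ${offer}: rational accepts={rational_responder(offer)}, "
          f"observed accepts={observed_responder(offer)}")
```

The $1 offer is exactly where the two models split: the rational responder pockets the dollar, while the observed responder tells A to rot in hell.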

Recent theories attempt to explain rejections using social preference functions which balance a person's desire to have more money with their desire to reciprocate those who have treated them fairly or unfairly, or to achieve equality. [...] Economists have resisted [such functions] because it seems to be too easy to introduce a new factor in the utility function for each game [in which behavior diverges from EUT]. (from Camerer 2003, p.11)

This is the problem with the heuristics-and-biases program today. As the number of biases increases with each new paper, how can economics hope to have an integrated mathematical model underlying decision-making? The flurry of distinct functions makes the field look like the enormous number of incompatible Unix and Linux distributions. Are we going to have "utility", plus or minus a laundry list of exclusions? Where is the dismal science going?
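For concreteness, one canonical social preference function of the kind Camerer's quote refers to is Fehr and Schmidt's inequity-aversion utility, which penalizes both having less and having more than the other player. The sketch below is my choice of illustration, and the parameter values are arbitrary:

```python
def fehr_schmidt_utility(own: float, other: float,
                         alpha: float = 2.0, beta: float = 0.6) -> float:
    """Fehr-Schmidt inequity-aversion utility.
    alpha: aversion to disadvantageous inequality (the other has more);
    beta:  aversion to advantageous inequality (you have more).
    Parameter values here are illustrative, not estimates."""
    envy = max(other - own, 0.0)
    guilt = max(own - other, 0.0)
    return own - alpha * envy - beta * guilt

# A $1 offer out of $100: the responder compares accepting vs rejecting.
accept = fehr_schmidt_utility(1, 99)   # 1 - 2.0*98 = -195.0
reject = fehr_schmidt_utility(0, 0)    # 0.0: rejection equalizes payoffs
print(reject > accept)                 # rejection yields higher utility
```

Note how the "bias" has simply been folded into the utility function via two new free parameters; this is precisely the move that, per the quote above, economists have resisted.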

Perhaps it is high time for an overhaul of the basic concept, centered on human intuition; a mathematization of Klein's model (this is almost given by Hofstadter's FCCA)--but we're not there yet.

In the case of ultimatum bargaining, why do people reject? Perhaps because they do not play the game at all, or do not even see that there is a game (and a gain) in it. The reasoning is simple: consider, for instance, Hofstadter's variations on "the same phrase":

* He didn’t give a flying f***.
* He didn’t give a good God damn.
* He didn’t give a tinker’s damn.
* He didn’t give a damn.
* He didn’t give a darn.
* He didn’t give a hoot.
* He didn’t care at all.
* He didn’t mind.
* He was indifferent.

These phrases have the same meaning. The choice of one, however, tells a lot about who you are, where you are, and who you are talking to. Have you heard of the shortest conversation ever? Here it goes:

"?""!"

Ok. Now here's the context in which this occurred. Victor Hugo had just finished "Les Misérables" and went off on a vacation of sorts. Anxiety, however, made him restless, and he wrote his publisher the following letter: "?". To which his publisher, knowing that sales were good, and not to be outdone by V.H., responded: "!".

A conversation takes place in a context. Content and context fight to create meaning. If you don't care about something and want to tell someone that fact, it makes a big difference which phrase you pick from Hofstadter's list: it says a lot about who you are and who you think your listener is.

And so does a game. If you, as a proposer, offer someone 1/100th of what you have, you are, in effect, through the subtle connotations of the act, telling them how much you think they are worth. Their rejection has absolutely nothing to do with money, rationality, or retribution--they never framed the situation as an economic game in the first place.

If people were playing against "nature", or an inanimate object, they would not mind accepting humble offers; it would mean nothing about who they are. Miners do not scream out: "I've had it with you, you greedy mountain; I know you have a million dollars in gold, stop offering me these miserable bargains". Instead, they accept the bargains and hope for better ones in the future. People will shoot the large asteroids for a mere 20 points. Yet, given the chance, they might rather hold out for a better bargain, as in the case of waiting for those small, juicy, 1000-point UFOs.

Farewell to 2006; it's high time for some champagne, but first, some thoughts on what could be accomplished in 2007. It's a way to see how things hold up by December of the upcoming year. Here is a bunch of fascinating, original projects underway. Details about each one will follow in due course.

(i) To wrap up Jarbas's great work on intuition and the system 1/system 2 model, as well as the other paper, on the distinction between hard and easy problems; after that there's more work in the pipeline--a lot more work, actually;

(ii) To show that RPD can account for high-stakes decision making in real-life economic settings, such as an international currency crisis; this means that the rational actor view should be dethroned from those settings as well;

(iii) To finish my proof-of-concept project Capyblanca & have a report as soon as possible;

(iv) To develop a mathematical model alternative to the rational actor, and apply the model to at least one economic setting. Since this is related to what David, Anne, and Milber are doing (or starting to do), that's the subject for the next post on game theory.

(v) The Club of Rome: There are (at least) three (large) projects in our minds, and I'm hoping for a 2nd run of "Cultural challenges of Globalization".

Intuition is often poorly defined, if at all. Sometimes it is described as 'unconscious knowledge'. Our definition is 'acquired instinct'--and instinct is, of course, 'unconscious mental or physical response'. But this is for another post. (For the philosophers out there: kindly disregard our mental/physical distinction above as mere artifact; this is no praise of dualism; more on the issue in the future.)

Back to intuition. Gary Klein is probably the psychologist who has provided the best basic research on it, having brought forth the recognition-primed decision model. One thing I find quite pleasing is that whenever the model is described, it is Klein's epiphany that's being described, not some arid psycho-model:

Six years after he founded his company, Klein won a major contract from the Army Research Institute, which asked him to study how people make decisions under time pressure and uncertainty. He decided to track firefighters. He moved into a firehouse in Cleveland and started his interviews. But there was a problem: Veteran firefighters said that they never made decisions. They would simply arrive at a fire, look it over, and attack it. Klein was horrified. "Here we'd just won this big contract, and we were focused on members of a community who said that they never made decisions.

"The commanders said fire fighting is just a matter of following routine procedures," Klein continues. "So I asked to see the book in which all of those procedures were codified. And they looked at me as if I was nuts. They said, 'Nothing's written down. You just learn through experience.' That word -- 'experience' -- became my first clue.

"I noticed that when the most experienced commanders confronted a fire, the biggest question they had to deal with wasn't 'What do I do?' It was 'What's going on?' That's what their experience was buying them -- the ability to size up a situation and to recognize the best course of action."

"what's going on" is, of course, the central question involved on the Copycat Project. But let's not get too carried away for the time being. Let's just take it for granted that humans do have the power to quickly perceive situations in very abstract terms. More on how this is achieved -- and the implications for Klein's model --, in the future.

Thought of this way, intuition is really a matter of learning how to see -- of looking for cues or patterns that ultimately show you what to do. The commander who saved his crew didn't have ESP, he simply had "SP." His sensory perception detected subtle details -- small-but-stubborn fire, extreme heat, eerie quiet -- that would have been invisible to less-experienced firefighters. "Experienced decision makers see a different world than novices do," concludes Klein. "And what they see tells them what they should do. Ultimately, intuition is all about perception. The formal rules of decision making are almost incidental."

The critical role of recognition in decision making came into sharper focus when Beth Crandall, 51, vice president of research operations at Klein Associates, got a contract from the National Institutes of Health to study how intensive-care nurses make decisions. In 1989, she interviewed 19 nurses who worked in the neonatal ward of Miami Valley Hospital in Dayton, Ohio. The nurses cared for newborns in distress -- some postmature, some premature. When premature babies develop a septic condition or an infection, it can rapidly spread throughout their bodies and kill them. Detecting sepsis quickly is critical. Crandall heard dozens of stories from nurses who would glance at an infant, instantly recognize that the baby was succumbing to an infection, and take emergency action to save the baby's life. How did they know whether to act? Almost always, Crandall got the same answer: "You just know."

You just know. You don't think; you don't compare options; you don't evaluate anything. You just know. That is our phenomenon, our anchoring point in looking at decision-making theories.

I never met Herbert Simon. Some years ago Prof. Bianor asked me to invite him to Brazil to give a lecture at EBAPE. I don't know what a personal talk with Simon would have been like, because I guess we like "the same girls". I mean, we like the same scientific questions: cognitive psychology, decision making, economics, artificial intelligence, management science, operations research, and so forth. Either we would have talked for hours and hours, or we would not have been able to speak to one another at all. (Either way, it would have been amazingly cool.) For all that he accomplished, and that is an immense lot, I am really skeptical of a big part of it. For instance, his chess theory and my chess theory seem to come from totally different planets. I think his approach to artificial intelligence is not a promising avenue; and from these emails our differences in decision-making theories should be obvious. Anyway, I contacted one of his colleagues, the renowned chess psychologist Fernand Gobet, who, unfortunately, told me that Prof. Simon had just died. So he never came to visit us.

To get back to the first point, how should Simon's theory of decision-making be evaluated given the enormous evidence from psychology, from priming studies, and from unconscious influences on human behavior? Once again, Simon wrote: "[Some of] these activities--fixing agendas, setting goals, and designing actions--are usually called problem-solving. The last--evaluating and choosing--is usually called decision-making."

How do we fix our agendas? How do we set our goals? How do we design a set of actions? How do we evaluate each possible action? How do we select one out of many?

Consciously, right? By thinking about the problems consciously. By thinking about what we want and how we could get there. Well, that's very, very true, but the point is: what happens when we think consciously? A lot of unconscious stuff. And, most of the time, this unconscious stuff makes many decisions for us.

Look at this example: which problem is harder?

(A) A bat and a ball cost 238 in total. If the bat costs 112 more than the ball, how much is the ball?

(B) A bat and a ball cost 110 in total. If the bat costs 100 more than the ball, how much is the ball?

Most people will say (A) is harder. Yet most people get (B)'s answer wrong. The "unconscious stuff" that goes on when we solve (B) actually makes us prone to errors. We think the ball costs 10, while it should be obvious that it does not. We are primed by the numbers 110 and 100 into a wrong answer. Jarbas is writing a fascinating study of this phenomenon.
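The arithmetic underneath both problems is the same one-line algebra, which makes the intuitive answer easy to check:

```python
def ball_price(total: float, difference: float) -> float:
    """Solve ball + (ball + difference) = total  =>  ball = (total - difference) / 2."""
    return (total - difference) / 2

print(ball_price(238, 112))  # problem (A): 63.0
print(ball_price(110, 100))  # problem (B): 5.0, not the intuitive 10
```

The intuitive "10" fails the check immediately: if the ball cost 10 and the bat 100 more, the pair would cost 120, not 110.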

A psychologist named John Bargh is the world's expert on priming. He has shown, through subtle experiments, that you can alter people's goals, attitudes, and behavior in all kinds of directions. You can make good people more prone to act aggressively, for instance. Most importantly, people never discover that they have been 'manipulated'. This fact is extremely important for decision-making theories and, at the same time, is ignored by them (some exceptions being Klein and Hofstadter).

The bats-and-balls problem above comes from the heuristics-and-biases research school; it was cited by Kahneman in his Nobel lecture and has been studied in detail by Shane Frederick. One of their ideas is that decisions are affected by two different systems in the mind: intuition and reason. Intuition would be the unconscious, automatic steps taken in our minds without any effort, and it would present "reason" with a possible solution to any problem. This solution could then be accepted or rejected consciously.

At the first MIM class earlier this year, someone asked me to define intuition, and I didn't respond. "Tricky philosophical business", I said, or something to that effect. But the question remained in my mind, and a definition now comes to me as part of another question: why do humans have intuition? Or, to rephrase, why don't we just have instinct?

Consider what happens when a domesticated horse that has never seen a lion first sees one. Does the horse ignore the lion? That's a plausible alternative, after all, since the horse never learned that lions can be dangerous. But horses have instinct. So an adult horse that has never seen a lion freaks out completely the first time it sees one. A pure adrenaline shock. All pre-programmed by instinct to respond to danger.

Humans, of course, also have instinct. But we have evolved what can be called "the ability to acquire instinct". Because we are a species that spread through all habitats, it was more important to be able to acquire immediate, automatic, unconscious responses by learning than to be pre-programmed genetically against particular predator species. It takes time, genetic time, evolutionary time, generations-after-generations time, to develop instinct. We moved from habitat to habitat very rapidly, so fear, repulsion, anger, and attraction could not be brought to us by instinctive impulses alone. And it's not just "animals" we have to defend against. We have to defend against the world's greatest predator, the perfect predator: ourselves. We have human enemy clans to defend against, so the ability to acquire "new instincts" is, for us, extremely important.

In fact, for all species found across numerous habitats, intuition should be important. But, if I'm correct, species confined to a single habitat, with a fixed set of prey and predators, should not bother to develop the cognitive skills to acquire new instincts. So my definition of intuition is: acquired instinct.

The good side of all of this is that we humans can transfer knowledge acquired through cognition into intuition. As children, we have to consciously learn the shapes of letters and painfully slowly read words; as adults, the words are simply there, as if no effort of interpretation were carried out. Chess beginners have to consciously think about what to do; chess masters just make "the best move", as Capablanca said, rather remarkably. And, as Gary Klein has shown, the same thing holds among nurses, firefighters, the military, and so on. One of the things I'd like us to do is to show that the same effects arise in top decision-makers, such as heads of state or central bankers, and show that this business of intuition and unconscious knowledge is serious. Another is to show the centrality of analogy; yet another is to provide a mathematical model alternative to the rational-actor model, and apply it to one or two economic problems.

There are huge opportunities out there for doing great, significant, work. I hope we'll grasp some.

Tuesday, December 26, 2006

What happens in those fleeting moments when you perceive things or people or situations?

It was one of the best days of my life. I was doing my PhD, surrounded by great, incredibly intelligent, interesting, and funny friends; working for a whole year at UNICAMP, I had the best girlfriend ever, and my computer program was finally finished: that important paper would soon be out for publication, and life couldn't be better. So after a whole day at the lab, my girlfriend and I decided to go watch a movie at the cinema.

So my mind is going through many good things, with only one very small annoyance. Cris, my girlfriend, is taking forever to get ready. Women! Anyway, she wants to stop at her apartment, then go to mine, before heading to the movies. Being a smart man, I say what we always say: of course. So we go, and the whole ordeal takes such a loooooong, long time at her place.

Then we go to my place. I was in a hurry, because we were late for the movie. She couldn't care less; but you know how women are. Anyway, my perception of the situation, before walking in the door, was that this was "just another regular, good day". That perception would change in one second.

As I opened the door of my apartment, something looked strange. It was all very subtle and fast. The lights were out and the apartment was dark, but, in less than a single second, before I turned the lights on, I felt sheer fear. Someone was there, inside the apartment. This person was facing me directly. As soon as I turned the lights on, we would have to face each other. I could only hope that they would rob whatever they wanted and leave us alone. In a fraction of a second, my new perception was of the worst kind. Our lives were at stake. But, once again, that perception would change in the next second.

I turn on the lights, ready or not to deal with the situation. And there are lots of people. Not just one, but lots of them. And they scream "surprise!" Ohh yes, ohh gosh, OK, yes, it's my birthday, and oh god, everyone's here! So in another glance, my perception changes radically, again, to one of an unexpected surprise party. She had given the keys to someone, and that explained how they got in. She took loads of time so they could get ready. I can still remember the ideas of a robbery, and of going to the movies, momentarily collapsing and vanishing away. Nothing like that was going to happen anymore. The perception now is: time to celebrate.

Everyone has been through these kinds of rapid shifts in perception. We don't control them. They happen automatically. Given the speed with which our psychology processes situations, there is nothing we can do, in fact, to control how we perceive them. So what goes on in those fleeting moments when we perceive things or people or situations? What is perception, after all?

Perception is categorization. It is the automatic process of categorizing things, people, and situations.

So we have categories in our minds. These categories came from previous experience. We have a number of categories acquired through life. Let's say that, at any point T in time, some person has C(T) acquired categories. But here's the thing: scientists have estimated the amount of sensory input arriving at any moment: something like 11 million distinct stimuli bombarding us with data from our five senses. How do we go from 11,000,000 stimuli (which we have never seen combined before and will never see combined again) to a relatively small number of categories? This is what Hofstadter discovered: we have an innate ability to make categories "dance", by making analogies to previous situations. We see very different things as "the same" through analogy processes. As Hofstadter put it, the "inexact matching between incoming stimuli and previous categories is analogy-making par excellence".
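To make the idea of "inexact matching" concrete, here is a minimal sketch in code. This is our own toy construction, not Hofstadter's actual model: the features, category prototypes, and similarity measure are all invented for illustration. The point is only that a never-before-seen bundle of stimuli can still land in an existing category, because matching is by closeness, not by identity.

```python
def similarity(stimulus, prototype):
    """Feature overlap (Jaccard): a crude stand-in for 'inexact matching'."""
    return len(stimulus & prototype) / len(stimulus | prototype)

# Hypothetical categories acquired from previous experience:
categories = {
    "dog":   {"four legs", "fur", "barks", "tail"},
    "chair": {"four legs", "wood", "seat", "backrest"},
    "bird":  {"two legs", "feathers", "sings", "tail"},
}

# A combination of features never seen before, and never to be seen again:
novel = {"four legs", "fur", "tail", "muddy"}

# Perception as categorization: the closest category wins,
# even though no category matches the stimulus exactly.
best = max(categories, key=lambda c: similarity(novel, categories[c]))
print(best)  # -> dog
```

Nothing here is an exact match ("muddy" belongs to no prototype), yet the stimulus is still perceived as "a dog"; that inexact mapping is the analogy.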

Let me give an example that I find interesting. Take the iPod mp3 player--it is a hit product that "saved Apple Computer". On the day the first iPod was announced, people did not have the category "iPod" in their minds, so they had to resort to previous categories... making all kinds of analogies you can imagine. In fact, I decided to collect some analogies on the iPod, to use someday in a paper in Marketing. Here are some:

The iPod is: "the razor or the blade?"; "the latest educational tool"; "the villain"; "the gold standard"; "the remote [control]"; "apple of our eyes today"; "game changer"; "the network"; "undisputed king"; "lifestyle accessory of the new millennium"; the "Kleenex" of mp3 players; "the new floppy [disks]"; "the Xerox machine of consumer electronics"; "the biggest rabbit in [...] Steve Jobs' hat right now"; "the coca-cola of music players"; the avenue for customers to go to Apple stores; "the PDA"; "the most guilty"; "as a mobile phone" [future prospect]; "as a canvas for expression"; "as a bootable drive"; "as a security threat" [video iPod]; "as a marketing tool"; "as a data repository"; "as a unit of measure" [in terms of money, height, and so on; e.g., how many iPods to circle the Earth?]; "as key for security"; "as a platform" [such as the Solaris platform]; "as a legislative force"; "as an ebook"; "as a presentation device"; "as a business tool"; "as a phone phreaking device"; "as a portable DVD player"; "as a learning tool".

Now, does this affect decision-making? Do people change their decisions because we are analogy machines? Well, think about this: if someone thinks that the iPod is the "lifestyle accessory of the new millennium" or "a canvas for expression", isn't that person more likely to make a buying decision than someone who thinks it is "a security threat" or a "Trojan horse"? I think so.

Politicians have, of course, long known that analogies embed decisions and embed responses from people. If a politician says that "Quebec will become a small boat in stormy seas", they are pushing people away from the idea of Quebec independence. If they say that "Quebec did nothing to deserve life in prison", they are arguing for Quebec's independence. And so it goes.

Decision-making theories that do not have analogies at their core will, quite simply, fail, at one point or another, to explain what humans do.

Simon writes: "[Some of] these activities--fixing agendas, setting goals, and designing actions--are usually called problem-solving. The last--evaluating and choosing--is usually called decision-making."

So the idea is that there are two different things, and that theories of decision-making should separate them. Should they? At first thought it seems very natural: one is concerned with what we want to do, the other with how. But human psychology gets in the way.

One example comes from the vagueness of expert advice.

[Chess novice makes a move]
Chess expert: "You don't want to make that move!"
Chess novice: "Why not?"
Chess expert: "That's not the kind of move you make in this kind of position."

What kind of advice is that? Why is it so vague? Because it comes from an expert. Experts have immense difficulty perceiving what they know. It is very hard to verbalize knowledge that has been stored deep in memory.

Let's think about a situation where we are all experts. Some stranger approaches you in the street and you just feel that they do not have good intentions. It has happened to me in Paris, New York, and of course in Rio. It's safe to assume everyone has gone through this many times.

Now, get a phone, call a child, and try to explain to them how to know when to reject people who approach them. Can you put it in words? It's very, very hard to put into words which approaching people you should reject, turning your back and moving on, and which you should listen to. That's why people teach children never to talk to strangers. Children can't go wrong this way. But we adults know when someone is trouble. Have you ever been approached by someone and just felt, "I REALLY don't want to talk to this person"?

I bet the answer is yes. But can you write down how you came to that judgement? Did you "fix agendas", "set goals", and "design actions", which were later "evaluated" until one was selected? Probably not. More likely you saw something you did not like. Then you felt weird about that person and started looking for more cues, until the moment came when you decided: "I'm getting out of here".

This leads us to Gary Klein's recognition-primed decision model. A firefighter, a nurse, a radar operator: they all act instantly, as if they had not (i) fixed an agenda, (ii) set goals, and (iii) designed actions (the problem-solving part), then evaluated the possible actions and finally selected one of them (the decision-making part). No, that's not what Klein found. People with expertise would find themselves immersed in a situation, and with the perception of subtle cues they would instantly have a diagnosis and a course of action. No agenda-setting, no goals defined, no set of possible actions, no evaluation of these actions. Just perception and action.
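The contrast between the two pictures can be sketched in a few lines of code. This is a hedged illustration, not Klein's formal model: the cue sets, the stored experience, and the fallback rule are all made up. What it shows is the structural difference: the textbook model generates and compares options, while the recognition-primed expert maps perceived cues straight to an action.

```python
def deliberative_choice(options, evaluate):
    """Textbook model: enumerate options, score each one, pick the best."""
    return max(options, key=evaluate)

def recognition_primed(cues, experience):
    """Klein-style sketch: perceived cues directly retrieve a course of action."""
    for pattern, action in experience:
        if pattern <= cues:           # situation recognized from partial cues
            return action             # act at once; no menu of options compared
    return "gather more information"  # nothing recognized: fall back and probe

# A firefighter's (invented) stored experience, most specific pattern first:
experience = [
    ({"roof sagging", "quiet fire"}, "evacuate the building now"),
    ({"smoke from basement"}, "attack from the stairwell"),
]

# Perception and action, with no agenda, goals, or evaluated alternatives:
print(recognition_primed({"roof sagging", "quiet fire", "heat"}, experience))
# -> evacuate the building now
```

Note that the expert function never builds a list of candidate actions at all; the "decision" is finished the moment the situation is categorized.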

Of course, any historical piece arguing over this should have 80 pages of introduction. But here's a hunch: George Boole's important book, written more than 150 years ago, was titled "The Laws of Thought".

During World War II, two major things happened. The world turned into a state of unreasonableness, with the major powers throwing fire at each other; the thought that there must be a reasonable way for all to coexist must have been imprinted on every mind. Meanwhile, scientists, the reason guys, were seen as key players in saving the world. Turing broke Enigma, Einstein wrote to the president, von Neumann, among other things, published the "Theory of Games and Economic Behavior", etcetera. Whether or not a cogent explanation can emerge from this hunch is, of course, debatable. Yet the fact is that reasoning (in the form of maximizing "utility" and selecting the "best" course of action) is deeply planted in economics, management science, and mathematical psychology. Meanwhile, intuition is left to hippies, to "Dr. Brian Weiss" charlatans, or to mavericks such as Gary Klein or Adriaan de Groot.

How do people make decisions? By reasoning or by intuition? What is reason? What is intuition? Can they be explained scientifically? Why does the behavior of Homo economicus, the hallmark of game theory, deviate from that of Homo sapiens sapiens?

How do we process information? How do we think about issues? Which psychological mechanisms enable us to do so? Which psychological mechanisms blind us from better decisions?

Perhaps no reason justifies your reading of this blog, but we surely have a number of reasons for these posts, including:

(i) reviews of decision-making theories and papers; (ii) reviews of decision-making in real life (a.k.a. current events), with analysis; (iii) comments on what's on the blogosphere; (iv) a summary of videos and course materials for FGV/EBAPE's decision-making course; (v) excess caffeine.

The word intuition in our title is our small claim to fame here... while economics, cognitive science, (a great deal of) psychology, computer science, and management science practically ignore this "unscientific", hippie, new-age, esoteric term, it is our intention to shed some light on it and perhaps even lend some scientific respectability to this dismissed word.

For the skeptics out there who might think that this talk about intuition in decision-making is not to be taken seriously: you might want to have a glimpse of it in action. Here it is:

Two top grandmasters, Maxim Dlugy and Hikaru Nakamura, battling it out in a 1-minute blitz game after the U.S. Championship. [International Master Ben Finegold comments on the side.]