How much rationality is enough?

Last week I had the great good fortune to attend the Max Planck Institute at Leipzig’s first conference on Rigorous Theories of Business Strategies in a World of Evolving Knowledge. The conference spanned an intense four days of presentations, exploration, and discussion on formal approaches to business strategy. Participants were terrific and covered the scholarly spectrum: philosophers, psychologists, game theorists, mathematicians, and physicists. Topics included cooperative game theory, unawareness games, psychological micro-foundations of decision making, and information theory. It was heartening to see growth in the community of formal theorists interested in strategy, and my guess is that the event will spawn interesting new research projects and productive coauthoring partnerships. (Thanks to our hosts, Jurgen Jost and Timo Ehrig, for organizing and sponsoring the conference!)

If one had to pick a single, overarching theme, it would have to be the exploration of formal approaches to modeling agents with bounded rationality. For example, I presented on subjective equilibrium in repeated games and its application to strategy. Others discussed heuristic-based decision making, unawareness, ambiguity, NK-complexity, memory capacity constraints, the interaction of language and cognition, and dynamic information transmission.

Over the course of the conference, it struck me just how offensive so many of my colleagues find the rationality assumptions so commonly used in economic theory. Of course, rational expectations models are the most demanding of their agents and, as such, seem to generate the greatest outrage. What I mean to convey is the sense that displeasure with these kinds of modeling choices goes beyond dispassionate, objective criticism and into indignation and even anger. If you are a management scholar, you know what I mean.

Thus, at a conference such as this, we spend a lot of time reminding ourselves of all the research that points to all the limitations of human cognition. We detail how humans suffer from decision processes that are emotional, memory constrained, short-sighted, logically inconsistent, biased, bad at even rudimentary probability assessment, and so on. Then, we explore ways to build formal models in which our agents are endowed with “more realistic” cognitive abilities.

Perhaps contrary to your intuition, this is heady stuff from a modeler’s point of view: formalizing stylized facts about real cognition is seen as a worthy challenge … and discovering where the new assumptions lead is always amusing. From the perspective of many management scholars, such theories are more realistic, better able to explain observations of shockingly stupid decisions by business practitioners and, hence, superior to the silly, overly simplistic models that employ a false level of rationality.

I am not mocking the sentiment. In fact, I agree with it. Indeed, none of the economists I know dispute the fact that human cognition is quite limited or that perfect rationality is an extreme and unrealistic assumption. (This isn’t to say there aren’t those who believe otherwise but, if there are, they are not acquaintances of mine.) On the contrary, careers have been made in game theory by finding clever ways to model some observed form of irrationality and using it to explain some observed form of decision failure. If this is the research agenda then, surely, we have hardly scratched the surface.

Yet, as I thought about it during the MPI conference last week, it dawned on me that our great preoccupation with irrational agents is misdirected. That animals as cognitively limited as we are often, if not typically, fail to achieve rational consistency in our endeavors is no puzzle. What else would you expect? Rather, the deep mystery is how agents so limited in rational thought invent democracy, create the internet, land on the moon, and run purposeful organizations that succeed in a free market. Casual empiricism suggests that the pattern of objective-oriented progress in the history of mankind is too pervasive to ascribe to dumb luck. Even at the individual level, in spite of their many cognitive failings, the majority of people lead purposeful, productive lives.

This leads me to remind readers that economists invented the rational expectations model precisely because it was the only option that came anywhere close to explaining observed patterns in economy-level reactions to changes in government policies. This, even though the perfect rationality assumption is axiomatically false. There you have it.

Which leaves open the challenge of identifying which features of human cognition lead to persistent patterns of success in highly unstable environments. I conjecture that our refined pattern recognition abilities play a role in this apparent miracle. Other candidates include our determination to see causality everywhere we look as well as our incredible mental flexibility. Social factors and institutions must be involved — and, somewhere in there, a modicum of rationality and logic. After all, we did invent math.


11 Comments on “How much rationality is enough?”

“This, even though the perfect rationality assumption is axiomatically false.”

I wonder to which conception of rationality you’re referring. In economics, rationality as understood in the rational actor model has little to do with ‘rationality’ in the colloquial sense. Rationality in economics means that people have transitive and complete preferences, whatever these preferences are. In fact, even suicide can be considered a rational action given a certain set of preferences. In behavioral economics and experimental economics, two fields that supposedly refuted the rationality assumption, rationality is still considered core to any modelling.
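The notion of rationality this comment describes is mechanical enough to check directly. Here is a minimal sketch (the function name, the example preference sets, and the utility numbers are all mine, for illustration only): a weak-preference relation over a finite set is “rational” in the economist’s sense if it is complete and transitive, and any ranking induced by a utility function passes, while a cyclic preference fails.

```python
from itertools import product

def is_rational(items, prefers):
    """Economic rationality as completeness plus transitivity.
    `prefers(a, b)` returns True iff a is weakly preferred to b."""
    items = list(items)
    # Completeness: every pair is comparable in at least one direction.
    complete = all(prefers(a, b) or prefers(b, a)
                   for a, b in product(items, repeat=2))
    # Transitivity: a >= b and b >= c must imply a >= c.
    transitive = all(prefers(a, c)
                     for a, b, c in product(items, repeat=3)
                     if prefers(a, b) and prefers(b, c))
    return complete and transitive

# A ranking induced by any utility function is rational, whatever
# the preferences themselves happen to be (the comment's point).
utility = {"work": 2, "leisure": 3, "exit": 1}
rational = is_rational(utility, lambda a, b: utility[a] >= utility[b])

# A cyclic ("money-pump") preference is complete but not transitive.
cycle = {("rock", "scissors"), ("scissors", "paper"), ("paper", "rock")}
irrational = is_rational({"rock", "paper", "scissors"},
                         lambda a, b: a == b or (a, b) in cycle)
```

Note that nothing in the check constrains *what* is preferred, only the internal consistency of the ranking, which is exactly why documented biases coexist with rationality remaining core to the modelling.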

Nice reflections. :-) If I were to continue with the reflections, it seems to me that if we take an evolutionary economics perspective, it is not all that strange that we see the emergence of lots of cool and sophisticated organizations and practices even if managers & entrepreneurs were just randomly choosing various combinations of production factors / strategies. Sort of like we see lots of cool and sophisticated organic forms as a result partly of blind variation in natural evolution. Of course, such a perspective requires some sort of selection process, which might (I do not know) imply some sort of rationality in terms of consumer preferences.

Nice post! This reminds me of Simon’s (1962) article “The Architecture of Complexity”. I think one of Simon’s points was that many of our problems are what he calls nearly decomposable. While everything is connected to everything else, you can ignore most of the connections almost all the time. Consequently, very simple heuristics can produce amazing results over time (though it does require several failures at the individual level) as problems can be solved bit by bit (e.g., you can first invent the computer and then the Internet).

I like Simon’s (p. 470) parable of Hora and Tempus: “There once were two watchmakers, named Hora and Tempus, who manufactured very fine watches. Both of them were highly regarded, and the phones in their workshops rang frequently–new customers were constantly calling them. However, Hora prospered, while Tempus became poorer and poorer and finally lost his shop. What was the reason?”

The reason, according to Simon, was that Tempus assembled each watch from start to finish in one chunk, while Hora “had designed them so that he could put together subassemblies of about ten elements each.” The mathematics of hierarchical design favors Hora. Any disruption in Tempus’s assembly process forces him to start over, while Hora only has to re-assemble the last sub-assembly.
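The mathematics favoring Hora can be sketched with a back-of-the-envelope expected-cost calculation. The parameters below are my recollection of Simon’s setup (1,000-part watches, interruption probability 0.01 per part added, Hora using 111 ten-part subassemblies across three levels); the exact ratio depends on how interruptions are accounted for, and Simon’s own figure is of the same order but not identical. An interruption loses only the current (sub)assembly, so the expected number of part-additions is the classic “n consecutive successes” formula.

```python
def expected_additions(n, p):
    """Expected part-additions to finish one n-part assembly when each
    addition is interrupted with probability p and an interruption
    loses the partial work (n-consecutive-successes formula)."""
    q = 1.0 - p
    return (q ** -n - 1.0) / p

p = 0.01
tempus = expected_additions(1000, p)     # one monolithic 1000-part watch
hora = 111 * expected_additions(10, p)   # 111 ten-part subassemblies
ratio = tempus / hora                    # roughly three orders of magnitude
```

With these assumed numbers, Tempus expects to place a few million parts per finished watch while Hora expects on the order of a thousand, which is the whole argument for hierarchy in one ratio.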

We can design (technological, organizational, logical and political) systems hierarchically which allows us to use and build on them intelligently without knowing exactly how they work. Just like we can blog without actually knowing how the Internet works.

Then of course it remains to be debated whether the ‘near decomposability’ of the world or our ability to design systems hierarchically is a property of the ‘world’ or of our intelligence.

Interesting reflections on this topic in Eric Baum’s book What is Thought? He’s an AI guy who’s worked a lot on trying to build up rational behavior from evolving simpler components. His main conclusion is that evolution selected, out of the zillions of algorithms and heuristics mathematically possible, a relatively small set that compactly fits the structure of our environment.

I’m currently reading Glimcher’s book Foundations of Neuroeconomics which is a remarkable attempt at truly interdisciplinary pioneering. (His earlier book on neuroeconomics was also very interesting–one of the hallmarks of his approach is that the influence between fields runs both ways, with economic logic helping to explain puzzles in neurobiology and vice versa.)

Maybe strategy and management researchers should have a look at Vernon Smith’s “ecological rationality” (as opposed / complementary to constructivist rationality)?
In the context of ecological rationality, the “social factors and institutions” are indeed “involved” in that they could help us explain some of the factors of the “patterns of success in highly unstable environments” that are not explained by features of (constructivist) human cognition or deliberate action and design by strategists and managers.
Concerning, for instance, Marcus Linder’s requirement for “some sort of selection process”, Smith offers:
“(Constructivist) reason is good at providing variation, but not selection. Constructivism is indeed an engine for generating variation, but is far too limited in its ability to comprehend and apply all the relevant facts to serve the process of selection, which is better left to ecological processes.”
I have not been able to find references to V. Smith in the strategy or management literature. Anyone?

Attacking rationality is currently popular but has yet to produce a material alternative. Behavioral economists have not brought us even one step further in our understanding. Documenting biases is like documenting different types of friction in the attempt to prove Newtonian physics false. It’s simply foolish. First, finding feathers and other such objects not conforming to the model does not disprove the theory of gravity. Second, the scientist must offer an alternative that explains more than the current theory.

It is also worth noting that rationality is an assumption used at one level to explain market outcomes that operate at another level. Economists assume agent rationality to make market level predictions. These strong or naive assumptions work quite well when compared to all alternatives.

I don’t read the behavioral project as trying to “disprove” rationality. Rather, I read the pure rationalists (Fama, Milton Friedman and their fellow travelers) as denying feathers or friction can even exist at all.