Tuesday, 17 January 2012

The question is: how did economics change its attitude to mathematics
in the forty years between Håvelmo’s The Probability Approach in Econometrics and his Nobel Prize in 1989, when he was pessimistic about
the impact the development of econometrics had had on the practice of
economics? Coinciding with Håvelmo’s pessimism, many economists were
reacting strongly against the ‘mathematisation’ of economics, evidenced
by the fact that before 1925 only around 5% of economics research
papers were based on mathematics, but by 1944, the year of Håvelmo
and von Neumann-Morgenstern’s contributions, this had quintupled to
25%1.
While the proportion of economics papers based on maths has not
continued this trajectory, the influence of mathematical economics has, and the
person most closely associated with this change in economic practice was Paul
Samuelson.

Samuelson is widely regarded as the most influential economist to come out of
the United States and is possibly the most influential post-war economist in the
world. In 1970 he became the first U.S. citizen to be awarded the Nobel Prize in Economics,
because “more than any other contemporary economist, he has contributed
to raising the general analytical and methodological level in economic
science”2.
He studied at the University of Chicago and then Harvard, where he obtained his
doctorate in 1941. In 1940 he was appointed to the economics department of M.I.T.,
where he would remain for the rest of his life; in the final years of the war he
worked in Wiener’s group looking at gun control problems3.
Samuelson would comment that “I
was vaccinated early to understand that economics and physics could share the
same formal mathematical theorems”.

In 1947 Samuelson published Foundations of Economic Analysis, which laid
out the mathematics Samuelson felt was needed to understand economics. It is
said that von Neumann, invited to write a review of Foundations in
1947, declined on the grounds that its mathematics was “contemporary
with Newton”. Von Neumann, like many mathematicians who looked at
economics, believed economics needed better maths than it was being
offered4.
In 1948 Samuelson published the first edition of his most famous work,
Economics: An Introductory Analysis, one of the most influential textbooks on
economics ever published; it has run into nineteen editions and sold over four million copies.

There appears to be a contradiction: Håvelmo seems to think his introduction
of mathematics into economics was a failure, while Samuelson’s status seems to
suggest mathematics came to dominate economics. In the face of contradiction,
science should look for a distinction.

I think the clue is in Samuelson’s attachment to “formal mathematical
theorems”, and that his conception of mathematics was very diﬀerent from that of
the earlier generation of mathematicians that included everyone from Newton and
Poincaré to von Neumann, Wiener and Kolmogorov.

A potted history of the philosophy of mathematics is that the numerologist
Plato came up with the Theory of Forms and then Euclid produced The Elements
which was supposed to capture the indubitability, the certainty, and immutability,
the permanence, of mathematics on the basis that mathematical objects were
Real representations of Forms. This was used by St Augustine of Hippo as
evidence for the indubitability and immutability of God, embedding into western
European culture the indubitability and immutability of mathematics. The
identiﬁcation of non-Euclidean geometries in the nineteenth century destroyed this
ediﬁce and the reaction was the attempt to lay the Foundations of Mathematics,
not on the basis of geometry but on the logic of the natural numbers.
Frege’s logicist attempt collapsed with Russell’s paradox and attention
turned to Hilbert’s formalism to provide a non-Platonic foundation for
mathematics. The key idea behind Formalism is that, unlike Platonic
Realism, mathematical objects have no meaning outside mathematics: the
discipline is a game played with symbols that have no relevance to human
experience.

The Platonist Kurt Gödel had, according to von Neumann, “shown that
Hilbert’s program is essentially hopeless”, and

The very concept of “absolute” mathematical rigour is not immutable.
The variability of the concept of rigour shows that something else
besides mathematical abstraction must enter into the makeup of
mathematics5

Mathematics split into two broad streams. Applied mathematics,
practised by the likes of von Neumann and Turing, responded
by focussing on real-world ‘special cases’, such as modelling the brain6.
Pure mathematics took the opposite approach, emphasising the generalisation of
special cases, as practised by Bourbaki and Hilbert’s heirs.

Formalism began to dominate mathematics in the 1940s and 1950s. Mathematics
was about ‘rigorous’ deduction, whatever that means, from axioms and definitions
to theorems. Explanatory natural language and, possibly worse, pictures were to be
removed from mathematics. The “new math” program of the 1960s was a
consequence of this Formalist-Bourbaki dominance of mathematics.

It is difficult to give a definitive explanation for why Formalism became
dominant, but it is often associated with the emergence of logical positivism, a
somewhat incoherent synthesis of Mach’s desire to base science only on
phenomena (which rejected the atom), mathematical deduction, and Comte’s
views on the unity of the physical and social sciences. Logical positivism
dominated western science after the Second World War, spreading out from its
heart in central European physics, carried by refugees from Nazism.

The consequences of Formalism were felt most keenly in physics. Richard
Feynman, the physicists’ favourite physicist, hated its abandonment of relevance.
Murray Gell-Mann, another Nobel Laureate physicist, commented in 1992 that
the Formalist-Bourbaki era seemed to be over:

abstract mathematics reached out in so many directions and
became so seemingly abstruse that it appeared to have left physics
far behind, so that among all the new structures being explored by
mathematicians, the fraction that would even be of any interest
to science would be so small as not to make it worth the time of
a scientist to study them.

But all that has changed in the last decade or two. It has turned
out that the apparent divergence of pure mathematics from science
was partly an illusion produced by obscurantist, ultra-rigorous
language used by mathematicians, especially those of a Bourbaki
persuasion, and their reluctance to write up non–trivial examples
in explicit detail. When demystiﬁed, large chunks of modern mathematics
turn out to be connected with physics and other sciences, and
these chunks are mostly in or near the most prestigious parts
of mathematics, such as diﬀerential topology, where geometry,
algebra and analysis come together. Pure mathematics and science are ﬁnally being reunited and mercifully, the Bourbaki plague is
dying out.7

Economics has always doubted its credentials. Laplace saw the physical
sciences resting on calculus, while the social sciences would rest on
probability8,
but classical economists like Walras, Jevons and Menger wanted their emerging
discipline of economics to have the same status as Newton’s physics, and so
mimicked physics. Samuelson was looking to do essentially the same thing:
economics would be indubitable and immutable if it looked like Formalist
mathematics, and in this respect he was successful; the status of economics has grown faster than the growth of maths in economics. However, while the
general status of economics has exploded, its usefulness to most users of
economics, such as those in the financial markets, has collapsed. Trading
floors are recruiting engineers and physicists, who always looked for the
relevance of mathematics, in preference to economists (or post-graduate
mathematicians).

My answer to the question “why don’t more economists see the potential of
mathematics?” is both simple and complex. Economists have, in the main, been
looking at a peculiar manifestation of mathematics - Formalist-Bourbaki
mathematics - a type of mathematics that emerged in the 1920s in response to an intellectual
crisis in the Foundations of Mathematics. Economists have either embraced it, as
Samuelson did, or been repulsed by it, as Friedman was.

Friday, 6 January 2012

A research student working in econometrics has e-mailed me with the
comment

I am a little confused why many economists do not see the
potential of mathematics.

The discipline of econometrics was introduced in the 1940s, with the key
monograph being Trygve Håvelmo’s The Probability Approach in Econometrics.
Håvelmo’s motivation for writing the paper is eloquently stated in the
preface:

The method of econometric research aims, essentially, at a
conjunction of economic theory and actual measurements, using
the theory and technique of statistical inference as a bridge
pier. But the bridge itself was never completely built. So far,
the common procedure has been, ﬁrst to construct an economic
theory involving exact functional relationships, then to compare
this theory with some actual measurements, and, ﬁnally, “to
judge” whether the correspondence is “good” or “bad”. Tools
of statistical inference have been introduced, in some degree, to
support such judgements, e.g., the calculation of a few standard
errors and multiple-correlation coeﬃcients. The application of
such simple “statistics” has been considered legitimate, while, at
the same time, the adoption of deﬁnite probability models has
been deemed a crime in economic research, a violation of the very
nature of economic data. That is to say, it has been considered
legitimate to use some of the tools developed in statistical theory
without accepting the very foundation upon which statistical
theory is built. For no tool developed in the theory of statistics has any meaning – except, perhaps, for descriptive purposes – without being referred to some stochastic scheme.

The reluctance among economists to accept probability models
as a basis for economic research has, it seems, been founded
upon a very narrow concept of probability and random variables.
Probability schemes, it is held, apply only to such phenomena as
lottery drawings, or, at best, to those series of observations where
each observation may be considered as an independent drawing
from one and the same “population”. From this point of view
it has been argued, e.g., that most economic time series do not
conform well to any probability model, “because the successive
observations are not independent”. But it is not necessary that
the observations should be independent and that they should all
follow the same one-dimensional probability law. It is sufficient to
assume that the whole set of, say n, observations may be considered
as one observation of n variables (or a “sample point”) following an
n-dimensional joint probability law, the “existence” of which may be
purely hypothetical. Then, one can test hypotheses regarding this joint
probability law, and draw inference as to its possible form, by means
of one sample point (in n dimensions). Modern statistical theory has
made considerable progress in solving such problems of statistical
inference.

In fact, if we consider actual economic research–even that carried
on by people who oppose the use of probability schemes–we ﬁnd
that it rests, ultimately, upon some, perhaps very vague, notion
of probability and random variables. For whenever we apply a
theory to facts we do not–and we do not expect to–obtain exact
agreement. Certain discrepancies are classiﬁed as “admissible”,
others as “practically impossible” under the assumptions of
the theory. And the principle of such classiﬁcation is itself a
theoretical scheme, namely one in which the vague expressions
“practically impossible” or “almost certain” are replaced by “the
probability is near to zero”, or “the probability is near to one”.
This is nothing but a convenient way of expressing opinions about
real phenomena. But the probability concept has the advantage
that it is “analytic”, we can derive new statements from it by the
rules of logic.

Håvelmo’s argument can be split into four key points. If economics is to be regarded
as ‘scientific’, it needs to take probability theory seriously. He then notes
that economists have taken a naive approach to probability, and possibly
mathematics in general, and introduces the Lagrangian idea of representing
n points in one-dimensional space by one point in n-dimensional
space. He then makes Poincaré’s point that probability is a convenient
solution, it makes the scientist’s life easier, and finally he makes Feller’s point that
it enables the creation of new knowledge, new statements.
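Håvelmo’s central device, treating n dependent observations as a single sample point in n-dimensional space, can be sketched numerically. The following is a minimal illustration, not anything from Håvelmo’s monograph: it assumes, purely for concreteness, that the hypothetical joint law is multivariate normal with the covariance implied by a stationary AR(1) process, and evaluates the likelihood of one simulated time series as one n-dimensional observation.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Håvelmo's reframing: n dependent observations y_1,...,y_n are ONE observation
# of an n-dimensional random vector governed by a joint probability law.
# Illustrative (hypothetical) choice of law: multivariate normal with the
# covariance of a stationary AR(1) process y_t = phi*y_{t-1} + eps_t, eps_t ~ N(0,1).

def ar1_covariance(n, phi, sigma2=1.0):
    """Covariance matrix of n consecutive observations of a stationary AR(1)."""
    lags = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
    return (sigma2 / (1.0 - phi**2)) * phi**lags

rng = np.random.default_rng(12345)
n, phi = 50, 0.8

# Simulate one AR(1) path: the whole path is a single 'sample point' in R^n.
y = np.empty(n)
y[0] = rng.standard_normal() / np.sqrt(1.0 - phi**2)  # stationary start
for t in range(1, n):
    y[t] = phi * y[t - 1] + rng.standard_normal()

# Joint log-likelihood of that single n-dimensional observation under the
# dependent model (phi = 0.8) versus an independent model (phi = 0).
ll_dependent = multivariate_normal(mean=np.zeros(n),
                                   cov=ar1_covariance(n, 0.8)).logpdf(y)
ll_independent = multivariate_normal(mean=np.zeros(n),
                                     cov=ar1_covariance(n, 0.0)).logpdf(y)

print(ll_dependent, ll_independent)
```

The function name `ar1_covariance` and the parameter values are assumptions made for the sketch. The point it illustrates is exactly Håvelmo’s: the observations need not be independent draws from one population; a single sample point in n dimensions suffices to compare hypotheses about the joint law, and here the dependent model fits the dependent data better than the independent one.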

Håvelmo then goes on to tackle the issue that goes back at least as far as
Cicero, “there is no foreknowledge of things that happen by chance”, by making the
critical observation that nature looks stable because we look at it in a particular
way:

“In the natural sciences we have stable laws”, means not much
more and not much less than this: The natural sciences have
chosen very fruitful ways of looking on physical reality.

Håvelmo is saying that if economists look at the world in a diﬀerent way, if the right
analytical tools are available to them, they may be able to identify stable
laws.

At about the same time, Oskar Morgenstern was working with John von
Neumann on The Theory of Games and Economic Behavior, a “big book
because they wrote it twice, once in symbols for mathematicians and once in prose
for economists”. Morgenstern begins the book by describing the landscape. On the
second page he makes the case for using mathematics in economics, just as
Håvelmo had, but with a more comprehensive argument. Morgenstern reviews
the case as to why mathematics is inappropriate to economics, no doubt with von
Neumann at his shoulder:

The arguments often heard that because of the human
element, of psychological factors etc., or because there is –
allegedly – no measurement of important factors, mathematics
will find no application [in economics] [von Neumann and
Morgenstern, 1967, p 3]

However, Morgenstern points out that Aristotle had the same opinion of the use of
mathematics in physics

Almost all these objections have been made, or might have been
made, many centuries ago in fields where mathematics is
now the chief instrument of analysis.

While measurement may appear difficult in economics, measurement also appeared
difficult in physics: before the time of Albert the Great, when objects were
simply ‘hot’ or ‘cold’; before Newton fixed time and space; and before the idea
of potential energy being released as kinetic energy emerged.

The reason why mathematics has not been more successful in
economics must, consequently, be found elsewhere. The lack of
real success is largely due to a combination of unfavourable
circumstances, some of which can be removed gradually. To begin
with, economic problems were not formulated clearly and are often
stated in such vague terms as to make mathematical treatment
a priori appear hopeless because it is quite uncertain what the
problems really are. There is no point in using exact methods
where there is no clarity in the concepts and the issues to which
they are to be applied. Consequently the initial task is to clarify
the knowledge of the matter by further careful description. But
even in those parts of economics where the descriptive problem
has been handled more satisfactorily, mathematical tools have
seldom been used appropriately. They were either inadequately
handled, as in the attempts to determine a general economic
equilibrium …, or they led to mere translations from a literary form
of expression into symbols, without any subsequent mathematical
analysis. [von Neumann and Morgenstern, 1967, p 4]

Morgenstern makes the critical observation that the ‘correct’ use of mathematics in
science leads to the creation of new mathematics:

The decisive phase of the application of mathematics to physics
– Newton’s creation of a rational discipline of mechanics –
brought about, and can hardly be separated from, the discovery
of [calculus]. (There are several other examples, but none stronger
than this.)

The importance of social phenomena, the wealth and multiplicity
of their manifestations, and the complexity of their structure,
are at least equal to those in physics. It is therefore expected
– or feared – that the mathematical discoveries of a stature
comparable to that of calculus will be needed in order to
produce decisive success in this field. [von Neumann and
Morgenstern, 1967, p 5]

In 1989 Håvelmo was awarded the Nobel Prize in Economics “for his
clarification of the probability theory foundations of econometrics and his analyses
of simultaneous economic structures”. In his Nobel lecture, Håvelmo
reflected on the impact of his work:

To some extent my conclusions [are] in a way negative. I [draw]
attention to the – in itself sad – result that the new and, as we had
thought, more satisfactory methods of measuring interrelations
in economic life had caused some concern among those who
had tried the new methods in practical work. It was found that
the economic theories which we had inherited and believed in,
were in fact less stringent than one could have been led to
think by previous more rudimentary methods of measurement.
To my mind this conclusion is not in itself totally negative. If
the improved methods could be believed to show the truth, it is
certainly better to know it. Also for practical economic policy it is
useful to know this, because it may be possible to take preventive
measures to reduce uncertainty. I also mentioned another thing
that perhaps could be blamed for results that were not as good as
one might have hoped for, namely economic theory in itself. The
basis of econometrics, the economic theories that we had been led
to believe in by our forefathers, were perhaps not good enough.
It is quite obvious that if the theories we build to simulate actual
economic life are not suﬃciently realistic, that is, if the data we
get to work on in practice are not produced the way that economic
theories suggest, then it is rather meaningless to confront actual
observations with relations that describe something else. [Prize Lecture to the memory of Alfred Nobel]

Håvelmo’s aim in the 1940s, along with that of John von Neumann, had been
to improve economic methodology; the consequence, in Håvelmo’s case, was
that it highlighted deficiencies in economic theory. The question is: what
happened in economics in the forty years between Håvelmo’s paper on econometrics and his Nobel Prize in
1989 to lead to such a negative reflection on the development of economics? I shall come back to this in my next post.

About Me

I am a Lecturer in Financial Mathematics at Heriot-Watt University in Edinburgh. Heriot-Watt was the first UK university to offer degrees in Actuarial Science and Financial Mathematics and is a leading UK research centre in the fields.

Between 2006-2011 I was the UK Research Council's Academic Fellow in Financial Mathematics and was involved in informing policy makers of mathematical aspects of the Credit Crisis.