Wednesday, 26 September 2012

Within the field of Financial Mathematics, the Fundamental Theorem of Asset Pricing consists of
two statements (e.g. [Shreve, 2004, Section 5.4]):

Theorem: The Fundamental Theorem of Asset Pricing

1. A market admits no arbitrage if, and only if, the market has a martingale measure.
2. The martingale measure is unique if, and only if, every contingent claim can be hedged.

The theorem emerged between 1979 and 1983 ([Harrison and Kreps, 1979], [Harrison and
Pliska, 1981], [Harrison and Pliska, 1983]) as Michael Harrison sought to establish a mathematical
theory underpinning the well-established Black-Scholes equation for pricing options. One remarkable
feature of the Fundamental Theorem is its lack of mathematical notation, a contrast made striking
by the symbol-laden Black-Scholes equation, which came out of economics.
Despite its non-mathematical appearance, the work of Harrison and his collaborators opened finance
to investigation by functional analysts (such as [Schachermayer, 1984]) and by 1990, any
mathematician working on asset pricing would have to do so within the context of the Fundamental
Theorem.

The use of the term ‘probability measure’ places the Fundamental Theorem within the
mathematical theory of probability formulated by Andrei Kolmogorov in 1933 ([Kolmogorov, 1933
(1956)]). Kolmogorov’s work took place in a context captured by Bertrand Russell, who in 1927
observed that

It is important to realise the fundamental position of probability in science. …As
to what is meant by probability, opinions differ. Russell [1927 (2009), p 301]

The significance of probability in providing the basis of statistical inference in empirical
science had been generally understood since Laplace. In the 1920s the idea of randomness,
as distinct from a lack of information, the absence of Laplace’s Demon, was becoming
significant. In 1926 the physicist Max Born was “inclined to give up determinism”, to
which Einstein responded with “I for one am convinced that [God] does not play dice”
[von Plato, 1994, pp 147–157]. Outside the physical sciences, Frank Knight, in Risk, Uncertainty and Profit, argued that uncertainty, a consequence of randomness, was the
only true source of profit, since if a profit was predictable the market would respond and
make it disappear (Knight [1921 (2006), III.VII.1–4]). Simultaneously, in his Treatise on Probability, John Maynard Keynes observed that in some cases cardinal probabilities could be
deduced; in others, ordinal probabilities, that one event was more or less likely than another,
could be inferred; but the largest class of problems was not reducible to the conventional
concept of probability ([Keynes, 1972, Ch XXIV, 1]). Keynes would place this inability to
precisely define a numerical probability at the heart of his economics ([Skidelsky, 2009, pp
84–90]).

Two mathematical theories had become ascendant by the late 1920s. Richard von Mises, an
Austrian engineer linked to the Vienna Circle of logical-positivists, and brother of the economist
Ludwig, attempted to lay down the axioms of probability based on observable facts within a
framework of Platonic Realism. The result was published in German in 1931 and popularised in
English as Probability, Statistics and Truth, and is now regarded as a key justification of the
frequentist approach to probability.

To balance von Mises’ Realism, the Italian actuary, Bruno de Finetti presented a more
Nominalist approach. De Finetti argued that “Probability does not exist” because it was only an
expression of the observer’s view of the world. De Finetti’s subjectivist approach was closely related
to the less well-known position taken by Frank Ramsey, who, in 1926, wrote Truth and Probability, in which he argued that probability was a measure of belief. Ramsey’s argument was
well received by his friend and mentor John Maynard Keynes, but Ramsey’s early death hindered its
development.

While von Mises and de Finetti took an empirical path, Kolmogorov used mathematical
reasoning to define probability. Kolmogorov wanted to address the key issue for physics at the
time, which was that, following the work of Montmort and de Moivre in the first decade of the
eighteenth century, probability had been associated with counting events and comparing relative
frequencies. This had been coherent until mathematics became focused on infinite sets at the
same time as physics became concerned with statistical mechanics, in the second half of
the nineteenth century. Von Mises had tried to address these issues, but his analysis was
weak in dealing with the infinite sets that came with continuous time. As Jan von Plato
observes

von Mises’s theory of random sequences has been remembered as something to be
criticized: a crank semi-mathematical theory serving as a warning of the state of
probability [at the time] von Plato [1994, p 180]

In 1902 Lebesgue had redefined the mathematical concept of the integral in terms of abstract
‘measures’ in order to accommodate new classes of mathematical functions that had emerged in the
wake of Cantor’s transfinite sets. Kolmogorov made the simple association of these abstract
measures with probabilities, solving von Mises’ issue of having to deal with infinite
sets in an ad hoc manner. As a result Kolmogorov identified a random variable with a
function and an expectation with an integral: probability became a branch of Analysis, not
Statistics.

Kolmogorov’s work was initially well received, but slow to be adopted. One contemporary
American reviewer noted it was an important proof of Bayes’ Theorem ([Reitz, 1934]), then still
controversial (Keynes [1972, Ch XVI, 13]) but now a cornerstone of statistical decision making.
Amongst English-speaking mathematicians, the American Joseph Doob was instrumental in
promoting probability as measure ([Doob, 1941]) while the full adoption of the approach followed its
advocacy by Doob and William Feller at the First Berkeley Symposium on Mathematical Statistics
and Probability in 1945–1946.

While measure theoretic probability is a rigorous theory, outside pure mathematics it is seen as
redundant. Von Mises criticised it as unnecessarily complex ([von Mises, 1957 (1982), p 99]), while
the statistician Maurice Kendall argued that measure theory was fine for mathematicians but of
limited practical use to statisticians, failing “to found a theory of probability as a branch of
scientific method” ([Kendall, 1949, p 102]). More recently the physicist Edwin Jaynes championed
Leonard Savage’s subjectivism as having a “deeper conceptual foundation which allows it to
be extended to a wider class of applications, required by current problems of science”
in comparison with measure theory ([Jaynes, 2003, p 655]). Furthermore, in 2001 two
mathematicians, Glenn Shafer and Vladimir Vovk (a former student of Kolmogorov), proposed
an alternative to measure-theoretic probability, ‘game-theoretic probability’, because
the novel approach “captures the basic intuitions of probability simply and effectively”
([Shafer and Vovk, 2001]). Seventy-five years on, Russell’s enigma appears to be no closer to
resolution.

The issue around the ‘basic intuition’ of measure theoretic probability for empirical
scientists can be accounted for as a lack of physicality. Frequentist probability is based on
the act of counting, subjectivist probability is based on a flow of information, whereas
measure theoretic probability is based on an abstract mathematical object unrelated
to phenomena. Specifically, in the Fundamental Theorem the ‘martingale measure’ is a
probability measure, usually labelled ℚ, such that the price of an asset today, X_0, is the
expectation, under the martingale measure, of the discounted asset price in the future, X_T:

X_0 = E^ℚ[X_T].

Given a current asset price, X_0, and a set of future prices, X_T, the probability measure ℚ is
defined such that this equality holds, and so is forward looking, in that it is
based on current and future prices. The only condition placed on the relationship between the
martingale measure and the ‘natural’, or ‘physical’, probability measure, inferred from
historical price changes and usually assigned the label ℙ, is that they agree on what is
possible.

The term ‘martingale’ in this context derives from doubling strategies in gambling, and it was
introduced into mathematics by Jean Ville in 1939, in a critique of von Mises’ work, to label a
random process where the value of the random variable at a specific time is the expected value of the random variable in the future. The idea that asset prices have the martingale property was first
proposed by Benoit Mandelbrot ([Mandelbrot, 1966]) in response to an early formulation of
Eugene Fama’s Efficient Market Hypothesis (EMH) ([Fama, 1965]), the two concepts being
combined by Fama in 1970 ([Fama, 1970]). For Mandelbrot and Fama the key consequence of
prices being martingales was that the price today was, statistically, independent of the
future price distribution: technical analysis of markets was charlatanism. In developing
the EMH there is no discussion of the nature of the probability under which assets are
martingales, and it is often assumed that the expectation is calculated under the natural
measure.

Arbitrage (the word derives from ‘arbitration’) has long been a subject of financial mathematics.
In Chapter 9 of his 1202 text advising merchants, the Liber Abaci, Fibonacci discusses ‘Barter of
Merchandise and Similar Things’,

20 arms of cloth are worth 3 Pisan pounds and 42 rolls of cotton are
similarly worth 5 Pisan pounds; it is sought how many rolls of cotton will be had
for 50 arms of cloth. ([Sigler, 2002, p 180])

In this case there are three commodities, arms of cloth, rolls of cotton and Pisan pounds, and
Fibonacci solves the problem by having Pisan pounds ‘arbitrate’ between the other two
commodities.
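Fibonacci's solution can be re-traced with simple arithmetic, using Pisan pounds as the arbitrating commodity; the sketch below merely re-works the quoted problem:

```python
# Fibonacci's barter problem: Pisan pounds 'arbitrate' between the two goods.
# 20 arms of cloth are worth 3 Pisan pounds; 42 rolls of cotton are worth 5.
pounds_per_arm = 3 / 20       # value of one arm of cloth, in Pisan pounds
rolls_per_pound = 42 / 5      # rolls of cotton obtained for one Pisan pound

# Convert 50 arms of cloth into pounds, then pounds into rolls of cotton.
rolls = 50 * pounds_per_arm * rolls_per_pound
print(rolls)  # 63.0, as Fibonacci finds
```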

Over the centuries this technique of pricing through arbitration evolved into the law of one price:
if two assets offer identical cash flows then they must have the same price. This was employed
by Jan de Witt in 1671 when he solved the problem of pricing life annuities in terms of redeemable
annuities, based on the presumption that

the real value of certain expectations or chances of objects, of different value, should
be estimated by that which we can obtain from as many expectations or chances
dependent on one or several equitable contracts. [Sylla, 2003, p 313, quoting De
Witt, The Worth of Life Annuities in Proportion to Redeemable Bonds]

In 1908 the Croatian mathematician Vinzenz Bronzin published a text which discusses pricing
derivatives by ‘covering’, or hedging, them with portfolios of options and forward contracts,
employing the principle of ‘equivalence’, the law of one price ([Zimmermann and Hafner, 2007]). In
1965 the functional analyst and probabilist Edward Thorp collaborated with a post-doctoral
mathematician, Sheen Kassouf, and combined the law of one price with basic techniques of
calculus to identify market mis-pricing of warrants, at the time a widely traded stock
option. In 1967 they published their methodology in a best-selling book, Beat the Market
([MacKenzie, 2003]).

Within economics, the law of one price was developed in a series of papers between 1954 and
1964 by Kenneth Arrow, Gerard Debreu and Lionel McKenzie in the context of general
equilibrium. In his 1964 paper, Arrow addressed the issue of portfolio choice in the presence of
risk and introduced the concept of an Arrow Security, an asset that would pay out ‘1’ in a specific
future state of the economy but zero in all other states; by the law of one price, all
commodities could be priced in terms of these securities ([Arrow, 1964]). The work of
Fischer Black, Myron Scholes and Robert Merton ([Black and Scholes, 1973]) employed this
principle and presented a mechanism for pricing warrants on the basis that “it should
not be possible to make sure profits”, with the famous Black-Scholes equation being the
result.

In the context of the Fundamental Theorem, ‘an arbitrage’ is the ability to formulate a trading
strategy such that the probability, whether under ℙ or ℚ, of a loss is zero, but the probability of a
profit is positive. This definition is important following Hardie’s criticism of the way the term is
applied loosely in economic sociology ([Hardie, 2004]). The obvious point of this definition is that,
unlike Hardie’s definition [Hardie, 2004, p 243], there is no guaranteed (strictly positive) profit,
however there is also a subtle technical point: there is no guarantee that there is no loss if there is an
infinite set of outcomes. This is equivalent to the observation that there is no guarantee that an
infinite number of monkeys with typewriters will, given enough time, come up with a work
of Shakespeare: it is only that we expect them to do so. This observation explains the
caution in the use of infinite sets taken by mathematicians such as Poincaré, Lebesgue and
Brouwer.
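In a finite setting this definition of arbitrage is easy to state in code; a minimal sketch, in which the payoff vectors are hypothetical illustrations:

```python
def is_arbitrage(payoffs):
    """A trading strategy, given as its profit in each possible outcome,
    is an arbitrage if a loss is impossible but a profit is possible."""
    return all(p >= 0 for p in payoffs) and any(p > 0 for p in payoffs)

print(is_arbitrage([0.0, 2.0, 1.0]))   # True: never loses, can profit
print(is_arbitrage([-1.0, 5.0, 5.0]))  # False: a loss is possible
print(is_arbitrage([0.0, 0.0, 0.0]))   # False: no profit is possible
```

With an infinite set of outcomes the check is subtler, which is the technical point made above: zero probability of a loss is not the same as the impossibility of a loss.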

To understand this meaning of arbitrage, consider the most basic case of a single
period economy, consisting of a single asset whose price, X_0, is known at the start of
the period and can take one of two (present) values, X_T^U > X_T^D, representing two
possible states of the economy at the end of the period. In this case an arbitrage would
exist if X_T^U > X_T^D ≥ X_0: buying the asset now would lead to a possible profit at the
end of the period, with a guarantee of no loss. Similarly, if X_0 ≥ X_T^U > X_T^D, short
selling the asset now and buying it back at the end of the period would also lead to an
arbitrage.

In summary, for there to be no arbitrage opportunities we require that

X_T^U > X_0 > X_T^D.

This implies that there is a real number, q, with 0 ≤ q ≤ 1, such that

X_0 = X_T^D + q(X_T^U − X_T^D)
    = q X_T^U + (1 − q) X_T^D
    ≡ E^ℚ[X_T],

and it can be seen that q represents a measure theoretic probability that the economy ends in the U
state.
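The algebra can be traced numerically; the prices in this sketch are illustrative, not taken from any market:

```python
# One-period, two-state economy with illustrative prices.
X0, XTU, XTD = 100.0, 120.0, 80.0   # today's price and the two future prices

# No arbitrage requires XTU > X0 > XTD, so q below is a genuine probability.
q = (X0 - XTD) / (XTU - XTD)
assert 0 < q < 1

# Under the martingale measure (q, 1 - q), today's price is the expectation
# of the future price: X0 = E_Q[X_T].
expectation = q * XTU + (1 - q) * XTD
print(q, expectation)  # 0.5 100.0
```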

With this in mind, the first statement of the Fundamental Theorem can be interpreted
simply as “the price of an asset must lie between its maximum and minimum possible
(discounted) future price”. If X_0 < X_T^D we have that q < 0, whereas if X_T^U < X_0 then
q > 1, and in either case q does not represent a probability measure, which, by definition,
must lie between 0 and 1. In this simple case there is a trivial intuition behind measure
theoretic probability: the martingale measure and an absence of arbitrage are a simple
tautology.

To appreciate the meaning of the second statement of the theorem, consider the situation when
the economy can take on three states at the end of the time period, not two. If we label the possible
future asset prices as X_T^U > X_T^M > X_T^D, we cannot deduce a unique set of probabilities
0 ≤ q_U, q_M, q_D ≤ 1, with q_U + q_M + q_D = 1, such that

X_0 = q_U X_T^U + q_M X_T^M + q_D X_T^D ≡ E^ℚ[X_T].

The market still precludes arbitrage, but we no longer have a unique probability measure under which
asset prices are martingales, and so we cannot derive unique prices for other assets in the market. In
the context of the law of one price, we cannot hedge, replicate or cover a position in the market,
making it riskless; in terms of Arrow’s work, the market is incomplete. This explains the
sense of the second statement of the Fundamental Theorem and is important in that
the statement tells the mathematician that in the real world of imperfect knowledge
and transaction costs, a model within the Theorem’s framework cannot give a precise
price.
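A short sketch illustrates the non-uniqueness; the three future prices and the contingent claim (a call struck at 100) are hypothetical:

```python
# Three future states: martingale measures (qU, qM, qD) are no longer unique.
X0, XTU, XTM, XTD = 100.0, 120.0, 100.0, 80.0
payoff = {XTU: 20.0, XTM: 0.0, XTD: 0.0}  # a call option struck at 100

prices = []
for i in range(11):
    qM = i / 10                                   # free parameter
    qU = (X0 - XTD - qM * (XTM - XTD)) / (XTU - XTD)
    qD = 1.0 - qU - qM
    if 0 <= qU <= 1 and 0 <= qD <= 1:             # genuine probability measures
        # Every such measure prices the asset correctly, yet each gives a
        # different price for the claim.
        prices.append(qU * payoff[XTU] + qM * payoff[XTM] + qD * payoff[XTD])

print(min(prices), max(prices))  # 0.0 10.0: a range of prices, not one price
```

Only claims that pay the same in every state get a unique price here; claims that distinguish between states cannot be priced uniquely, which is why the market is incomplete.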

Most models employed in practice ignore the impact of transaction costs, on the utopian basis
that precision will improve as market structures evolve and transaction costs disappear. Situations
where there are many, possibly infinitely many, prices at the end of the period are handled by
providing a model for asset price dynamics, between times 0 and T. The choice of asset price
dynamics defines the distribution of XT, either under the martingale or natural probability measure,
and in making the choice of asset price dynamics, the derivative price is chosen. This effect is similar
to the choice of utility function determining the results of models in some areas of welfare
economics.

The Fundamental Theorem is not well known outside the limited field of financial mathematics;
practitioners focus on the models that are a consequence of the Theorem, whereas social scientists
focus on the original Black-Scholes-Merton model as an exemplar. Practitioners are daily exposed to
the imprecision of the models they use and are skeptical, if not dismissive, of their
validity ([Miyazaki, 2007, pp 409–410], [MacKenzie, 2008, p 248], [Haugh and
Taleb, 2009]). Following the market crash of 1987, few practitioners used the Black-Scholes equation
to actually ‘price’ options; rather they used the equation to measure market volatility, a proxy for
uncertainty.

However, the status of the Black-Scholes model as an exemplar in financial economics has been
enhanced following the adoption of measure theoretic probability, and this can be understood
because the Fundamental Theorem, born out of Black-Scholes-Merton, unifies a number of distinct
theories in financial economics. MacKenzie ([MacKenzie, 2003, p 834]) describes a dissonance
between Merton’s derivation of the model (Merton [1973]) using techniques from stochastic
calculus, and Black’s, based on the Capital Asset Pricing Model (CAPM) (Black and
Scholes [1973]). When measure theoretic probability was introduced it was observed that the
Radon-Nikodym derivative, a mathematical object that describes the relationship between the
stochastic processes Merton used in the natural measure and the martingale measure,
involved the market price of risk (Sharpe ratio), a key object in the CAPM. This point
was well understood in the academic literature in the 1990s and was introduced into the
fourth edition of the standard textbook, Hull’s Options, Futures and Other Derivatives, in
2000.

The realisation that the Fundamental Theorem unified Merton’s approach, based on the stochastic
calculus advocated by Samuelson at M.I.T.; the CAPM, which had been developed at the Harvard
Business School and in California; martingales, a feature of efficient markets that had been proposed
at Chicago; and incomplete markets, from Arrow and Debreu in California, enhanced the status of
Black-Scholes-Merton as representing a Kuhnian paradigm. This unification of a plurality of
techniques within a ‘theory of everything’ came just as the Black-Scholes equation came under
attack for not reflecting empirical observations of market prices and obituaries were being written for
the broader neoclassical programme ([Colander, 2000]), and can explain why, in 1997, the Nobel Prize
in Economics was awarded to Scholes and Merton “for a new method to determine the value of
derivatives”.

The observation that measure theoretic probability unified a ‘constellation of beliefs, values,
techniques’ in financial economics can be explained in terms of the transcendence of mathematics.
To paraphrase Tait ([Tait, 1986, p 341])

A mathematical proposition is about a certain structure, financial markets. It refers
to prices and relations among them. If it is true, it is so in virtue of a certain fact
about this structure. And this fact may obtain even if we do not or cannot know
that it does.

In this sense, the Fundamental Theorem confirms the truth of the EMH, or any of the other ‘facts’
that go into the proposition. It becomes doctrine that more (derivative) assets need to be created in
order to complete markets, or, as Miyazaki observes ([Miyazaki, 2007, p 404]), that speculative
activity, as arbitration, is essential for market efficiency.

However, this relies on a belief in the transcendence of mathematics: if mathematics is a human construction, the conclusion does not hold.

References

K. J. Arrow. The role of securities in the optimal allocation of risk-bearing. The Review of Economic Studies, 31(2):91–96, 1964.

Thursday, 13 September 2012

The BBC's transmission of Ford Madox Ford's Parade's End (a co-production with HBO, adapted by Tom Stoppard, who has a good appreciation of mathematics - Rosencrantz and Guildenstern Are Dead) reminded me that the central character, Christopher Tietjens, was an actuary, probably the most famous actuary in English literature.

I came to Ford through his collaborator Joseph Conrad: a teenage interest in Coppola's Apocalypse Now led me to Conrad, and a love of sailing inspired me to read all his novels while an undergraduate. I read the first two books of Parade's End after graduating.

I always thought Ford made Tietjens an actuary to highlight his fidelity, his trustworthiness. Statistics provides the foundation for our belief, our faith, in science, that is why Bertrand Russell (following Poincare) observed

It is important to realise the fundamental position of probability in science. ... As to what is meant by probability, opinions differ (p 301,An Outline of Philosophy)

around the same time Ford was writing Parade's End. So, in making Tietjens an actuary, and the Second Wrangler of his year, Ford is emphasising Tietjens' faithfulness, which is most obvious in his relations with his adulterous wife, Sylvia, and the more compatible Valentine Wannop. Tietjens' character is magnified by placing him alongside the less virtuous, but more successful, MacMaster.

The trajectory of Tietjens' career, the trauma of serving at the front impacting his work as a mathematician, echoes the real-life experience of Émile Borel. Borel was the star of his generation of French mathematicians. His 1894 thesis laid the foundations for modern probability theory, and within 16 years he had been appointed Deputy-Director of the most prestigious of the French 'grandes écoles', the École normale supérieure. Borel served in the war but, more significantly, his adopted son was killed at the front. After the war, Borel pre-empted von Neumann's work in Game Theory and established the Institut de Statistiques de l'Université de Paris, but after 1924 he abandoned mathematics for politics, serving as a minister and then, in his seventies, being active in the Resistance. In his lifetime he was awarded the Croix de Guerre, the Médaille de la Résistance with rosette, and the Grand Croix of the Légion d'honneur.

However, in one respect the situation for mathematics is now worse than it was when Parade's End was written. Would a contemporary author of Ford's prestige choose to make a character a mathematician to emphasise their virtue? It suggests that the reputation of mathematics has been in decline since the 1920s, when Russell and Ford were writing. Could this be related to G. H. Hardy's 1940 statement

I have never done anything ‘useful’. No discovery of mine has made, or is likely to make, directly or indirectly, for good or ill, the least difference to the amenity of the world. (p 49, A Mathematician's Apology)

Hardy's autobiography is significantly different from Borel's life. Since that time, British (pure) mathematics, which encompasses probability, has abandoned the world that Tietjens and Borel lived in and isolated itself in academic cloisters, to everyone's detriment.

As a footnote, Hardy, who is credited with introducing continental 'rigour' (rigour mortis?) into British mathematics, opposed the Cambridge Mathematical Tripos, on which the Wranglers were ranked, because he felt it had ossified British mathematics. That might have been the case, but the Tripos produced more than just pure mathematicians: it delivered leaders in professions as diverse as the law, medicine, the church and politics, as well as actuaries.


About Me

I am a Lecturer in Financial Mathematics at Heriot-Watt University in Edinburgh. Heriot-Watt was the first UK university to offer degrees in Actuarial Science and Financial Mathematics and is a leading UK research centre in the fields.

Between 2006 and 2011 I was the UK Research Council's Academic Fellow in Financial Mathematics and was involved in informing policy makers of the mathematical aspects of the Credit Crisis.