John Cochrane is obviously a big euro fan who doesn’t accept the conventional wisdom that the euro is a bad idea.

However, there seem to be some rather basic facts about optimal currency areas that all economists would perhaps be wise to consider …

The idea that the euro has “failed” is dangerously naive. The euro is doing exactly what its progenitor – and the wealthy 1%-ers who adopted it – predicted and planned for it to do.

That progenitor is former University of Chicago economist Robert Mundell. The architect of “supply-side economics” is now a professor at Columbia University, but I knew him through his connection to my Chicago professor, Milton Friedman, back before Mundell’s research on currencies and exchange rates had produced the blueprint for European monetary union and a common European currency.

Mundell, then, was more concerned with his bathroom arrangements. Professor Mundell, who has both a Nobel Prize and an ancient villa in Tuscany, told me, incensed:

“They won’t even let me have a toilet. They’ve got rules that tell me I can’t have a toilet in this room! Can you imagine?”

As it happens, I can’t. But I don’t have an Italian villa, so I can’t imagine the frustrations of bylaws governing commode placement.

But Mundell, a can-do Canadian-American, intended to do something about it: come up with a weapon that would blow away government rules and labor regulations. (He really hated the union plumbers who charged a bundle to move his throne.)

“It’s very hard to fire workers in Europe,” he complained. His answer: the euro.

The euro would really do its work when crises hit, Mundell explained. Removing a government’s control over currency would prevent nasty little elected officials from using Keynesian monetary and fiscal juice to pull a nation out of recession.

“It puts monetary policy out of the reach of politicians,” he said. “[And] without fiscal policy, the only way nations can keep jobs is by the competitive reduction of rules on business.”

He cited labor laws, environmental regulations and, of course, taxes. All would be flushed away by the euro. Democracy would not be allowed to interfere with the marketplace – or the plumbing.

As another Nobelist, Paul Krugman, notes, the creation of the eurozone violated the basic economic rule known as “optimum currency area”. This was a rule devised by Bob Mundell.

That doesn’t bother Mundell. For him, the euro wasn’t about turning Europe into a powerful, unified economic unit. It was about Reagan and Thatcher.

“Ronald Reagan would not have been elected president without Mundell’s influence,” Jude Wanniski once wrote in the Wall Street Journal. The supply-side economics pioneered by Mundell became the theoretical template for Reaganomics – or as George Bush the Elder called it, “voodoo economics”: the magical belief in free-market nostrums that also inspired the policies of Mrs Thatcher.

Mundell explained to me that, in fact, the euro is of a piece with Reaganomics:

“Monetary discipline forces fiscal discipline on the politicians as well.”

And when crises arise, economically disarmed nations have little to do but wipe away government regulations wholesale, privatize state industries en masse, slash taxes and send the European welfare state down the drain.

I had one of the most satisfying eureka experiences of my career while teaching flight instructors … about the psychology of effective training. I was telling them about an important principle of skill training: rewards for improved performance work better than punishment of mistakes…

When I finished my enthusiastic speech, one of the most seasoned instructors in the group raised his hand and made a short speech of his own. He began by conceding that rewarding improved performance might be good for the birds, but he denied that it was optimal for flight cadets. This is what he said: “On many occasions I have praised flight cadets for clean execution of some aerobatic maneuver. The next time they try the same maneuver they usually do worse. On the other hand, I have often screamed into a cadet’s earphone for bad execution, and in general he does better on his next try. So please don’t tell us that reward works and punishment does not, because the opposite is the case” …

What he had observed is known as regression to the mean, which in that case was due to random fluctuations in the quality of performance. Naturally, he praised only a cadet whose performance was far better than average. But the cadet was probably just lucky on that particular attempt and therefore likely to deteriorate regardless of whether or not he was praised. Similarly, the instructor would shout into a cadet’s earphones only when the cadet’s performance was unusually bad and therefore likely to improve regardless of what the instructor did. The instructor had attached a causal interpretation to the inevitable fluctuations of a random process …

I had stumbled onto a significant fact of the human condition: the feedback to which life exposes us is perverse. Because we tend to be nice to other people when they please us and nasty when they do not, we are statistically punished for being nice and rewarded for being nasty …

It took Francis Galton several years to figure out that correlation and regression are not two concepts – they are different perspectives on the same concept: whenever the correlation between two scores is imperfect, there will be regression to the mean …

Causal explanations will be evoked when regression is detected, but they will be wrong because the truth is that regression to the mean has an explanation but does not have a cause.
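Kahneman’s observation, and Galton’s identity linking correlation and regression, can be checked with a small simulation. This is an illustrative sketch only: the sample sizes, the equal skill-to-noise split (which makes the attempt-to-attempt correlation 0.5), and the decile cutoffs are my own assumptions, not Kahneman’s data.

```python
import random

rng = random.Random(0)

# Each cadet has a fixed true skill; each attempt adds independent
# noise. Nothing causal links the two attempts.
n = 10_000
skill = [rng.gauss(0, 1) for _ in range(n)]
first = [s + rng.gauss(0, 1) for s in skill]
second = [s + rng.gauss(0, 1) for s in skill]

def mean(xs):
    return sum(xs) / len(xs)

# Select the "praised" (top decile) and "screamed-at" (bottom decile)
# cadets by their FIRST attempt only.
cut_hi = sorted(first)[int(0.9 * n)]
cut_lo = sorted(first)[int(0.1 * n)]
praised_first = [f for f in first if f >= cut_hi]
praised_second = [s for f, s in zip(first, second) if f >= cut_hi]
punished_first = [f for f in first if f <= cut_lo]
punished_second = [s for f, s in zip(first, second) if f <= cut_lo]

# No praise or punishment was applied, yet the praised group
# "deteriorates" and the punished group "improves": pure regression.
print(mean(praised_first) > mean(praised_second))    # True
print(mean(punished_first) < mean(punished_second))  # True

# Galton's identity: for mean-zero scores, the regression slope of
# attempt 2 on attempt 1 equals their correlation (0.5 here, since
# skill and noise have equal variance), so predictions shrink toward
# the mean whenever the correlation is imperfect.
slope = sum(f * s for f, s in zip(first, second)) / sum(f * f for f in first)
print(round(slope, 1))  # ~0.5
```

The instructor’s causal story is thus generated by the selection step alone: conditioning on an extreme first attempt guarantees a less extreme second attempt on average, whatever the instructor does in between.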

Why Real Business Cycle models can’t be taken seriously (25 Jul 2015, posted in Economics)

They try to explain business cycles solely as problems of information, such as asymmetries and imperfections in the information agents have. Those assumptions are just as arbitrary as the institutional rigidities and inertia they find objectionable in other theories of business fluctuations … I try to point out how incapable the new equilibrium business cycles models are of explaining the most obvious observed facts of cyclical fluctuations … I don’t think that models so far from realistic description should be taken seriously as a guide to policy … I don’t think that there is a way to write down any model which on the one hand respects the possible diversity of agents in taste, circumstances, and so on, and on the other hand also grounds behavior rigorously in utility maximization and which has any substantive content to it.

Real Business Cycle theory basically says that economic cycles are caused by technology-induced changes in productivity. It says that employment goes up or down because people choose to work more when productivity is high and less when it’s low. This is of course nothing but pure nonsense — and how on earth those guys that promoted this theory (Thomas Sargent et consortes) could be awarded The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel is really beyond comprehension.

In yours truly’s History of Economic Theories (4th ed, 2007, p. 405) it was concluded that

the problem is that it has turned out to be very difficult to empirically verify the theory’s view on economic fluctuations as being effects of rational actors’ optimal intertemporal choices … Empirical studies have not been able to corroborate the assumption of the sensitivity of labour supply to changes in intertemporal relative prices. Most studies rather point to expected changes in real wages having only rather little influence on the supply of labour.

Rigorous models lacking relevance are not to be taken seriously. Or as Keynes had it — it is better to be vaguely right than precisely wrong …

Flash back to 1968 … The standard approach was the temporary equilibrium model that John Hicks developed in Value and Capital. In the temporary equilibrium model, time proceeds in a sequence of weeks. Each week, people meet in a market. They bring goods to market to trade. They also bring money and bonds. The crucial point of temporary equilibrium theory is that the future price is different from our belief of the future price. To complete a model of this kind, we must add an equation to explain how beliefs are determined. I call this equation, the belief function.

Now jump forward to 1972 … Lucas argued that, although we may not know the future exactly: we do know the exact probability distribution of future events. Following the work of John Muth, he called this idea, rational expectations.

Rational expectations is a powerful idea. If expectations are rational, then we do not need to know how people form their beliefs. The belief function that was so important in temporary equilibrium theory can be relegated to the dustbin of history. We don’t care how people form beliefs because whatever mechanism they use, that mechanism must be right on average. Who can argue with that?

That is a clever argument. But it suffers from a fatal flaw. General equilibrium models of money do not have a unique equilibrium. They have many. This problem was first identified by the English economist Frank Hahn, and despite the best attempts of the rational expectations school to ignore the problem: it reappears with an alarming regularity. Rational expectations economists who deny an independent role for beliefs are playing a game of whack-a-mole …

This is not an esoteric point. It is at the core of the question that I pose at the beginning of this post: If the Fed raises the interest rate will it cause more or less inflation? And it is a point that policy makers are well aware of as this piece by Fed President Jim Bullard makes clear.

What is the solution? It is one thing to recognize that the world is random, and quite another to assume that we have perfect knowledge. If we place our agents in models where many different things can happen, we must model the process by which they form beliefs.
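Farmer’s distinction can be sketched in a toy cobweb market (my own construction with invented numbers, not Farmer’s model): under rational expectations the belief is pinned down by a fixed-point condition and needs no equation of its own, while a “belief function” spells out explicitly how beliefs are formed and revised.

```python
# Toy cobweb market: the realized price responds to the average
# expected price, p_t = A - B * E_t[p].  (A and B are made-up numbers.)
A, B = 10.0, 0.5

def realized_price(expected):
    return A - B * expected

# Rational expectations: beliefs solve the fixed point E = A - B*E,
# so no separate equation for belief formation is needed at all.
re_price = A / (1 + B)

# A "belief function" in Farmer's sense: an explicit equation for how
# beliefs are formed - here, simple adaptive updating toward the last
# observed price (lam is an assumed adjustment speed).
def belief_function(prev_belief, last_price, lam=0.5):
    return prev_belief + lam * (last_price - prev_belief)

belief, path = 2.0, []
for _ in range(50):
    p = realized_price(belief)
    path.append(p)
    belief = belief_function(belief, p)

# With these parameters the adaptive beliefs happen to converge to the
# rational-expectations fixed point; whether they do is a property of
# the assumed belief function, not something given for free.
print(round(path[-1], 3), round(re_price, 3))
```

The design point is the one Farmer makes: the rational-expectations modeller deletes the last function and keeps only the fixed point, while the temporary-equilibrium modeller must commit to some explicit belief dynamics.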

I agree with Farmer on most of his critique of rational expectations. But although the multiplicity of equilibria certainly is one important criticism that can be waged against the Muth-Lucas idea, I don’t think it is the core problem with rational expectations.

Assumptions in scientific theories/models are often based on (mathematical) tractability – and are therefore necessarily simplifying – or are used for more or less self-evidently necessary reasons of theoretical consistency. But one should also remember that assumptions are selected for a specific purpose, and so the arguments put forward for having selected a specific set of assumptions (in economics shamelessly often totally non-existent) have to be judged against that background, to check whether they are warranted.

This, however, only shrinks the assumption set minimally. It is still necessary to decide which assumptions are innocuous and which are harmful, and what constitutes interesting/important assumptions from an ontological & epistemological point of view (explanation, understanding, prediction) – especially so if you intend to refer your theories/models to a specific target system, preferably the real world. To do this one should start by applying a Solowian Smell Test: is the theory/model reasonable given what we know about the real world? If not, why should we care about it? We shouldn’t apply it (remember that time is limited and economics is a science of scarcity & optimization …)

As Farmer notices, the concept of rational expectations was first developed by John Muth in an Econometrica article in 1961 — Rational expectations and the theory of price movements — and later — from the 1970s and onward — applied to macroeconomics. Muth framed his rational expectations hypothesis (REH) in terms of probability distributions:

Expectations of firms (or, more generally, the subjective probability distribution of outcomes) tend to be distributed, for the same information set, about the prediction of the theory (or the “objective” probability distributions of outcomes).

But Muth was also very open with the non-descriptive character of his concept:

[The hypothesis of rational expectations] does not assert that the scratch work of entrepreneurs resembles the system of equations in any way; nor does it state that predictions of entrepreneurs are perfect or that their expectations are all the same.

To Muth, its main usefulness was its generality and ability to be applicable to all sorts of situations irrespective of the concrete and contingent circumstances at hand.

Muth’s concept was later picked up by New Classical Macroeconomics, where it soon became the dominant model-assumption and has continued to be a standard assumption made in many neoclassical (macro)economic models – most notably in the fields of (real) business cycles and finance (being a cornerstone of the “efficient market hypothesis”).

REH basically says that people on the average hold expectations that will be fulfilled. This makes the economist’s analysis enormously simplistic, since it means that the model used by the economist is the same as the one people use to make decisions and forecasts of the future.

But, strictly seen, REH only applies to ergodic – stable and stationary stochastic – processes. If the world were ruled by ergodic processes, people could perhaps have rational expectations, but no convincing arguments have ever been put forward for this assumption being realistic.
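The role of stationarity can be sketched with a toy simulation (all parameter values are my own invented illustration): in a stationary AR(1) world, forecasts derived from the true model are indeed right on average, but after an unannounced structural break the very same “rational” forecasting rule becomes systematically biased.

```python
import random

def ar1_path(mu, rho, sigma, n, rng):
    """Stationary AR(1): x_t = mu + rho*(x_{t-1} - mu) + noise."""
    x = mu
    out = []
    for _ in range(n):
        x = mu + rho * (x - mu) + rng.gauss(0, sigma)
        out.append(x)
    return out

rng = random.Random(42)
rho, sigma = 0.8, 1.0

def mean(xs):
    return sum(xs) / len(xs)

# Regime 1: the agents' model matches the process (mu = 0), so the
# rational-expectations forecast is E[x_t | x_{t-1}] = rho * x_{t-1}.
pre = ar1_path(mu=0.0, rho=rho, sigma=sigma, n=20_000, rng=rng)
errors_pre = [pre[t] - rho * pre[t - 1] for t in range(1, len(pre))]

# Regime 2: an unannounced break shifts the mean to 5, but the agents
# keep forecasting with the old (mu = 0) model.
post = ar1_path(mu=5.0, rho=rho, sigma=sigma, n=20_000, rng=rng)
errors_post = [post[t] - rho * post[t - 1] for t in range(1, len(post))]

print(round(mean(errors_pre), 2))   # ~0: right "on average"
print(round(mean(errors_post), 2))  # ~1: persistent, systematic bias
```

Under stationarity the forecast errors average out to zero, which is exactly the REH claim; once the distribution shifts, the past-based rule errs in the same direction period after period, which is the non-ergodicity objection in miniature.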

Of course you can make assumptions based on tractability, but then you also have to take into account the necessary trade-off in terms of the ability to make relevant and valid statements about the intended target system. Mathematical tractability cannot be the ultimate arbiter in science when it comes to modeling real-world target systems. One could perhaps accept REH if it had produced lots of verified predictions and good explanations. But it has done nothing of the kind. Therefore the burden of proof is on those who still want to use models built on utterly unreal assumptions.

In models building on REH it is presupposed – basically for reasons of consistency – that agents have complete knowledge of all of the relevant probability distribution functions. And when trying to incorporate learning in these models – trying to deflect some of the criticism launched against them to date – it is always a very restricted kind of learning that is considered: a learning where truly unanticipated, surprising, new things never take place, but only rather mechanical updatings – increasing the precision of already existing information sets – of existing probability functions.

Nothing really new happens in these ergodic models, where the statistical representation of learning and information is nothing more than a caricature of what takes place in the real-world target system. This follows from taking for granted that people’s decisions can be portrayed as based on an existing probability distribution, which by definition implies knowledge of every possible event that can be thought of as taking place (otherwise it is, in a strict mathematical-statistical sense, not really a probability distribution).

But in the real world it is – as shown again and again by behavioural and experimental economics – common to mistake a conditional distribution for a probability distribution, mistakes that are impossible to make in the kinds of economic analysis that build on REH. On average REH agents are always correct. But truly new information will not only reduce the estimation error but actually change the entire estimation and hence possibly the decisions made. To be truly new, information has to be unexpected. If not, it would simply be inferred from the already existing information set.

In the real world, it is not possible to just assume — as Farmer puts it — that “we do know the exact probability distribution of future events.” On the contrary. We can’t even assume that probability distributions are the right way to characterize, understand or explain acts and decisions made under uncertainty. When we simply do not know, when we have not got a clue, when genuine uncertainty prevails, REH simply is not “reasonable.” In those circumstances it is not a useful assumption, since under those circumstances the future is not like the past, and hence we cannot use the same probability distribution – if it exists at all – to describe both the past and the future.

Although in physics it may possibly not be straining credulity too much to model processes as taking place in “vacuum worlds” – where friction, time and history do not really matter – in social and historical sciences it is obviously ridiculous. If societies and economies were frictionless ergodic worlds, why do econometricians fervently discuss things such as structural breaks and regime shifts? That they do is an indication of the unrealism of treating open systems as analyzable with frictionless ergodic “vacuum concepts.”

If the intention of REH is to help us explain real economies, it has to be evaluated from that perspective. A model or hypothesis without specific applicability does not really deserve our interest. Without strong evidence, all kinds of absurd claims and nonsense may pretend to be science. We have to demand more of a justification than rather watered-down versions of “anything goes” when it comes to rationality postulates. If one proposes REH, one also has to support its underlying assumptions. None is given. REH economists are not particularly interested in empirical examinations of how real choices and decisions are made in real economies. REH has been transformed from an – in principle – testable hypothesis to an irrefutable proposition.

As shown already by Paul Davidson in the 1980s, REH implies that relevant distributions have to be time independent (which follows from the ergodicity implied by REH). This amounts to assuming that an economy is like a closed system with known stochastic probability distributions for all different events. In reality it is straining one’s beliefs to try to represent economies as outcomes of stochastic processes. An existing economy is a single realization tout court, and hardly conceivable as one realization out of an ensemble of economy-worlds, since an economy can hardly be conceived as being completely replicated over time. It’s really straining one’s imagination to try to see any similarity between these modelling assumptions and children’s expectations in the “tickling game.” In REH we are never disappointed in any other way than when we lose at the roulette wheel, since, as Muth puts it, “averages of expectations are accurate.” But real life is not an urn or a roulette wheel, so REH is a vastly misleading analogy for real-world situations. It may be a useful assumption – but only for non-crucial and non-important decisions that are possible to replicate perfectly (a throw of dice, a spin of the roulette wheel, etc.).
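The single-realization point can be illustrated with a standard multiplicative gamble (the 1.5/0.6 payoff factors are my own illustrative choice): the ensemble average across many parallel “economy-worlds” grows, while almost every individual realization decays, so the two averages answer different questions and only coincide in an ergodic world.

```python
import random

rng = random.Random(7)

def gamble_path(n_steps):
    """One realization: wealth is multiplied by 1.5 on heads, 0.6 on tails."""
    w = 1.0
    for _ in range(n_steps):
        w *= 1.5 if rng.random() < 0.5 else 0.6
    return w

# Ensemble average over many parallel worlds: the expected one-step
# factor is 0.5*1.5 + 0.5*0.6 = 1.05 > 1, so the average grows.
paths = [gamble_path(60) for _ in range(20_000)]
ensemble_mean = sum(paths) / len(paths)
print(ensemble_mean > 1.0)  # True

# A single realization through time: the typical per-step factor is
# the geometric mean sqrt(1.5 * 0.6) ~ 0.95 < 1, so most individual
# paths end up losing money.
losing_share = sum(1 for w in paths if w < 1.0) / len(paths)
print(losing_share > 0.5)  # True
```

An agent who lives along one path experiences the decaying time average, not the growing ensemble average, which is one way to read Davidson’s objection to treating an economy as one draw from an ensemble of economy-worlds.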

Most models building on the rational expectations hypothesis are time-invariant and so leave no room for any changes in expectations and their revisions. The only imperfection of knowledge they admit of is included in the error terms – error terms that are assumed to be additive and to have a given and known frequency distribution, so that the models can still fully pre-specify the future even when incorporating these stochastic variables into the models.

If we want to have anything of interest to say on real economies, financial crisis and the decisions and choices real people make, it is high time to replace the rational expectations hypothesis with more relevant and realistic assumptions concerning economic agents and their expectations.

Any model assumption — such as ‘rational expectations’ — that doesn’t pass the real world Smell Test is just silly nonsense on stilts.

Suppose someone sits down where you are sitting right now and announces to me that he is Napoleon Bonaparte. The last thing I want to do with him is to get involved in a technical discussion of cavalry tactics at the battle of Austerlitz. If I do that, I’m getting tacitly drawn into the game that he is Napoleon. Now, Bob Lucas and Tom Sargent like nothing better than to get drawn into technical discussions, because then you have tacitly gone along with their fundamental assumptions; your attention is attracted away from the basic weakness of the whole story. Since I find that fundamental framework ludicrous, I respond by treating it as ludicrous – that is, by laughing at it – so as not to fall into the trap of taking it seriously and passing on to matters of technique.

Because Finland has used the euro since its inception, the value of its currency cannot adjust in ways that would cushion the overall Finnish economy from those shocks. If Finland still had its old currency, the markka, it would have fallen in value on international markets. Suddenly other Finnish industries would have had a huge cost advantage over, say, German competitors, and they would have grown and created the jobs to help make up for those lost because of Nokia and the paper industry and Russian trade.

“Rubbish,” Mr. Stubb said. To evaluate the euro, you can’t just look at what he calls a current “rough patch” for the Finnish economy. You have to look at a longer time horizon. In his telling, the integration with Western Europe — of which the euro currency is a crucial element — deepened trade and diplomatic relations, making Finland both more powerful on the world stage and its industries better connected to the rest of the global economy. That made its people richer.

On the whole, the euro has, thus far, gone much better than many U.S. economists had predicted. We survey how U.S. economists viewed European monetary unification from the publication of the Delors Report in 1989 to the introduction of euro notes and coins in January 2002. U.S. academic economists concentrated on whether a single currency was a good or bad thing, usually using the theory of optimum currency areas, and most were skeptical towards the single currency …

We suggest that the use of the optimum currency area paradigm was the main source of U.S. pessimism towards the single currency in the 1990s. The optimum currency area approach was biased towards the conclusion that Europe was far from being an optimum currency area. The optimum currency area paradigm inspired a static view, overlooking the time-consuming nature of the process of monetary unification. The optimum currency area view ignored the fact that Europe was facing a choice between permanently fixed exchange rates and semi-permanent fixed rates. The optimum currency area approach led to the view that the single currency was a political construct with little or no economic foundation. In short, by adopting the optimum currency area theory as their main engine of analysis, U.S. academic economists became biased against the euro.

It is surprising that U.S. economists, living in a large monetary union and enjoying the benefits from monetary integration, were (and still remain) skeptical towards the euro. U.S. economists took, and still take, the desirability of a single currency for their country to be self-evident. To our knowledge, no U.S. economist, inspired by the optimum currency area approach, has proposed to break up the United States into smaller regional currency areas. Perhaps we should take this as a positive sign for the future of the euro: in due time it will be accepted as the normal state of monetary affairs in Europe just like the dollar is in the United States.

Neoclassical economics nowadays usually assumes that agents that have to make choices under conditions of uncertainty behave according to Bayesian rules, axiomatized by Ramsey (1931) and Savage (1954) – that is, they maximize expected utility with respect to some subjective probability measure that is continually updated according to Bayes’ theorem. If not, they are supposed to be irrational, and ultimately – via some “Dutch book” or “money pump” argument – susceptible to being ruined by some clever “bookie”.
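As a minimal sketch of the updating machinery this paragraph describes (the events and numbers are invented for illustration), Bayes’ theorem just reweights a prior by the likelihood of the observed evidence and renormalizes:

```python
from fractions import Fraction

def bayes_update(prior, likelihoods):
    """Posterior over hypotheses after one observation.

    prior: {hypothesis: P(h)}; likelihoods: {hypothesis: P(data | h)}.
    """
    unnorm = {h: prior[h] * likelihoods[h] for h in prior}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

# Two hypotheses about next year's economy, with a 50/50 subjective prior.
prior = {"boom": Fraction(1, 2), "bust": Fraction(1, 2)}

# Observed: an upbeat indicator, judged twice as likely under a boom.
likelihoods = {"boom": Fraction(2, 3), "bust": Fraction(1, 3)}

posterior = bayes_update(prior, likelihoods)
print(posterior["boom"])  # 2/3
```

Note what the rule presupposes: a fixed, exhaustive list of hypotheses and a likelihood for each — exactly the closed information set that the rest of this post argues is unavailable under genuine uncertainty.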

Bayesianism reduces questions of rationality to questions of internal consistency (coherence) of beliefs, but – even granted this questionable reductionism – do rational agents really have to be Bayesian? As I have been arguing elsewhere (e.g. here, here and here), there is no strong warrant for believing so.

In many of the situations that are relevant to economics, one could argue that there is simply not enough adequate and relevant information to ground beliefs of a probabilistic kind, and that in those situations it is not really possible, in any relevant way, to represent an individual’s beliefs in a single probability measure.

Say you have come to learn (based on your own experience and tons of data) that the probability of you becoming unemployed in Sweden is 10%. Having moved to another country (where you have no experience of your own and no data), you have no information on unemployment and a fortiori nothing on which to ground a probability estimate. A Bayesian would, however, argue that you would have to assign probabilities to the mutually exclusive alternative outcomes, and that these have to add up to 1 if you are rational. That is, in this case – and based on symmetry – a rational individual would have to assign probability 50% to becoming unemployed and 50% to becoming employed.

That feels intuitively wrong, though, and I guess most people would agree. Bayesianism cannot distinguish symmetry-based probabilities grounded in information from symmetry-based probabilities grounded in an absence of information. In these kinds of situations most of us would rather say that it is simply irrational to be a Bayesian, and better instead to admit that we “simply do not know” or that we feel ambiguous and undecided. Arbitrary and ungrounded probability claims are more irrational than being undecided in the face of genuine uncertainty, so if there is not sufficient information to ground a probability distribution it is better to acknowledge that simpliciter, rather than pretending to possess a certitude that we simply do not possess.
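The point about evidential weight can be put numerically. This sketch uses a standard Beta-distribution representation of beliefs about an unemployment risk (my own illustrative bookkeeping, with invented counts): a belief grounded in a thousand observations and a belief grounded in pure indifference both deliver a single, perfectly well-formed probability, and for a simple one-shot bet an expected-utility calculation consults only that one number.

```python
# Beta(a, b) beliefs about an unemployment risk: mean and spread.
def beta_mean(a, b):
    return a / (a + b)

def beta_sd(a, b):
    return (a * b / ((a + b) ** 2 * (a + b + 1))) ** 0.5

# Case 1: belief grounded in data - 100 job losses observed in 1000
# cases, added to a flat Beta(1, 1) starting point.
informed = (1 + 100, 1 + 900)

# Case 2: pure indifference in the new country - the flat Beta(1, 1)
# prior itself, untouched by any observation.
ignorant = (1, 1)

# Both beliefs deliver a single well-formed probability ...
print(round(beta_mean(*informed), 3))  # ~0.101
print(round(beta_mean(*ignorant), 3))  # 0.5

# ... and that single number carries no trace of its evidential
# backing; the difference only shows up in the spread, which a point
# probability fed into a one-shot betting rule never reports.
print(round(beta_sd(*informed), 3))   # ~0.01
print(round(beta_sd(*ignorant), 3))   # ~0.289
```

This is one way to restate Keynes’s “weight of argument”: the two situations produce probabilities of exactly the same formal kind, even though one is grounded in evidence and the other in nothing at all.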

I think this critique of Bayesianism is in accordance with the views John Maynard Keynes expressed in A Treatise on Probability (1921) and “The General Theory of Employment” (1937). According to Keynes we live in a world permeated by unmeasurable uncertainty – not quantifiable stochastic risk – which often forces us to make decisions based on anything but rational expectations. Sometimes we “simply do not know.” Keynes would not have accepted the view of Bayesian economists, according to whom expectations “tend to be distributed, for the same information set, about the prediction of the theory.” Keynes, rather, thinks that we base our expectations on the confidence or “weight” we put on different events and alternatives. To Keynes expectations are a question of weighing probabilities by “degrees of belief,” beliefs that have precious little to do with the kind of stochastic probabilistic calculations made by the rational agents modeled by Bayesian economists.

Stressing the importance of Keynes’ view on uncertainty John Kay writes in Financial Times:

Keynes believed that the financial and business environment was characterised by “radical uncertainty”. The only reasonable response to the question “what will interest rates be in 20 years’ time?” is “we simply do not know” …

For Keynes, probability was about believability, not frequency. He denied that our thinking could be described by a probability distribution over all possible future events, a statistical distribution that could be teased out by shrewd questioning – or discovered by presenting a menu of trading opportunities. In the 1920s he became engaged in an intellectual battle on this issue, in which the leading protagonists on one side were Keynes and the Chicago economist Frank Knight, opposed by a Cambridge philosopher, Frank Ramsey, and later by Jimmie Savage, another Chicagoan.

Keynes and Knight lost that debate, and Ramsey and Savage won, and the probabilistic approach has maintained academic primacy ever since. A principal reason was Ramsey’s demonstration that anyone who did not follow his precepts – anyone who did not act on the basis of a subjective assessment of probabilities of future events – would be “Dutch booked” … A Dutch book is a set of choices such that a seemingly attractive selection from it is certain to lose money for the person who makes the selection.

I used to tell students who queried the premise of “rational” behaviour in financial markets – where “rational” means based on Bayesian subjective probabilities – that people had to behave in this way because if they did not, people would devise schemes that made money at their expense. I now believe that observation is correct but does not have the implication I sought. People do not behave in line with this theory, with the result that others in financial markets do devise schemes that make money at their expense.
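The Dutch book in Kay’s account can be made concrete with a toy example (events, credences and stakes all invented): an agent whose credences in “rain” and “no rain” sum to more than 1 will pay, in total, more for the two bets than either of them can ever return, so the bookie profits in every state of the world.

```python
def bookie_profit(credence_rain, credence_dry, stake=100):
    """Bookie's guaranteed profit against incoherent credences.

    The bookie sells two bets, each paying `stake` if its event occurs.
    An agent who prices a bet on E at credence(E) * stake hands over
    both prices up front; exactly one bet then pays out one stake.
    """
    collected = (credence_rain + credence_dry) * stake
    return collected - stake  # identical whether it rains or not

# Incoherent credences: P(rain) = 0.6 and P(no rain) = 0.6 (sum = 1.2).
profit = bookie_profit(0.6, 0.6)
print(profit)  # ~20.0: a sure loss for the agent, whatever the weather
```

Ramsey’s theorem runs this in reverse: only credences that behave like a probability measure (here, summing to 1 over the two exhaustive outcomes) are immune to such a book, which is the sense in which he “won” the formal argument even if, as Kay notes, real people do not bet this way.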

Although this on the whole gives a succinct and correct picture of Keynes’s view on probability, I think it’s necessary to somewhat qualify in what way and to what extent Keynes “lost” the debate with the Bayesians Frank Ramsey and Jim Savage.

In economics it’s an indubitable fact that few mainstream neoclassical economists work within the Keynesian paradigm. All more or less subscribe to some variant of Bayesianism. And some even say that Keynes acknowledged he was wrong when presented with Ramsey’s theory. This is a view that has unfortunately also been promulgated by Robert Skidelsky in his otherwise masterly biography of Keynes. But I think it’s fundamentally wrong. Let me elaborate on this point (the argumentation is more fully presented in my book John Maynard Keynes (SNS, 2007)).

It’s a debated issue in newer research on Keynes whether he, as some researchers maintain, fundamentally changed his view on probability after the critique levelled against his A Treatise on Probability by Frank Ramsey. It has proved exceedingly difficult to present evidence for this being the case.

Ramsey’s critique was mainly that the kind of probability relations that Keynes was speaking of in the Treatise actually didn’t exist and that Ramsey’s own procedure (betting) made it much easier to find out the “degrees of belief” people actually held. I question this both from a descriptive and a normative point of view.

What Keynes is saying in his response to Ramsey is only that Ramsey “is right” in that people’s “degrees of belief” basically emanate from human nature rather than from formal logic.

Patrick Maher, former professor of philosophy at the University of Illinois, even suggests that Ramsey’s critique of Keynes’s probability theory in some regards is invalid:

Keynes’s book was sharply criticized by Ramsey. In a passage that continues to be quoted approvingly, Ramsey wrote:

“But let us now return to a more fundamental criticism of Mr. Keynes’ views, which is the obvious one that there really do not seem to be any such things as the probability relations he describes. He supposes that, at any rate in certain cases, they can be perceived; but speaking for myself I feel confident that this is not true. I do not perceive them, and if I am to be persuaded that they exist it must be by argument; moreover, I shrewdly suspect that others do not perceive them either, because they are able to come to so very little agreement as to which of them relates any two given propositions.” (Ramsey 1926, 161)

I agree with Keynes that inductive probabilities exist and we sometimes know their values. The passage I have just quoted from Ramsey suggests the following argument against the existence of inductive probabilities. (Here P is a premise and C is the conclusion.)

P: People are able to come to very little agreement about inductive probabilities.
C: Inductive probabilities do not exist.

P is vague (what counts as “very little agreement”?) but its truth is still questionable. Ramsey himself acknowledged that “about some particular cases there is agreement” (28) … In any case, whether complicated or not, there is more agreement about inductive probabilities than P suggests.

Ramsey continued:

“If … we take the simplest possible pairs of propositions such as “This is red” and “That is blue” or “This is red” and “That is red,” whose logical relations should surely be easiest to see, no one, I think, pretends to be sure what is the probability relation which connects them.” (162)

I agree that nobody would pretend to be sure of a numeric value for these probabilities, but there are inequalities that most people on reflection would agree with. For example, the probability of “This is red” given “That is red” is greater than the probability of “This is red” given “That is blue.” This illustrates the point that inductive probabilities often lack numeric values. It doesn’t show disagreement; it rather shows agreement, since nobody pretends to know numeric values here and practically everyone will agree on the inequalities.

Ramsey continued:

“Or, perhaps, they may claim to see the relation but they will not be able to say anything about it with certainty, to state if it is more or less than 1/3, or so on. They may, of course, say that it is incomparable with any numerical relation, but a relation about which so little can be truly said will be of little scientific use and it will be hard to convince a sceptic of its existence.” (162)

Although the probabilities that Ramsey is discussing lack numeric values, they are not “incomparable with any numerical relation.” Since there are more than three different colors, the a priori probability of “This is red” must be less than 1/3 and so its probability given “This is blue” must likewise be less than 1/3. In any case, the “scientific use” of something is not relevant to whether it exists. And the question is not whether it is “hard to convince a sceptic of its existence” but whether the sceptic has any good argument to support his position …
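Maher’s bound is easy to check with a toy calculation. As a purely illustrative assumption (Maher’s own argument needs only the fact that there are more than three colors, not any particular prior), suppose a uniform prior over N equally likely colors; the a priori probability of “This is red” is then 1/N, which falls below 1/3 as soon as N exceeds 3:

```python
# Toy illustration of Maher's bound. The uniform prior over N colors is an
# illustrative assumption, not part of Maher's argument, which requires only
# that more than three colors exist.

def prior_red(n_colors: int) -> float:
    """A priori probability of one particular color under a uniform prior."""
    return 1.0 / n_colors

for n in (4, 7, 11):
    p = prior_red(n)
    assert p < 1 / 3  # holds for every n > 3
    print(f"{n} colors -> P('This is red') = {p:.3f} < 1/3")
```

The point of the sketch is only that the bound is an inequality, not a precise value: the probability is comparable with the number 1/3 even though no exact numeric value is claimed for it.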

Ramsey concluded the paragraph I have been quoting as follows:

“Besides this view is really rather paradoxical; for any believer in induction must admit that between “This is red” as conclusion and “This is round” together with a billion propositions of the form “a is round and red” as evidence, there is a finite probability relation; and it is hard to suppose that as we accumulate instances there is suddenly a point, say after 233 instances, at which the probability relation becomes finite and so comparable with some numerical relations.” (162)

Ramsey is here attacking the view that the probability of “This is red” given “This is round” cannot be compared with any number, but Keynes didn’t say that and it isn’t my view either. The probability of “This is red” given only “This is round” is the same as the a priori probability of “This is red” and hence less than 1/3. Given the additional billion propositions that Ramsey mentions, the probability of “This is red” is high (greater than 1/2, for example) but it still lacks a precise numeric value. Thus the probability is always both comparable with some numbers and lacking a precise numeric value; there is no paradox here.

I have been evaluating Ramsey’s apparent argument from P to C. So far I have been arguing that P is false and responding to Ramsey’s objections to unmeasurable probabilities. Now I want to note that the argument is also invalid. Even if P were true, it could be that inductive probabilities exist in the (few) cases that people generally agree about. It could also be that the disagreement is due to some people misapplying the concept of inductive probability in cases where inductive probabilities do exist. Hence it is possible for P to be true and C false …

I conclude that Ramsey gave no good reason to doubt that inductive probabilities exist.

Ramsey’s critique led Keynes to put more emphasis on individuals’ own views as the basis for probability estimates, and less on the claim that those beliefs were rational. But Keynes’s theory doesn’t stand or fall with his view that the basis of our “degrees of belief” is logical. The core of his theory – when and how we are able to measure and compare different probabilities – he did not change. Unlike Ramsey, he was not at all sure that probabilities are always one-dimensional, measurable, quantifiable or even comparable entities.

It is, perhaps, not uninteresting to point to some of the economic implications which are included in “perfect foresight”. It will immediately be recognized that this assumption could never lie at the basis of the theory of equilibrium, and they who attribute this to such authors as Walras and Pareto, who are included as representatives of equilibrium theory, are in error. In the first place, strange to say, it happens that even material assertions can be made about such an economy on the basis of the assumption of perfect foresight. They are fundamentally of the negative type. For example, no lotteries or gambling will exist, for who would play if it were well-established where the profit went? Telephone, telegraph, newspapers, bills, posters, etc. would, likewise, be superfluous, obviously; but, also, the very important industries, based on them, with all their affiliated industries, would be absent. Only packages and letters implying documentary evidence would need to be delivered by post, for to whom would letters be written? The tale need not be carried further, for it is obvious how little considered are the “fundamental assumptions” so frequently employed in theoretical economics, where really a matter of nonsense is at issue.

1. It is “an indisputable fact” that excessive state spending and unsustainable public debts did not cause the Eurozone crisis. Sovereign debt build-up was dwarfed by the accumulation of private-sector debts … which fuelled and were made possible by unsustainable asset-price booms. Fiscal austerity is the wrong medicine for the wrong disease, bringing only pain (first)—and no gain (later).

2. The rise in (relative) unit labour costs did not lead to the higher current account deficits in the Eurozone periphery. International competitiveness is not about wage costs, but about technology and innovation. Given a country’s technological capabilities as reflected by its productive structure, export growth and import growth are overwhelmingly determined, not by unit labour costs, but by (foreign and domestic) demand. Attempts to improve “competitiveness” by wage cuts and labour market deregulation can only backfire — no country has managed to climb up the technological ladder (upgrading exports and strengthening non-price competitiveness) based on low wages and footloose, fractured and flexible workers …

3. The only sensible way to think about the Eurozone crisis is in terms of a private-sector debt crisis, aided and abetted by the liberalization of (integrating) European financial markets and a “global banking glut”. With this understanding, we can start thinking about more effective, efficient and just ways to bring about economic recovery in the Eurozone. Without doubt, such a recovery package must include a coordinated demand stimulus, a directing of credit towards productive investments and “smart” innovation (also through a “socialization of investments” à la Keynes), a bailing out of cash-strapped and insolvent governments (by the ECB), and “mission-oriented” industrial policies to restructure and upgrade peripheral manufacturing … This requires, at the EU level, coordination of economic decision-making, or what Beck (2013) calls a “new social contract for Europe”— not the beggar-thy-neighbour competition propagated by JTDD or the “eat your peas” solution of Mr. Schäuble.

Striving for full disclosure, in subsequent years I included this statement in my course syllabus: “Exams will have a total of 137 points rather than the usual 100. This scoring system has no effect on the grade you get in the course, but it seems to make you happier.” And, indeed, after I made that change, I never got a complaint that my exams were too hard.

In the eyes of an economist, my students were “misbehaving.” By that I mean that their behavior was inconsistent with the idealized model at the heart of much of economics. Rationally, no one should be happier about a score of 96 out of 137 (70 percent) than 72 out of 100, but my students were. And by realizing this, I was able to set the kind of exam I wanted but still keep the students from grumbling.
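The arithmetic behind Thaler’s anecdote is straightforward to verify: 96 out of 137 is a strictly lower fraction than 72 out of 100, so a “rational” student should prefer the latter, yet the higher raw score made students happier:

```python
# Quick check of the percentages in Thaler's exam anecdote:
# 96/137 is roughly 70%, strictly below 72/100 (72%), yet students
# reported being happier with the higher raw score.

score_new = 96 / 137   # exam scored out of 137 points
score_old = 72 / 100   # exam scored out of 100 points

print(f"96/137 = {score_new:.1%}")   # about 70.1%
print(f"72/100 = {score_old:.1%}")   # 72.0%
assert score_new < score_old
```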

This illustrates an important problem with traditional economic theory. Economists discount any factors that would not influence the thinking of a rational person. These things are supposedly irrelevant. But unfortunately for the theory, many supposedly irrelevant factors do matter.

Habermas: The Greek debt deal announced on Monday morning is damaging both in its result and the way in which it was reached. First, the outcome of the talks is ill-advised. Even if one were to consider the strangulating terms of the deal the right course of action, one cannot expect these reforms to be enacted by a government which by its own admission does not believe in the terms of the agreement.

Secondly, the outcome does not make sense in economic terms because of the toxic mixture of necessary structural reforms of state and economy with further neoliberal impositions that will completely discourage an exhausted Greek population and kill any impetus to growth.

Thirdly, the outcome means that a helpless European Council is effectively declaring itself politically bankrupt: the de facto relegation of a member state to the status of a protectorate openly contradicts the democratic principles of the European Union. Finally, the outcome is disgraceful because forcing the Greek government to agree to an economically questionable, predominantly symbolic privatisation fund cannot be understood as anything other than an act of punishment against a left-wing government. It’s hard to see how more damage could be done.

And yet the German government did just this when finance minister Schäuble threatened Greek exit from the euro, thus unashamedly revealing itself as Europe’s chief disciplinarian. The German government thereby made for the first time a manifest claim for German hegemony in Europe – this, at any rate, is how things are perceived in the rest of Europe, and this perception defines the reality that counts. I fear that the German government, including its social democratic faction, have gambled away in one night all the political capital that a better Germany had accumulated in half a century – and by “better” I mean a Germany characterised by greater political sensitivity and a post-national mentality.

Yanis Varoufakis: I’m feeling on top of the world – I no longer have to live through this hectic timetable, which was absolutely inhuman, just unbelievable. I was on 2 hours sleep every day for five months. … I’m also relieved I don’t have to sustain any longer this incredible pressure to negotiate for a position I find difficult to defend, even if I managed to force the other side to acquiesce, if you know what I mean.

HL: What was it like? Did you like any aspect of it?

YV: Oh well a lot of it. But the inside information one gets… to have your worst fears confirmed … To have “the powers that be” speak to you directly, and it be as you feared – the situation was worse than you imagined! So that was fun, to have the front row seat.

HL: What are you referring to?

YV: The complete lack of any democratic scruples, on behalf of the supposed defenders of Europe’s democracy. The quite clear understanding on the other side that we are on the same page analytically – of course it will never come out at present. [And yet] To have very powerful figures look at you in the eye and say “You’re right in what you’re saying, but we’re going to crunch you anyway.”
