Students all over the world are increasingly questioning if the kind of economics they are taught — mainstream economics — really is of any value. Some have even started to question if economics is a science.

Two ‘Nobel laureates’ in economics — Robert Shiller and Paul Krugman — have lately tried to respond:

Critics of “economic sciences” sometimes refer to the development of a “pseudoscience” of economics, arguing that it uses the trappings of science, like dense mathematics, but only for show …

My belief is that economics is somewhat more vulnerable than the physical sciences to models whose validity will never be clear, because the necessity for approximation is much stronger than in the physical sciences, especially given that the models describe people rather than magnetic resonances or fundamental particles …

But all the mathematics in economics is not … charlatanism. Economics has an important quantitative side, which cannot be escaped …

While economics presents its own methodological problems, the basic challenges facing researchers are not fundamentally different from those faced by researchers in other fields.

Let’s grant that economics as practiced doesn’t look like a science. But that’s not because the subject is inherently unsuited to the scientific method. Sure, it’s highly imperfect — it’s a complex area, and our understanding is in its early stages …

No, the problem lies not in the inherent unsuitability of economics for scientific thinking as in the sociology of the economics profession — a profession that somehow, at least in macro, has ceased rewarding research that produces successful predictions and rewards research that fits preconceptions and uses hard math instead.

Economics — and especially mainstream economics — has lost immensely in status and prestige in recent years, not least because of its manifest inability to foresee the latest financial and economic crises, and its lack of constructive and sustainable policies to take us out of them. We all know that many activities, relations, processes and events are uncertain and that the data do not unequivocally single out one decision as the only ‘rational’ one. Neither the economist nor the deciding individual can fully pre-specify how people will decide when facing uncertainties and ambiguities that are ontological facts of the way the world works.

Mainstream economists, however, have wanted to use their hammer, and so decided to pretend that the world looks like a nail. But pretending that uncertainty can be reduced to risk, and constructing models on that assumption, has only contributed to financial crises and economic havoc.

So how do we put an end to this intellectual cataclysm? How do we re-establish credence and trust in economics as a science?

Five changes are absolutely decisive.

(1) Stop pretending that we have exact and rigorous answers to everything. Because we don’t. We build models and theories and tell people that we can calculate and foresee the future. But we do this based on mathematical and statistical assumptions that often have little or nothing to do with reality. By pretending that there is no really important difference between model and reality we lull people into thinking that we have things under control. We haven’t! This false feeling of security was one of the factors that contributed to the financial crisis of 2008.

(2) Stop the childish and exaggerated belief that mathematics can answer the important economic questions. Mathematics gives exact answers to exact questions. But the relevant and interesting questions we face in the economic realm are rarely of that kind. Questions like “Is 2 + 2 = 4?” are never posed in real economies. Instead of a fundamentally misplaced reliance on abstract mathematical-deductive-axiomatic models having anything of substance to contribute to our knowledge of real economies, it would be far better if we pursued “thicker” models and relevant empirical studies and observations.

(3) Stop pretending that there are laws in economics. There are no universal laws in economics. Economies are not like planetary systems or physics labs. The most we can aspire to in real economies is establishing possible tendencies with varying degrees of generalizability.

(4) Stop treating other social sciences as poor relatives. Economics has long suffered from hubris. A more broad-minded and multifarious science would enrich economics.

(5) Stop building models and making forecasts of the future based on totally unreal micro-founded macromodels with intertemporally optimizing robot-like representative actors equipped with rational expectations. This is pure nonsense. We have to build our models on assumptions that are not blatantly in contradiction to reality. Assuming that people are ‘lightning calculators of pleasures and pains’ is not a good — not even as a ‘successive approximation’ — modelling strategy.

Even with my various “grey” methods for “improving” the data, I wasn’t able to get the results the way I wanted them. I couldn’t resist the temptation to go a step further. I wanted it so badly …

I opened the file with the data that I had entered and changed an unexpected 2 into a 4; then, a little further along, I changed a 3 into a 5. It didn’t feel right.

I looked around me nervously. The data danced in front of my eyes. When the results are just not quite what you’d so badly hoped for … when you know that there are other people doing similar research elsewhere who are getting good results; then, surely, you’re entitled to adjust the results just a little?

No. I clicked on “Undo Typing.” And again. I felt very alone. I didn’t want this. I’d worked so hard. I’d done everything I could and it just hadn’t quite worked out the way I’d expected. It just wasn’t quite how everyone could see that it logically had to be … I looked at the array of data and made a few mouse clicks to tell the computer to run the statistical analyses. When I saw the results, the world had become logical again. I saw what I’d imagined.

In advanced economics the question would be: ‘What besides mathematics should be in an economics lecture?’ In physics the familiar spirit is Archimedes the experimenter. But in economics, as in mathematics itself, it is theorem-proving Euclid who paces the halls …

Economics … has become a mathematical game. The science has been drained out of economics, replaced by a Nintendo game of assumption-making …

Most thoughtful economists think that the games on the blackboard and the computer have gone too far, absurdly too far. It is time to bring economic observation, economic history, economic literature, back into the teaching of economics.

Economists would be less arrogant, and less dangerous as experts, if they had to face up to the facts of the world. Perhaps they would even become as modest as the physicists.

‘New Keynesian’ macroeconomists have for years been arguing (e.g. here) about the importance of the New Classical counter-revolution in economics: the revolution that ‘helped’ change the way macroeconomics is done today, with rational expectations, Euler equations, intertemporal optimization and microfoundations. Their main critique of New Classical macroeconomics is that it didn’t incorporate price stickiness into the Real Business Cycle models the New Classicals developed. So the ‘New Keynesians’ adopted the methodology suggested by the New Classicals and just added price stickiness!

But does putting a sticky-price DSGE lipstick on the RBC pig really help?

The real problem is not that prices are sticky but that trading takes place at disequilibrium prices and there is no mechanism by which to discover what the equilibrium prices are. Modern macroeconomics solves this problem, in its characteristic fashion, by assuming it away by insisting that expectations are “rational.”

Economists have allowed themselves to make this absurd assumption because they are in the habit of thinking that the simple rule of raising price when there is an excess demand and reducing the price when there is an excess supply inevitably causes convergence to equilibrium …

I regard the term “sticky prices” and other similar terms as very unhelpful and misleading; they are a kind of mental crutch that economists are too ready to rely on as a substitute for thinking about what are the actual causes of economic breakdowns, crises, recessions, and depressions. Most of all, they represent an uncritical transfer of partial-equilibrium microeconomic thinking to a problem that requires a system-wide macroeconomic approach. That approach should not ignore microeconomic reasoning, but it has to transcend both partial-equilibrium supply-demand analysis and the mathematics of intertemporal optimisation.

In deductive reasoning all knowledge obtainable is already latent in the postulates. Rigour is needed to prevent the successive inferences growing less and less accurate as we proceed. The conclusions are never more accurate than the data. In inductive reasoning we are performing part of the process by which new knowledge is created. The conclusions normally grow more and more accurate as more data are included. It should never be true, though it is still often said, that the conclusions are no more accurate than the data on which they are based.
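Fisher’s contrast can be illustrated with a toy simulation (plain Python; the target value, noise level, sample sizes and repetition count are all arbitrary choices of mine, not from the text). Each single observation stays equally noisy, yet the inductive conclusion (the sample mean) grows more accurate as more data are included:

```python
import random

# Toy illustration of Fisher's point: an inductive estimate (a sample mean)
# grows MORE accurate as more data are included, even though each single
# datum is no more accurate than before. All numbers here are invented
# purely for the sketch.
random.seed(0)

TRUE_VALUE = 10.0  # the quantity we are trying to estimate
NOISE_SD = 2.0     # accuracy of each individual observation (unchanged below)

def mean_abs_error(n, reps=200):
    """Average absolute error of the sample mean of n noisy draws."""
    total = 0.0
    for _ in range(reps):
        draws = [random.gauss(TRUE_VALUE, NOISE_SD) for _ in range(n)]
        total += abs(sum(draws) / n - TRUE_VALUE)
    return total / reps

print(f"n = 10    -> average error {mean_abs_error(10):.3f}")
print(f"n = 10000 -> average error {mean_abs_error(10_000):.3f}")
```

The error of the conclusion shrinks (roughly as 1/√n) while the accuracy of the data stays fixed, which is exactly the asymmetry between deduction and induction Fisher describes.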

In science we standardly use a logically non-valid inference — the fallacy of affirming the consequent — of the following form:

(1) p => q
(2) q
—————
∴ p

or, in instantiated form

(1) ∀x (Gx => Px)
(2) Pa
—————
∴ Ga

Although logically invalid, it is nonetheless a kind of inference — abduction — that may be factually strongly warranted and truth-producing.
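The logical invalidity of the schema is easy to verify mechanically. A minimal truth-table search in Python (my own toy illustration, not part of the original argument) finds the row that defeats the inference:

```python
from itertools import product

# Mechanical check that 'affirming the consequent' is logically invalid:
# scan the truth table for a row where both premises (p => q, and q) are
# true while the conclusion (p) is false.
def implies(a, b):
    """Material implication: a => b."""
    return (not a) or b

counterexamples = [
    (p, q)
    for p, q in product([True, False], repeat=2)
    if implies(p, q) and q and not p
]
print(counterexamples)  # -> [(False, True)]
```

The premises are both true when p is false and q is true, so the conclusion does not follow deductively; whatever warrant abduction carries must come from elsewhere.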

Following the general pattern ‘Evidence => Explanation => Inference’ we infer something based on what would be the best explanation, given the law-like rule (premise 1) and an observation (premise 2). The truth of the conclusion (explanation) is not logically given, but something we have to justify, argue for, and test in different ways in order to establish it with any degree of certainty. And, as always when we deal with explanations, what counts as best is relative to what we know of the world. In the real world, all evidence is relational (evidence only counts as evidence in relation to a specific hypothesis) and has an irreducible holistic aspect. We never conclude that evidence follows from a hypothesis simpliciter, but always given some more or less explicitly stated contextual background assumptions. All non-deductive inferences and explanations are necessarily context-dependent.

If we extend the abductive scheme to incorporate the demand that the explanation has to be the best among a set of plausible competing potential and satisfactory explanations, we have what is nowadays usually referred to as inference to the best explanation.

In inference to the best explanation we start with a body of (purported) data/facts/evidence and search for explanations that can account for these data/facts/evidence. Having the best explanation means that you, given the context-dependent background assumptions, have a satisfactory explanation that can explain the evidence better than any other competing explanation — and so it is reasonable to consider the hypothesis to be true. Even if we (inevitably) do not have deductive certainty, our reasoning gives us a license to consider our belief in the hypothesis as reasonable.
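As a toy numerical sketch of this procedure (all hypotheses, priors and likelihoods below are invented for illustration; a simple prior-times-likelihood score stands in for “accounts best for the evidence” under fixed background assumptions):

```python
# Toy sketch of inference to the best explanation: score each competing
# hypothesis by how well it accounts for the observed evidence, given
# background assumptions. The hypotheses and all probabilities are
# invented purely for illustration.
priors = {
    "H1: demand shock": 0.4,
    "H2: supply shock": 0.4,
    "H3: data error":   0.2,
}
# P(evidence | hypothesis) under our background assumptions:
likelihoods = {
    "H1: demand shock": 0.10,
    "H2: supply shock": 0.60,
    "H3: data error":   0.05,
}

scores = {h: priors[h] * likelihoods[h] for h in priors}
best = max(scores, key=scores.get)
print(best)  # -> "H2: supply shock"
```

The chosen hypothesis is only the best of the candidates actually considered; nothing in the calculation rules out an unconceived alternative, which is precisely why the inference remains fallible.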

Accepting a hypothesis means that you believe it explains the available evidence better than any other competing hypothesis. Knowing that we — after having earnestly considered and analysed the other available potential explanations — have been able to eliminate the competing potential explanations, warrants and enhances the confidence we have that our preferred explanation is the best explanation, i.e., the explanation that provides us (given it is true) with the greatest understanding.

This, of course, does not in any way mean that we cannot be wrong. Of course, we can. Inferences to the best explanation are fallible inferences — since the premises do not logically entail the conclusion — so from a logical point of view, inference to the best explanation is a weak mode of inference. But if the arguments put forward are strong enough, they can be warranted and give us justified true belief, and hence, knowledge, even though they are fallible inferences. As scientists we sometimes — much like Sherlock Holmes and other detectives that use inference to the best explanation reasoning — experience disillusion. We thought that we had reached a strong conclusion by ruling out the alternatives in the set of contrasting explanations. But — what we thought was true turned out to be false.

That does not necessarily mean that we had no good reasons for believing what we believed. If we cannot live with that contingency and uncertainty, well, then we are in the wrong business. If it is deductive certainty you are after, rather than the ampliative and defeasible reasoning in inference to the best explanation — well, then get into math or logic, not science.

The fallacy of composition basically consists in the false belief that the whole is nothing but the sum of its parts. In society and in the economy this is arguably not the case: an adequate analysis of society and economy cannot proceed by just adding up the acts and decisions of individuals. The whole is more than the sum of its parts.

This fact shows up when orthodox/mainstream/neoclassical economics tries to argue for the existence of The Law of Demand – when the price of a commodity falls, the demand for it will increase – at the aggregate level. Although The Law may be established for single individuals, it turned out – in the Sonnenschein-Mantel-Debreu theorem, firmly established by 1974 – that it wasn’t possible to extend The Law of Demand to the market level, unless one made ridiculously unrealistic assumptions, such as all individuals having identical homothetic preferences.

This could only be conceivable if all agents are identical (i.e. there is in essence only one actor) — the (in)famous representative actor. So, yes, it was possible to generalize The Law of Demand – as long as we assumed that at the aggregate level there was only one commodity and one actor. What a generalization! Does this sound reasonable? Of course not. This is pure nonsense!
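The knife-edge character of that assumption can be seen in a toy calculation (Python; the incomes, price and preference parameter are invented numbers of mine). With identical homothetic (here Cobb-Douglas) preferences, individual demands aggregate exactly as if one representative consumer held the total income; drop the identical-homothetic assumption and this equality generally fails, which is the door the Sonnenschein-Mantel-Debreu results close:

```python
# Toy check of the knife-edge case: with IDENTICAL homothetic (here
# Cobb-Douglas) preferences, individual demands aggregate exactly as if
# there were one representative consumer holding the total income.
# Incomes, the price, and the preference parameter are invented.
ALPHA = 0.3  # shared Cobb-Douglas expenditure share on good 1

def demand_good1(income, price):
    """Cobb-Douglas demand for good 1: spend share ALPHA of income on it."""
    return ALPHA * income / price

incomes = [100.0, 250.0, 650.0]  # a heterogeneous income distribution
price = 2.0

aggregate = sum(demand_good1(m, price) for m in incomes)
representative = demand_good1(sum(incomes), price)
print(aggregate, representative)  # identical: aggregation is legitimate here
```

Under these (and essentially only these) preferences the income distribution drops out of aggregate demand, so the representative-agent shortcut happens to work; with heterogeneous or non-homothetic preferences, redistribution between consumers shifts aggregate demand and the shortcut breaks down.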

How has neoclassical economics reacted to this devastating finding? Basically by looking the other way, ignoring it, and hoping that no one sees that the emperor is naked.

Having gone through a handful of the most frequently used textbooks of economics at the undergraduate level today, I can only conclude that the models that are presented in these modern neoclassical textbooks try to describe and analyze complex and heterogeneous real economies with a single rational-expectations-robot-imitation-representative-agent.

That is, with something that has absolutely nothing to do with reality. And – worse still – with something that is not even amenable to the kind of general equilibrium analysis it is thought to give a foundation for, since Hugo Sonnenschein (1972), Rolf Mantel (1974) and Gerard Debreu (1974) unequivocally showed that there exist no assumptions on individuals that would guarantee either stability or uniqueness of the equilibrium solution.

So what modern economics textbooks present to students are really models built on the assumption that an entire economy can be modelled as a representative actor, and that this is a valid procedure. But it isn’t – as the Sonnenschein-Mantel-Debreu theorem has irrevocably shown.

Of course, one could say that it is too difficult at the undergraduate level to show why the procedure is right, and to defer that demonstration to masters and doctoral courses. It could justifiably be reasoned that way – if what you teach your students were true, i.e., if The Law of Demand were generalizable to the market level and the representative actor were a valid modelling abstraction. But in this case it is demonstrably known to be false, and therefore this is nothing but a case of scandalous intellectual dishonesty. It’s like telling your students that 2 + 2 = 5 and hoping that they will never run into Peano’s axioms of arithmetic.

Once the dust has settled, there is a strong case for an inquiry into whether the teaching of economics has been captured by a small but dangerous sect.

Samuelson’s reconciliation of the micro-economic ideal type with involuntary unemployment was repudiated, along with Keynesian prescriptions, in favor of a view that there could be no involuntary unemployment, hence that government action was unnecessary. The result was a doctrinaire derivation of the laissez-faire conclusions that had been overturned by the formalist revolution; economics was now cleansed of Keynesian impurities that had been introduced in the interest of realism.
