What guarantee is there … that economic concepts can be mapped unambiguously and subjectively – to be terribly and unnecessarily mathematical about it – into mathematical concepts? The belief in the power and necessity of formalizing economic theory mathematically has thus obliterated the distinction between cognitively perceiving and understanding concepts from different domains and mapping them into each other. Whether the age-old problem of the equality between supply and demand should be mathematically formalized as a system of inequalities or equalities is not something that should be decided by mathematical knowledge or convenience. Surely it would be considered absurd, bordering on the insane, if a surgical procedure was implemented because a tool for its implementation was devised by a medical doctor who knew and believed in topological fixed-point theorems? Yet, weighty propositions about policy are decided on the basis of formalizations based on ignorance and belief in the veracity of one kind of one-dimensional mathematics.

It is important, for the record, to recognize that key participants in the debate openly admitted their mistakes. Samuelson’s seventh edition of Economics was purged of errors. Levhari and Samuelson published a paper which began, ‘We wish to make it clear for the record that the nonreswitching theorem associated with us is definitely false’ … Leland Yeager and I jointly published a note acknowledging his earlier error and attempting to resolve the conflict between our theoretical perspectives … However, the damage had been done, and Cambridge, UK, ‘declared victory’: Levhari was wrong, Samuelson was wrong, Solow was wrong, MIT was wrong and therefore neoclassical economics was wrong. As a result there are some groups of economists who have abandoned neoclassical economics for their own refinements of classical economics. In the United States, on the other hand, mainstream economics goes on as if the controversy had never occurred. Macroeconomics textbooks discuss ‘capital’ as if it were a well-defined concept — which it is not, except in a very special one-capital-good world (or under other unrealistically restrictive conditions). The problems of heterogeneous capital goods have also been ignored in the ‘rational expectations revolution’ and in virtually all econometric work.

Why Discussions of Methodology Are Risky

So what does this mean for a discussion about methodology? I think that it would eventually be valuable to have a discussion about methodology, but only if we can trust that the people who participate are committed to the norms of science. It is too soon to start that discussion now.

We have clear evidence from the recent past that when someone who is secretly committed to the norms of politics is trusted for advice about scientific methodology, things can turn out very badly for the discipline. Bad methodology can do a lot more harm than a bad model.

So, Paul Romer seems to be rather reluctant to have a methodological discussion — it’s too “risky”.

Well, maybe, but on the other hand, if we’re not prepared to take that risk, economics can’t progress, as Tony Lawson forcefully argues in his new book, Essays on the Nature and State of Modern Economics:

Twenty common myths and/or fallacies of modern economics

1. The widely observed crisis of the modern economics discipline turns on problems that originate at the level of economic theory and/or policy.

It does not. The basic problems mostly originate at the level of methodology, and in particular with the current emphasis on methods of mathematical modelling. The latter emphasis is an error given the lack of match of the methods in question to the conditions in which they are applied. So long as the critical focus remains only, or even mainly, at the level of substantive economic theory and/or policy matters, then no amount of alternative textbooks, popular monographs, introductory pocketbooks, journal or magazine articles … or whatever, is going to get at the nub of the problems and so have the wherewithal to help make economics a sufficiently relevant discipline. It is the methods and manner of their use that are the basic problem.

How selfish soever man may be supposed, there are evidently some principles in his nature, which interest him in the fortune of others, and render their happiness necessary to him, though he derives nothing from it except the pleasure of seeing it. Of this kind is pity or compassion, the emotion which we feel for the misery of others, when we either see it, or are made to conceive it in a very lively manner. That we often derive sorrow from the sorrow of others, is a matter of fact too obvious to require any instances to prove it; for this sentiment, like all the other original passions of human nature, is by no means confined to the virtuous and humane, though they perhaps may feel it with the most exquisite sensibility. The greatest ruffian, the most hardened violator of the laws of society, is not altogether without it.

25 May, 2015 at 10:01 | Posted in Economics | Comments Off on TPP and the Economics 101 ideology

I’ve written several times about what I call the Economics 101 ideology: the overuse of a few simplified concepts from an introductory course to make sweeping policy recommendations (while branding any opponents as ignorant simpletons). The most common way that first-year economics is misused in the public sphere is ignoring assumptions. For example, most arguments for financial deregulation are ultimately based on the idea that transactions between rational actors with perfect information are always good for both sides — and most of the people making those arguments have forgotten that people are not rational and do not have perfect information.

Mark Buchanan and Noah Smith have both called out Greg Mankiw for a different and more pernicious way of misusing first-year economics: simply ignoring what it teaches — or, in this case, what Mankiw himself teaches. At issue is Mankiw’s Times column claiming that all economists agree on the overall benefits of free trade, so everyone should be in favor of the Trans-Pacific Partnership, among other trade agreements.

This is what Mankiw writes about international trade in his textbook (p. 183 of the fifth edition):

“Trade can make everyone better off. … [T]he gains of the winners exceed the losses of the losers, so the winners could compensate the losers and still be better off. … But will trade make everyone better off? Probably not. In practice, compensation for the losers from international trade is rare. …
“We can now see why the debate over trade policy is often contentious. Whenever a policy creates winners and losers, the stage is set for a political battle.”

Yet, in his recent column, Mankiw says that opposition to free trade is because of irrational voters who are subject to “anti-foreign,” “anti-market,” and “make-work” biases. He doesn’t mention what he said clearly in his textbook: opposition to free trade is perfectly rational on the part of people who will be harmed by it, and they express that opposition through the political process. That’s how a democracy is supposed to work, by the way.

Mankiw’s column is a perfect example of how ideology works. It provides a simple way to interpret the world — people who don’t agree with you are idiots or xenophobes — while sweeping aside inconvenient evidence to the contrary. And first-year economics is as powerful an ideology as we have in this country today.

24 May, 2015 at 16:34 | Posted in Economics | Comments Off on ‘Doctor, it hurts when I p’

A low-powered study is only going to be able to see a pretty big effect. But sometimes you know that the effect, if it exists, is small. In other words, a study that accurately measures the effect … is likely to be rejected as statistically insignificant, while any result that passes the p < .05 test is either a false positive or a true positive that massively overstates the … effect.

…

A conventional boundary, obeyed long enough, can be easily mistaken for an actual thing in the world. Imagine if we talked about the state of the economy this way! Economists have a formal definition of a ‘recession,’ which depends on arbitrary thresholds just as ‘statistical significance’ does. One doesn’t say, ‘I don’t care about the unemployment rate, or housing starts, or the aggregate burden of student loans, or the federal deficit; if it’s not a recession, we’re not going to talk about it.’ One would be nuts to say so. The critics — and there are more of them, and they are louder, each year — say that a great deal of scientific practice is nuts in just this way.
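The low-power problem described in the quote can be illustrated with a minimal simulation (my own sketch, not from the quoted text; the effect size, sample size, and number of simulations are arbitrary assumptions chosen to make the point visible): when the true effect is small and the study is underpowered, the estimates that do clear the p &lt; .05 bar systematically exaggerate the effect.

```python
import random
import math

random.seed(42)

TRUE_EFFECT = 0.2   # small true effect, in standard-deviation units (assumed)
N = 30              # per-group sample size -> low statistical power (assumed)
SIMS = 5000

significant_estimates = []
for _ in range(SIMS):
    # Two-sample experiment: control vs. treatment, unit variance
    control = [random.gauss(0.0, 1.0) for _ in range(N)]
    treated = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N)]
    diff = sum(treated) / N - sum(control) / N
    se = math.sqrt(2.0 / N)          # standard error of the mean difference
    z = diff / se
    if abs(z) > 1.96:                # the conventional p < .05 threshold
        significant_estimates.append(diff)

power = len(significant_estimates) / SIMS
mean_sig = sum(significant_estimates) / len(significant_estimates)
print(f"power: {power:.2f}")                           # well below 0.5
print(f"true effect: {TRUE_EFFECT}")
print(f"mean 'significant' estimate: {mean_sig:.2f}")  # typically around
                                                       # three times the
                                                       # true effect
```

With roughly twelve per cent power, the published record from such studies would consist almost entirely of estimates that are either false positives or true positives that massively overstate the effect, exactly as the quote warns.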

If anything, this underlines how important it is not to equate science with statistical calculation. All science entails human judgement, and using statistical models doesn't relieve us of that necessity. When we work with misspecified models, the scientific value of significance testing is actually zero — even though we may be making valid statistical inferences! Statistical models and concomitant significance tests are no substitute for doing real science. Or as a noted German philosopher once famously wrote:

There is no royal road to science, and only those who do not dread the fatiguing climb of its steep paths have a chance of gaining its luminous summits.

Statistical significance doesn't say that something is important or true. Since there are already far better and more relevant tests that can be done (see e.g. here and here), it is high time to reconsider the proper function of what has really become a statistical fetish. Given that it is anyway very unlikely that any population parameter is exactly zero, and that, contrary to assumption, most samples in social science and economics are not random and do not have the right distributional shape – why continue to press students and researchers to do null hypothesis significance testing, testing that relies on a weird backward logic that students and researchers usually don't understand?
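The point that no population parameter is exactly zero has a direct consequence: with a large enough sample, even a substantively negligible difference becomes "statistically significant". A back-of-the-envelope sketch (my own illustration; the 0.01 effect size and sample sizes are assumptions):

```python
import math

# Suppose the true effect is 0.01 standard deviations -- substantively
# negligible by almost any standard.
effect = 0.01

for n in (1_000, 100_000, 10_000_000):
    se = math.sqrt(2.0 / n)   # standard error of a two-sample mean difference
    z = effect / se           # expected z-statistic at this sample size
    verdict = "significant" if z > 1.96 else "not significant"
    print(f"n = {n:>10,}: expected z = {z:.2f} ({verdict})")
```

Somewhere around n = 100,000 the trivial effect reliably clears the 5% bar, which is precisely why statistical significance cannot stand in for importance or truth.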

In its standard form, a significance test is not the kind of "severe test" we are looking for when trying to confirm or disconfirm empirical scientific hypotheses. This is problematic for many reasons, one being the strong tendency to accept the null hypothesis whenever it can't be rejected at the standard 5% significance level. In their standard form, significance tests thus bias against new hypotheses by making it hard to disconfirm the null.

As shown over and over again when it is applied, people have a tendency to read “not disconfirmed” as “probably confirmed.” And — most importantly — we should of course never forget that the underlying parameters we use when performing significance tests are model constructions. Our p-values mean next to nothing if the model is wrong. As David Freedman writes in Statistical Models and Causal Inference:

I believe model validation to be a central issue. Of course, many of my colleagues will be found to disagree. For them, fitting models to data, computing standard errors, and performing significance tests is “informative,” even though the basic statistical assumptions (linearity, independence of errors, etc.) cannot be validated. This position seems indefensible, nor are the consequences trivial. Perhaps it is time to reconsider.

24 May, 2015 at 11:06 | Posted in Economics | Comments Off on Solow and Krugman on inequality

Are you tired of people like walked-out Harvard economist Greg Mankiw and their repeated attempts at defending the 1 % by invoking Adam Smith’s invisible hand and arguing that a market economy is some kind of moral free zone where, if left undisturbed, people get what they “deserve”?

Then I suggest you listen to this great conversation on inequality:

Listening to Solow and Krugman is a healthy antidote to unashamed neoliberal inequality apologetics.

The outstanding faults of the economic society in which we live are its failure to provide for full employment and its arbitrary and inequitable distribution of wealth and incomes … I believe that there is social and psychological justification for significant inequalities of income and wealth, but not for such large disparities as exist to-day.

John Maynard Keynes General Theory (1936)

A society that allows the inequality of incomes and wealth to increase without bounds sooner or later implodes. The cement that keeps us together erodes, and in the end we are left only with people dipped in the ice-cold water of egoism and greed.

20 May, 2015 at 19:01 | Posted in Economics | Comments Off on Consistency and validity is not enough!

Neoclassical economic theory today is in the story-telling business whereby economic theorists create make-believe analogue models of the target system – usually conceived as the real economic system. This modeling activity is considered useful and essential. Since fully-fledged experiments on a societal scale as a rule are prohibitively expensive, ethically indefensible or unmanageable, economic theorists have to substitute experimenting with something else. To understand and explain relations between different entities in the real economy the predominant strategy is to build models and make things happen in these “analogue-economy models” rather than engineering things happening in real economies.

Formalistic deductive “Glasperlenspiel” can be very impressive and seductive. But in the realm of science it ought to be considered of little or no value to simply make claims about the model and lose sight of reality. As Julian Reiss writes:

There is a difference between having evidence for some hypothesis and having evidence for the hypothesis relevant for a given purpose. The difference is important because scientific methods tend to be good at addressing hypotheses of a certain kind and not others: scientific methods come with particular applications built into them … The advantage of mathematical modelling is that its method of deriving a result is that of mathematical proof: the conclusion is guaranteed to hold given the assumptions. However, the evidence generated in this way is valid only in abstract model worlds while we would like to evaluate hypotheses about what happens in economies in the real world … The upshot is that valid evidence does not seem to be enough. What we also need is to evaluate the relevance of the evidence in the context of a given purpose.

Neoclassical economics has long since given up on the real world and contents itself with proving things about thought-up worlds. Empirical evidence only plays a minor role in economic theory, where models largely function as a substitute for empirical evidence. Hopefully humbled by the manifest failure of its theoretical pretences, the one-sided, almost religious, insistence on axiomatic-deductivist modeling as the only scientific activity worthy of pursuing in economics will give way to methodological pluralism based on ontological considerations rather than formalistic tractability. To have valid evidence is not enough. What economics needs is sound evidence.

Discussing Paul Romer’s “mathiness” concept, Peter Dorman yesterday criticized economists’ belief that theories and models being “consistent with” data somehow make the theories and models a success story. And Chris Dillow elaborates on the weakness of this “consistent with” error in a post today:

If a man has no money, this is “consistent with” the theory that he has given it away. But if in fact he has been robbed, that theory is grievously wrong. Mere consistency with the facts is not sufficient.

This is a point which some defenders of inequality miss. Of course, you can devise theories which are "consistent with" inequality arising from reasonable differences in choices and marginal products. Such theories, though, beg the question: is that how inequality really emerged? And the answer, to put it mildly, is: only partially. It also arose from luck, inefficient selection, rigged markets, rent-seeking and outright theft …

The Duhem-Quine thesis warns us that facts under-determine theory: they are “consistent with” multiple theories. This is perhaps especially true when those facts are snapshots. For example, a Gini coefficient – being a mere snapshot of inequality – tells us nothing about how the inequality emerged.

So, how can we guard against the “consistent with” error? One thing we need is history: this helps tell us how things actually happened. And – horrific as it might seem to some economists – we also need sociology: we need to know how people actually behave and not merely that their behaviour is “consistent with” some theory. Economics, then, cannot be a stand-alone discipline but part of the social sciences and humanities – a point which is lost in the discipline’s mathiness.
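Dillow's Gini point is easy to make concrete (a minimal sketch of my own; the "theft" numbers are an invented toy example): the very same income snapshot, and hence the very same Gini coefficient, could be the end state of a gradual "marginal productivity" story or of outright expropriation. The snapshot cannot tell the two histories apart.

```python
def gini(incomes):
    """Gini coefficient via the sorted mean-absolute-difference formula."""
    xs = sorted(incomes)
    n = len(xs)
    weighted = sum((2 * i - n - 1) * x for i, x in enumerate(xs, start=1))
    return weighted / (n * sum(xs))

# History A ("marginal products"): ten incomes diverge gradually as
# individual productivities drift apart over many periods.
# History B ("outright theft"): everyone starts at 100, and one agent
# expropriates half of everyone else's income.
# Suppose both histories end in the very same snapshot:
snapshot = [50.0] * 9 + [550.0]

print(f"Gini = {gini(snapshot):.2f}")  # identical whichever history produced it
```

Whatever process generated the distribution, the coefficient comes out the same, which is exactly why we need history and sociology, and not just the snapshot, to say how the inequality emerged.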

Yes indeed, history helps. And if we're not too ‘busy’ doing the things we do, but once in a while take a break and do some methodological reflection on why we do what we do — well, that takes us a long way too.

To me this sounds more like a person afraid of methodological self-reflection, rather than an open-minded and pluralist person.

Where does this methodology-aversion come from?

As far as yours truly can see, it all boils down to a misplaced belief that deductivist mathematical reasoning is the only kind of scientific economics around. If economics isn't performed as mathematical modeling, it isn't really science in Romer's world-view. There is no problem with holding that view — as long as you have done some ontological and methodological reflection and presented arguments for the appropriateness of insisting on deductivist-mathematical modeling as the preferred scientific procedure in economics. No such argumentation is presented.

When applying deductivist thinking to economics, Romer and other mainstream economists usually set up “as if” models based on a set of tight axiomatic assumptions from which consistent and precise inferences are made. The beauty of this procedure is of course that if the axiomatic premises are true, the conclusions necessarily follow. The snag is that if the models are to be relevant, we also have to argue that their precision and rigour still holds when they are applied to real-world situations. They often don’t. When addressing real economies, the idealizations necessary for the deductivist machinery to work, simply don’t hold.

So how should we evaluate the search for ever greater precision and the concomitant arsenal of mathematical and formalist models? To a large extent, the answer hinges on what we want our models to do and how we basically understand the world.

The world in which we live is inherently uncertain, and quantifiable probabilities are the exception rather than the rule. To every statement about it is attached a "weight of argument" that makes it impossible to reduce our beliefs and expectations to a one-dimensional stochastic probability distribution. If "God does not play dice" as Einstein maintained, I would add "nor do people". The world as we know it has limited scope for certainty and perfect knowledge. Its intrinsic and almost unlimited complexity and the interrelatedness of its organic parts prevent the possibility of treating it as constituted by "legal atoms" with discretely distinct, separable and stable causal relations. Our knowledge accordingly has to be of a rather fallible kind.

To search for precision and rigour in such a world is self-defeating, at least if precision and rigour are supposed to assure external validity. The only way to defend such an endeavour is to turn a blind eye to ontology and restrict oneself to proving things in closed model-worlds. Why we should care about these, and not ask questions of relevance, is hard to see. We at least have to justify our disregard for the gap between the nature of the real world and our theories and models of it.

Now, if the real world is fuzzy, vague and indeterminate, then why should our models be built on a desire to describe it as precise and predictable? Even if there always has to be a trade-off between theory-internal validity and external validity, we have to ask ourselves if our models are relevant.

Models preferably ought to somehow reflect/express/correspond to reality. I’m not saying that the answers are self-evident, but at least you have to do some methodological and philosophical under-labouring to rest your case. Too often that is wanting in modern economics, where methodological justifications of chosen models and methods as a rule are non-existent.

“Human logic” has to supplant the classical, formal, logic of deductivism if we want to have anything of interest to say of the real world we inhabit. Logic is a marvellous tool in mathematics and axiomatic-deductivist systems, but a poor guide for action in real-world systems, in which concepts and entities are without clear boundaries and continually interact and overlap. In this world I would say we are better served with a methodology that takes into account that “the more we know the more we know we don’t know”.

The models and methods we choose to work with have to be in conjunction with the economy as it is situated and structured. Epistemology has to be founded on ontology. Deductivist closed-system theories, such as all the varieties of the Walrasian general equilibrium kind, could perhaps adequately represent an economy showing closed-system characteristics. But since the economy clearly has more in common with an open-system ontology, we ought to look for other theories – theories that are rigorous and precise in the sense that they can be deployed to detect important causal mechanisms, capacities and tendencies pertaining to deep layers of the real world.

Rigour, coherence and consistency have to be defined relative to the entities for which they are supposed to apply. Too often they have been restricted to questions internal to the theory or model. But clearly the nodal point has to concern external questions, such as how our theories and models relate to real-world structures and relations. Applicability rather than internal validity ought to be the arbiter of taste.

But obviously Paul Romer doesn't want to talk about these scary methodological-philosophical issues. He is ‘busy’ …
