Interesting view. But obviously not all economists are convinced that Libra would be a blessing:

Every currency is based on confidence that the hard-earned dollars “deposited” into it will be redeemable on demand. The private banking sector has long shown that it is untrustworthy in this respect, which is why new prudential regulations have been necessary.

But, in just a few short years, Facebook has earned a level of distrust that took the banking sector much longer to achieve. Time and again, Facebook’s leaders, faced with a choice between money and honoring their promises, have grabbed the money. And nothing could be more about money than creating a new currency. Only a fool would trust Facebook with his or her financial wellbeing. But maybe that’s the point: with so much personal data on some 2.4 billion monthly active users, who knows better than Facebook just how many suckers are born every minute?

Well, if we are to believe most mainstream economists, models are what make economics a science.

In a Journal of Economic Literature review of Dani Rodrik’s Economics Rules, renowned game theorist Ariel Rubinstein discusses Rodrik’s justifications for the view that “models make economics a science.” Although Rubinstein has some doubts about those justifications — models are not indispensable for telling good stories or clarifying things in general; logical consistency does not determine whether economic models are right or wrong; and being able to expand our set of ‘plausible explanations’ doesn’t make economics more of a science than good fiction does — he still largely subscribes to the scientific image of economics as a result of using formal models that help us achieve ‘clarity and consistency’.

There’s much in the review I like — Rubinstein shows a commendable scepticism on the prevailing excessive mathematization of economics, and he is much more in favour of a pluralist teaching of economics than most other mainstream economists — but on the core question, “the model is the message,” I beg to differ with the view put forward by both Rodrik and Rubinstein.

Economics is, more than any other social science, model-oriented. There are many reasons for this — the history of the discipline, ideals taken from the natural sciences (especially physics), the search for universality (explaining as much as possible with as little as possible), rigour, precision, etc.

Mainstream economists want to explain social phenomena, structures and patterns, based on the assumption that the agents are acting in an optimizing (rational) way to satisfy given, stable and well-defined goals.

The procedure is analytical. The whole is broken down into its constituent parts so as to be able to explain (reduce) the aggregate (macro) as the result of interaction of its parts (micro).

Modern mainstream (neoclassical) economists ground their models on a set of core assumptions (CA) — basically describing the agents as ‘rational’ actors — and a set of auxiliary assumptions (AA). Together CA and AA make up what might be called the ‘ur-model’ (M) of all mainstream neoclassical economic models. Based on these two sets of assumptions, they try to explain and predict both individual (micro) and — most importantly — social phenomena (macro).

The core assumptions typically consist of:

CA1 Completeness — a rational actor is able to compare different alternatives and decide which one(s) he prefers

CA2 Transitivity — if the actor prefers A to B, and B to C, he must also prefer A to C.

CA4 Consistent efficiency equilibria — the actions of different individuals are consistent, and the interaction between them results in an equilibrium.

When describing the actors as rational in these models, the concept of rationality used is instrumental rationality — consistently choosing the preferred alternative, the one judged to have the best consequences for the actor given his wishes/interests/goals, which are exogenously given in the model. How these preferences/wishes/interests/goals are formed is typically not considered to be within the realm of rationality, and a fortiori not constituting part of economics proper.

The picture given by this set of core assumptions (rational choice) is a rational agent with strong cognitive capacity who knows what alternatives he is facing, evaluates them carefully, calculates the consequences, and chooses the one that — given his preferences — he believes has the best consequences.

Weighing the different alternatives against each other, the actor makes a consistent optimizing choice (typically described as maximizing some kind of utility function) and acts accordingly.
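The rational-choice picture sketched above can be made concrete in a few lines of code. This is a minimal sketch with hypothetical utility values, not any particular economist's model: an agent whose preferences are represented by a fixed, exogenously given utility function, and whose "choice" is simply maximization over the feasible set. Note how completeness (CA1) and transitivity (CA2) come for free once preferences are represented by numbers.

```python
# A hypothetical agent with an exogenously given utility function,
# illustrating the instrumental-rationality assumptions in the text.
utility = {"A": 3.0, "B": 2.0, "C": 1.0}  # hypothetical values

def prefers(x, y):
    """Completeness (CA1): any two alternatives can be compared."""
    return utility[x] >= utility[y]

def choose(alternatives):
    """The optimizing choice: maximize utility over the feasible set."""
    return max(alternatives, key=lambda a: utility[a])

# Transitivity (CA2) follows automatically from the numeric representation:
assert prefers("A", "B") and prefers("B", "C") and prefers("A", "C")
assert choose(["B", "C", "A"]) == "A"
```

The point of the sketch is how much is packed into that little `utility` dictionary: where the numbers come from, and whether they are stable and well defined, is exactly what the model takes as given.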

Besides the core assumptions (CA), the model also typically has a set of auxiliary assumptions (AA) spatio-temporally specifying the kind of social interaction between ‘rational actors’ that takes place in the model. These assumptions can be seen as giving answers to questions such as

For the sake of balancing the overly rosy picture of econometric achievements given in the usual econometrics textbooks today, it may be interesting to see how Trygve Haavelmo — with the completion (in 1958) of the twenty-fifth volume of Econometrica — assessed the role of econometrics in the advancement of economics.

We have found certain general principles which would seem to make good sense. Essentially, these principles are based on the reasonable idea that, if an economic model is in fact “correct” or “true,” we can say something a priori about the way in which the data emerging from it must behave. We can say something, a priori, about whether it is theoretically possible to estimate the parameters involved. And we can decide, a priori, what the proper estimation procedure should be … But the concrete results of these efforts have often been a seemingly lower degree of accuracy of the would-be economic laws (i.e., larger residuals), or coefficients that seem a priori less reasonable than those obtained by using cruder or clearly inconsistent methods.

There is the possibility that the more stringent methods we have been striving to develop have actually opened our eyes to recognize a plain fact: viz., that the “laws” of economics are not very accurate in the sense of a close fit, and that we have been living in a dream-world of large but somewhat superficial or spurious correlations.

Since statisticians and econometricians have not been able to convincingly warrant their assumptions — homogeneity, stability, invariance, independence, additivity, and so on — as being ontologically isomorphic to real-world economic systems, there are still strong reasons to be critical of the econometric project. There are deep epistemological and ontological problems of applying statistical methods to a basically unpredictable, uncertain, complex, unstable, interdependent, and ever-changing social reality. Methods designed to analyse repeated sampling in controlled experiments under fixed conditions are not easily extended to an organic and non-atomistic world where time and history play decisive roles.

Econometric modelling should never be a substitute for thinking.

The general line you take is interesting and useful. It is, of course, not exactly comparable with mine. I was raising the logical difficulties. You say in effect that, if one was to take these seriously, one would give up the ghost in the first lap, but that the method, used judiciously as an aid to more theoretical enquiries and as a means of suggesting possibilities and probabilities rather than anything else, taken with enough grains of salt and applied with superlative common sense, won’t do much harm. I should quite agree with that. That is how the method ought to be used.

The quasi-peaceable gentleman of leisure, then, not only consumes of the staff of life beyond the minimum required for subsistence and physical efficiency, but his consumption also undergoes a specialisation as regards the quality of the goods consumed. He consumes freely and of the best, in food, drink, narcotics, shelter, services, ornaments, apparel, weapons and accoutrements, amusements, amulets, and idols or divinities.

“[G]iven the realities of real-world research, it seems goofy to say that a result with, say, only a 4.8% probability of happening by chance is “significant,” while if the result had a 5.2% probability of happening by chance it is “not significant.” Uncertainty is a continuum, not a black-and-white difference” …

My problem with the 0.048 vs. 0.052 thing is that it way, way, way understates the problem.

Yes, there’s no stable difference between p = 0.048 and p = 0.052.

But there’s also no stable difference between p = 0.2 (which is considered non-statistically significant by just about everyone) and p = 0.005 (which is typically considered very strong evidence) …

If these two p-values come from two identical experiments, then the standard error of their difference is sqrt(2) times the standard error of each individual estimate, hence that difference in p-values itself is only (2.81 – 1.28)/sqrt(2) = 1.1 standard errors away from zero …

So. Yes, it seems goofy to draw a bright line between p = 0.048 and p = 0.052. But it’s also goofy to draw a bright line between p = 0.2 and p = 0.005. There’s a lot less information in these p-values than people seem to think.

So, when we say that the difference between “significant” and “not significant” is not itself statistically significant, “we are not merely making the commonplace observation that any particular threshold is arbitrary—for example, only a small change is required to move an estimate from a 5.1% significance level to 4.9%, thus moving it into statistical significance. Rather, we are pointing out that even large changes in significance levels can correspond to small, nonsignificant changes in the underlying quantities.”
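Gelman's arithmetic above is easy to verify with nothing but the standard normal distribution (Python standard library only, no statistics package needed). Two-sided p-values are converted to z-scores, and — for two independent, identically designed experiments — the difference between the two estimates has a standard error sqrt(2) times the individual one:

```python
# Reproducing the p-value arithmetic quoted above with the
# standard normal distribution from the Python standard library.
from math import sqrt
from statistics import NormalDist

nd = NormalDist()

def z_from_p(p):
    """z-score corresponding to a two-sided p-value."""
    return nd.inv_cdf(1 - p / 2)

z_strong = z_from_p(0.005)  # ~2.81: "very strong evidence"
z_weak = z_from_p(0.2)      # ~1.28: "not significant"

# z-score of the *difference* between the two estimates
# (independent estimates, so SE of the difference is sqrt(2) larger):
z_diff = (z_strong - z_weak) / sqrt(2)  # ~1.08 standard errors
p_diff = 2 * (1 - nd.cdf(z_diff))       # ~0.28: nowhere near significant

print(round(z_diff, 2), round(p_diff, 2))  # → 1.08 0.28
```

So the gap between "very strong evidence" and "no evidence" is itself a difference one would expect to see by chance more than a quarter of the time — which is the whole point of the quote.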

Imposing a hard target can bind the central bank, but the government must then act on failures to hit the target. Why would it if it is self-interested? If it does, that amounts to saying it is not selfish, which undermines the argument that independence is needed. The same argument can be used to deconstruct independence itself. Suppose independence is a solution to time inconsistency. Why would a selfish politician ever agree to independence in the first place? If they did, that would be tantamount to saying they are not selfish, in which case independence is not needed. In other words, only non-self-interested politicians choose independence, making independence redundant …

Even if the banker is honest, there still remains the fundamental question of why would selfish politicians go against their own interests and appoint a conservative independent central banker? Doing so is tantamount to proving they are not selfish, in which case there is no need for an independent central bank.
That microeconomic contradiction suggests something else is going on with the shift to central bank independence. By definition, selfish politicians cannot be authorizing it out of public interest. Instead, they are doing so out of self-interest, which is the clue to understanding the real reasons for the shift to central bank independence … That implies central bank independence is not the socially benevolent phenomenon mainstream economists and central bankers claim it to be. Instead, somewhat obviously, it is a highly political development serving partisan interests …

The real issues are why do independent banks go after inflation harder, and what is the role of independence?

The reason they go after inflation harder is they are aligned with capital. That is because capital is politically in charge and sets the goals for central banks. It is also because central bankers and their economic advisers have bought into the Chicago School monetary policy framework which implicitly sides with capital (i.e. views the problem as being inflation prone government). That explains why there is central bank independence …

Democratic countries may still decide to implement central bank independence, but that decision is a political one with non-neutral economic and political consequences. It is a grave misrepresentation to claim independence solves a fundamental public interest economic problem, and economists make themselves accomplices by claiming it does.

When I present this argument … one or more scholars say, “But shouldn’t I control for everything I can in my regressions? If not, aren’t my coefficients biased due to excluded variables?” This argument is not as persuasive as it may seem initially. First of all, if what you are doing is misspecified already, then adding or excluding other variables has no tendency to make things consistently better or worse … The excluded variable argument only works if you are sure your specification is precisely correct with all variables included. But no one can know that with more than a handful of explanatory variables.

Still more importantly, big, mushy linear regression and probit equations seem to need a great many control variables precisely because they are jamming together all sorts of observations that do not belong together. Countries, wars, racial categories, religious preferences, education levels, and other variables that change people’s coefficients are “controlled” with dummy variables that are completely inadequate to modeling their effects. The result is a long list of independent variables, a jumbled bag of nearly unrelated observations, and often a hopelessly bad specification with meaningless (but statistically significant with several asterisks!) results.

A preferable approach is to separate the observations into meaningful subsets—internally compatible statistical regimes … If this can’t be done, then statistical analysis can’t be done. A researcher claiming that nothing else but the big, messy regression is possible because, after all, some results have to be produced, is like a jury that says, “Well, the evidence was weak, but somebody had to be convicted.”
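A toy illustration, with fabricated numbers, of the point about jamming incompatible observations together: when two internally coherent "regimes" are pooled into one regression, the pooled slope can have the opposite sign from the slope inside either regime (the classic Simpson's-paradox pattern). A sketch using a hand-rolled OLS slope:

```python
# Fabricated data illustrating why pooling incompatible "regimes"
# into one big regression can be misleading (Simpson's paradox).
def slope(xs, ys):
    """Ordinary least-squares slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# Regime 1: y falls with x. Regime 2: same negative slope, higher level.
x1, y1 = [0, 1, 2, 3], [5, 4, 3, 2]
x2, y2 = [4, 5, 6, 7], [13, 12, 11, 10]

print(slope(x1, y1))            # → -1.0 inside regime 1
print(slope(x2, y2))            # → -1.0 inside regime 2
print(slope(x1 + x2, y1 + y2))  # pooled slope is positive (~1.29)
```

A dummy variable for the regime would shift the intercept, but the deeper problem Achen points to remains: whether the two regimes belong in the same equation at all is a substantive question, not one the regression can answer.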

The empirical and theoretical evidence is clear. Predictions and forecasts are inherently difficult to make in a socio-economic domain where genuine uncertainty and unknown unknowns often rule the roost. The real processes that underlie the time series that economists use to make their predictions and forecasts do not conform with the assumptions made in the applied statistical and econometric models. Much less is a fortiori predictable than standardly — and uncritically — assumed. The forecasting models fail to a large extent because the kind of uncertainty that faces humans and societies makes the models, strictly speaking, inapplicable. The future is inherently unknowable — and using statistics, econometrics, decision theory or game theory does not in the least overcome this ontological fact. The economic future is not something that we normally can predict in advance. Better then to accept that as a rule ‘we simply do not know.’

We could, of course, just assume that the world is ergodic and hence convince ourselves that we can predict the future by looking at the past. Unfortunately, economic systems do not display that property. So we simply have to accept that all our forecasts are fragile.
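What a failure of ergodicity means can be shown with a deliberately simple, hypothetical example (the multipliers below are invented for illustration): a repeated multiplicative gamble whose ensemble average — the average over many parallel histories — grows every round, while the time-average growth rate of any single history is negative. Looking at the past of one trajectory tells you nothing reliable about the "expected" future:

```python
# A hypothetical multiplicative gamble: each round, wealth is multiplied
# by `up` or `down` with equal probability. Ensemble and time averages
# disagree, so the process is not ergodic.
up, down = 1.5, 0.6  # hypothetical per-round wealth multipliers

# Average over many parallel worlds: grows 5% per round.
ensemble_growth = (up + down) / 2        # 1.05

# Typical growth of a single long history (geometric mean): shrinks ~5%.
time_growth = (up * down) ** 0.5         # ~0.949

print(ensemble_growth, round(time_growth, 3))  # → 1.05 0.949
```

In an ergodic world the two numbers would coincide; here they do not even agree in sign of the growth rate, which is the sense in which extrapolating from the past can go badly wrong.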

What does concern me about my discipline … is that its current core — by which I mainly mean the so-called dynamic stochastic general equilibrium approach — has become so mesmerized with its own internal logic that it has begun to confuse the precision it has achieved about its own world with the precision that it has about the real one …

While it often makes sense to assume rational expectations for a limited application to isolate a particular mechanism that is distinct from the role of expectations formation, this assumption no longer makes sense once we assemble the whole model. Agents could be fully rational with respect to their local environments and everyday activities, but they are most probably nearly clueless with respect to the statistics about which current macroeconomic models expect them to have full information and rational information.

This issue is not one that can be addressed by adding a parameter capturing a little bit more risk aversion about macro-economic, rather than local, phenomena. The reaction of human beings to the truly unknown is fundamentally different from the way they deal with the risks associated with a known situation and environment … In realistic, real-time settings, both economic agents and researchers have a very limited understanding of the mechanisms at work. This is an order-of-magnitude less knowledge than our core macroeconomic models currently assume, and hence it is highly likely that the optimal approximation paradigm is quite different from current workhorses, both for academic and policy work. In trying to add a degree of complexity to the current core models, by bringing in aspects of the periphery, we are simultaneously making the rationality assumptions behind that core approach less plausible …

The challenges are big, but macroeconomists can no longer continue playing internal games. The alternative of leaving all the important stuff to the “policy”-types and informal commentators cannot be the right approach. I do not have the answer. But I suspect that whatever the solution ultimately is, we will accelerate our convergence to it, and reduce the damage we do along the transition, if we focus on reducing the extent of our pretense-of-knowledge syndrome.

A great article that also underlines — especially when it comes to forecasting and implementing economic policies — that the future is inherently unknowable, and using statistics, econometrics, decision theory or game theory, does not in the least overcome this ontological fact.

According to Keynes we live in a world permeated by unmeasurable uncertainty – not quantifiable stochastic risk – which often forces us to make decisions based on anything but “rational expectations.” Keynes rather thinks that we base our expectations on the confidence or “weight” we put on different events and alternatives. To Keynes, expectations are a question of weighing probabilities by “degrees of belief,” beliefs that often have preciously little to do with the kind of stochastic probabilistic calculations made by the rational agents as modelled by “modern” social sciences. And often we “simply do not know.”

So why do economists, companies and governments continue with the expensive, but obviously worthless, activity of trying to forecast/predict the future?

A couple of years ago yours truly was interviewed by a public radio journalist working on a series on Great Economic Thinkers. We were discussing the monumental failures of the predictions-and-forecasts-business. But — the journalist asked — if these cocksure economists with their “rigorous” and “precise” mathematical-statistical-econometric models are so wrong again and again — why do they persist wasting time on it?

In a discussion on uncertainty and the hopelessness of accurately modelling what will happen in the real world — in M. Szenberg’s Eminent Economists: Their Life Philosophies — Nobel laureate Kenneth Arrow comes up with what is probably the most plausible reason:

It is my view that most individuals underestimate the uncertainty of the world. This is almost as true of economists and other specialists as it is of the lay public. To me our knowledge of the way things work, in society or in nature, comes trailing clouds of vagueness … Experience during World War II as a weather forecaster added the news that the natural world was also unpredictable. An incident illustrates both uncertainty and the unwillingness to entertain it. Some of my colleagues had the responsibility of preparing long-range weather forecasts, i.e., for the following month. The statisticians among us subjected these forecasts to verification and found they differed in no way from chance. The forecasters themselves were convinced and requested that the forecasts be discontinued. The reply read approximately like this: ‘The Commanding General is well aware that the forecasts are no good. However, he needs them for planning purposes.’

While both authors subscribe to realism, they practise two types of realism. The realism supported by Duflo is akin to a naive ‘metrological realism’ … in which quantification is seen as merely mirroring reality within a margin of error, whereas Ostrom seems closer to critical realism and constructivism: the way we perceive and quantify reality is moulded by our cognitive maps and conventions. The rationales of the social scientist and of the economic actors are also distinctive. Whereas Duflo underlines the objectivity and rightness of the scientist applying sound techniques — which contrasts with the lack of information and the restrained horizon of local actors — Ostrom emphasises the processual, bounded and interpretative rationality of both the researcher and the observed actors. This leads to diverging views and normative agendas regarding development, politics and economics.

Duflo sees development as the implementation and replication of expert-led fixes to provide basic goods for the poor who are often blinded by their exacting situation. It is a technical quest for certainty and optimal measures in a fairly static framework. For the Ostroms, there are no best practices, only a few architectonic principles to build locally resilient orders. They view development as a situated learning process under uncertainty …

In Duflo’s science-based ‘benevolent paternalism’, the experimental technique works as an ‘anti-politics machine’ … social goals being predefined and RCT outcomes ideally settling ambiguities and conflicts. Real-world politics — disregarding or instrumentalising RCTs — and institutions — resulting from social compromises instead of evidence — are thus often perceived as external disturbances and constraints to economic science and evidence-based policy. This depoliticising stance is at odds with the significance of political economy for the Ostroms, their emphasis on deliberation to co-construct the aspirations and agencies of communities. While Duflo and Banerjee are in line with a technocratic democracy, the Ostroms sustain a Tocquevillean democratic self-governance. For the latter, institutions emanating from democratic processes, far from being straitjackets, are the core of economic processes. They simultaneously constrain and enable human action.
