It quotes Nicolas Mombrial, head of Oxfam International’s office in Washington DC, saying that (my emphasis): “the IMF proves that making the rich richer does not work for growth, while focusing on the poor and the middle class does” and that “the IMF has shown that ‘trickle down’ economics is dead; you cannot rely on the spoils of the extremely wealthy to benefit the rest of us.”

The aim of this blog post is to clarify that the results in Table 1 of the paper, which are based on system GMM estimation, rely on assumptions that are not spelled out explicitly and whose validity is therefore very difficult to assess. In not reporting this and other relevant information, the paper’s application of system GMM falls short of current best practices. As a result, without this additional information, I would be wary to update my prior on the effect of inequality on growth based on the new results reported in this paper.

The paper attempts to establish the causal effect of various income quintiles (the share of income accruing to the bottom 20%, the next 20% etc.) on economic growth. It finds that a country will grow faster if the share of income held by the bottom three quintiles increases. In contrast, a higher income share for the richest 20% reduces growth. As you can imagine, establishing such a causal effect is difficult: growth might affect how income is distributed, and numerous other variables (openness to trade, institutions, policy choices…) might affect both growth and the distribution of income. Clearly, this implies that any association found between the income distribution and growth might reflect things other than just the causal effect of the former on the latter.

To try to get around this problem, the authors use a system GMM estimator. This estimator consists of (i) differenced equations where the changes in the variables are instrumented by their lagged levels and (ii) equations in levels where the levels of variables are instrumented by their lagged differences (Bond, 2002, is an excellent introduction). Roughly speaking, the hope is that these lagged levels and differences isolate bits of variation in income share quintiles that are not affected by growth or any of the omitted variables. These bits of variation can then be used to identify the causal effect of the income distribution on growth. The problem with the IMF paper is that it does not tell you exactly which lagged levels and differences it uses as instruments, making it hard for readers to assess how plausible it is that the paper has identified a causal effect.

The paper also omits to report a number of routine tests (for serial correlation in the error term, and for the validity of overidentifying restrictions) that could help to shed some light on the likely validity of the assumptions that are made in order to identify a causal effect. It does not report an instrument count, nor does it mention efforts to reduce the number of instruments used (by restricting the number of lags used as instruments, by collapsing the instrument matrix, or by replacing candidate instruments by their principal components). Without such adjustments, system GMM tends to generate too many instruments, weakening the usefulness of the aforementioned tests and reducing the likelihood that the estimated coefficients are causal. It also does not mention which type of system GMM estimator (e.g. 1-step or 2-step) is used, nor how standard errors are obtained. This matters: it is, for instance, well-known that, without an adjustment, the 2-step GMM estimator yields standard errors that are much too low, giving a false sense of precision (Windmeijer, 2005).
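To see why these adjustments matter, here is a back-of-the-envelope illustration of how the instrument count explodes with the number of time periods under the standard “GMM-style” instrumenting scheme (differenced equations only, a single instrumented variable; this is a sketch for intuition, not the count in any particular paper):

```python
def gmm_style_instrument_count(T: int, collapse: bool = False) -> int:
    """Instruments for one variable in the differenced equations,
    using all available lagged levels (dated t-2 and earlier).

    Un-collapsed, each (period, lag) pair gets its own instrument
    column, so the count grows quadratically in T. Collapsing the
    instrument matrix keeps one column per lag distance instead.
    """
    if collapse:
        return T - 2                   # one column per lag distance
    return (T - 1) * (T - 2) // 2      # sum over t = 3..T of (t - 2) columns

# With, say, 10 periods and 5 instrumented variables, the un-collapsed
# count is already 5 * 36 = 180 instruments, easily exceeding the number
# of countries in a typical cross-country panel:
print(gmm_style_instrument_count(10))                 # 36
print(gmm_style_instrument_count(10, collapse=True))  # 8
```

Once the instrument count approaches or exceeds the number of countries, the Hansen test of overidentifying restrictions loses much of its power, which is one reason reporting the instrument count is considered essential.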

Especially in samples of this size, system GMM can be very sensitive to the exact assumptions made. It is good practice to examine how results change when using slightly different assumptions and different sets of instruments. While on page 7 the paper mentions that the results “survive a variety of robustness checks”, no details are provided. Footnote 2 claims that similar results are obtained when controlling for standard growth determinants, such as human and physical capital, but this is rather puzzling: the theory discussed in later pages suggests that human and physical capital accumulation are two of the primary channels through which inequality has an effect on growth. If that is true, then controlling for them should weaken the impact of inequality on growth.

To sum up, when applying system GMM, researchers are required to make a range of choices, in part reflecting the assumptions that they are willing to make about how the different variables are generated in the real world. These choices often matter a great deal in practice, and a rich literature is available that offers guidance on how to make these choices (see e.g. Roodman, 2009a,b). Even a judicious use of system GMM to identify the causal effect of the income distribution on growth is likely to leave at least some unconvinced, as the assumptions involved are not trivial. By not being transparent about these assumptions, which could have been described and motivated in a few pages of appendix, this paper falls short of current best practice in applying system GMM estimation.

It is therefore surprising to see how quickly some readers have treated these results as a breakthrough in establishing the deleterious effects of inequality (in the section on the drivers of inequality, though not in the section discussed in this blog post, the paper itself advises caution about “drawing definitive policy implications from cross-country regression analysis”). Inequality may be an important growth deterrent and this paper’s results may well stand up to further scrutiny but, in the absence of further information, at this point in time I would hesitate to update my priors on the effect of inequality on the basis of the new results that are reported in this paper.

Roodman, D. (2009a): “How to do xtabond2: an introduction to difference and system GMM in Stata,” Stata Journal, 9(1), 86–136.
Roodman, D. (2009b): “A note on the theme of too many instruments,” Oxford Bulletin of Economics and Statistics, 71(1), 135–158.

A few months ago I did an interview for European CEO about foreign aid. I meant to post this at the time, but forgot to hit “publish” on the blog, so here you go.

The editor did a wonderful job of making me sound somewhat coherent, although some of my answers are still pretty general.

Another shooting happens. This time in Charleston. I grew up about two hours away and often visited the town with my parents. It’s a lovely place, although like most places in South Carolina it has a difficult, disturbing past.

There are two opposing views which typically surface after a mass shooting. The first is that gun violence is driven by gun ownership, and that effective gun control will reduce the number of people killed by firearms every year. A simple mathematical way of describing this relationship would be to say that gun violence is a function of the number of guns in a country:

V = F(G)

The opposing view is that there are all sorts of other things that determine gun violence. Proponents look to countries with high levels of gun ownership but low levels of violence, such as Canada. Holders of this view assert a relationship that looks like this:

V = F(S)

Where S is “other stuff” which influences gun violence. This is somewhat consistent with the “guns don’t kill people, people kill people” argument, which includes the unstated third statement: “and there are lots of things that determine whether people want to kill each other.”

Setting aside any preoccupations with the Second Amendment, the gun control debate can be characterized as a fight over whether V = F(G) or V = F(S). But this is a mischaracterization, and one which gives too much legitimacy to those opposed to gun control. In reality, gun violence is a function of both the number of guns in circulation and all the “other stuff”, and, by construction, fewer guns make it more difficult to commit gun violence, so that:

V = F(G, S), with V = 0 if G = 0 or S = 0

That is: it doesn’t matter if Canada can have its cake and eat it. If there is some special ingredient to having guns without the violence (S = 0), we don’t know what it is, and won’t know any time soon. But that doesn’t mean that reducing G will not reduce violence. Whether it is a cost-effective way to reduce violence is another question, but unless someone identifies what goes into S, the best bet is for the US to focus on G.
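The argument above can be sketched as a toy model. The multiplicative functional form and the numbers are my own assumptions, chosen purely to illustrate the logic, not estimates of anything:

```python
def violence(guns: float, other_stuff: float) -> float:
    """Toy model of V = F(G, S): multiplicative, so that V = 0
    when either G = 0 (no guns) or S = 0 (none of the "other
    stuff" that drives people to violence).
    """
    return guns * other_stuff

# Reducing G lowers V even when S is unknown and unchanged:
baseline = violence(guns=10.0, other_stuff=3.0)   # 30.0
fewer_guns = violence(guns=5.0, other_stuff=3.0)  # 15.0
assert fewer_guns < baseline

# The "Canada case": S = 0 gives V = 0 regardless of G.
assert violence(guns=10.0, other_stuff=0.0) == 0.0
```

The point of the multiplicative form is exactly the one in the text: you do not need to know what S is, or how to move it, for a reduction in G to reduce V.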

The Ethiopians appear to be close to finalizing construction of a large hydroelectric dam on the Omo river, primarily to generate power but also to support local irrigation efforts. Over the past five years the project has received substantial foreign financing and investment by China and indirectly by the World Bank. However, there appears to have been little consideration of the potential downstream impacts: the Omo river feeds Lake Turkana, which is a source of livelihood for a large number of communities in northern Kenya. The possibility that the lake may be partially drained is obviously upsetting a lot of people, although it does not seem that the Kenyan government is making a big fuss over the project.

This is a typical problem of negative externalities: the Ethiopians aren’t factoring in the welfare of Kenyan Turkana residents in the decision to build the dam. There’s actually some research showing that this is a common problem. From a recent World Bank paper by Sheila Olmstead and Hilary Sigman:

This paper examines whether countries consider the welfare of other nations when they make water development decisions. The paper estimates econometric models of the location of major dams around the world as a function of the degree of international sharing of rivers. The analysis finds that dams are more prevalent in areas of river basins upstream of foreign countries, supporting the view that countries free ride in exploiting water resources. There is weak evidence that international water management institutions reduce the extent of such free-riding.

By their very nature dams generate inequality in the flow of water between upstream and downstream areas. It is easier to pay the cost of hurting downstream communities when they are in a different country (hey, they don’t vote for you). Ergo, countries are more likely to build dams when the costs are external.

It would be interesting to see what mitigates these effects – it is possible that Kenya’s relative indifference is due to lack of political power on the part of the northern tribes. Are dams with substantial cross-border costs less likely in areas where the proximate ethnic group is quite powerful?

This photo made my day. It is of staff from a zoo in Taiyuan, China taking part in a drill for animal-related emergencies. It’s part of an excellent photo essay in The Atlantic on similar efforts throughout China and Japan.

Of course I edit all my documents using the original Nintendo Power Glove

Throughout the mid-90s, my father used a DOS-based typesetting program called PC-Write to produce his books and journal articles. In stark contrast to more-popular word processing programs, PC-Write relied on a what-you-see-is-what-you-mean approach to typesetting: dad would indicate his formatting preferences as he wrote, but he would be forced to print out a page in order to see his formatting options being applied. By contrast, I grew up working with Microsoft Word and so with each passing year I found my father’s system to be increasingly archaic. Eventually, after a substantial amount of healthy mockery from his son, he migrated over to Word and hasn’t looked back since.

However, by the time I arrived in grad school an increasing number of other (economics) students were using LaTeX, a typesetting language that was much closer in design to the old-fashioned PC-Write than to the what-you-see-is-what-you-get format of Word. Although I suspected that LaTeX was another manifestation of the academic economist’s tendency to choose overly-complex methods and technical mastery over user-friendliness, I eventually became a convert. Somehow, I found that my preferences had begun to mirror Dad’s original love of PC-Write.

If you ever feel like experiencing a wonderfully-arbitrary argument, ask a group of economists if they prefer LaTeX or Word. Within the profession there is a pretty serious division between those who prefer the look and workflow of the former and those who prefer the accessibility of the latter. While there are some of us who are comfortable working in both formats, each camp has its stalwarts who find members of the other camp to be bizarrely inefficient.

The two sides appeared to be in a stable stalemate until recently, when a new study comparing the efficiency and error rates of LaTeX and Word users appeared in PLOS One. The headline result: Word users work faster AND make fewer errors than LaTeX users.

Ooof – I hear the sound of a thousand co-authors crying out with righteous indignation. The Word camp was quick to seize upon this study as clear evidence that LaTeX users were probably deluding themselves and that now would be a good time for everyone to get off of their high horse. The authors of the report even went so far as to suggest that LaTeX users were wasting public resources and that journals should consider not accepting manuscripts written up using LaTeX:

Given these numbers it remains an open question to determine the amount of taxpayer money that is spent worldwide for researchers to use LaTeX over a more efficient document preparation system, which would free up their time to advance their respective field. Some publishers may save a significant amount of money by requesting or allowing LaTeX submissions because a well-formed LaTeX document complying with a well-designed class file (template) is much easier to bring into their publication workflow. However, this is at the expense of the researchers’ labor time and effort. We therefore suggest that leading scientific journals should consider accepting submissions in LaTeX only if this is justified by the level of mathematics presented in the paper.

Pretty damning, eh? Not so fast! There are several reasons we should doubt the headline result.

For one, rather than randomly assigning participants to Word or LaTeX, the researchers allowed participants to self-select into their respective groups. On one hand, this makes the result even more damning: even basic Word users outperformed expert LaTeX users. On the other hand, the authors themselves admit that preference for the two typesetting programs varied wildly across disciplines (e.g. computer scientists love LaTeX and health researchers prefer Word). It’s perfectly possible that the types of people who select into more math-based disciplines are inherently less efficient at performing the sort of formatting tasks set by the researchers. Indeed, the researchers found that LaTeX users actually outperformed Word users when it came to more complex operations, such as formatting equations.

Furthermore, the researchers only evaluated these typesetting programs along two basic dimensions: formatting speed and error-rates, ignoring other advantages that LaTeX might have over Word. As an empirical researcher, I find it enormously easier to link LaTeX documents to automated data output from programs like Stata, making it simple to update results in a document without having to copy and paste all the time. Word can also do this, but it has always been far clunkier.
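The kind of automation I have in mind looks something like this minimal sketch, where a script (standing in for the stats package) writes an estimate to a small .tex file that the manuscript then pulls in. The macro name, file name, and number are all made up for illustration:

```python
def write_tex_macro(name: str, value: float, path: str) -> None:
    """Save `value` as a LaTeX macro so the document can print the
    latest estimate instead of a pasted-in number."""
    with open(path, "w") as f:
        f.write(f"\\newcommand{{\\{name}}}{{{value:.3f}}}\n")

# e.g. after re-running the estimation:
write_tex_macro("mainCoef", 0.1234, "main_coef.tex")
# In the .tex source:  \input{main_coef.tex} ... our estimate is \mainCoef.
```

Re-running the analysis regenerates the file, and the next compile of the document picks up the new number automatically – no copying and pasting, and no stale results.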

So, in short, the jury is still out. Feel free to return to your respective camps and let the war continue.

This paper sheds light on the relationship between oil rent and the allocation of talent, toward rent-seeking versus more productive activities, conditional on the quality of institutions. Using a sample of 69 developing countries, we demonstrate that oil resources orient university students toward specializations that provide better future access to rents when institutions are weak. The results are robust to various specifications, datasets on governance quality and estimation methods. Oil affects the demand for each profession through a technological effect, indicating complementarity between oil and engineering, manufacturing and construction; however, it also increases the ‘size of the cake’. Therefore, when institutions are weak, oil increases the incentive to opt for professions with better access to rents (law, business, and the social sciences), rather than careers in engineering, creating a deviation from the optimal allocation between the two types of specialization.

In plain speak, the authors posit that when there are large windfalls from natural resources, people will choose careers (and the necessary education) which will allow them to reap the benefits from those windfalls. Normally this involves choosing careers associated with oil extraction, like engineering. However, in weak states where it’s possible to gain access to oil rents in a less-than-legitimate manner, people choose to go into careers which better allow them to get access to those rents, like law or business. Hence talent is ‘misallocated’ in developing countries with weak institutions and oil booms, as the possibility of getting access to oil rents sends people into careers which they are less fit for.

I would not despair so quickly – the empirical results in the paper are more suggestive than definitive, dependent on a handful of mainly cross-country regressions. Still, the results are disconcerting – the authors do not investigate further, but the prospect of societies re-orienting themselves into a structure better suited for rent-seeking likely means that true institutional reform becomes all the more difficult.

The Random Darknet Shopper, an automated online shopping bot with a budget of $100 a week in Bitcoin, is programmed to do a very specific task: go to one particular marketplace on the Deep Web and make one random purchase a week with the provided allowance. The purchases have all been compiled for an art show in Zurich, Switzerland titled The Darknet: From Memes to Onionland, which runs through January 11.
The concept would be all gravy if not for one thing: the programmers came home one day to find a shipment of 10 ecstasy pills, followed by an apparently very legit falsified Hungarian passport – developments which have left some observers of the bot’s blog a little uneasy.

The title of the piece (Robots are starting to break the law and nobody knows what to do about it) elicits worries of AIs gone amok, but the basic conundrum of this piece and others about the Random Darknet Shopper is more complex: if I design an AI which takes a random, blind action in a space which is largely – but not uniformly – illicit, am I legally culpable?

Take this thought experiment: imagine going around your office with a ten dollar bill, offering to buy whatever your colleagues would be willing to sell to you at that price, but under the condition that you do not see the item until the transaction has taken place. If one of your colleagues slipped you some cocaine, who would be at fault? What if you chose to repeat the experiment in an area of town infamous for drug deals? Are you suddenly more culpable?

When I was young, I used to order what they called “Grab Bag” comic packs, where I would pay a set amount of money for an unknown, random assortment of comic books. If someone had slipped a pornographic comic into my grab bag, it’s hard to see how I would be at fault. But where I choose to make my blind transactions seems to change how we perceive culpability.