Cities have two important properties: they are enormously consequential for people’s economic prosperity, and they are very sticky. That stickiness is twofold: cities do not change their shape rapidly in response to changing economic or technological opportunities (consider, e.g., Hornbeck and Keniston on the positive effects of the Great Fire of Boston), and people are hesitant to leave their existing non-economic social networks (Deryugina et al. show that Katrina victims, a third of whom never return to New Orleans, are materially better off as soon as three years after the hurricane, earning more and living in less expensive cities; Shoag and Carollo find that Japanese-Americans randomly placed in internment camps in poor areas during World War II have lower incomes and worse educational outcomes for their children even many years later).

A lot of recent work in urban economics suggests that the stickiness of cities is getting worse, locking in path-dependent effects with even more vigor. A tour-de-force by Shoag and Ganong documents that income convergence across US cities has slowed since the 1970s, that this slowdown occurred only in cities with restrictive zoning rules, and that the primary mechanism is that, as land use restrictions make housing prices elastic with respect to income, working-class folks no longer move from poor cities to rich ones because the cost of housing makes such a move undesirable. Indeed, they suggest a substantial part of growing income inequality, in line with work by Matt Rognlie and others, is due to the fact that owners of land have used political means to capitalize productivity gains into their existing, tax-advantaged asset.

Now, one part of urban stickiness over time may simply be reflecting that certain locations are very productive, that they have a large and valuable installed base of tangible and intangible assets that make their city run well, and hence we shouldn’t be surprised to see cities retain their prominence and nature over time. So today, let’s discuss a new paper by Michaels and Rauch which uses a fantastic historical case to investigate this debate: the rise and fall of the Roman Empire.

The Romans famously conquered Gaul – today’s France – under Caesar, and Britain in stages up through Hadrian (and yes, Mary Beard’s SPQR is worthwhile summer reading; the fact that she and Nassim Taleb do not get along makes it even more self-recommending!). Roman cities popped up across these regions, until the 5th-century invasions wiped out Roman control. In Britain, for all practical purposes the entire economic network faded away: cities hollowed out, trade came to a stop, and imports from outside Britain and Roman coinage are nearly nonexistent in the archaeological record for the next century and a half. In France, the network was not so cleanly broken, with Christian bishoprics rising in many of the old Roman towns.

Here is the amazing fact: today, 16 of France’s 20 largest cities are located on or near a Roman town, while only 2 of Britain’s 20 largest are. This difference existed even back in the Middle Ages. So who cares? Well, Britain’s cities in the middle ages were two and a half times more likely to have coastal access than France’s, so that in 1700, when sea trade was hugely important, 56% of urban French lived in towns with sea access while 87% of urban Brits did. This holds even though, in both countries, cities with sea access grew faster and huge sums of money were put into building artificial canals. Even at a very local level, the France/Britain distinction holds: when Roman cities were within 25km of the ocean or a navigable river, they tended not to move in France, while in Britain they tended to reappear nearer to the water. The fundamental factor behind the shift in both places was that developments in shipbuilding in the early middle ages made the sea much more suitable for trade and military transport than the famous Roman roads which previously played that role.

Now the question, of course, is what drove the path dependence: why didn’t the French simply move to better locations? We know, as in Ganong and Shoag’s paper above, that in the absence of legal restrictions, people move toward more productive places. Indeed, there is a lot of hostility to the idea of path dependence more generally. Consider, for example, the case of the typewriter, which “famously” has its QWERTY form because of an idiosyncrasy in the very early days of the typewriter. QWERTY is said to be much less efficient than alternative key layouts like Dvorak. Liebowitz and Margolis put this myth to bed: not only is QWERTY fairly efficient (you can think much faster than you can type for any reasonable key layout), but typewriter companies spent huge amounts of money on training schools and other mechanisms to get secretaries to switch toward the companies’ preferred keyboards. That is, while it can be true that what happened in the past matters, it is also true that there are many ways to coordinate people to shift to a more efficient path if a suitably large productivity improvement exists.

With cities, coordinating on the new productive location is harder. In France, Michaels and Rauch suggest that bishops and the church began playing the role of a provider of public goods, and that the continued provision of public goods in certain formerly Roman cities led them to grow faster than they otherwise would have. Indeed, Roman cities in France with no bishop show a very similar pattern to Roman cities in Britain: general decline. That sunk costs and non-economic institutional persistence can lead to multiple steady states in urban geography, some of which are strictly worse, has been suggested in smaller-scale studies (e.g., Redding, Sturm, and Wolf, RESTAT 2011, on the shift of Germany’s air hub from Berlin to Frankfurt, or the historical work of Engerman and Sokoloff).

I loved this case study, and appreciate the deep dive into history that collecting data on urban locations over this period required. But the implications of this literature broadly are very worrying. Much of the developed world has, over the past forty years, pursued development policies that are very favorable to existing landowners. This has led to stickiness which makes path dependence more important, and reallocation toward more productive uses less likely, both because cities cannot shift their geographic nature and because people can’t move to cities that become more productive. We ought not artificially wind up like Dijon and Chartres in the middle ages, locking our population into locations better suited for the economy of the distant past.

2016 working paper (RePEc IDEAS). The article is forthcoming in the Economic Journal. With incredible timing, Michaels and Rauch, alongside two other coauthors, have another working paper called Flooded Cities. Essentially, looking across the globe, there are frequent, very damaging floods, occurring every 20 years or so in low-lying areas of cities. And yet, as long as those areas are long settled, people and economic activity simply return to them after a flood. Note that this is true even in countries without US-style flood insurance programs. The implication is that the stickiness of urban networks, amenities, and so on tends to be very strong, and if anything is encouraged by development agencies and governments, yet it means that we wind up with many urban neighborhoods, and many cities, located in places that are quite dangerous for their residents without any countervailing economic benefit. You will see their paper in action over the next few years: despite some neighborhoods flooding three times in three years, one can bet with confidence that population and economic activity will remain on the floodplains of Houston’s bayous. (And in the meantime, setting aside worries about future economic efficiency, I wish only the best for a safe and quick recovery to friends and colleagues down in Houston!)

A great announcement last week, as Dave Donaldson, an economic historian and trade economist, has won the 2017 John Bates Clark medal! This is an absolutely fantastic prize: it is hard to think of any young economist whose work is as serious as Donaldson’s. What I mean by that is that in nearly all of Donaldson’s papers, there is a very specific and important question, a deep collection of data, and a rigorous application of theory to help identify the precise parameters we are most concerned with. It is the modern economic method at its absolute best, and frankly is a style of research available to very few researchers, as the specific combination of theory knowledge and empirical agility required to employ this technique is very rare.

A canonical example of Donaldson’s method is his most famous paper, written back when he was a graduate student: “The Railroads of the Raj”. The World Bank today spends more on infrastructure than on health, education, and social services combined. Understanding the link between infrastructure and economic outcomes is not easy, and indeed it has been a problem at the center of economic debates since Fogel’s famous accounting of the railroad. Further, it is not obvious either theoretically or empirically that infrastructure is good for a region. In the Indian context, no less a sage than Mahatma Gandhi, proponent of traditional village life, felt the British railroads, rather than helping village welfare, “promote[d] evil”, and we have many trade models where falling trade costs plus increasing returns to scale can decrease output and increase income volatility.

Donaldson looks at the setting of British India, where 67,000 kilometers of rail were built, largely for military purposes. India during the British Raj is particularly compelling as a setting due to its heterogeneous nature. Certain seaports – think modern Calcutta – were built up by the British as entrepôts. Many internal regions nominally controlled by the British were left to rot via, at best, benign neglect. Other internal regions were quasi-independent, with wildly varying standards of governance. The most important point, though, is that much of the interior was desperately poor and in a de facto state of autarky: without proper roads or rail until the late 1800s, goods were transported over rough dirt paths, leading to tiny local “marketing regions” similar to what Skinner found in his great studies of China. British India is also useful since data on goods shipped, local weather conditions, and agricultural prices were rigorously collected by the colonial authorities. Nearly all that local economic data sits in dusty tomes in regional offices across the modern subcontinent, but it is at least in principle available.

Let’s think about how a competent empirical microeconomist might go about investigating the effects of the British rail system. It would be a lot of grunt work, but many economists would spend the time collecting data from those dusty old colonial offices. They would then worry that railroads are endogenous to economic opportunity, so they would hunt for reasonable instruments or placebos, such as railroads that were planned yet unbuilt, or railroad segments that skipped certain areas because of temporary random events. They would make some assumptions on how to map agricultural output into welfare, probably just restricting the dependent variable in their regressions to some aggregate measure of agricultural output normalized by price. All that would be left to do is run some regressions and claim that the arrival of the railroad on average raised agricultural income by X percent. And look, this wouldn’t be a bad paper. The setting is important, the data effort heroic, the causal factors plausibly exogenous: a paper of this form would have a good shot at a top journal.

When I say that Donaldson does “serious” work, what I mean is that he didn’t stop with those regressions. Not even close! Consider what we really want to know. It’s not “What is the average effect of a new railroad on incomes?” but rather, “How much did the railroad reduce shipping costs, in each region?”, “Why did railroads increase local incomes?”, “Are there alternative cheaper policies that could have generated the same income benefit?” and so on. That is, there are precise questions, often involving counterfactuals, which we would like to answer, and these questions and counterfactuals necessarily involve some sort of model mapping the observed data into hypotheticals.

Donaldson leverages both reduced-form, well-identified evidence and the broader model we suggested was necessary, and does so in a paper which is beautifully organized. First, he writes down an Eaton-Kortum style model of trade (Happy 200th Birthday to the theory of comparative advantage!) where districts get productivity draws across goods and then trade subject to shipping costs. Consider this intuition: if a new rail line connects Gujarat to Bihar, then the existence of this line will change Gujarat’s trade patterns with every other state, causing those other states to change their own trade patterns, causing a whole sequence of shifts in relative prices that depend on initial differences in trade patterns, the relative size of states, and so on. What Donaldson notes is that if you care about welfare in Gujarat, all of those changes only affect Gujaratis if they affect what Gujaratis end up consuming, or, equivalently, if they affect the real income Gujaratis earn from their production. Intuitively, if pre-railroad Gujarat’s local consumption was 90% locally produced, and after the railroad it was 60% locally produced, then declining trade costs permitted the magic of comparative advantage to drive additional specialization and hence additional Ricardian rents. This is what is sometimes called a sufficient statistics approach: the model implies that the entire effect of declining trade costs on welfare can be summarized by knowing agricultural productivity for each crop in each area, the share of local consumption which is imported, and a few elasticity parameters. Note that the sufficient statistic is a result, not an assumption: the Eaton-Kortum model permits taste for variety, for instance, so we are not assuming away any of that. Now of course the model can be wrong, but that’s something we can actually investigate directly.
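To make the sufficient-statistic logic concrete, here is a toy calculation. I am assuming the ACR-style welfare formula that holds in the Eaton-Kortum class, real income proportional to A·λ^(−1/θ), where λ is the locally produced share of local consumption and θ is the trade elasticity; the 90% and 60% shares come from the illustration above, while θ = 5 is a made-up number, not Donaldson’s estimate.

```python
# Toy illustration of the sufficient-statistic logic (not Donaldson's actual
# estimates): in an Eaton-Kortum economy, real income in region i satisfies
#   W_i  proportional to  A_i * lambda_ii^(-1/theta)
# where lambda_ii is the share of local consumption produced locally and
# theta is the trade elasticity. Only the two shares come from the post;
# theta is a hypothetical value chosen for illustration.

theta = 5.0          # hypothetical trade elasticity
share_before = 0.90  # local consumption share pre-railroad (from the text)
share_after = 0.60   # local consumption share post-railroad (from the text)

# Holding local productivity A_i fixed, the welfare gain from cheaper trade
# is fully captured by the change in the domestic trade share:
welfare_gain = (share_after / share_before) ** (-1.0 / theta)
print(f"Real income rises by {100 * (welfare_gain - 1):.1f}%")
```

The point of the exercise is that nothing about the railroad itself appears in the formula: once you know how the domestic share moved, the entire general-equilibrium effect is pinned down.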

So here’s what we’ll do: first, simply regress real agricultural income in each region on time and region dummies plus a dummy for whether rail has arrived in that region. This regression suggests a rail line increases incomes by 16%, whereas placebo regressions for rail lines that were proposed but canceled show no increase at all. 16% is no joke, as real incomes in India over the period rose only 22% in total! All well and good. But what drives that 16%? Is it really Ricardian trade? To answer that question, we need to estimate the parameters in the sufficient statistics approach to the trade model – in particular, we need the relative agricultural productivity of each crop in each region, the elasticity of trade flows to trade costs (and hence the trade costs themselves), and the share of local consumption which is locally produced (the “trade share”). We’ll then note that, in the model, real income in a region is entirely determined by an appropriately weighted combination of local agricultural productivity and changes in the weighted trade share; hence, if you regress real income minus the weighted local agricultural productivity shock on a dummy for the arrival of a railroad and on the trade share, you should find a zero coefficient on the rail dummy if, in fact, the Ricardian model is capturing why railroads affect local incomes. And even more importantly, if we find that zero, then we understand that efficient infrastructure benefits a region through the sufficient statistic of the trade share, and we can compare the cost-benefit ratio of the railroad to that of other hypothetical infrastructure projects on the basis of a few well-known elasticities.
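Schematically, the two steps look something like the following (my own stylized rendering and notation, not the paper’s exact specification): Y is real agricultural income, A is the weighted local productivity term, λ is the share of local consumption produced locally, and θ is the trade elasticity.

```latex
% Step 1: reduced form -- real income on a rail-arrival dummy
\ln Y_{it} = \alpha_i + \gamma_t + \beta\,\mathrm{RAIL}_{it} + \varepsilon_{it},
\qquad \hat{\beta} \approx 0.16
% Step 2: the model implies Y_{it} \propto A_{it}\,\lambda_{it}^{-1/\theta},
% so after subtracting the productivity term and adding back the trade-share
% term, nothing should be left for the rail dummy to explain:
\ln Y_{it} - \ln A_{it} + \tfrac{1}{\theta}\,\ln \lambda_{it}
= \alpha_i + \gamma_t + \beta'\,\mathrm{RAIL}_{it} + \varepsilon_{it},
\qquad \hat{\beta}' \approx 0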

So that’s the basic plot. All that remains is to estimate the model parameters, a nontrivial task. First, to get trade costs, one could simply use published freight rates for boats, overland travel, and rail, but this wouldn’t be terribly compelling; bandits, spoilage, and all the rest of Samuelson’s famous “iceberg” costs, like linguistic differences, raise trade costs as well. Donaldson instead looks at the differences between origin and destination prices for goods produced in only one place – particular types of salt – before and after the arrival of a railroad. He then uses a combination of graph theory and statistical inference to estimate the decline in trade costs between all region pairs. Given massive heterogeneity in trade costs by distance – crossing the Western Ghats is very different from shipping a boat down the Ganges! – this technique is far superior to simply assuming trade costs are linear in distance for rail, road, or boat.
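The graph-theory step can be sketched roughly as follows. This is my illustration, not Donaldson’s code: districts are nodes, transport links are edges with a length and a mode, and each mode has a per-kilometer cost; the trade cost between two districts is the cheapest path through the network. In the paper the mode costs are inferred by matching predicted route costs to observed salt price gaps; here both the network and the cost parameters are invented toy numbers.

```python
import heapq

# Illustrative sketch (not Donaldson's code): trade costs between districts
# are modeled as the cheapest path through a multimodal network. Each edge
# has a length in km and a mode; each mode has a per-km cost that, in the
# actual estimation, would be chosen to fit observed salt price gaps.

mode_cost = {"road": 4.0, "river": 2.0, "rail": 1.0}  # hypothetical params

edges = {  # district -> list of (neighbor, km, mode); all toy values
    "A": [("B", 100, "road"), ("C", 150, "river")],
    "B": [("A", 100, "road"), ("D", 80, "road")],
    "C": [("A", 150, "river"), ("D", 200, "rail")],
    "D": [("B", 80, "road"), ("C", 200, "rail")],
}

def cheapest_cost(src, dst):
    """Dijkstra's algorithm: minimal freight cost from src to dst."""
    best = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        cost, node = heapq.heappop(pq)
        if node == dst:
            return cost
        if cost > best.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, km, mode in edges[node]:
            c = cost + km * mode_cost[mode]
            if c < best.get(nbr, float("inf")):
                best[nbr] = c
                heapq.heappush(pq, (c, nbr))
    return float("inf")

# With rail available, A -> D goes via river + rail (150*2 + 200*1 = 500)
# rather than all-road (100*4 + 80*4 = 720).
print(cheapest_cost("A", "D"))
```

Removing the rail edges and re-running the same shortest-path computation gives the pre-railroad cost matrix, so the decline in trade costs between every region pair falls out of one estimated set of mode parameters.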

Second, he checks whether lowered trade costs actually increased trade volume, and at what elasticity, using local rainfall as a proxy for local productivity shocks. The use of rainfall data is wild: for each district, he gathers rainfall deviations over the sowing-to-harvest window individually for each crop. This identifies the agricultural productivity distribution parameters by region and therefore, in the Eaton-Kortum type model, lets us calculate the elasticity of trade volume to trade shocks. Salt shipments plus crop-by-region specific rain shocks give us all of the model parameters which aren’t otherwise available in the British data. Throwing these parameters into the model regression, we do in fact find that, once agricultural productivity shocks and the weighted trade share are accounted for, the effect of railroads on local incomes is not much different from zero. The model works, and note that real income changes based on the timing of the railroad were at no point used to estimate any of the model parameters! That is, if you told me that Bihar had positive rain shocks which increased output of its crops by 10% in the last ten years, and that the share of local production which is eaten locally went from 60% to 80%, I could tell you with quite high confidence the change in local real incomes without even needing to know when the railroad arrived – this is the sense in which those parameters are a “sufficient statistic” for the full general equilibrium trade effects induced by the railroad.

Now this doesn’t mean the model has no further use: indeed, that the model appears to work gives us confidence to take it more seriously when looking at counterfactuals like, what if Britain had spent money developing more effective seaports instead? Or building a railroad network to maximize local economic output rather than on the basis of military transit? Would a noncolonial government with half the resources, but whose incentives were aligned with improving the domestic economy, have been able to build a transport network that improved incomes more even given their limited resources? These are first order questions about economic history which Donaldson can in principle answer, but which are fundamentally unavailable to economists who do not push theory and data as far as he was willing to push them.

The Railroads of the Raj paper is canonical, but far from Donaldson’s only great work. He applies a similar Eaton-Kortum approach to investigate how rail affected the variability of incomes in India, and hence the death rate. Up to 35 million people perished in famines in India in the second half of the 19th century, as the railroad was being built, and these famines appeared to end (1943 being an exception) afterwards. Theory is ambiguous about whether openness increases or decreases the variance of your welfare. On the one hand, in an open economy, the price of potatoes is determined by the world market and hence the price you pay for potatoes won’t swing wildly up and down depending on the rain in a given year in your region. On the other hand, if you grow potatoes and there is a bad harvest, the price of potatoes won’t go up and hence your real income can be very low during a drought. Empirically, less variance in prices in the market after the railroad arrives tends to be more important for real consumption, and hence for mortality, than the lower prices you can get for your own farm goods when there is a drought. And as in the Railroads of the Raj paper, sufficient statistics from a trade model can fully explain the changes in mortality: the railroad decreased the effect of bad weather on mortality completely through Ricardian trade.

Leaving India, Donaldson and Richard Hornbeck took up Fogel’s intuition that the importance of the railroad to the US depends on the difference between trade that is worthwhile when the railroad exists and trade that is worthwhile when only alternatives like better canals or roads exist. That is, if it costs $9 to ship a wagonful of corn by canal, and $8 to do the same by rail, then even if all corn is shipped by rail once the railroad is built, we oughtn’t ascribe all of that trade to the rail. Fogel assumed relationships between land prices and the value of the transportation network. Hornbeck and Donaldson instead estimate that relationship, again deriving a sufficient statistic for the value of market access. The intuition is that adding a rail link from St. Louis to Kansas City will also affect relative prices, and hence agricultural production, in every other region of the country, and these spatial spillovers can be quite important. The new line changes market access costs in Kansas City as well as relative prices elsewhere, but clever application of theory still permits a Fogel-style estimate of the value of rail to be made.

Moving beyond railroads, Donaldson’s trade work has also been seminal. With Costinot and Komunjer, he showed how to rigorously estimate the empirical importance of Ricardian trade for overall gains from trade. Spoiler: it isn’t that important, even if you adjust for how trade affects market power, a result seen in a lot of modern empirical trade research which suggests that aspects like variety differences are more important than Ricardian productivity differences for gains from international trade. There are some benefits to Ricardian trade across countries being relatively unimportant: Costinot, Donaldson and Smith show that changes to what crops are grown in each region can massively limit the welfare harms of climate change, whereas allowing trade patterns to change barely matters. The intuition is that there is enough heterogeneity in what can be grown in each country when climate changes to make international trade relatively unimportant for mitigating these climate shifts. Donaldson has also rigorously studied in a paper with Atkin the importance of internal rather than international trade costs, and has shown in a paper with Costinot that economic integration has been nearly as important as productivity improvements in increasing the value created by American agriculture over the past century.

Donaldson’s CV is a testament to how difficult this style of work is. He spent eight years at LSE before getting his PhD, and published only one paper in a peer-reviewed journal in the 13 years following the start of his graduate work. “Railroads of the Raj” has been forthcoming at the AER for literally half a decade, despite the fact that this work is the core of what got Donaldson a junior position at MIT and a tenured position at Stanford. Is it any wonder that so few young economists want to pursue a style of research that is so challenging and so difficult to publish? Let us hope that Donaldson’s award encourages more of us to fully exploit not only the incredible data we all now have access to, but also the beautiful body of theory that induces deep insights from that data.

This site is seven years old, during which time I have not written a single post which is not explicitly about economics research. The posts have collectively reached well over a half million readers in this time, and I have been incredibly encouraged to see how many folks, even outside of academia, are interested in how economics, and economic theory in particular, can help explain the social world.

I hope you’ll permit me to take one post where I break the “economic research only” rule. The executive order issued yesterday banning entry into the United States for citizens of seven nations is an abomination, and directly contrary to both the words of Lazarus’ poem on the Statue of Liberty and the 1965 immigration reform which banned discrimination on the basis of national origin. It is an absolute disgrace, particularly to me as an American who, like the majority of my countrymen, sees the immigrant experience as the greatest source of pride the country has to offer. Every academic, including myself, has friends and colleagues and coauthors from the countries included in this ban.

I understand that there are citizens of the affected countries worried about how their studies will be able to continue given these immigration restrictions. While my hope is that the courts will overturn this un-American executive order, I want our friends from these countries to know that there are currently plans in the works to assist you. If you are an economics or strategy student affected by this order, or have students in those fields who may need temporary academic accommodation elsewhere, please email me at kevin.bryan@rotman.utoronto.ca . This is of particular importance for students from the affected countries who are unable to return to the United States from present foreign travel. I can’t make any promises, but I have been in contact with a number of universities who may be able to help. If you are a PhD program director who may be able to help, I’d ask you to also contact me, and I can keep you informed as to how things are progressing and how you can assist.

There is a troubling, nativist, anti-liberal (in the sense of Hume and Smith and Mill) streak in the world at the moment. The progress of knowledge depends on an open, free, and international system of cooperation. We in academia must stand up for this system, and for our friends who are being shut out of it.

Empirically, bonus pay as a component of overall remuneration has become more common over time, especially in highly competitive industries which involve high levels of human capital; think of something like the management of Fortune 500 firms, where managers now have their salaries determined globally rather than locally. This doesn’t strike most economists as a bad thing at first glance: as long as we are measuring productivity correctly, workers who are compensated based on their actual output will both exert the right amount of effort and have the incentive to improve their human capital.

In an intriguing new theoretical paper, however, Benabou and Tirole point out that many jobs involve multitasking, where workers can take hard-to-measure actions for intrinsic reasons (e.g., I put effort into teaching because I intrinsically care, not because academic promotion really hinges on being a good teacher) or take easy-to-measure actions for which there might be some kind of bonus pay. Many jobs also involve screening: I don’t know who is high quality and who is low quality, and although I would optimally pay people a bonus exactly equal to their cost of effort, I am unable to do so since I don’t know what that cost is. Multitasking and worker screening interact among competitive firms in a really interesting way, since how other firms incentivize their workers affects how workers will respond to my contract offers. Benabou and Tirole show that this interaction means that more competition in a sector, especially when there is a big gap between the quality of different workers, can actually harm social welfare even in the absence of any other sort of externality.

Here is the intuition. For multitasking reasons, when the different things workers can do are substitutes, I don’t want to give big bonus payments for the observable output, since if I do, the worker will put in too little effort on the intrinsically valuable task: if you pay a trader big bonuses for financial returns, she will not put as much effort into ensuring all the laws and regulations are followed. If there are other finance firms, though, they will make it known that, hey, we pay huge bonuses for high returns. As a result, workers will sort, with all of the high quality traders moving to the high-bonus firm, leaving only the low quality traders at the firm with low bonuses. Bonuses are used not only to motivate workers, but also to differentially attract high quality workers when quality is otherwise tough to observe. There is a tradeoff, then: you can either retain only low productivity workers but get the balance between hard-to-measure tasks and easy-to-measure tasks right, or you can retain some high quality workers with large bonuses that make those workers exert too little effort on hard-to-measure tasks. When the latter is more profitable, all firms inefficiently begin offering large, effort-distorting bonuses, something they wouldn’t do if they didn’t have to compete for workers.
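To put the welfare logic in numbers, here is a deliberately crude toy, entirely my own construction rather than Benabou and Tirole’s actual model: a worker splits one unit of effort between a measured task and an intrinsically motivated task, with measured effort rising one-for-one in the bonus rate, and society valuing the unmeasured task more at the margin. All the functional forms and numbers below are invented for illustration.

```python
# Crude toy, NOT Benabou-Tirole's model. A worker has one unit of effort:
# e_measured = b (responds to the bonus rate b in [0, 1]) and
# e_intrinsic = 1 - b (the two tasks are substitutes). Society values
# measured output at 1 per unit of effort and the hard-to-measure task at
# 1.5, so escalating bonuses shift effort toward the less valuable margin.

V_MEASURED, V_INTRINSIC = 1.0, 1.5   # hypothetical social values

def welfare(b):
    """Social value of one worker's effort allocation at bonus rate b."""
    return V_MEASURED * b + V_INTRINSIC * (1.0 - b)

b_monopsony = 0.2    # a restrained bonus a single employer might choose
b_competitive = 0.8  # a bonus bid up by firms screening for high types

print(welfare(b_monopsony))    # higher social value
print(welfare(b_competitive))  # lower: effort diverted from intrinsic task
```

The toy has no screening in it, of course; the paper’s point is precisely that competition for unobservably high-quality workers is what pushes firms from the low-bonus allocation to the high-bonus one, even though everyone can see that welfare falls.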

How can we fix things? One easy method is with a bonus cap: if the bonus is capped at the monopsony optimal bonus, then no one can try to screen high quality workers away from other firms with a higher bonus. This isn’t as good as it sounds, however, because there are other ways to screen high quality workers (such as offering lower clawbacks if things go wrong) which introduce even worse distortions, hence bonus caps may simply cause less efficient methods to perform the same screening and same overincentivization of the easy-to-measure output.

When the individual rationality or incentive compatibility constraints in a mechanism design problem are determined in equilibrium, based on the mechanisms chosen by other firms, we sometimes call this a “competing mechanisms” problem. It seems to me that there are quite a number of open questions concerning how to make these sorts of problems tractable; a talented young theorist looking for a fun summer project might find it profitable to investigate this as-yet small literature.

Beyond the theoretical result on screening plus multitasking, Tirole and Benabou also show that their results hold for market competition more general than just perfect competition versus monopsony. They do this through a generalized version of the Hotelling line which appears to have some nice analytic properties, at least compared to the usual search-theoretic models which you might want to use when discussing imperfect labor market competition.

William Baumol, who strikes me as one of the leading contenders for a Nobel in the near future, has written a surprising amount of interesting economic history. Many economic historians see innovation – the expansion of ideas and the diffusion of products containing those ideas, generally driven by entrepreneurs – as critical for growth. But it is very difficult to see any reason why the “spirit of innovation” or the net amount of cleverness in society varies over time. Indeed, great inventions, as undeveloped ideas, occur almost everywhere at almost all times. The steam engine of Heron of Alexandria, which was used for parlor tricks like opening temple doors and little else, is surely the most famous example of a great idea left undeveloped.

Why, then, do entrepreneurs develop ideas and cause products to diffuse widely at some times in history and not at others? Schumpeter gave five roles for an entrepreneur: introducing new products, new production methods, new markets, new supply sources or new firm and industry organizations. All of these are productive forms of entrepreneurship. Baumol points out that clever folks can also spend their time innovating new war implements, or new methods of rent seeking, or new methods of advancing in government. If incentives are such that those activities are where the very clever are able to prosper, both financially and socially, then it should be no surprise that “entrepreneurship” in this broad sense is unproductive or, worse, destructive.

History offers a great deal of support here. Despite quite a bit of productive entrepreneurship in the Middle East before the rise of Athens and Rome, the Greeks and Romans, especially the latter, are well-known for their lack of widespread diffusion of new productive innovations. Beyond the steam engine, the Romans also knew of the water wheel yet used it very little. There are countless other examples. Why? Let’s turn to Cicero: “Of all the sources of wealth, farming is the best, the most able, the most profitable, the most noble.” Earning a governorship and stripping assets was also seen as noble. What we now call productive work? Not so much. Even the freed slaves who worked as merchants had the goal of, after acquiring enough money, retiring to “domum pulchram, multum serit, multum fenerat”: a fine house, land under cultivation and short-term loans for voyages.

Baumol goes on to discuss China, where passing the imperial exam and moving into government was the easiest way to wealth, and the early middle ages of Europe, where seizing assets from neighboring towns was more profitable than expanding trade. The historical content of Baumol’s essay was greatly expanded in a book he edited alongside Joel Mokyr and David Landes called The Invention of Enterprise, which discusses the relative return to productive entrepreneurship versus other forms of entrepreneurship from Babylon up to post-war Japan.

The relative incentives for different types of “clever work” are relevant today as well. Consider Luigi Zingales’ new lecture, Does Finance Benefit Society? I can’t imagine anyone would consider Zingales hostile to the financial sector, but he nonetheless discusses in exhaustive detail the ways in which incentives push some workers in that sector toward rent-seeking and fraud rather than innovation which helps the consumer.

Final JPE copy (RePEc IDEAS). Murphy, Shleifer and Vishny have a paper, also from the JPE in 1990, on the topic of how clever people in many countries are incentivized toward rent-seeking; their work is more theoretical and empirical than historical. If you are interested in innovation and entrepreneurship, I uploaded the reading list for my PhD course on the topic here.

Before discussing a lovely application of High Micro Theory to a long-standing debate in macro in a post coming right behind this one, a personal note: starting this summer, I am joining the Strategy group at the University of Toronto Rotman School of Management as an Assistant Professor. I am, of course, very excited about the opportunity, and am glad that Rotman was willing to give me a shot even though I have a fairly unusual set of interests. Some friends asked recently if I have any job market advice, and I told them that I basically just spent five years reading interesting papers, trying to develop a strong toolkit, and using that knowledge base to attack questions I am curious about as precisely as I could, with essentially no concern about how the market might view this. Even if you want to be strategic, though, this type of idiosyncrasy might not be a bad strategy.

Consider the following model: any school evaluates you according to v+e(s), where v is a common signal of your quality and e(s) is a school-specific taste shock. Your best offer comes from whichever school s maximizes v+e(s); essentially, your outcome is a first-order (maximum) statistic. What this means is that increasing v (by being smarter, or harder-working, or in a hotter field) and increasing the variance of e (by, e.g., working on very specific topics even if they are not “hot”, or by developing an unusual set of talents) are both effective in garnering a job you will be happy with. And, at least in my case, increasing v provides disutility whereas increasing the variance of e can be quite enjoyable! If you do not want to play such a high-variance strategy, though, my friend James Bailey (heading from Temple’s PhD program to work at Creighton) has posted some more sober yet still excellent job market advice. I should also note that writing a research-oriented blog seemed to be weakly beneficial as far as interviews were concerned; in perhaps a third of my interviews, someone mentioned this site, and I didn’t receive any negative feedback. Moving from personal anecdote to the most minimal sense of the word “data”, Jonathan Dingel of Trade Diversion also seems to have had a great deal of success. Given this, I would suggest that there isn’t much need to worry that writing publicly about economics, especially if restricted to technical content, will torpedo a future job search.
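The first-order-statistic logic above is easy to check by simulation. The sketch below is purely illustrative – the 50 schools, the normal taste shocks, and all parameter values are my assumptions, not anything from the model as stated – but it shows that doubling the spread of e raises your expected best offer just as surely as raising v does.

```python
import random
import statistics

def expected_best_offer(v, e_sd, n_schools=50, n_sims=5000, seed=0):
    """Average of max_s [v + e(s)] across simulated markets,
    where e(s) is a school-specific taste shock ~ Normal(0, e_sd)."""
    rng = random.Random(seed)
    best = []
    for _ in range(n_sims):
        offers = [v + rng.gauss(0, e_sd) for _ in range(n_schools)]
        best.append(max(offers))
    return statistics.mean(best)

# Baseline candidate: common quality 0, unit-variance taste shocks.
low = expected_best_offer(v=0.0, e_sd=1.0)
# Idiosyncratic candidate: same quality, double the taste-shock spread.
high = expected_best_offer(v=0.0, e_sd=2.0)
# Conventionally "better" candidate: higher v, ordinary shocks.
bumped_v = expected_best_offer(v=2.0, e_sd=1.0)
```

With standard normal shocks, the expected maximum over 50 schools is roughly 2.25, so doubling the shock spread lifts the expected best offer by about as much as a two-unit improvement in v – the high-variance strategy and the grind-harder strategy land in the same place.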

Petra Moser is unquestionably doing the most interesting data-driven work on invention and growth of any economist working today; indeed, if only she applied her great data more directly to puzzles in theory, I think she would become a good bet for a Clark medal in a few years. The present paper, with Alessandra Voena, a star on last year’s junior job market, is forthcoming in the AER, and deservedly so.

The problem at hand is compulsory licensing. This is a big deal in the Doha Round of WTO negotiations, since many poor and middle-income countries (think Thailand and Brazil) force drugmakers to license some particularly important drugs to local manufacturers. This helps lower the cost of AIDS retrovirals, but probably also has some negative effect on the incentive to develop newer drugs for diseases prevalent in the third world. But the tradeoff is not this simple! Because the drugs are licensed to local firms, who then produce them, there is some technology transfer and presumably some learning-by-doing. Does compulsory licensing help infant industries grow in the recipient country? And by how much?

The historical experiment is the Trading with the Enemy Act. During WWI, the US government seized a bunch of property owned by German firms, including their patents. They then licensed these at low cost to US firms. Germany was well ahead of the US technologically in organic chemistry, and Moser and Voena use this fact to study the impact of compulsory licenses for a variety of chemical dyes. They find that in (very narrowly defined) technological areas where patents were licensed, future propensity to patent by US firms roughly doubled. No such increase was seen among non-American firms, which didn’t have access to such cheap licenses. The impact on future patents occurred a few years after WWI, consistent with a learning-by-doing story. Relevant for the Doha Round debate, German firms quickly began working on new chemistry inventions after the war, which you might interpret as consistent with a one-time seizure of IP having no long-term impact on invention if it is truly an exceptional circumstance.

Why the delay if you have a patent explaining what to do? It turns out that patents – and this is true even today – are often woefully insufficient to replicate an original invention. DuPont’s first attempt at (German-invented) indigo dye turned out green instead of blue! BASF’s Haber-Bosch process patent didn’t include certain tricky details regarding the chemical nature of the appropriate catalyst; it took 10 years for US firms to figure out the secret.

http://ssrn.com/abstract=1313867 (July 2011 working paper). Moser and Voena also, according to Moser’s website, have a forthcoming working paper on the impact of TRIPS licensing on the US pharma industry which I certainly want to check out. As far as I know, that paper hasn’t begun to circulate.

“You can’t compete with free,” right? Somehow, iTunes and other online distributors manage to sell a large number of TV episodes a la carte, even though free pirated copies of these shows are widely available on BitTorrent. Are there just two different types of consumers with different moral preferences? Or might many consumers become pirates if incentivized to do so? How much does piracy eat into sales?

Danaher et al have a great natural experiment. In 2007, NBC played hardball with Apple over iTunes pricing of individual episodes. From December 2007 to September 2008, NBC and affiliate shows were not available on iTunes, the dominant legal site for TV downloads. The authors scraped daily reports on torrent traffic for a huge number of TV episodes, as well as daily Amazon sales data for the box sets of these shows.

The results are insightful. Piracy of NBC shows jumped 11.4% after the shows were removed from iTunes. This increase may be understated since it is 11.4% above and beyond the increase in piracy of non-NBC classic shows during the same period – if NBC’s actions led some users to try piracy, and those users also began to pirate ABC shows, the actual effect of removing the legal channel for NBC shows on total online piracy may be even bigger than 11.4%. To put that number in perspective, the increase in NBC downloads per week was approximately twice the total number of downloads of these shows via iTunes when they were available. The impact on DVD box set sales is close to zero, perhaps suggesting that “digital consumers” are in this instance quite separate in their demand from buyers of DVDs.

What might lead to this result? One explanation is that piracy involves a fixed cost, such as learning BitTorrent or “getting over one’s moral qualms.” Once that cost is paid, all content is free, hence demand will be higher than demand for $2 episodes on iTunes. This is further supported by the fact that NBC piracy did not fall back to its November 2007 level after legal NBC shows returned to iTunes in 2008.
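The fixed-cost story can be made concrete with a toy decision rule. This is a minimal sketch under my own illustrative assumptions (the $15 fixed cost is made up; the $2 episode price is from the post); the key feature is that the fixed cost is sunk once paid, so piracy stays attractive even after the legal channel returns.

```python
def chooses_piracy(episodes_demanded, itunes_price=2.0,
                   fixed_cost=15.0, already_paid=False):
    """A consumer pirates when the one-time fixed cost of piracy
    (learning BitTorrent, overcoming moral qualms) is below total
    legal spending; once paid, that cost is sunk and never recurs."""
    legal_cost = itunes_price * episodes_demanded
    pirate_cost = 0.0 if already_paid else fixed_cost
    return pirate_cost < legal_cost

# Light viewer: $10 of legal demand doesn't justify a $15 fixed cost.
light = chooses_piracy(5)            # False
# Heavy viewer: $20 of legal demand does.
heavy = chooses_piracy(10)           # True
# Former light viewer who paid the fixed cost during the blackout:
# with the cost sunk, piracy dominates even after iTunes returns.
converted = chooses_piracy(5, already_paid=True)  # True
```

The last case is the mechanism behind the persistence result: consumers pushed over the fixed-cost threshold during the NBC blackout have no reason to switch back once legal availability is restored.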

Two takeaways here. For piracy researchers, modeling consumers as “non-pirates” and “pirates” who do not respond to incentives across this divide is probably not accurate. For firms, when facing competition with free bootleg copies, the costs of mistakes in pricing strategy can be severe indeed!