Economic Logic, Too

About Me

I discuss recent research in Economics and various events from an economic perspective, as the name of the blog indicates. I plan on adding posts approximately every workday, with some exceptions, for example when I travel.

Tuesday, December 31, 2013

We all know politicians lie, no surprise here. What we do not quite know is how and why they lie. Indeed, they generally do not tell outright lies. They exaggerate or add some "extra spice" to their statements. How badly they lie likely depends on the political context.

Alessandro Bucciol and Luca Zarri, from a country long led by a professional liar, decide to focus on politicians from the United States. They use data from PolitiFact.com about 7000 claims by 1000 national politicians from 2007 to 2012. They determine that Republicans lie more than Democrats, which should not be surprising given the influence of the Tea Party on Republicans. I am thus not sure this ranking will last once the Republican Party gets back to its roots. More interesting are the variations beyond party lines. Politicians lie less in battleground states (where the stakes, or the scrutiny, are higher), in more educated states, and in the South. And health-related issues are the subject of the most lies.

I am not quite sure how to generalize these results. As mentioned, the current context for the Republican Party is out of the ordinary. Also, health care has been a central issue on the national political agenda over these years. All this can change, and it may be different in other countries. But it is interesting to see that definite patterns are emerging. If we can rationalize them, maybe we can then think about policies that would minimize lying. And hope for politicians to adopt them. A good resolution for the new year.

Monday, December 30, 2013

In countries where some parliament chamber allocates the same number of seats to each member state regardless of its population, small states are deemed to enjoy a disproportionately strong influence. One paper that analyzes whether this small-state effect is empirically significant, by Gary Hoover and Paul Pecorino, shows that US states with higher per capita representation also get more federal funding. Does this mean that the question is now closed? Of course not, as the scientific process would tell us to revisit this to test whether it holds more generally, whether the effect disappears with time, or whether it is robust to different specifications.

Stratford Douglas and Robert Reed (link corrected) address the latter question. They run a robustness exercise that is unfortunately all too rare in Economics. They confirm the results of Hoover and Pecorino, but find that when you switch from ordinary least squares to cluster-robust standard errors and include population growth, the small-state effect vanishes. So we are not done with this question.
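For readers curious what that switch does, here is a minimal, self-contained sketch (all data simulated, all numbers made up) of why clustering matters: when the regressor and the shocks are shared within groups, the usual iid standard errors are far too small, while the Liang-Zeger cluster-robust ones account for the within-group correlation.

```python
import random
import math

random.seed(42)

# Simulate 20 "states", 10 observations each; the regressor and a shock
# are shared within each state, so errors are correlated within clusters.
n_clusters, n_per = 20, 10
x, y, cluster = [], [], []
for g in range(n_clusters):
    xg = random.gauss(0, 1)          # cluster-level regressor
    ug = random.gauss(0, 2)          # common cluster shock
    for _ in range(n_per):
        x.append(xg)
        y.append(1.0 + 0.5 * xg + ug + random.gauss(0, 0.5))
        cluster.append(g)

# Ordinary least squares for a simple regression, by hand
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
a = ybar - b * xbar
resid = [yi - a - b * xi for xi, yi in zip(x, y)]

# Classical (iid) standard error of the slope
s2 = sum(e ** 2 for e in resid) / (n - 2)
se_classical = math.sqrt(s2 / sxx)

# Liang-Zeger cluster-robust standard error: sum the score contributions
# within each cluster before squaring
scores = {}
for xi, ei, g in zip(x, resid, cluster):
    scores[g] = scores.get(g, 0.0) + (xi - xbar) * ei
se_cluster = math.sqrt(sum(s ** 2 for s in scores.values())) / sxx

print(f"slope {b:.3f}, classical SE {se_classical:.3f}, cluster SE {se_cluster:.3f}")
```

With cluster-level shocks this large, the cluster-robust standard error comes out several times the classical one, which is exactly how a "significant" effect can stop being significant under the more honest specification.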

We should have more replication studies in Economics. It saddens me that Douglas and Reed felt the need to add the following footnote on the front page: "we wish to express our special appreciation to Gary Hoover and Paul Pecorino for their willingness to allow their study to be subject to critical analysis. Openness and integrity such as theirs is the basis by which science advances." This should be obvious.

Monday, December 23, 2013

When we think about a social planner that maximizes welfare by assigning optimal allocations without an explicit price system, we are really describing a Soviet economy. History has shown that this utopia does not quite work out, for a variety of reasons. Yet, Soviet economies were following this doctrine, and their governments acted on principles that must have come from somewhere: what should one allocate where, how should allocations change with changes in exogenous factors, etc. Russia actually has a rich history of economic theoreticians who worked out models to guide the policy makers, who liked to think of themselves as technocrats. These theoreticians were mostly mathematicians working on various optimization techniques.

Ivan Boldyrev and Olessia Kirtchik describe the life of Victor Polterovich, who expanded Walrasian theory to non-market economies in the 1970s and was the only active Soviet economist with visibility in the West during that period: he has an Econometrica in 1983 and another one in 1993, and a few articles in the Journal of Mathematical Economics in between (see his page on IDEAS), and he is a fellow of the Econometric Society. While Polterovich, like many others, started his academic career with Marxist planning theories, his move to general equilibrium theory may seem puzzling. Indeed, the welfare theorems have often been touted as a victory for the market economy, and Polterovich would certainly have been ill-advised to promote a market economy.

The paper is largely based on interviews of Polterovich that reveal interesting anecdotes, such as the unique history of his first Econometrica and how some of his most important results never got translated. The other Soviet economists did not go through the trouble of integrating with the international research community, and I am sure there are still interesting results that are ignored by the wider general equilibrium theory community. Polterovich came to general equilibrium theory by realizing that one needs at least as many instruments as objectives to manage an economy optimally. That did not seem feasible to him, hence his interest in decentralization. In his early models, agents interact, possibly forming coalitions. Keep in mind that to Soviets, agents were not individuals but political entities or firms. Later, price constructs are introduced, and they are helpful in understanding coordination among agents.

Saturday, December 21, 2013

Yes, this is the seventh year of blogging. Will I enter a prolonged slump like some faculty do after obtaining tenure? Am I due for a sabbatical? Unfortunately, both may happen. As announced last summer, my new responsibilities make it difficult for me to maintain the pace I have had in previous years. And it has shown in the last six months: I have missed days, I have been wrong on at least one occasion, and my posts have become shorter. Yet, I am more and more impressed by the following this blog is receiving, and I hope the same will hold for Economic Logic, Too, where I invite others to post comments about papers they read.

Traditionally, I have reviewed the most popular posts of the year. For reasons I do not quite understand, this year's list only contains posts from this year. So here they are:

Friday, December 20, 2013

Social mobility has been much studied to understand how the poor have a shot at becoming rich and how the rich manage to preserve their status. Such studies are usually limited to mobility during a lifetime for a single individual or for a family from one generation to the next. Going beyond this time frame is virtually impossible, because there is no panel dataset for wealth or income that spans several generations. One can, however, discover some interesting proxies that allow one to create such a dataset.

This is what Gregory Clark and Neil Cummins do in a pair of papers that exploit the fact that people with rare surnames are highly likely to be from the same family. Using national birth and death registries for England and Wales as well as probate registries that recorded wealth at death, they gather records for 21,618 people over about 150 years in the first paper. The second paper focuses on educational status instead of wealth over eight centuries and uses registries of students at Cambridge and Oxford universities as well as censuses for the rest of the population. In both cases, intergenerational correlations are estimated to be much higher than in studies with shorter samples. It can take 20 to 30 generations for an initial status to disappear. This may be indicative that social mobility has increased in recent generations in England and Wales (my interpretation, although Clark and Cummins argue that intergenerational persistence is stable over centuries despite stark changes in inheritance taxation) or that families have an underlying social status that changes much more slowly than characteristics that are easier to observe (the authors' interpretation).
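A quick back-of-the-envelope shows why those numbers are so striking. If status follows a simple AR(1) process with intergenerational persistence b, the initial advantage decays like b to the power n after n generations; the persistence values below are purely illustrative (a conventional one-generation estimate, and two higher values in the spirit of the surname results).

```python
import math

# Generations needed for an initial status advantage to fall below
# 10% of its starting value, for various persistence levels b:
# solve b**n = 0.10, i.e. n = log(0.10) / log(b).
for b in (0.4, 0.75, 0.9):
    n = math.log(0.10) / math.log(b)
    print(f"persistence {b}: about {n:.0f} generations")
```

With a persistence around 0.4, as in typical father-son income studies, status fades in a couple of generations; pushing it toward 0.9 is what gets you into the 20-to-30-generation range the surname data suggest.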

PS: If you are looking at the papers, do not be surprised to see the same abstract on both. Very negligent LSE staff posted similar cover pages on both papers.

Thursday, December 19, 2013

A major difference between American and other universities is the professionalization of their administration. Typically, they are managed by former faculty who have specialized in higher education administration and, as is becoming more and more frequent, by administrators who have never been academics. While the result is universities that put, in my opinion, excessive emphasis on non-academic endeavors like athletics, student living and other student entertainment, there is little doubt that the academics are also in better shape than elsewhere. When faculty are in charge, I suppose there is too much rent seeking. It would be good, though, to have this formalized in some way for better analysis.

Kathleen Carroll, Lisa Dickson and Jane Ruseski build a model of university administration where the extent of faculty involvement may vary exogenously. The model is rather trivial and does not deliver unexpected results: the more faculty participate, the more academic affairs get priority, and this is socially optimal if there are externalities from academics to non-academics. What would have really made the paper interesting is to take the model to the data and actually provide some quantification of the effects. How much does faculty participation matter? What is the size of cross-effects between academics and non-academics? How big should the administration be? Too bad this paper was only about trivial theory.

Wednesday, December 18, 2013

Generally, frictions in markets are viewed as something to avoid, except in rare cases like when they prevent excessive and damaging volatility. For labor markets in particular, frictions lead to unnecessary delays in matchings, misallocations of talent and higher unemployment. It would be difficult to find an advocate for frictions on the labor markets, unlike for some financial markets.

Well, there are in fact some advocates, such as Andriy Zapechelnyuk and Ro'i Zultan. Their point is that frictions on the labor market are costly for the unemployed, thus the employed will exert extra effort to avoid becoming unemployed. The same applies to employers, who dread the cost of an unfilled vacancy and avoid firing workers. While this could lead to misallocations persisting, Zapechelnyuk and Zultan claim that it is possible to find some level of search frictions that is optimal for welfare as long as there is a sufficient level of moral hazard in job search. This also means that higher unemployment benefits could lead to lower productivity for those working, as they feel less hard-pressed to perform to avoid losing their job. But keep in mind that these unemployment benefits also allow the unemployed to wait for a better match, so it is really difficult to sort all these effects out without some quantitative exercise, which this paper is unfortunately lacking.

Tuesday, December 17, 2013

Many people are thinking about the Chinese economy, and all too often they apply to it the tools they are used to, for example models with competitive markets. That does not quite apply to China, despite its recent liberalization, as vast sectors of the economy are still under government control. The fact that China is different is quite apparent in the fact that it is the only economy (that I know of) where the share of labor income in national income is less than half. One needs some serious market distortion to get to such an abnormal outcome.

David Dollar and Benjamin Jones do the right thing and make the effort to model the Chinese economy as it should be done: capital controls, five-year plans trying to maximize output, controlled internal migration with wage discrimination, state ownership of all land. With this, Dollar and Jones are able to replicate the labor income share, as well as the high investment and savings rates. They also find that if one were to relax China's special features, the economy would first deviate even more from standard characteristics. This is a model people should take very seriously for future modeling of China.

Monday, December 16, 2013

You may think that accounting practices are straightforward and have been in place for a long time. In fact, good practices are fairly recent, especially in terms of making them useful diagnostic tools for firm management. But with sophistication comes also the temptation to become creative and use accounting for purposes that are borderline legal, such as escaping taxation, or outright fraudulent. For this, you would need to be a sophisticated accountant, and one would think that one would not find such sophistication a century ago, let alone during the British Industrial Revolution.

Steven Toms and Alice Shepherd show that even during the Industrial Revolution there was surprising sophistication, with creative accounting being used by industrialists to counter the "Ten-Hour" movement that sought to limit work hours. Specifically, they show how the numbers from a cotton manufacturer were used in the policy debate, and how his creative accounting made it appear as though he was facing excruciatingly high fixed costs and thus low profits. Where he got creative is with the treatment of capital accumulation, thereby purportedly disproving the accusation that he made most of his supposedly high profits during the last hour of the shift.

Friday, December 13, 2013

Why do immigrants Americanize their names? Evidently, they feel that this will help them integrate into the host society and bring them some advantages. It is well documented that the more integrated an immigrant is, or the more similar to a native she is, the more likely she is to find better jobs, earn higher wages, and feel better.

Costanza Biavaschi, Corrado Giulietti and Zahra Siddique analyze immigrants to the United States from the 1930s and find there can be a mighty pay-off. Those who chose the most common American names got up to 14% higher pay. And I like how they determined the linguistic complexity of names: by using Scrabble points from the American version of the game.
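For the curious, here is how such a score can be computed. The letter values are the standard English Scrabble ones; the sample names are my own, not taken from the paper.

```python
# Standard English Scrabble letter values
SCRABBLE = {
    **dict.fromkeys("AEILNORSTU", 1), **dict.fromkeys("DG", 2),
    **dict.fromkeys("BCMP", 3), **dict.fromkeys("FHVWY", 4),
    "K": 5, **dict.fromkeys("JX", 8), **dict.fromkeys("QZ", 10),
}

def scrabble_score(name: str) -> int:
    """Sum of Scrabble points for the letters of a name (non-letters ignored)."""
    return sum(SCRABBLE.get(ch, 0) for ch in name.upper())

# A higher score proxies for a linguistically more complex name
print(scrabble_score("JOHN"), scrabble_score("ZBIGNIEW"))  # 14 23
```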

Thursday, December 12, 2013

In Economics, it is standard practice to discount future periods and generations. This is done throughout economics fields, even in the valuation of future benefits from nature, despite objections from biologists. Besides, we would not know how to solve our intertemporal models without discounting, unless one assumes a finite number of generations.

But this was not always so. As Pedro Garcia Duarte points out, Cambridge (UK) in the 1930s was lobbying against discounting. Surprisingly, Frank Ramsey (of the Ramsey model) was part of this faction, following his mentor Pigou. Their reasoning was purely ethical: future generations should be valued the same as the current one. But Ramsey pioneered an intertemporal model with an infinite horizon, so how did he solve it, you might ask. Here is the trick. He assumed there is a finite maximum utility and a finite maximum production, called bliss, and minimized the deviation from it. A cheap trick, as this is essentially looking at infinity minus infinity. Garcia Duarte also explains the first intertemporal models and how discounting was either ignored or not viewed as a technical necessity. It is only in the mid-thirties that arguments about risk and impatience start appearing, and in the 1960s that work on the neo-classical growth model established discounting as an essential ingredient of any intertemporal theory.
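For those who have never seen the trick spelled out, a schematic version of Ramsey's undiscounted problem looks as follows. The notation is mine, and the resource constraint is stripped down (no labor disutility, no depreciation):

```latex
% Ramsey (1928): instead of discounting, minimize the shortfall from
% "bliss" B, the maximal attainable utility level. The integral can
% converge without a discount factor only because the integrand
% vanishes as utility approaches bliss.
\min_{\{c_t\}} \int_0^{\infty} \bigl( B - U(c_t) \bigr)\, dt
\quad \text{subject to} \quad \dot{k}_t = f(k_t) - c_t .
```

Written this way, the "infinity minus infinity" quip is clear: undiscounted utility itself diverges, and only the deviation from bliss is a well-defined objective.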

Wednesday, December 11, 2013

The elasticity of intertemporal substitution is one of the most estimated parameters in Economics. Why is it estimated over and over again? Because some results are positive, some are negative and some are zero. To have a clearer idea of what its true value is, we have to keep estimating it. However, the econometricians also need to get their results published, and the publishing tournament has not only an impact on which results get published but also on which ones the econometricians submit for publication.

Tomáš Havránek performs a meta-analysis of estimates of the elasticity of intertemporal substitution. That is, he gathers 169 studies and looks at their 2735 estimates. He finds significant under-reporting of results close to zero or negative, because of publication bias. While the published mean is 0.5, the true mean should be somewhere between 0.3 and 0.4. Negative results make little sense, but they can happen with some draws of the data. If editors and referees systematically discard such results, while positive ones, no matter how large, get a pass, we have a bias. But given the distribution of published estimates, and knowing this bias, one can infer the full distribution of estimates, and hence Havránek's new estimates.
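The intuition can be illustrated with a crude simulation: suppose estimates across studies scatter around the true value and journals simply refuse anything at or below zero. This caricature is not Havránek's method, which is a proper meta-regression, and the parameters are invented; selection alone is enough to inflate the published mean well above the underlying one.

```python
import random

random.seed(7)

# Hypothetical "true" distribution of EIS estimates across studies,
# plus a stark publication rule: only positive estimates get published.
true_mean, noise_sd, n_studies = 0.35, 0.4, 10000
draws = [random.gauss(true_mean, noise_sd) for _ in range(n_studies)]
published = [e for e in draws if e > 0]

mean_all = sum(draws) / len(draws)
mean_pub = sum(published) / len(published)
print(f"underlying mean {mean_all:.2f}, published mean {mean_pub:.2f}")
```

Truncating the left tail mechanically pushes the published mean up; running the selection in reverse, from the published distribution back to the full one, is the essence of the correction.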

Tuesday, December 10, 2013

When it comes to extracting money from clients, you cannot deny that attorneys have learned their Economics. You cannot say the same about the rest of the legal profession, though. So what makes attorneys so smart? Look at how they evaluate which cases to take. It is not about justice for the plaintiff, it is all about what will give them the highest expected return. And the fee schedule can change dramatically according to circumstances.

Take the paper by Winand Emons and Claude Fluet. They observe that defense attorneys use fixed-fee contracts while those representing plaintiffs use contingent contracts with a smallish fixed fee. The latter are offered because they provide incentives to pursue strong cases only, they say. Defense attorneys fight all cases, while plaintiff attorneys can select, and they do it in a way that makes it worth their time. In addition, the latter have privileged information: they can figure out the expected winnings, while the plaintiffs are in the dark. The attorneys thus adjust the schedule accordingly. With all this, I wonder whether there is a way to regulate the fees, say by allowing only particular fee structures, that would maximize the well-being of plaintiffs, or some combination of plaintiffs, defendants and attorneys, not attorneys only.

Monday, December 9, 2013

Why did the Great Recession spread outside the United States? In particular, why did almost all Western industrial countries enter a deep and pronounced recession together? The failure of Lehman Brothers sent ripples throughout international financial markets, plus European banks were heavily involved in the US subprime mortgage market. Yet, financial and goods markets are not perfectly integrated, and this should not have led to such perfectly coordinated business cycles.

Philippe Bacchetta and Eric van Wincoop show that you do not need complete market integration to get there, only partial. All you need is that market integration be sufficiently high. In addition, tight credit, very low interest rates and inactive fiscal policy "help" tremendously with creating a panic, and we certainly were in such a situation at the time. And this panic is what makes it different from "normal" recessions, where synchronization is not perfect. The model hinges on the fact that there are possibly multiple equilibria, and a global panic is the coordination on a bad one. Crucial to the model are a couple of rather strange assumptions, though: prices are preset while wages are fully flexible, whereas I would have thought wages to be less flexible than prices; and there are two periods in the model, meaning that the panic state must be permanent. As a consequence, I am not quite sure what to make of this paper.

Sunday, December 8, 2013

I have been pondering for a while what to do with my blog, Economic Logic. My job responsibilities have made it difficult for me to attend to it as much as I would like. Sometimes I would have to skip days, sometimes I have not had time to read as much as I wanted (or as much as I should, as I got it wrong in one case). And I certainly have less time to write things up. Somebody suggested I should invite guest bloggers to help out. I am now ready to do this, but on a different blog, Economic Logic, Too.

Why keep them separate? EL has earned a reputation, and I would hate to ruin it. I have no idea how well this initiative with guest bloggers will turn out. I guess I am too risk-averse to allow guest posts to alter EL. I hope they prove me wrong.

Here is how you can contribute to EL2 (this may be amended if need be):

I will accept discussions of interesting research and interesting discussions of research. That means submissions can also be rejected.

I may make a few small editorial corrections to the text.

There needs to be a link to an IDEAS page.

The research needs to be recent and in working paper form. This means it needs to be in Open Access (NBER and CEPR are OK) and unpublished.

You cannot discuss your own research. This is not voxEU.

You need to submit your text under your real name. It can be published anonymously, though.

Send me an email with your write-up, and mention whether you want it published under a pseudonym or your real name.

Friday, December 6, 2013

How immigrants integrate into the native population has been a concern in many countries for a long time. Typically, this has been studied by looking at how they marry, how they educate themselves, how much they earn, and whether they engage in criminal activity. Their labor market behavior, particularly how they search for a job, is less studied.

Audra Bowlus, Masashi Miyairi and Chris Robinson fill that gap by applying a search model to Canadian data. Two crucial parameters in those models are the job arrival and job destruction rates, which the authors allow to differ between natives and immigrants. In addition, they add a switching process wherein an immigrant stochastically acquires the characteristics of a native. This allows them to determine that it takes on average 13 years for this to happen, quite a long time I think. More interesting, however, is that immigrants get significantly fewer offers, 36% lower when unemployed and 93% lower when employed, and they lose their jobs faster as well. This means that if an immigrant transitions to another job, it is almost always through unemployment. As a natural consequence, their wages catch up very slowly with those of natives. In fact, the search process accounts for more than half of the wage gap.
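To see how a lower offer rate translates into worse outcomes, one can use the textbook steady-state unemployment formula u = delta / (delta + f), with f the job-finding rate and delta the separation rate. Only the 36% offer gap below comes from the paper; the base rates and the separation markup are invented for illustration.

```python
def steady_state_u(f: float, delta: float) -> float:
    """Steady-state unemployment rate when flows in and out balance."""
    return delta / (delta + f)

f_native, delta_native = 0.30, 0.02      # monthly rates, hypothetical
f_immigrant = f_native * (1 - 0.36)      # 36% fewer offers when unemployed
delta_immigrant = delta_native * 1.5     # faster job loss, hypothetical

u_nat = steady_state_u(f_native, delta_native)
u_imm = steady_state_u(f_immigrant, delta_immigrant)
print(f"native u {u_nat:.1%}, immigrant u {u_imm:.1%}")
```

Even with these mild made-up numbers, the immigrant steady-state unemployment rate comes out roughly double the native one, which gives a feel for how offer and destruction gaps compound.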

Thursday, December 5, 2013

How do you get children to learn better at school? You can change the curriculum, the teachers, class size, or the administrators. You can provide incentives to the teachers, children or their parents (unfortunately, you cannot change the parents). You can bus children to other schools. You can add more equipment to the classroom. All this may be effective to some extent, and in some cases not. That is a major part of what the Economics of Education field is about, and there is obviously still a lot to do, given how the latest PISA results show dramatic differences across countries.

Tess Stafford adds another piece to the puzzle. A Texas school district went through a renovation project for all its schools that involved improving indoor air quality in most of them, and because the projects were staggered over several years, it became possible to see how students' test scores changed before and after renovation. They improved, and significantly. Stafford even claims that it has a stronger impact than reducing class size. So if you have poorly ventilated and moldy schools, you know what to do.

Wednesday, December 4, 2013

While the last decade will be remembered for the financial crisis, another marked trend emerging in all industrialized economies is polarization. Not of the political kind, although that is also quite new and annoying, but rather a widening of the distribution of wages, with a gaping hole in the middle. Call it the disappearance of the middle class, globalization finally hitting the middle class, or the increasing weight of the top earners, there is not that much debate about the origin of this change in wages. It is skill-biased technological change, namely the emergence of the computer and the robot taking over the routine jobs of the middle class. But those who can manage those computers carry a substantial skill premium on the labor market. This has been amply documented when comparing the distribution of wages across education groups as well as across economic sectors.

Petri Böckerman, Seppo Laaksonen and Jari Vainiomäki look at firm-level data and find the same. Specifically, they take Finnish wage statistics and look at how the distribution of wages changes within firms, taking R&D expenses as an indicator of technological change. That is actually much better than previous aggregate-level studies, for two reasons: first, the authors do not have to rely on time dummies alone to identify the evolution; second, they can better understand what is happening at the microeconomic level. For example, the wage-bill share of middle-educated workers performing routine tasks decreases as R&D expenditures increase, a clear sign that the middle class is getting pushed down.

Tuesday, December 3, 2013

Climate economics is a hugely complex undertaking, especially if you want to get quantitative answers. Indeed, one has to embed economic and climate models into each other, and the latter are not simple, particularly once you want to incorporate the impact of additional pollutants that push the model outside of historical records. This complexity makes it largely impossible for most of us to look at, say, the effect of our favorite policy intervention.

This need not be so, write Inge van den Bijgaart, Reyer Gerlagh, Luuk Korsten and Matti Liski. Indeed, they find that one can reduce these complex interactions to a single equation and get it 99% right when calculating the social cost of carbon (the monetary equivalent of carbon in the air). While the equation is not particularly simple, it is useful in the sense that one still needs to make one's assumptions explicit when using it. And unlike a fundamental equation recently discussed here, the authors detail how it is derived.

Monday, December 2, 2013

If you look at just about any paper that considers the life-time profile of wages, you invariably find the same picture: a hump shape with wages increasing until age 50 to 55, and then a steady decrease. This decrease is routinely justified by older people getting less productive and thus getting lower wages. But I have yet to see anecdotal evidence of that. Everybody I know has constantly increasing labor income. So what is wrong?

María Casanova has it figured out. You need to distinguish between several types of people: those who continue working full-time until age 65, those who retire early, and those in the middle who take part-time jobs. The latter take significant hits on their wages as they typically change jobs, I presume to find a way to enjoy more leisure. In terms of income, this is a double whammy: wages and hours both go significantly down. And this is what pulls down the wage profile for the 50 to 65 year olds. Indeed, those remaining in full-time jobs actually see slight increases in labor income from increases in wages. This means that all those models that have people working full-time until retirement and use the hump-shaped age profile of wages or labor income have it all wrong.
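The composition effect at work here is easy to demonstrate with made-up numbers: even if every full-time worker's wage keeps rising, the average wage can fall once lower-paid part-time switchers enter the mix.

```python
def average_wage(share_fulltime: float, wage_fulltime: float,
                 wage_parttime: float) -> float:
    """Population-average wage across full-time and part-time workers."""
    return share_fulltime * wage_fulltime + (1 - share_fulltime) * wage_parttime

wage_at_50 = average_wage(1.00, 100, 60)   # everyone still full-time
wage_at_60 = average_wage(0.60, 110, 60)   # full-timers earn MORE...
print(wage_at_50, wage_at_60)              # ...yet the average fell
```

All four numbers are hypothetical, but the mechanism is exactly the one in the post: the hump in the average profile can coexist with rising wages for everyone who stays full-time.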

Friday, November 29, 2013

A recent trend in the United States has been the decline in the homeownership rate. While I have mentioned before that homeownership (the "American Dream") is not necessarily a good thing, both privately and socially, it is heavily favored by government policies. And while the recent housing debacle has obviously reduced homeownership, the trend started before that. To understand why the trend is down, it may be of interest to understand how it went up.

Daniel Fetter looks at the period when the nationwide homeownership rate went up the fastest, World War II. Paradoxically, this was a period when home construction was actually severely restricted. Yet, the 10 percentage point increase (half the increase over the entire century) happened in the context of widespread rent control. Exploiting differences across cities in rent reductions through control, Fetter finds that a majority of the increase in the homeownership rate was indeed due to rent control. I suppose renters were somehow coerced by their landlords into buying the homes they lived in, with no alternatives available. Would this mean that imposing rent control now would reverse the decline in ownership? I doubt it, as the market has now segmented between owned homes and rented apartments, and they are not close substitutes for the most part. And you would not want rent control anyway.

Thursday, November 28, 2013

While the unemployment situation in the US is gradually getting better, labor force participation numbers continue to decline. This worries a lot of people, because it can be a sign that some of the unemployed are getting discouraged and dropping out of the labor force entirely. But it may also simply be the continuation of a decades-long trend of steady decline in the labor force participation rate, in which case it would be much less worrisome.

Regis Barnichon and Andrew Figura add to this discussion that we should not think about only three categories (employed, unemployed, and not in the labor force), but four, by adding the marginally attached. They are not in the labor force, but are close to entering it, a typical case being a discouraged formerly unemployed worker. These people tend to join by being unemployed first, while other nonparticipants join the labor force by transitioning straight to employment, because they value not being in the labor force (students, retirees, mothers) and can only be attracted with a job. Barnichon and Figura document that the number of marginally attached has declined for quite some time, which can explain a decline of half a percentage point in the unemployment rate from 1976 to 2010. This cuts across all demographic groups, so a demographic shift can be ruled out as an explanation. The last recession may have unraveled all that, though; we will need a few more years of data and a full recovery to determine whether the trend continues.
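One way to think about these categories is as states of a Markov chain, with the steady-state shares pinned down by the transition rates between them. The transition probabilities below are entirely invented, just to show the mechanics of going from flows to stocks.

```python
# Stylized four-state labor market: Employed, Unemployed,
# Marginally attached, and Out of the labor force.
# Monthly transition probabilities (rows sum to one, all hypothetical).
P = {
    "E": {"E": 0.95, "U": 0.02, "M": 0.01, "O": 0.02},
    "U": {"E": 0.25, "U": 0.65, "M": 0.05, "O": 0.05},
    "M": {"E": 0.05, "U": 0.15, "M": 0.70, "O": 0.10},
    "O": {"E": 0.02, "U": 0.01, "M": 0.02, "O": 0.95},
}
states = list(P)
dist = {s: 0.25 for s in states}   # start from a uniform distribution
for _ in range(2000):              # iterate to the stationary distribution
    dist = {s: sum(dist[r] * P[r][s] for r in states) for s in states}
print({s: round(p, 3) for s, p in dist.items()})
```

In such a setup, shrinking the flow into the marginally attached state mechanically lowers steady-state unemployment, since fewer people re-enter through the unemployment door; that is the channel the authors quantify.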

Wednesday, November 27, 2013

Academics are in their ivory tower and have little real-world or policy impact. That is the view often conveyed by those who do not know what those academics are up to. It is also a common justification by policymakers for ignoring any advice coming from academia. I have lamented many times that politicians routinely disregard advice from scientists (including economists), particularly by focusing law-making on the means instead of on the goals. That said, I recently mentioned work that argued that Keynesian policies will always appeal more to policymakers than Hayekian ones, because they give them a reason to do something in times of crisis.

Michel De Vroey instead compares Lucas to Keynes. Lucasian macroeconomics relies a lot on internal consistency. This disciplines the theory a lot, but it also acts as a straitjacket that is unappealing to policy makers. Keynesian theory has a lot more hand-waving regarding consistency but seems to have an answer for everything because it can cut corners (even if the answers may turn out to be wrong). The fact that it appears so flexible and know-it-all (like those "economists" who are willing to answer any question journalists may have, I would add) makes Keynesian theory a magnet for policymakers, especially in times of crisis. And this is why, with the last recession, macroeconomics has been declared to be in crisis: because it listened to Lucas and not Keynes for three decades and did not always immediately have answers.

This is an argument written by someone in the ivory tower. Contrast this with someone involved in policymaking. A very recent interview with James Bullard seems to say the exact opposite. Policymakers, at least monetary policymakers, very much look to Lucasian theory for help. In his words, "there is still no substitute for heavy technical analysis to get to the bottom of these issues" (speaking of the financial crisis), and that is happening with structural, internally consistent modeling. Hand-waving does not cut it. And I agree.

Tuesday, November 26, 2013

When an author describes his work in the abstract or the introduction, it is common to highlight what is "new," "novel," "unique," an "improvement," or "better." But you do not write that your paper is "pioneering" or "seminal," as this can only be established by others in hindsight.

That does not stop Sarbajit Chaudhuri and Manash Ranjan Gupta, who start their abstract with "This paper makes a pioneering attempt to provide a theory of determination of interest rate in the informal credit market in a less developed economy in terms of a three-sector static deterministic general equilibrium model." OK. So we have a static model to determine the interest rate. That is pioneering. I always thought the interest rate was tied to the relative price of commodities in different periods. I guess the genius here is that with a static model, one need not worry about future shocks, and even current shocks are instantaneously resolved, so the model is also deterministic! This makes it possible to simplify everything to a great extent, yet apparently it still provides a major improvement over Gupta (1997), which was, however, already pioneering the static determination of the interest rate. So the pioneering nature of this paper must lie elsewhere. I think it lies rather in the assumption that there are no flows across regional informal markets and moneylenders have a local monopoly. Imagine the pioneering strides we are now making towards a closed-form solution of the model!

Monday, November 25, 2013

Quite a few countries guarantee paid leave for new mothers that not only allows them to return to the same job they left, but also gives them the financial wiggle room to take good care of their new offspring. This time at home without worries is good for both mother and child, although one can suspect that the time off work has adverse implications for the mother's human capital and future career path. Paid maternity leave is also often promoted as a way to conduct social policy across all social strata, as it applies to everyone.

Not so fast, say Gordon Dahl, Katrin Løken, Magne Mogstad and Kari Vea Salvanes. They look at Norwegian data, where paid leave was increased from 18 to 35 weeks between 1987 and 1992. As this did not crowd out unpaid leave and expanded the time mothers spent at home, we should see some positive effects on child development in the country. None of that seems to have happened, nor any effect on parental earnings, labor market participation, fertility, marriage or divorce. So it seems to have been a rather useless reform. Worse, the expansion redistributed resources the wrong way. Indeed, absent any crowding out of unpaid leave, the reform amounts to a pure leisure transfer to upper- and middle-income families (lower-income families tend to have fewer working mothers in Norway). The reform is thus regressive. And we have not even mentioned that someone must bear the obvious cost of paying mothers while they do not work.

Friday, November 22, 2013

I have complained several times on this blog about how the American Economic Association is run, particularly how its executive and committees are composed almost exclusively of faculty from the very top universities, and mostly private ones; see the current slate of officers (past posts: 1, 2, 3, 4). This lack of representation leads to apparent nepotism in the distribution of awards, and it can raise suspicions of the same for acceptances to the annual meeting program (especially the printed, unrefereed proceedings) and to the AEA journals. I have called in the past for writing in at the elections a candidate who does not fit the profile of current AEA officers, but is rather a common member of the association. But the AEA has only announced the winner of the election, with no vote tally. As this does not look very transparent, I inquired with the AEA Secretary-Treasurer, Peter Rousseau, asking about full election results and how they are certified. Here is what he answered:

The long-standing policy of the AEA in reporting election results is to report only names of those elected. This policy was re-visited by the Executive Committee several years ago. The minutes of that meeting state:

"A member requested that the number of votes for each candidate in the annual election of officers be reported publicly. Current policy is for the Secretary-Treasurer and Administrative Director to certify the vote counts, which are tabulated electronically, and to report only the names of the successful candidates. After an interesting economic and psychological analysis of the advantages and disadvantages of reporting individual vote counts, it was decided to retain the Association's policy of reporting only the qualitative outcome of the annual election of officers."

The bylaws clearly state that the Secretary certifies the results. Please be assured that it is my fiduciary responsibility to the membership as its agent to report those qualitative results accurately.

Thank you for supporting the AEA and its mission of encouraging economic research worldwide.

So it is the very executive committee that is suspected of inbreeding that is at the origin of this policy of obfuscating election results. And it is a member of the executive committee, the unelected Secretary, who certifies election results and releases only part of them. This is how dictators run sham elections.

Japan has been able to sustain unusually high debt levels for a long time, even as other countries with lower debt-to-GDP ratios and more sustained GDP growth faced debt crises. What makes Japan so different, and what does this imply for the sustainability of its debt?

Charles Yuji Horioka, Takaaki Nomoto and Akiko Terada-Hagiwara analyze the recent evolution of Japanese debt and have a grim outlook. Until a few years ago, the debt was largely financed by Japanese households saving for retirement. But as Japan continues through its demographic transition toward an older population, this source of funding is going to dry up quickly, if not reverse itself, as an older population requires more transfer payments. During the last few years, an increasing share of the debt has been bought by foreign investors looking for safe alternatives in times of financial turmoil. This temporary funding masks the underlying drying up of internal funding. The foreign-held debt also carries a shorter maturity, so we may soon expect some problems in Japan, especially if other investment opportunities start looking better. Unless the Japanese government quickly gets its fiscal house in order, we may again see a country struggling with its debt.

Thursday, November 21, 2013

When you think about market distortions through regulation and taxation in a developed economy, you think first of France. It is the prime example of how excessive government intervention can lead to disincentives for production and to major misallocations of resources across firms and sectors. This is all accepted wisdom, except that nobody had actually measured the misallocation part.

Flora Bellone and Jérémy Mallen-Pisano do this using the methodology of Chang-Tai Hsieh and Peter Klenow, which builds on a model of firms heterogeneous in their use of capital, labor and technology. Taking this to the data, distortions in the use of factors at the firm or sector level translate into lower aggregate total factor productivity. Hsieh and Klenow showed that there were massive distortions in China and India relative to the US. Bellone and Mallen-Pisano show that for France, there are no more distortions than in the United States. Thus, there are no misallocations across firms or sectors, but there can still be a uniform misallocation across the entire economy, say, because of labor market distortions that apply equally to all firms.
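The summary statistic at the heart of this approach can be sketched in a few lines. In the Hsieh-Klenow model, revenue productivity (TFPR) should be equalized across firms within an industry absent distortions, so the dispersion of log TFPR is a measure of misallocation. The firm data and capital share below are made up for illustration.

```python
from math import log

ALPHA = 1 / 3  # capital share; a standard assumption, not the paper's estimate

# Hypothetical firms in one industry: (revenue P*Y, capital K, labor L)
firms = [(100.0, 50.0, 10.0), (80.0, 20.0, 12.0), (120.0, 70.0, 9.0)]

# log TFPR_i = log(revenue) - alpha*log(K) - (1-alpha)*log(L)
log_tfpr = [log(py) - ALPHA * log(k) - (1 - ALPHA) * log(l)
            for py, k, l in firms]

mean = sum(log_tfpr) / len(log_tfpr)
var_log_tfpr = sum((x - mean) ** 2 for x in log_tfpr) / len(log_tfpr)
print(var_log_tfpr)  # zero would indicate no within-industry misallocation
```

Note that a firm that is simply a scaled copy of another (revenue and both inputs multiplied by the same factor) has identical log TFPR, so the statistic picks up wedges in factor use, not size differences.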

Wednesday, November 20, 2013

There is no doubt that the Internet has changed life for many of us, both at home and at work. Email, online retail, online news and plain googling around have transformed the way we communicate, inform ourselves, work and shop. How much of this has happened is an open question, and the answer must be very heterogeneous.

Scott Wallsten offers some important insights thanks to the American Time Use Survey. Comparing survey responses from 2003 to 2011, he figures out what time spent online must have crowded out. One third of it comes out of leisure, mostly TV viewing, another third out of work, one eighth out of sleep, one tenth out of travel, and the rest out of household chores and education time. Can we consider that this mix also represents what we do on the Internet (except for the sleeping part)? Not necessarily, as the Internet must also have transformed our productivity at doing things. For example, news reading is now much more efficient (in my case, working is too), but it is easy to wander off while surfing, and this must be increasing leisure time.
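A quick arithmetic check on those fractions shows what "the rest" amounts to: household chores and education time account for roughly eleven percent of the crowded-out time.

```python
from fractions import Fraction

# Shares of online time crowded out, as stated in the post
leisure, work, sleep, travel = map(Fraction, ("1/3", "1/3", "1/8", "1/10"))

# The residual going to household chores and education time
rest = 1 - leisure - work - sleep - travel
print(rest)  # 13/120, about 10.8 percent
```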

Note that the ATUS only measures "computer use for leisure," but I figure that a survey respondent working at home on the Internet must have been confused about what to answer. Indeed, this is the only way it would make sense that online time reduced work time. As far as I can see, online time at work is not measured.

Tuesday, November 19, 2013

In the best of all worlds, improvements in agricultural productivity lead to surpluses that allow capital accumulation and the development of industry, which then provides better inputs for agriculture. This is a virtuous circle that eventually leads to agriculture using only a tiny fraction of the workforce and representing a minuscule portion of GDP. This so-called Lewis path to growth has happened in many western economies, but does not seem to be taking off in Africa, in particular.

Bruno Dorin, Jean-Charles Hourcade and Michel Benoit-Cattin show that the Lewis path is not the unique equilibrium path in a growth model. A particular concern is the so-called Lewis trap that results from a lack of additional agricultural land, where agriculture keeps absorbing labor for little gain in output. But why insist on farming where the land is no good? We have a global economy now and can produce goods where the comparative advantage is highest. Many areas of Africa are simply no good for agriculture, so we should stop insisting that they go through all the motions of the Lewis path. Go straight to manufacturing and import food (my previous rant on this). This would also imply that other areas would specialize in agriculture, which is good even though the authors complain that this would lead to urban poverty there. People will move where the jobs are, for example to charter cities.

Monday, November 18, 2013

It is every physicist's dream to find a formula so powerful that it can explain everything (and carry the inventor's name). Such hopes are not as prevalent in Economics, first because we realize that we cannot find such a fundamental equation (we are just not smart enough), and second because an economy is so complex that it defies any attempt to reduce it to one equation.

This does not stop James Wayne, who as a physicist is still pursuing his dream. And he claims to have found the Fundamental Equation Of Economics (FEOE), thereby finally proving that Economics is truly part of Physics. What a relief. And what is this equation that can, as the author forcefully argues, explain all observed economic phenomena and solve all economic problems, without exception? What is this formula that shows that equilibrium, the laws of supply and demand, DSGE and SL/ML (whatever that is) models are all deeply flawed? Here it is: the change over time of the joint probability distribution of future valuations of assets and liabilities is a function of its current distribution. We do not know yet what this function is, because it is currently too difficult to figure it out at the atomic level, but we know it exists. Now we can go revolutionize Economics and solve the world's problems.

Friday, November 15, 2013

There are no laws mandating helmet use for motorcyclists in many developing countries and some US states. In the first case, such laws are likely unenforceable; in the second, I suppose the American urge for "freedom" makes such laws inappropriate, and the hope is that motorcyclists have the common sense to use helmets. So what determines why some choose not to wear a helmet?

Michael Grimm and Carole Treibich went to Delhi and surveyed motorcyclists, focusing on helmet use and speeding. Some of the answers are not surprising: the bare-headed ones are less risk-averse, younger, less educated and less informed about accident and fatality rates. More interesting is that speeding and not using a helmet seem to be strong substitutes. Imposing helmets alone would thus not necessarily improve safety; one would have to impose helmets and enforce speed limits. That is likely too much to expect from India, though, where simply informing riders about the true risks may be more effective.

Thursday, November 14, 2013

In many countries, it is customary or even mandated that firms pay employees they let go a severance package. While that may make sense as compensation for a labor contract that is being broken, mandating it under all circumstances may make little sense at first glance. It adds to firing costs and may lead firms to retain less productive workers for too long. And, having witnessed it first hand, it can induce an employee to become a poison for everyone else in order to tease out severance pay after getting fired.

Donald Parsons goes through the rationale for mandating severance pay and compares it to a labor market where no such pay is mandated. It turns out there would not be much difference, as firms voluntarily offer severance pay, as mentioned, to break a contract, but also to avoid having an employee move to the competition. At the aggregate level, such pay slows down worker movement a little, but it also has its advantages. For example, it can substitute for unemployment insurance for some time, and it can insure against wage loss in reemployment. This is good, especially if moral hazard or administrative costs in these programs are high.

Wednesday, November 13, 2013

The Second Basel Accord was put in place to prevent bank failures more effectively. The first one imposed rather rigid rules that did not take into account the true risk exposure of banks, which obviously varies with a bank's particular activities and overall economic conditions. Basel II is more flexible in that it allows banks to use their own risk models and scenarios to determine how much capital they need to secure. The goal is to have sufficient capital in 99.9% of cases of unexpected losses, or a failure once every 1000 years. That is pretty safe.

Except it is not. The first exhibit is of course what happened during the last recession. The second is a paper by Ilkka Kiema and Esa Jokivuolle showing that in fact only a fraction of the regulatory capital needs to be loss-absorbing: half can be subordinated debt and thus not available when needed. According to the authors, this means that the true risk of bank failure is once every 20 to 100 years. Not very reassuring, and Basel III does not seem to really address this.
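The translation between a confidence level and an expected failure frequency is simple arithmetic, sketched below. The 20-to-100-year range corresponds to effective annual confidence levels of roughly 95 to 99 percent, rather than Basel II's nominal 99.9 percent.

```python
def years_between_failures(confidence: float) -> float:
    """Expected waiting time for a capital shortfall, in years,
    assuming losses exceed capital with probability (1 - confidence)
    each year, independently over time."""
    return 1.0 / (1.0 - confidence)

print(round(years_between_failures(0.999)))  # 1000 -- the Basel II target
print(round(years_between_failures(0.99)))   # 100
print(round(years_between_failures(0.95)))   # 20
```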

Tuesday, November 12, 2013

In dynamic stochastic models, standard utility function specifications imply that the curvature of the utility function directly determines both risk aversion and the elasticity of intertemporal substitution. When calibrating this, modelers have a tendency to hand-wave a bit too much, as they focus more on one than the other. In addition, their calibrations seem to be immune to changes in data frequency. Those who are careful about this use Epstein-Zin preferences, which disentangle risk aversion and the elasticity of intertemporal substitution. They think they have done all they could for a proper calibration.

Well, not quite. Larry Epstein, Emmanuel Farhi and Tomasz Strzalecki show there is a third dimension in play: the temporal resolution of long-run risk. The interaction of risk aversion and the elasticity of intertemporal substitution determines whether economic agents prefer early or late resolution of risk. This matters, because long-run risk is priced by markets differently from short-term risk, typically higher: people are willing to pay to know uncertain outcomes earlier. But we do not know how much so far. An opportunity for additional research.
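For concreteness, the standard Epstein-Zin recursion (in common notation, where $\gamma$ is risk aversion, $\psi$ the elasticity of intertemporal substitution, and $\beta$ the discount factor) is:

```latex
V_t = \left[ (1-\beta)\, C_t^{1-1/\psi}
    + \beta \left( \mathbb{E}_t\!\left[ V_{t+1}^{1-\gamma} \right] \right)^{\frac{1-1/\psi}{1-\gamma}}
    \right]^{\frac{1}{1-1/\psi}}
```

Agents prefer early resolution of uncertainty when $\gamma > 1/\psi$, prefer late resolution when $\gamma < 1/\psi$, and are indifferent in the knife-edge case $\gamma = 1/\psi$, which collapses to the standard time-separable CRRA specification.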

Monday, November 11, 2013

China has been heavily criticized by western politicians and policy-makers for its exchange rate policy that favors its export industry. Some have tried to explain to Chinese authorities that it is not in their best interest to follow a quasi-fixed exchange rate with the US dollar. Indeed, we know from past experience that fixed exchange rates can be very expensive to maintain, especially in the context of large external imbalances. But is China different? After all, its financial development is clearly less advanced than in Western economies, and the Chinese economy is growing much faster.

Philippe Bacchetta, Kenza Benhima and Yannick Kalantzis look at the optimal exchange-rate policy of a growing economy where domestic households do not have access to international markets, that is, China. They find that the optimal path for the exchange rate is first a real depreciation during a growth spurt, and then a real appreciation in the long run. This is pretty much what China has been doing. In other words, China did everything right given its situation, and this is because the growth spurt generates a glut of savings that has nowhere to go. The real depreciation takes care of this current account imbalance by having the central bank serve as intermediary, converting foreign assets into domestic ones for the desperate households. In some sense, we could even argue that the Bank of China has not done enough of this, given the real estate bubble, which is also a consequence of the savings glut.

Friday, November 8, 2013

Why are people drawn to work as artists? This kind of job seems to have all the characteristics one would like to avoid in a typical career: very low pay, usually the need to supplement income with another job, the most unequal income distribution of any field, and with all this a chronic oversupply of labor. One may argue that Economics should be set aside for the arts labor market, but I do not believe that the love of art can be the only explanation for this uncharacteristic labor market. People need to live, and if they love the arts they can always pursue them as a hobby.

Milenko Popović and Kruna Ratković find a better explanation. An artist's productivity is a function of accumulated art-specific human capital. If artists are forward-looking and can cope with very low income during their formative years, it can then make sense to enter such a career. The issue is the uncertainty about whether one's artistic career will actually pan out. This is where the oversupply comes in: many people start an artistic career to see how it works out, but eventually drop out. While all this makes intuitive sense, this last part about uncertainty is largely hand-waved by the authors and should be subjected to a serious quantitative exercise to see whether it holds water with data. Thus, I am still not letting my children get into such careers.

Thursday, November 7, 2013

The debates on whether to, depending on the country, introduce, repeal, increase or lower the minimum wage are never going to cease, because empirical studies have not been able to give a definitive answer about the impact of the minimum wage on employment. The issue is, first, that good data is difficult to come by; second, that there are many confounding effects and unobservables that may vary from one labor market to another in significant ways; and third, that the true effect may actually be small.

Sylvia Allegretto, Arindrajit Dube, Michael Reich and Ben Zipperer analyze a common way to study minimum wage hikes (to be distinguished from their introduction): cross-state regressions for the US, as US states have the option to set a higher minimum wage than the federally mandated one. They use six techniques employed in the literature to compare outcomes across four datasets. The reason you want to try so many methods is that a simple regression does not cut it. The level of the minimum wage, for example, is associated with different business cycle characteristics; that is, setting a minimum wage at a particular level is endogenous to all sorts of things associated with the labor market. Still, no matter how they look at the data, the authors find that the effect of minimum wage hikes on employment is small, if there is any. This increases the odds that the effect is actually small.
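The fixed-effects logic behind such cross-state regressions can be sketched in a few lines. For a balanced state-by-year panel, the two-way within transformation (subtract state means and year means, add back the grand mean) sweeps out both sets of fixed effects, and OLS on the demeaned data recovers the minimum-wage coefficient. The panel below is made up and noiseless, just to show the mechanics; it is not one of the six techniques in the paper.

```python
# Hypothetical balanced panel: 3 states x 4 years. x is the log minimum wage,
# y = state effect + year effect + BETA * x (no noise, for illustration).
BETA = -0.05  # an illustrative small disemployment elasticity, not an estimate
x = [[1.0, 2.0, 4.0, 3.0],
     [2.0, 5.0, 3.0, 1.0],
     [4.0, 1.0, 2.0, 5.0]]
state_fe = [0.5, -0.2, 0.9]
year_fe = [0.1, 0.3, -0.4, 0.0]
y = [[state_fe[i] + year_fe[t] + BETA * x[i][t] for t in range(4)]
     for i in range(3)]

def demean(z):
    """Two-way within transformation for a balanced panel."""
    S, T = len(z), len(z[0])
    rows = [sum(r) / T for r in z]
    cols = [sum(z[i][t] for i in range(S)) / S for t in range(T)]
    grand = sum(rows) / S
    return [[z[i][t] - rows[i] - cols[t] + grand for t in range(T)]
            for i in range(S)]

xt, yt = demean(x), demean(y)
num = sum(xt[i][t] * yt[i][t] for i in range(3) for t in range(4))
den = sum(xt[i][t] ** 2 for i in range(3) for t in range(4))
beta_hat = num / den
print(round(beta_hat, 6))  # -0.05: both sets of fixed effects are swept out exactly
```

The paper's point is precisely that this clean recovery fails in practice: minimum-wage levels are correlated with unobserved state-specific trends that the two-way fixed effects do not absorb, hence the battery of alternative specifications.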

Wednesday, November 6, 2013

When modeling the labor market, we tend to postulate that wages are either posted by employers or negotiated, typically by Nash bargaining. This is especially true of search and matching models, which are often used to study business cycles. Results depend to some degree on this assumption, so it is a good idea to check against the empirical evidence how wages are determined in the matching process.

Wage posting dominates in the public sector, in larger firms, in firms covered by collective agreements, and in part-time and fixed-term contracts. Job-seekers who are unemployed, out of the labor force or just finished their apprenticeship are also less likely to get a chance of negotiating. Wage bargaining is more likely for more-educated applicants and in jobs with special requirements as well as in tight regional labor markets.

This implies in particular that the mix may change over the business cycle (as labor-market tightness changes), and that models that assume that one must be unemployed to apply for jobs and then get Nash bargaining are inconsistent with the data, at least in Germany.

Tuesday, November 5, 2013

One important characteristic of Economics is that it is very difficult to conduct a clean experiment. While one may run small laboratory experiments with a few chosen subjects, there is always uncertainty about whether the experiment generalizes. The randomized experiments typically used in development economics are subject to the same limitations, even if their scope is larger. And in all those experiments, applicability is limited to microeconomic questions.

Oleksiy Kryvtsov and Luba Petersen venture into experiments directly applicable to macroeconomic policy, more precisely monetary policy. Monetary policy has bite when there are frictions, among them expectation formation. Their idea is thus to see how people form inflation expectations in a laboratory setting within the context of a standard new-Keynesian model. In that model, with economic agents holding rational expectations, monetary policy can reduce macroeconomic volatility by at least two-thirds. With the bit of irrationality exhibited by participants in the experiment, the reduction is still about half, and thus important. The setting is a Woodford-style economy where participants provide updates on inflation and output-gap expectations, which the observer can compare against rational-expectations ones. People learn about changes to fundamentals and can draw on past history. In other words, it is as if they lived in the Matrix: they are fed information and are supposed to behave within the confines of a virtual world.

This is very interesting and innovative stuff. I must concede, though, that I still have not bought into the Woodford model. I cannot understand how one can talk about monetary policy in a model with supposed fundamentals when there is no money.

Monday, November 4, 2013

Child labor has often been described as a vicious circle. Parents have too little income to feed their family and require their children to work. Children do not get educated and end up earning too little to sustain their own family. One may then question why they decide to have children in the first place.

Simone D’Alessandro and Tamara Fioroni build a model of human capital and fertility with child labor. At least in theory, they highlight that destitute parents find it relatively advantageous to have children: children are less costly when they can work. If their net contribution is positive, parents want to have many children. And this mechanism can be self-reinforcing if the gap between skilled and unskilled wages is large. This is an amplified quantity/quality trade-off that increases child labor and leads to more wage inequality. The only way out is to make it more attractive for unskilled parents to have fewer children and not have them work. Legislating child labor away will not help, as has already been demonstrated many times. One example was discussed here, and some ways to get out of the vicious circle as well: 1, 2, 3.

Friday, November 1, 2013

Isn't it interesting that most human societies, even when not in contact with each other, evolved toward a model of long-term monogamous families? What made it crucial for evolution to avoid polygyny, communal families or serial monogamy? Certain biological traits must have been necessary (and sufficient?) for this to happen.

Marco Francesconi, Christian Ghiglino and Motty Perry show that once you put this into the framework of a game-theoretic model with overlapping generations, it all makes sense. You just need three features: children of different ages overlapping (i.e., women cannot bear "too many" children simultaneously), paternal investment (fathers need to help for children to succeed), and fatherhood uncertainty (fathers may not be certain which children are theirs). This means that mothers need to secure the help of fathers by assuring them, through monogamy, that they are helping the right children. The first feature is necessary, but it is not clear to me why; I think it is because it gives the father more assurance about paternity. Monogamy is then not only the most efficient family form in the sense that it maximizes the number of offspring; its advantage is even amplified because it is the only form that creates altruistic ties between children.

Thursday, October 31, 2013

Even though it happened about 200 years ago, we are still puzzling over why the Industrial Revolution happened, why it started in Britain, and why it happened at that moment. A sample of previous work relevant to this has been discussed on this blog: 1, 2, 3, 4. While all this is old history, it is still relevant, as we are also trying to understand how to get the least developed economies through a similar revolution. The circumstances are different, but lessons from two centuries ago may be useful.

Morgan Kelly, Cormac Ó Gráda and Joel Mokyr add another piece to the puzzle. British men were significantly better fed and taller than their continental counterparts. They likely had better cognitive skills, too, as we know today that these correlate positively with physical health. And the distribution of these positive traits was such that a significant share of the population had the right characteristics to participate in the Industrial Revolution. That was not the case elsewhere. Thus, good human capital and a good distribution of it are necessary for an Industrial Revolution, but likely not sufficient.

Wednesday, October 30, 2013

I believe that perseverance and timeliness are the secrets to success, foremost in school. And I believe these are the qualities that brought me to where I am now, and I hope they have also shown through on this blog. But my belief may not be general wisdom or even scientifically established. Thus, I am happy to report on a study that confirms at least part of my credo.

Marco Novarese and Viviana Di Giovinazzo use data on how promptly students enrolled for university to forecast their future academic performance, and the forecast is quite good. Of course, promptness likely correlates with plenty of other positive student characteristics the authors cannot measure. And of course, the result is not too surprising. But I feel comforted in my belief, and my bias in selecting studies that confirm my prejudices is thus reinforced.

Tuesday, October 29, 2013

It is well known that girls in developing countries face hurdles in their schooling. These range from subtle issues during their periods, curricula geared towards boys, and household work to outright denial of access to schools. While some of this has to do with cultural issues that are difficult to overcome with (economic) policy, some help could be surprisingly easy. It has happened before in public health, my favorite example being how telling kids to wear shoes eradicated hookworm in many parts of the world.

Karthik Muralidharan and Nishith Prakash have a recommendation: give girls a bicycle. They base this on an experiment they ran in India, where girls were offered a bicycle if they continued into secondary education. This helped overcome traditions that would not let girls out of the village, increased enrollment by 30%, and closed the boy-girl gap by 40%. The authors also claim this is more cost-effective than traditional cash transfers because bicycles have positive externalities, such as keeping girls safe during commutes and more generally empowering them. As with any such experiment, one can question whether the result generalizes, but it is interesting nonetheless.

PS: As several readers noted by email (but could have commented), this is not a randomized experiment. Rather, the authors used an initiative conducted by the government of Bihar. I apologize for the confusion.

Monday, October 28, 2013

I generally find debates about schools of economic thought annoying, especially when it is all about adoration of some dead economist while ignoring all the progress we have made since his contributions. Unfortunately, these dead economists keep coming up in the public debate, I think because these are the people non-economists are familiar with, from basic economics classes and popular readings.

Kristina Spanting studies the back-and-forth in popularity between Keynes and Hayek in light of the past 80 years or so of economic history. Keynes was all about shorter-term solutions to crises, while Hayek had a longer-term vision and would not budge from it no matter the circumstances. Accordingly, their popularity in policy circles has oscillated with the need to react to crises. Keynes is an easy sell to politicians in such times: the electorate asks them to do something, and Keynes provides the justification. And all the work economists have done since Keynes (and Hayek) is brushed aside just when one should draw on it the most. Sad.

Thursday, October 24, 2013

The financial sector is not riding high in popularity polls lately. First, compensation is deemed excessive. Second, the general public often does not perceive the benefits of a financial industry. The most common error there is the idea that finance plays a zero-sum game: anything it gains is necessarily taken away from others. Finance allows a better reallocation of resources and funds to the most productive businesses, and this raises overall productivity. But as it is well rewarded for this, it seems to be attracting perhaps an excessive number of top talents who could, at the margin, be more productive in other sectors. This brings us to a third issue: the financial sector is hiring away the best people from other sectors.

Christiane Kneer studies this inter-sectoral brain drain by looking at the consequences of financial deregulation on sectoral productivity. The assumption here is that financial deregulation attracts top talent to the financial industry because it allows the design and management of new and complex financial instruments. She finds that industries that rely the most on human capital are hurt: after an episode of financial liberalization, they have lower labor productivity, lower value added growth and lower total factor productivity. This is what happens when, for example, a software engineer moves to finance to exploit arbitrage in trading by gaining micro-second advantages over competitors. The social benefit of this arbitrage is close to zero, and some other industry has lost a great software engineer.

Wednesday, October 23, 2013

Whenever high marginal tax rates are discussed, the example of Sweden is brought forward, and typically the episode when marginal rates on labor income were close to 100%. But rates have not always been that high, and they are certainly not now. What is the history of tax rates in Sweden?

Gunnar Du Rietz, Dan Johansson and Mikael Stenkula look back over 150 years and reconstruct marginal tax wedges for top and average earners. Tax wedges differ from tax rates in that they also incorporate other contributions, such as social security and payroll taxes. I learned from this study that Sweden was in fact a very low-tax country at the start of the 20th century, but this all changed with WWII, a succession of crises and the push for the welfare state that culminated in the 1960s. This change of attitude towards taxation and redistribution was relatively quick, and may have led to excesses that have been rolled back since the 1990s. The paper also has a very detailed description of the tax system in Sweden over this period.
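As a back-of-the-envelope illustration of the difference, here is a minimal sketch of a marginal tax wedge computation. The rates used are hypothetical, not Swedish figures from the paper:

```python
# Sketch of a marginal tax wedge: the share of one extra unit of total
# labor cost that does not reach the worker. All rates are hypothetical.

def marginal_tax_wedge(income_tax, employee_contrib, payroll_tax):
    """All arguments are marginal rates expressed as fractions.
    The payroll tax is assumed to be levied on top of the gross wage."""
    net_to_worker = 1 - income_tax - employee_contrib
    total_labor_cost = 1 + payroll_tax
    return 1 - net_to_worker / total_labor_cost

# With no other contributions, the wedge is just the income tax rate...
assert abs(marginal_tax_wedge(0.40, 0.0, 0.0) - 0.40) < 1e-12

# ...but a 7% employee contribution and a 30% payroll tax push the
# wedge on an extra unit of labor cost well above the headline rate.
assert marginal_tax_wedge(0.40, 0.07, 0.30) > 0.55
```

The point is simply that a headline marginal rate understates how much of the marginal labor cost the state actually captures.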

Tuesday, October 22, 2013

It is well known, and I have documented it before, that women behave differently from men in politics, in particular when it comes to policy priorities. While the various examples I have discussed before are interesting, it is difficult to ascertain that they generalize. Indeed, politics is fraught with social and local norms. We need more studies.

Fernanda Brollo and Ugo Troiano look at municipal elections in Brazil and concentrate on those where the mayoral seat was hotly contested between a male and a female candidate. One can thus consider that the electorate was essentially similar whether the female or the male candidate won. The outcomes are damning for men. Whenever a woman became mayor, health outcomes are better, corruption is lower, and the municipality gets more federal funding. To illustrate how male mayors engage relatively more in politicking, Brollo and Troiano find that male mayors up for reelection hire many more temporary workers, a clear sign of electoral patronage.

Monday, October 21, 2013

In some developing economies, cattle are used as a store of value. This is because there is no other good asset available, as financial markets are not developed. Cattle have their drawbacks, though: they can die from disease or hunger, usually at the worst moment, can walk away or be stolen, and thus need constant guarding. This implies that their return could actually be negative.

Santosh Anagol, Alvin Etang and Dean Karlan find that cows and buffaloes in rural India have negative returns of a whopping 64% and 39%, respectively. If you take the extreme assumption that labor has no opportunity cost, then their returns are minus 6% and plus 13%, respectively. How is that possible? The authors offer several potential explanations: measurement error, preference for home-made milk, the lack of other saving vehicles, in particular those that allow commitment to keeping those savings, improvement in social and religious standing, and preference for lotteries (small probability of striking it rich with female cattle). The one I like the most is that the marginal return of labor is actually zero. Indeed, farms do not operate like firms. As they are typically family-operated, everyone "works" even if that means being idle most of the day. This idle person may have a productivity close to zero, and may thus be used to guard cattle.
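The mechanics of the return calculation are simple to sketch. The numbers below (price, milk revenue, feed cost, hours of tending) are entirely made up for illustration; only the structure follows the paper's argument, namely that the sign of the return hinges on whether labor is costed at the market wage or at zero:

```python
# Sketch of the return on a cow, with entirely hypothetical numbers.
# The sign of the return hinges on how tending labor is costed.

def cattle_return(milk_revenue, feed_cost, labor_hours, wage, price):
    """Annual net return as a fraction of the animal's purchase price."""
    profit = milk_revenue - feed_cost - labor_hours * wage
    return profit / price

price = 10000      # hypothetical purchase price of the cow
revenue = 3000     # hypothetical annual milk revenue
feed = 2500        # hypothetical annual feed cost
hours = 730        # two hours a day of tending and guarding

# Costed at a market wage, the return is deeply negative...
assert cattle_return(revenue, feed, hours, wage=8.0, price=price) < 0

# ...but if the marginal return of family labor is zero, it turns positive.
assert cattle_return(revenue, feed, hours, wage=0.0, price=price) > 0
```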

Friday, October 18, 2013

I have always found the empirical monetary policy literature rather frustrating. It is entirely based on the premise that one can identify monetary policy shocks. First, I am not sure what is really meant by a shock. Is it any change in a policy variable? Not changing it may be a surprise, as we recently witnessed with the FOMC decision not to taper quantitative easing. And how much a change is anticipated matters as well. The recent emphasis on forward guidance makes the interpretation of an interest rate change very different from the surprise actions of a few years ago. Second, the empirical identification of those shocks seems doubtful at best. Either you take a VAR and interpret residuals as shocks (never mind that those will be significantly different across specifications), or you try to quantify some narrative of policy decisions, sorting out rather subjectively what was a surprise and what was expected. Third, a monetary policy shock should be measured differently under different policy regimes. There is no point in focusing on the Federal funds rate (or a Taylor rule) when the policy focuses on the money supply, for example.
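To make the first identification route concrete, here is what the VAR approach boils down to in its simplest form, on simulated data (the two-variable system and single lag are my choices, purely for illustration):

```python
import numpy as np

# The VAR route to "monetary policy shocks" in its simplest form:
# estimate y_t = A' y_{t-1} + e_t by OLS and call the residuals e_t
# the shocks. The system below is simulated, purely for illustration.
rng = np.random.default_rng(0)

T = 200
A_true = np.array([[0.8, 0.1],
                   [0.0, 0.7]])
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A_true @ y[t - 1] + rng.normal(scale=0.1, size=2)

# OLS of y_t on y_{t-1}, equation by equation (A_hat estimates A_true').
X, Y = y[:-1], y[1:]
A_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
residuals = Y - X @ A_hat  # these residuals would be labeled the "shocks"

assert residuals.shape == (T - 1, 2)
```

Change the lag length or the variables included and the residuals change with them, which is precisely the objection.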

The reason for this rant is that I came across a paper by Martin Kliem and Alexander Kriwoluzky who try to reconcile the VAR and narrative approaches, which of course is impossible. What they highlight, though, is that both are fraught with error. They find this by plugging the narrative measure into a VAR, and they conclude that there is measurement error in the narrative measure and misspecification error in the VAR. That should surprise no one, but it needs to be pointed out, with so many people relying blindly on these instruments.

Thursday, October 17, 2013

Many people avoid investing in certain types of firms they associate with unethical or sinful behavior: tobacco companies, heavy polluters, alcohol, firearms, the defense industry, etc. That should lower the stock market returns of these firms, but there is of course some arbitrage that negates these return differentials. Yet, is there some way in which being in a sinful sector is detrimental?

Stergios Leventis, Iftekhar Hasan and Emmanouil Dedoulis found one, and that is the cost of auditing. Auditing firms are extremely sensitive to their own reputations, and who they do business with is part of their reputation. The authors also argue that auditing firms perceive sin firms to bear higher business risk, perhaps because they deviate from social norms and require more scrutiny (risk of litigation, need for higher cash reserves). In the US, such companies end up paying a whopping 20% more in auditing and consultancy fees. I wonder where else they face higher costs (it is known they have higher capital costs). This means that their stock price should still be affected despite arbitrage.

Wednesday, October 16, 2013

I have recently mentioned that entrepreneurship cannot be taught, which implies entrepreneurship classes have little value. What should these entrepreneurship professors then do? Do research on entrepreneurship? It turns out that is also of questionable value.

Case in point, the latest paper by Johan Venter. He wants to understand how entrepreneurship emerges in post-conflict economies and leads to new jobs. To this end, he travels to Liberia and surveys ... entrepreneurship professors who, of course, testify to strong interest in entrepreneurship classes. Never mind that taking such classes has no impact on entrepreneurship outcomes; Venter concludes that entrepreneurship should get more emphasis throughout the curriculum. That came out of nowhere, or rather out of a pitiful survey with 28 respondents.

Tuesday, October 15, 2013

In macroeconomics, one distinguishes between non-durable and durable consumption goods. This distinction is important, as the cyclical behavior of the two is very different. Durables are very volatile, as households like to postpone their acquisition in recessions. Non-durables, however, are extremely smooth. Non-durables are what most models have in mind when thinking about consumption, while durables behave more like investment goods, but at the household level.

Rong Hai, Dirk Krueger and Andrew Postlewaite think we should add a third category: memorable goods. These are non-durable goods that may not last long physically, but we keep good memories about them and thus they continue to provide utility in the future. In essence they are also durable goods, but they are not counted as such in national accounting. Some examples the authors provide are Christmas gifts whose memories last through the year. The same applies to vacations, going out, clothes, and jewelry. Using the consumption expenditure survey, the authors find that memorable goods lie somewhere between durables and non-durables in terms of cyclical properties. As they account for about 14% of outlays, their presence matters quantitatively. In fact, they can fully explain some observed deviations from the permanent income hypothesis. A paper to remember and cherish for a long time.

Monday, October 14, 2013

A good reputation is difficult to earn and easy to lose. And reputation matters: think of monetary policy, auditing, medical doctors, restaurants, and politicians. With the Internet, online reviews and reputation have become important as well. I certainly take them into account before buying online. From the point of view of a seller, how do you build a reputation?

Ying Fan, Jiandong Ju and Mo Xiao got access to data from the major Chinese e-commerce platform to study the evolution of seller reputation. In particular, they have been able to trace the strategies and histories of sellers. They show that a good reputation is a great benefit, but that new sellers have a very hard time establishing it. Imagine you start with no reputation whatsoever and are competing with established sellers. To gain an edge, you need to resort to sales and attract attention in various ways, such as cross-listing your product all over the place. This is a lot of effort, and the authors argue that there is too much of it.

This reminds me of the early days of this blog. Being anonymous, I obviously started with no reputation and had to build it from scratch. With barely any readers, I started adding links to unrelated, but interesting stuff to attract more. That did not work, although this has worked for others (restaurant reviews come to mind). It took several years for readership to really pick up, and I thought several times about abandoning during that time.

Saturday, October 12, 2013

I have recently had the opportunity to fly extensively across both Europe and North America, and it has struck me how different the experience was. It also puzzles me why this is so.

Let me first highlight the differences I observed. On almost all counts, flying in Europe seems superior. The aircraft are newer, they are equipped with entertainment systems or individual monitors, they serve meals, and the flight staff is attentive. Airports are not overcrowded and are well connected to cities, usually by train or subway. Security is rather smooth and security personnel seem "normal".

Contrast this with North America, where the fleet is old and noisy, nothing but a magazine is offered as entertainment, everything but non-alcoholic drinks is nickel and dimed (and the airline's credit card is constantly peddled to you), and flight staff seems tired or disgruntled. Airports are full to the brim and impractical, in particular you have to rent a car or get an expensive taxi to get anywhere. Security is obnoxious and its personnel seems quite uneducated.

Even for transatlantic flights, there is a noticeable difference on similar counts between US and European airlines.

And on top of all that, flying is less expensive in Europe, at least in my experience. Labor and fuel costs appear to be higher there, and I do not think European airlines are saving on their aircraft, as they are newer. Personnel, in particular, seem to have much better working conditions. A Delta stewardess, for example, told me she had to take a vacation day (one of 10 a year) to get a visa to fly overseas for Delta. And she is only paid when the aircraft doors are closed. The dismal situation of US pilots is well known. I heard no similar complaints in Europe.

With all this, US airlines are doing very badly. They seem to have higher prices and lower costs, and provide fewer services. How is this possible? Is it because there is more competition from rail and low-cost airlines in Europe? Is it because American airlines carry some liabilities in their luggage, like large pensions or large overhead? The days of state subsidies for national airlines are long gone in Europe, so that cannot be an explanation either. I am left puzzled.

Thursday, October 10, 2013

I have argued several times already that hosting sports mega-events does not have a lasting impact and that the current impact is limited to the sectors directly providing services to the event (exhibits 1, 2, 3). Yet, politicians continue to come up with rationales for why such events should be hosted locally and why public funds should be devoted to them. Is it that we economists are missing something here, or that the politicians are fooling everyone?

Marcel van den Berg and Michiel de Nooij observe that the immediate financial or economic gain from hosting is negative, thus one needs to find arguments for hosting elsewhere. They highlight a series of biases among politicians that make them commit to events they cannot afford. First, politicians commit early, before the realities of what hosting implies have sunk in. Politicians thereafter rarely change opinions. Second, they are swayed by arguments about positive externalities like revitalizing areas or building otherwise useful infrastructure. But you can do all this without a mega-event. Third, they see only success stories. Fourth, the bidding process for a mega-event leads to a winner's curse like in any auction. Fifth, media are obviously biased in favor of hosting. Reporting on such events is their livelihood. Sixth, such mega-events provide excellent opportunities for "redistribution" of public funds to lobbyists. Seventh, it is all about pride. What a costly way to provide that.

Wednesday, October 9, 2013

Child labor is frowned upon because going to school is deemed essential to the development of every child, especially in terms of giving her the essential tools to do well as an adult. It is generally recognized that parents do not want to keep their child away from school (excluding those who insist on home schooling), but that sometimes economic hardship forces them to have children help with current expenses to the detriment of their future earnings. But child labor is not a black-and-white outcome. It may happen that children both work and go to school. To what extent does this have an impact on academic outcomes?

Patrick Emerson, Vladimir Ponczek and André Portela Souza got their hands on excellent data from the municipal schools in São Paulo, where they can track students across several years and know whether they work outside the home and what their study habits are, as well as a few socio-economic characteristics of the family. They find that transitioning into child labor leads to a decline in test scores for mathematics and Portuguese on the order of 6% to 10% of a standard deviation. That may not look like much, but this adds up to a quarter to a full year of education by the time they are done with school. However, one may argue that they also learn some useful skills for the labor market while working, so one can wonder how this looks in terms of adult outcomes.
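The conversion from standard deviations to years of schooling is worth making explicit. Here is a sketch; the benchmark that one school year moves test scores by 0.3 standard deviations is a hypothetical rule of thumb, not a number from the paper:

```python
# Convert a test-score loss in standard deviations into equivalent
# years of schooling, given a hypothetical benchmark for how much
# one school year moves scores.

def sd_loss_to_years(loss_in_sd, sd_gain_per_year):
    return loss_in_sd / sd_gain_per_year

# A loss of 6% to 10% of a standard deviation, against a benchmark
# gain of 0.3 standard deviations per school year...
low = sd_loss_to_years(0.06, sd_gain_per_year=0.30)
high = sd_loss_to_years(0.10, sd_gain_per_year=0.30)

# ...is equivalent to roughly a fifth to a third of a school year,
# which, cumulated over several working years, can reach the paper's
# quarter to full year of lost education.
assert 0.15 < low < high < 0.40
```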

Tuesday, October 8, 2013

In any survey, we must consider the issue of imperfect recall or misrepresentation in self-reported assessments. People do not remember precisely how much they spent on this or that, and they may not recall how long they have been unemployed. For some questions, social pressure may also be a factor. For example, one may not admit to using illegal drugs, or one may misreport smoking behavior. The extent of such biases can be measured, though, if one has access to administrative data or some other objective measure. The results are often disappointing (examples discussed here: 1, 2).

Vidhura Tennekoon and Robert Rosenman criticize the fact that the measures against which survey responses are compared are taken as perfect gold standards. Specifically, they look at the biochemical assessment of smoking status, whose results the literature never doubts and which makes self-reported smoking status look really unreliable. Once one uses statistical methods that allow the biochemical assessment itself to include some measurement error, it turns out that it may be just as bad as the self-assessment. One can thus not exclude that the self-assessment may actually be a better indicator than an independently recorded measure.
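The point is easy to see in a small simulation: if the supposed gold standard itself misclassifies some cases, even perfectly honest self-reports will appear unreliable when judged against it. The smoking rate and test error rate below are made up:

```python
import random

# Simulate honest self-reports of smoking status checked against a
# "gold standard" biochemical test that itself errs 10% of the time.
# The smoking rate and error rate are made up to illustrate the point.
random.seed(1)

n = 100_000
smoking_rate, test_error = 0.25, 0.10

disagreements = 0
for _ in range(n):
    truth = random.random() < smoking_rate
    self_report = truth  # the respondent answers honestly
    test = truth if random.random() > test_error else not truth
    disagreements += (self_report != test)

# The honest self-report "disagrees" with the gold standard about 10%
# of the time, entirely the test's fault, not the respondent's.
assert 0.08 < disagreements / n < 0.12
```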

Monday, October 7, 2013

I find Arab economies depressing because the amount of mismanagement is staggering and because institutions are very ill-conceived. Part of it comes from the influence of religion, and part from the legacy of the Ottoman empire. It is unclear how corrupt and incompetent governments fit into this history, but they are certainly to blame, too.

Ragui Assaad shows that the labor markets in Arab countries are seriously messed up and finds the governments to be the main culprits. Government employment is particularly important and is used as a political tool. The result is a lot of nepotism and cronyism, bloated administrations doing nothing, and large sectors of the economy depending on the government's largesse. Few businesses thrive without these handouts; there is thus very little healthy competition that tries to innovate. The competition is only in getting favors from agencies. As a consequence, there is little accumulation of human capital. While there is a boom in education, not much useful is concretely learned: the education sector is just as corrupt and diplomas do not mean much. I may add that the fields of study are also driven by the corrupt environment, as rent-seeking is everyone's goal.

Is this a hopeless situation? Assaad thinks Arab economies are stuck in this equilibrium. But I think there may be a way out. Indeed, the Arab Spring has considerably weakened governments. This is usually a bad outcome, but in this case this is an opportunity. While this may lead to some chaos, this may be better than counter-productive order. We'll see.

Friday, October 4, 2013

The diamond market is really strange. The wholesale market is dominated by a single firm that basically acts like a monopolist, as it is able to control supplies with large stockpiles (and even got a bailout from the government) and thus sets prices at will. The resale market is heavily self-regulated to limit supply from anyone other than the dominant firm (just try to resell your diamond and get more than half of what you paid for it...). Thus, if you wanted to study price setting in this market, you would want to look at it from the angle of a monopolist trying to maximize its profit and extract as much as possible from demand.

Nicolas Vaillant and François-Charles Wolff do none of this. They just apply a hedonic regression and blindly regress selling prices on diamond characteristics. It is interesting that they find some important non-linearities at round carat sizes, but this is to be expected given the marketing by the diamond industry. But what quickly put me off in this paper is that the authors do not seem to understand some basics of economics. Here is a quote from the first paragraph:

In 2003, world demand for rough diamonds (US$9.5 billion) was significantly above the diamond supply (US$8.2 billion), so that the excess demand had to be satisfied from producers’ existing stockpiles.

First, by definition, supply includes stockpiles. How one would measure supply while ignoring the stock is a mystery to me. And how could one quantify supply and demand separately like this? In addition, it is not as if rationing is going on, which would justify higher demand than supply, as I cannot imagine that a monopolistic supplier would set prices below equilibrium. After this opening paragraph, I cannot trust anything in the paper.
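For what it is worth, the hedonic exercise itself is easy to sketch: regress (log) prices on characteristics, with a dummy picking up the premium at round carat sizes. Everything below is simulated; neither the sample nor the coefficients come from the paper:

```python
import numpy as np

# Hedonic price regression with a round-carat dummy, on simulated data.
rng = np.random.default_rng(42)

n = 500
carat = rng.uniform(0.3, 2.0, n)
# Flag stones within 0.05 carat of a half-carat mark (0.5, 1.0, 1.5, 2.0).
round_carat = (np.abs(carat - np.round(carat * 2) / 2) < 0.05).astype(float)

# Simulated log prices: smooth in carat, plus a premium at round sizes.
log_price = (8.0 + 1.5 * np.log(carat) + 0.12 * round_carat
             + rng.normal(scale=0.05, size=n))

# OLS: regress log price on a constant, log carat, and the dummy.
X = np.column_stack([np.ones(n), np.log(carat), round_carat])
beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)

# The estimated coefficient on the dummy recovers the round-carat premium.
assert abs(beta[2] - 0.12) < 0.05
```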

PS: And I just realized this is the second time I criticize a paper by these two authors...