Economic Logic, Too

About Me

I discuss recent research in Economics and various events from an economic perspective, as the name of the blog indicates. I plan on adding posts approximately every workday, with some exceptions, for example when I travel.

Friday, November 29, 2013

A recent trend in the United States has been the decline in the homeownership rate. While I have mentioned before that homeownership (the "American Dream") is not necessarily a good thing, both privately and socially, it is heavily favored by government policies. And while the recent housing debacle has obviously reduced homeownership, the trend started before that. To understand why the trend is down, it may be of interest to understand how it went up.

Daniel Fetter looks at the period when the nationwide homeownership rate went up the fastest, World War II. Paradoxically, this was a period when home construction was actually severely restricted. Yet the 10 percentage point increase (half the increase over the entire century) happened in the context of widespread rent control. Exploiting differences across cities in the rent reductions induced by control, Fetter finds that a majority of the increase in the homeownership rate was indeed due to rent control. I suppose renters were somehow coerced by their landlords into buying the homes they lived in, with no alternatives available. Would this mean that imposing rent control now would reverse the decline in ownership? I doubt it, as the market has now segmented between owned homes and rented apartments, and they are not close substitutes for the most part. And you would not want rent control anyway.

Thursday, November 28, 2013

While the unemployment situation in the US is gradually getting better, labor force participation numbers continue to decline. This worries a lot of people because it can be a sign that some of the unemployed are getting discouraged and dropping out of the labor force entirely. But it may also simply be the continuation of a decades-long trend of steady decline in the labor force participation rate, in which case it would be much less worrisome.

Regis Barnichon and Andrew Figura add to this discussion that we should not think about only three categories (employed, unemployed, and not in the labor force), but four, by adding the marginally attached. They are not in the labor force, but are close to getting in, a typical case being a discouraged formerly unemployed worker. These people tend to rejoin by being unemployed first, while other nonparticipants join the labor force by transitioning straight to employment, because they value not being in the labor force (students, retirees, mothers) and can only be attracted with a job. Barnichon and Figura document that the number of marginally attached workers has declined for quite some time, which can explain a decline of half a percentage point in the unemployment rate from 1976 to 2010. This cuts across all demographic groups, so a demographic shift can be ruled out as an explanation. The last recession may have unraveled all that, though; we will need a few more years of data and a full recovery to determine whether the trend continues.
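The mechanics are easy to illustrate with a back-of-the-envelope Markov chain over the four labor market states. The transition rates below are purely hypothetical, chosen only for illustration and not taken from Barnichon and Figura; the point is merely that shrinking the marginally attached pool, whose members re-enter mostly through unemployment, lowers steady-state unemployment.

```python
import numpy as np

# States: E (employed), U (unemployed), M (marginally attached), N (other nonparticipants).
# Monthly transition probabilities; rows are "from", columns are "to".
# These numbers are made up for illustration, NOT Barnichon and Figura's estimates.
P_base = np.array([
    [0.95, 0.02, 0.01, 0.02],   # E
    [0.25, 0.55, 0.15, 0.05],   # U
    [0.05, 0.20, 0.60, 0.15],   # M: high propensity to flow into U
    [0.02, 0.01, 0.02, 0.95],   # N: transitions straight to E, if at all
])

# Counterfactual: fewer people become marginally attached (E->M and N->M cut),
# mimicking the documented long-run decline of the marginal pool.
P_few = P_base.copy()
P_few[0] = [0.95, 0.02, 0.005, 0.025]
P_few[3] = [0.02, 0.01, 0.005, 0.965]

def stationary(P, iters=10_000):
    """Stationary distribution of a row-stochastic matrix by power iteration."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(iters):
        pi = pi @ P
    return pi

def u_rate(pi):
    """Unemployment rate = U / (E + U), the standard labor force concept."""
    return pi[1] / (pi[0] + pi[1])

u_base = u_rate(stationary(P_base))
u_few = u_rate(stationary(P_few))
print(f"unemployment rate: baseline {u_base:.1%}, fewer marginals {u_few:.1%}")
```

With these made-up rates, the steady-state unemployment rate falls by roughly a percentage point once the inflow into marginal attachment dries up, even though no transition rate out of unemployment has changed.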

Wednesday, November 27, 2013

Academics are in their ivory tower and have little real-world or policy impact. That is the view often conveyed by those who do not know what those academics are up to. It is also a common justification by policymakers for ignoring any advice coming from academia. I have lamented many times that politicians routinely disregard advice from scientists (including economists), particularly by focusing law-making on the means instead of the goals. That said, I recently mentioned work arguing that Keynesian policies will always appeal more to policymakers than Hayekian ones, because they give policymakers a reason to do something in times of crisis.

Michel De Vroey instead compares Lucas to Keynes. Lucasian macroeconomics relies a lot on internal consistency. This disciplines the theory considerably, but it also acts as a straitjacket that is unappealing to policymakers. Keynesian theory has a lot more hand-waving regarding consistency but seems to have an answer for everything because it can cut corners (even if the answers may turn out to be wrong). The fact that it appears so flexible and know-it-all (like those "economists" who are willing to answer any question journalists may have, I would add) makes Keynesian theory a magnet for policymakers, especially in times of crisis. And this is why, with the last recession, macroeconomics has been declared to be in crisis: it listened to Lucas and not Keynes for three decades and did not always immediately have answers.

This is an argument written by someone in the ivory tower. Contrast it with someone involved in policymaking. A very recent interview with James Bullard seems to say the exact opposite. Policymakers, at least monetary policymakers, are very much looking to Lucasian theory for help. In his words, "there is still no substitute for heavy technical analysis to get to the bottom of these issues" (speaking of the financial crisis), and that happens with structural, internally consistent modeling. Hand-waving does not cut it. And I agree.

Tuesday, November 26, 2013

When an author describes his work in the abstract or the introduction, it is common to highlight what is "new," "novel," "unique," an "improvement," or "better." But you do not write that your paper is "pioneering" or "seminal," as this can only be established by others in hindsight.

That does not stop Sarbajit Chaudhuri and Manash Ranjan Gupta, who start their abstract with "This paper makes a pioneering attempt to provide a theory of determination of interest rate in the informal credit market in a less developed economy in terms of a three-sector static deterministic general equilibrium model." OK. So we have a static model to determine the interest rate. That is pioneering. I always thought the interest rate was tied to the relative price of commodities in different periods. I guess the genius here is that with a static model, one need not worry about future shocks, and even current shocks are instantaneously resolved, so the model is also deterministic! This simplifies everything to a great extent, but apparently still provides a major improvement over Gupta (1997), which was, however, already pioneering the static determination of the interest rate. So the pioneership of this paper must lie elsewhere. I think the pioneering aspect is rather in the assumption that there is no flow across regional informal markets and moneylenders have a local monopoly. Imagine the pioneering strides we are now making towards a closed-form solution of the model!

Monday, November 25, 2013

Quite a few countries guarantee paid leave for new mothers that not only allows them to get back the same job they left, but also gives them the financial wiggle-room to take good care of their new offspring. This time at home without worries is good for both the mother and the child, although one can suspect that the time off work has adverse implications for the mother's human capital and future career path. Paid maternity leave is also often promoted as a way to conduct social policy across all social strata, as it applies to everyone.

Not so fast, say Gordon Dahl, Katrin Løken, Magne Mogstad and Kari Vea Salvanes. They look at Norwegian data, where paid leave was increased from 18 to 35 weeks between 1987 and 1992. As the expansion did not crowd out unpaid leave and extended the time mothers spent at home, we should see some positive effects on child development in the country. None of that seems to have happened, nor were there effects on parental earnings, labor market participation, fertility, marriage or divorce. So it seems to be a rather useless reform. Worse, this expansion redistributed resources the wrong way. Indeed, in the absence of crowding out of unpaid leave, the reform corresponds to a pure leisure transfer to upper- and middle-income families (lower-income families tend to have fewer working mothers in Norway). The reform is thus regressive. And we have not even mentioned that paying mothers while they do not work obviously costs someone something.

Friday, November 22, 2013

I have complained several times on this blog about how the American Economic Association is run, particularly how its executive and committees are composed almost exclusively of faculty from the very top universities, and mostly private ones (see the current slate of officers; past posts: 1, 2, 3, 4). This lack of representation leads to apparent nepotism in the distribution of awards, and can lead to suspicions of the same for acceptances to its annual meeting program (especially the printed, unrefereed proceedings) and to its journals. I have called in the past for writing in at the elections a candidate who does not fit the profile of current AEA officers, but is rather a common member of the association. But the AEA has only announced the winner of the election, with no vote tally. As this does not look very transparent, I enquired with the AEA Secretary-Treasurer, Peter Rousseau, asking about full election results and how they are certified. Here is what he answered:

The long-standing policy of the AEA in reporting election results is to report only names of those elected. This policy was re-visited by the Executive Committee several years ago. The minutes of that meeting state:

"A member requested that the number of votes for each candidate in the annual election of officers be reported publicly. Current policy is for the Secretary-Treasurer and Administrative Director to certify the vote counts, which are tabulated electronically, and to report only the names of the successful candidates. After an interesting economic and psychological analysis of the advantages and disadvantages of reporting individual vote counts, it was decided to retain the Association's policy of reporting only the qualitative outcome of the annual election of officers."

The bylaws clearly state that the Secretary certifies the results. Please be assured that it is my fiduciary responsibility to the membership as its agent to report those qualitative results accurately.

Thank you for supporting the AEA and its mission of encouraging economic research worldwide.

So it is the very executive committee that is suspected of inbreeding that is at the origin of this policy of obfuscating the election results. And it is a member of the executive committee, the unelected Secretary, who certifies election results and only releases part of them. This is how dictators run sham elections.

Japan has been able to sustain unusually high debt levels for a long time, even as other countries faced debt crises despite lower debt-to-GDP ratios and more sustained GDP growth. What makes Japan so different, and what does this imply for the sustainability of Japan's debt?

Charles Yuji Horioka, Takaaki Nomoto and Akiko Terada-Hagiwara analyze the recent evolution of Japanese debt and have a grim outlook. Up to a few years ago, the debt was largely financed by Japanese households saving towards retirement. But as Japan continues through its demographic transition toward an older population, this source of funding is going to dry up quickly, if not reverse itself, as an older population requires more transfer payments. During the last few years, an increasing share of the debt has been bought by foreign investors looking for safe alternatives in times of financial turmoil. This temporary funding masks the underlying drying up of internal funding. The foreign-held debt also carries a shorter maturity, so we may soon expect some problems in Japan, especially if other investment opportunities start looking better. Unless the Japanese government quickly gets its fiscal house in order, we may again see a country struggling with its debt.

Thursday, November 21, 2013

When you think about market distortions through regulation and taxation in a developed economy, you think first about France. It is the prime example of how excessive government intervention can lead to disincentives for production and to major misallocations of resources across firms and sectors. This is all accepted wisdom, except that nobody had actually measured the misallocation part.

Flora Bellone and Jérémy Mallen-Pisano do this using the Chang-Tai Hsieh and Peter Klenow methodology, which consists of using a model of firms heterogeneous in their use of capital, labor and technology. Taking this to the data, distortions in the use of factors at the firm or sector level translate into lower aggregate total factor productivity. Hsieh and Klenow showed that there were massive distortions in China and India relative to the US. Bellone and Mallen-Pisano show that for France, there are no more distortions than in the United States. Thus there are no misallocations across firms or sectors, but there can still be a uniform misallocation across the entire economy, say, because of distortions on the labor market that apply equally to all firms.
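For readers unfamiliar with the methodology, its gist can be sketched in a few equations (my simplified, one-sector rendering, not the papers' exact notation). With monopolistically competitive firms facing firm-specific wedges, revenue productivity should be equalized across firms absent distortions, and its dispersion maps into an aggregate TFP loss:

```latex
% Firm i faces an output wedge \tau_{Yi} and a capital wedge \tau_{Ki}.
% Revenue productivity (TFPR) then satisfies
\mathrm{TFPR}_i \equiv \frac{P_i Y_i}{K_i^{\alpha} L_i^{1-\alpha}}
  \;\propto\; \frac{(1+\tau_{Ki})^{\alpha}}{1-\tau_{Yi}} .
% Without distortions, TFPR is equalized across firms. With CES demand
% (elasticity \sigma), aggregate TFP is
A = \left[ \sum_i \left( A_i \,
      \frac{\overline{\mathrm{TFPR}}}{\mathrm{TFPR}_i} \right)^{\sigma-1}
    \right]^{\frac{1}{\sigma-1}} ,
% so that, under approximate log-normality, the TFP loss from misallocation is
\ln \frac{A^{\text{efficient}}}{A} \;\approx\;
  \frac{\sigma}{2}\, \mathrm{var}\!\left( \ln \mathrm{TFPR}_i \right).
```

The empirical exercise thus boils down to measuring the dispersion of log TFPR across French firms and comparing it to the US benchmark.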

Wednesday, November 20, 2013

There is no doubt that the Internet has changed the lives of many of us, both at home and at work. Email, online retail, online news and plain googling around have transformed the way we communicate, inform ourselves, work and shop. How much they have done so is an open question, and the answer must be very heterogeneous.

Scott Wallsten offers some important insights thanks to the American Time Use Survey. Comparing survey responses from 2003 to 2011, he figures out what time spent online must have crowded out. One third of it comes out of leisure, mostly TV viewing, another third out of work, one eighth out of sleep, one tenth out of travel, and the rest from household chores and education time. Can we consider that this mix also represents what we do on the Internet (except for the sleeping part)? Not necessarily, as the Internet must also have transformed our productivity at doing things. For example, news reading is now much more efficient (in my case working, too), but it is easy to wander off while surfing, and this must be increasing leisure time.

Note that the ATUS measures only "computer use for leisure," but I figure that a survey respondent working at home on the Internet must have been confused about what to answer. Indeed, this is the only way it would make sense that online time would have reduced work time. As far as I can tell, online time at work is not measured.

Tuesday, November 19, 2013

In the best of all worlds, improvements in agricultural productivity lead to surpluses that allow capital accumulation and the development of industry, which then provides better inputs for agriculture. This is a virtuous circle that eventually leads to agriculture using only a tiny fraction of the workforce and representing a minuscule portion of GDP. This so-called Lewis path to growth has happened in many western economies, but does not seem to be taking off in Africa, in particular.

Bruno Dorin, Jean-Charles Hourcade and Michel Benoit-Cattin show that the Lewis path is not the unique equilibrium path in a growth model. A particular concern is the so-called Lewis trap that results from a lack of additional agricultural land, where agriculture's share of the labor force keeps growing for little gain in output. But why insist on farming where the land is no good? We have a global economy now and can produce goods where the comparative advantage is highest. Many areas of Africa are simply no good for agriculture, so we should stop insisting that they go through all the motions of the Lewis path. Go straight to manufacturing and import food (my previous rant on this). This would also imply that other areas would specialize in agriculture, which is good, even though the authors complain that this would lead to urban poverty there. People will move where the jobs are, for example to charter cities.

Monday, November 18, 2013

It is every physicist's dream to find a formula so powerful that it can explain everything (and carry the inventor's name). Such hopes are not as prevalent in Economics, first because we realize that we cannot find such a fundamental equation (we are just not smart enough), and second because an economy is so complex that it defies any attempt to reduce it to one equation.

This does not stop James Wayne, who as a physicist is still pursuing his dream. And he claims to have found the Fundamental Equation Of Economics (FEOE), thereby finally proving that Economics is truly part of Physics. What a relief. And what is this equation that can, as the author forcefully argues, explain all observed economic phenomena and solve all economic problems, without exception? What is this formula that shows that equilibrium, the laws of supply and demand, DSGE and SL/ML (whatever that is) models are all deeply flawed? Here it is: The change in time of the joint probability distribution of future valuation of assets and liabilities is a function of its current distribution. We do not know yet what this function is, because it is currently too difficult to figure it out at the atomic level, but we know it exists. Now we can go revolutionize Economics and solve the world's problems.

Friday, November 15, 2013

There are no laws mandating helmet use for motorcyclists in many developing countries and some US states. In the first case, such laws are likely unenforceable; in the second, I suppose the American urge for "freedom" makes such laws inappropriate, and the hope is that motorcyclists have the common sense to use helmets. So what determines why some choose not to wear a helmet?

Michael Grimm and Carole Treibich went to Delhi and surveyed motorcyclists, focusing on helmet use and speeding. Some of the answers are not surprising: the bare-headed ones are less risk-averse, younger, less educated and less informed about accident and fatality rates. More interesting is that speeding and not wearing a helmet seem to be strong substitutes. Imposing helmets alone would thus not necessarily improve safety; one would have to impose helmets and enforce speed limits. That is likely too much to expect from India, though, where just informing riders about the true risks may be more effective.

Thursday, November 14, 2013

In many countries, it is customary or even mandated that firms pay employees they let go a severance package. While that may make sense as compensation for a labor contract that is being broken, mandating it under all circumstances may make little sense at first glance. It adds to firing costs and may lead firms to retain less productive workers too long. And, from having witnessed it first hand, it can induce an employee to become a poison for everyone else in order to tease out severance pay upon getting fired.

Donald Parsons goes through the rationale for mandating severance pay and compares it to a labor market where no such pay is mandated. It turns out there would not be much difference, as firms voluntarily offer severance pay, as mentioned, to break a contract, but also to avoid having an employee move to the competition. At the aggregate level, such pay does slow down worker movement a little bit, but it also has its advantages. For example, it can substitute for unemployment insurance for some time, and it can insure against wage loss in reemployment. This is good, especially if moral hazard or administrative costs in these programs are high.

Wednesday, November 13, 2013

The Second Basel Accord was put in place to more effectively prevent bank failures. The first one imposed some rather rigid rules that did not take into account the true risk exposure of banks, which obviously varies with the particular activities of the bank and overall economic conditions. Basel II is more flexible in that it allows banks to use their own risk models and scenarios to determine how much capital they need to secure. The goal is to have sufficient capital in 99.9% of cases of unexpected losses, or a failure once every 1000 years. That is pretty safe.

Except it is not. The first exhibit is of course what happened during the last recession. The second is a paper by Ilkka Kiema and Esa Jokivuolle showing that in fact only a fraction of the regulatory capital needs to be loss-absorbing capital. Indeed, half can be subordinated debt, and thus not available when needed. According to the authors, this means that the true risk of bank failure is once every 20 to 100 years. Not very reassuring, and Basel III does not seem to really address this.
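The arithmetic behind these frequencies is worth spelling out. This is my own back-of-the-envelope conversion between annual failure probabilities and horizons, not the paper's calculation; the 20-to-100-year range is the authors' headline number, the rest is simple probability.

```python
# Converting annual bank failure probabilities into horizons,
# assuming failures are independent across years (a simplification).

def mean_years_between_failures(p_annual: float) -> float:
    """Expected waiting time for a geometric event with annual probability p."""
    return 1.0 / p_annual

def prob_failure_within(p_annual: float, years: int) -> float:
    """Probability of at least one failure over a given horizon."""
    return 1.0 - (1.0 - p_annual) ** years

# Basel II target: 99.9% confidence, i.e., 0.1% annual failure probability.
print(mean_years_between_failures(0.001))  # the advertised 1000-year event
print(prob_failure_within(0.001, 30))      # ~3% chance over a 30-year span

# If the effective annual failure probability is instead 5% (once every 20 years):
print(mean_years_between_failures(0.05))
print(prob_failure_within(0.05, 30))       # ~79% chance over the same 30 years
```

A 1000-year event sounds comfortably remote; a four-in-five chance of failure over a banker's career does not.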

Tuesday, November 12, 2013

In dynamic stochastic models, standard utility function specifications imply that the curvature of the utility function directly determines both risk aversion and the elasticity of intertemporal substitution. When calibrating this, modelers have a tendency to wave their hands a bit too much, as they focus more on one than the other. In addition, their calibrations seem to be immune to changes in data frequency. Those who are careful about this use Epstein-Zin preferences, which disentangle risk aversion and the elasticity of intertemporal substitution. They think they have done all they could for a proper calibration.

Well, not quite. Larry Epstein, Emmanuel Farhi and Tomasz Strzalecki show there is a third dimension in play: the temporal resolution of long-run risk. Indeed, the interaction of risk aversion and the elasticity determines whether economic agents prefer early or late resolution of risk. This matters, because markets price long-run risk differently from short-term risk, typically higher. Indeed, people are willing to pay to learn uncertain outcomes earlier, but we do not know how much so far. An opportunity for additional research.
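For reference, the preference recursion at stake, in its standard textbook form (not necessarily the authors' notation):

```latex
% Epstein-Zin recursive utility: \gamma is relative risk aversion,
% \psi the elasticity of intertemporal substitution, \beta the discount factor.
V_t = \Big[ (1-\beta)\, C_t^{\,1-1/\psi}
      + \beta \big( \mathbb{E}_t\!\left[ V_{t+1}^{\,1-\gamma} \right]
        \big)^{\frac{1-1/\psi}{1-\gamma}} \Big]^{\frac{1}{1-1/\psi}}
% With expected utility, \gamma = 1/\psi and the timing of resolution is
% irrelevant. The agent prefers EARLY resolution of uncertainty iff
% \gamma > 1/\psi, the case assumed in the long-run risk literature --
% precisely the third dimension the calibration must take a stand on.
```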

Monday, November 11, 2013

China has been heavily criticized by western politicians and policy-makers for an exchange rate policy that favors its export industry. Some have tried to explain to the Chinese authorities that it is not in their best interest to follow a quasi-fixed exchange rate with the US dollar. Indeed, we know from past experience that fixed exchange rates can be very expensive to maintain, especially in the context of large external imbalances. But is China different? After all, its financial development is clearly less advanced than that of Western economies, and the Chinese economy is growing much faster.

Philippe Bacchetta, Kenza Benhima and Yannick Kalantzis look at the optimal exchange-rate policy of a growing economy where domestic households do not have access to international markets, that is, China. They find that the optimal path for the exchange rate is first a real depreciation during a growth spurt, and then a real appreciation in the long run. This is pretty much what China has been doing. In other words, China did everything right given its situation, and this is because the growth spurt generates a glut of savings that has nowhere to go. The real depreciation takes care of this current account imbalance by having the central bank serve as an intermediary, converting foreign assets into domestic ones for the desperate households. In some sense, we could even argue that the Bank of China has not done enough of this, given the real estate bubble, which is also a consequence of the savings glut.

Friday, November 8, 2013

Why are people drawn to work as artists? This kind of job seems to have all the characteristics one would want to avoid in a typical career: very low pay, usually the need to supplement income with another job, the most unequal distribution of income of any field, and with all this a chronic oversupply of labor. One may argue that Economics should be set aside for the arts labor market, but I do not believe that the love of art can be the only explanation for this uncharacteristic labor market. People need to live, and if they love the arts they can always pursue them as a hobby.

Milenko Popović and Kruna Ratković find a better explanation. An artist's productivity is a function of accumulated art-specific human capital. If artists are forward-looking and can cope with very low income during their formative years, it can then make sense to get into such a career. The issue is the uncertainty over whether one's artistic career will actually pan out. This is where the oversupply comes in: many people start an artistic career to see how it works out, but eventually drop out. While all this makes intuitive sense, the last part about uncertainty is largely hand-waved by the authors and should be subjected to a serious quantitative exercise to see whether it holds water with data. Thus, I am still not letting my children get into such careers.

Thursday, November 7, 2013

The debates on whether to, depending on the country, introduce, repeal, increase or lower the minimum wage are never going to cease because empirical studies have not been able to give a definitive answer about the impact of the minimum wage on employment. The issue is first that good data is difficult to come by, second that there are many confounding effects and unobservables that may vary from one labor market to the other in significant ways, and third that the true effect may actually be small.

Sylvia Allegretto, Arindrajit Dube, Michael Reich and Ben Zipperer analyze a common way to study minimum wage hikes (to be distinguished from their introduction): cross-state regressions for the US, as US states have the option to set a higher minimum wage than the federally mandated one. They use six techniques employed in the literature to compare outcomes across four datasets. The reason you want to try so many methods is that a simple regression does not cut it. The level of the minimum wage, for example, is associated with different business cycle characteristics; that is, setting a minimum wage at a particular amount is endogenous with all sorts of things that can be associated with the labor market. Still, no matter how they look at the data, the authors find that the effect of minimum wage hikes on employment is small, if there is any. This increases the odds that the effect is actually small.

Wednesday, November 6, 2013

Modeling the labor market, we tend to postulate that wages are either posted by employers or negotiated, typically by Nash bargaining. This is especially true of search-and-matching models, which are often used to study business cycles. Results depend to some degree on this assumption, so it should be a good idea to check against the empirical evidence how wages are determined in the matching process.

Wage posting dominates in the public sector, in larger firms, in firms covered by collective agreements, and in part-time and fixed-term contracts. Job-seekers who are unemployed, out of the labor force or just finished their apprenticeship are also less likely to get a chance of negotiating. Wage bargaining is more likely for more-educated applicants and in jobs with special requirements as well as in tight regional labor markets.

This implies in particular that the mix may change over the business cycle (as labor-market tightness changes), and that models that assume that one must be unemployed to apply for jobs and then get Nash bargaining are inconsistent with the data, at least in Germany.

Tuesday, November 5, 2013

One important characteristic of Economics is that it is very difficult to conduct a clean experiment. While one may run small laboratory experiments with a few chosen subjects, there is always uncertainty about whether the experiment generalizes. The randomized experiments typically used in development economics are subject to the same limitations, even if their scope is larger. And all of these experiments are applicable only to microeconomic questions.

Oleksiy Kryvtsov and Luba Petersen venture into experiments directly applicable to macroeconomic policy, and more precisely monetary policy. Monetary policy has bite when there are frictions, among them expectation formation. Their idea is thus to see how people form inflation expectations in a laboratory setting, within the context of a standard new-Keynesian model. In that model, with economic agents holding rational expectations, monetary policy can reduce macroeconomic volatility by at least two-thirds. With the bit of irrationality exhibited by participants in the experiment, the reduction is still about half, and thus important. The model is a Woodford-style economy where participants have to provide updates on inflation and output-gap expectations, which the observer can compare against rational-expectations ones. People learn about changes to fundamentals and can draw on past history. In other words, it is as if they lived in the Matrix: they are fed information and are supposed to behave within the confines of a virtual world.

This is very interesting and innovative stuff. I must concede, though, that I have still not bought into the Woodford model. I cannot understand how one can talk about monetary policy in a model with supposed fundamentals when there is no money.

Monday, November 4, 2013

Child labor has often been described as a vicious circle. Parents have too little income to feed their family and require their children to work. Children do not get educated and end up earning too little to sustain their own family. One may then question why they decide to have children in the first place.

Simone D’Alessandro and Tamara Fioroni build a model of human capital and fertility with child labor. At least in theory, they highlight that destitute parents find it relatively advantageous to have children: they are less costly, as they can work. If their net contribution is positive, parents want to have many children. And this mechanism can be self-reinforcing if the gap between skilled and unskilled wages is large. This is an amplified quantity/quality trade-off that increases child labor and leads to more wage inequality. The only way out is to make it more attractive for unskilled parents to have fewer children and not have them work. Legislating child labor away will not help, as has already been demonstrated many times. One example was discussed here, as were some ways to get out of the vicious circle: 1, 2, 3.

Friday, November 1, 2013

Isn't it interesting that most human societies, even when not in contact with each other, evolved toward a model of long-term monogamous families? What made it crucial for evolution to avoid polygyny, communal families or serial monogamy? Certain biological traits must have been necessary (and sufficient?) for this to happen.

Marco Francesconi, Christian Ghiglino and Motty Perry show that once you put this into the framework of a game-theoretic model with overlapping generations, it all makes sense. You need just three features: children of different ages overlapping (i.e., women cannot bear "too many" children simultaneously), paternal investment (fathers need to help for children to succeed), and fatherhood uncertainty (fathers may not be certain which children are theirs). This means that mothers need to secure the help of fathers by assuring them, thanks to monogamy, that they are helping the right children. The first feature is necessary, but it is not clear to me why; I think it is because it gives the father more assurance about paternity. Monogamy is then not only the most efficient family form in the sense that it maximizes the number of offspring; this is even amplified because it is the only form that creates altruistic ties between children.