
A central idea in modern macroeconomics is the permanent income hypothesis. The basic idea is as follows. Suppose that you could dichotomize your income into a permanent component and a temporary component. The permanent income hypothesis suggests that you would base your consumption decisions on the permanent component.

Why would people behave this way?

Well, individuals want to smooth the marginal utility of their consumption over time. To understand this, consider the following example. Suppose that you varied your consumption proportionately with your current income and that your income fluctuated significantly from year to year. This would imply that your consumption would be high in years when your income was high and low in years when your income was low. However, if consumption is subject to diminishing marginal utility, this would mean that the marginal utility of consumption in high income years is less than the marginal utility of consumption in low income years. So wouldn’t it be nice to take some consumption from your high income years (when the marginal utility is low) and transfer it to the low income years (when the marginal utility is high)? Yes, because your lifetime utility would be higher. Fortunately, you can do this by adjusting your savings behavior in response to temporary fluctuations in your income over time (note that this includes borrowing behavior, which is just negative savings).
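The logic of consumption smoothing can be illustrated with a toy two-period example. The code below is a sketch using log utility (one common functional form with diminishing marginal utility) and hypothetical income figures of my own choosing:

```python
import math

# Hypothetical two-period income stream: one high-income year, one low-income year.
incomes = [100.0, 36.0]

# Log utility exhibits diminishing marginal utility: u'(c) = 1/c falls as c rises.
def utility(c):
    return math.log(c)

# Strategy 1: consume current income each year (no smoothing).
u_variable = sum(utility(y) for y in incomes)

# Strategy 2: save in the high-income year and consume the average in each year
# (perfect smoothing; assumes saving/borrowing at a zero interest rate for simplicity).
avg = sum(incomes) / len(incomes)
u_smooth = len(incomes) * utility(avg)

print(round(u_variable, 3))  # 8.189
print(round(u_smooth, 3))    # 8.439 -- smoothing yields higher lifetime utility
```

The comparison is just Jensen’s inequality at work: with a concave utility function, the utility of the average exceeds the average of the utilities, so transferring consumption from high-income to low-income years raises lifetime utility.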

So this sounds reasonable, but it is important to think about why the permanent income hypothesis might not hold.

Given our discussion, one obvious reason pops up: what if some fraction of consumers are subject to credit constraints (i.e., limited or no access to borrowing)? Individuals who face borrowing constraints might find themselves unable to borrow following a reduction in their income. If so, they might not always be able to smooth the marginal utility of their consumption over time.

Some have posited other, “behavioral” reasons why the permanent income hypothesis might not hold. For example, maybe people are myopic and don’t plan adequately for the future.

So how do we know if the permanent income hypothesis is a good guide in thinking about consumption decisions? Well, we have to go to the data.

Now suppose that you wanted to test the implications of the permanent income hypothesis. There are a number of ways you might do this. You might test the cross-sectional predictions of the model outlined by Milton Friedman. You might try to estimate consumption Euler equations with aggregate data. Or you might identify periods of time in which people experience a significant decline in income and see what happens to their consumption behavior.

The evidence testing Friedman’s predictions with cross-sectional data seems to support the permanent income hypothesis. Estimates of the consumption Euler equation do not. (However, as John Seater points out, this is likely due to problems with aggregation and not the theory itself since aggregation imposes pretty strong implicit assumptions about households.) Finally, the work focusing on periods of significant declines in income seems to show a corresponding significant decline in consumption. It is this last bit of evidence that I want to discuss in more detail.

If you notice that consumption declines significantly after people become unemployed or after they retire, this would seem to provide evidence against the permanent income hypothesis. The unemployed worker should be spending out of his savings or borrowing until he finds a new job. The retired person should have planned better for the future.

The problem with this assessment is that whether or not the permanent income hypothesis holds depends on consumption behavior. In reality, most of our data on consumption actually measures consumption expenditures. It is important to understand the difference.

To understand why it is important to distinguish between consumption and expenditures, consider the following example. Suppose that I am interested in food consumption. How should I measure food consumption? I could measure food consumption by how much I spend on food. I could also measure food consumption by the number of calories I eat. This might not seem like an important difference, but it can be quite important.

Imagine that I can eat the same exact meal at home as I can at a restaurant. If I eat it at the restaurant, then my expenditures are equal to the market price of the meal. If I eat it at home, my expenditures are the cost of the ingredients. The latter should be less than the former. In addition, the degree to which the latter are less than the former will depend on how much time I spend shopping for the lowest prices of those ingredients. Nonetheless, despite the difference in expenditures, my food consumption is the same (by definition, it’s the same meal).
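The wedge between the two measures can be made explicit with hypothetical numbers (all figures below are illustrative assumptions, not data):

```python
# The same meal, measured two ways (all figures hypothetical).
calories = 800  # consumption: identical regardless of where the meal is eaten

restaurant_price = 15.00         # expenditure if eaten out
ingredient_cost_quick = 6.00     # expenditure if cooked at home, no price shopping
ingredient_cost_searched = 4.50  # expenditure if cooked at home after hunting for deals

# Consumption is constant across all three cases...
consumption = [calories, calories, calories]

# ...but measured expenditures vary by a factor of more than three.
expenditures = [restaurant_price, ingredient_cost_quick, ingredient_cost_searched]

print(len(set(consumption)) == 1)             # True: identical consumption
print(max(expenditures) / min(expenditures))  # ~3.33: very different expenditures
```

A data set recording only expenditures would register a large decline across these three cases even though consumption, measured by food intake, never changes.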

So why is this distinction important?

Think about people who become unemployed or people who retire. What they have in common is that they have more time than they had when they were working. The opportunity cost of their time has fallen. As a result, those who are unemployed and those who are retired are likely to spend more of their time cooking than they would if they were working. They are also likely to spend more time searching for better prices on the ingredients to make their meal than they would if they were working. The result is that the individual will spend more time on what economists call “home production” (and therefore home consumption) while reducing market expenditures.

This is important for the following reason. There is a difference between expenditures and consumption. Expenditures are simply the subset of consumption that occurs in a market setting.

So how significant is this distinction?

It turns out that this distinction is quite important. Mark Aguiar and Erik Hurst have a paper in the Journal of Political Economy that uses a cool data set that consists of food diaries of U.S. households. What the paper shows is that neither the quality nor the quantity of food intake by retired households declines after retirement. In addition, they find that the food intake of unemployed workers does decline, but only as much as one would predict from the decline in permanent income typical of being displaced for some period of time from one’s job. In other words, if one considers the role of home production, then the evidence of a significant decline in expenditures following retirement or job displacement should not be interpreted as evidence against the permanent income hypothesis. Relying on expenditure data to measure consumption might cause one to incorrectly reject the permanent income hypothesis.

What does this mean for economists and their models?

First, as more and more micro-level data becomes available, it is important to consider whether one has the correct measure of the variable of interest before embarking on hypothesis testing. Second, this result seems to imply that if you are going to take a model to the data and you use standard measures of consumption expenditures, the model should include home production in the household decision. Otherwise, what the researcher is calling consumption and how consumption is calculated are not consistent.

We use a model in which media of exchange are essential to examine the role of liquidity and monetary policy on production and investment decisions in which time is an important element. Specifically, we consider the effects of monetary policy on the length of production time and entry and exit decisions for firms. We show that higher rates of inflation cause households to substitute away from money balances and increase the allocation of bonds in their portfolio thereby causing a decline in the real interest rate. The decline in the real interest rate causes the period of production to increase and the productivity thresholds for entry and exit to decline. This implies that when the real interest rate declines, prospective firms are more likely to enter the market and existing firms are more likely to stay in the market. Finally, we present reduced form empirical evidence consistent with the predictions of the model.

2. “An Evaluation of Friedman’s Monetary Instability Hypothesis,” Southern Economic Journal. This paper examines two elements of Milton Friedman’s work within the context of a relatively standard structural model. The first element is the idea that deviations between the money supply and money demand are a significant source of business cycle fluctuations. The second element is the idea that shocks to the money supply are much more empirically significant than shocks to money demand. Here is the abstract:

In this paper, I examine what I call Milton Friedman’s Monetary Instability Hypothesis. Drawing on Friedman’s work, I argue that there are two main components to this view. The first component is the idea that deviations between the public’s demand for money and the supply of money are an important source of economic fluctuations. The second component of this view is that these deviations are primarily caused by fluctuations in the supply of money rather than the demand for money. Each of these components can be tested independently. To do so, I estimate an otherwise standard New Keynesian model, amended to include a money demand function consistent with Friedman’s work and a money growth rule, for a period from 1875-1963. This structural model allows me to separately identify shocks to the money supply and shocks to money demand. I then use variance decompositions to assess the relative importance of shocks to the supply and demand for money. I find that shocks to the monetary base can account for up to 28% of the fluctuations in output whereas money demand shocks can account for less than 1% of such fluctuations. This provides support for Friedman’s view.

3. “Interest Rates and Investment Coordination Failures,” Review of Austrian Economics. This paper examines the role of interest rates in influencing both production time and entry decisions of firms. The paper therefore examines coordination problems similar to those emphasized in the Austrian business cycle theory and the business cycle theory of Fischer Black. I show that in low interest rate environments firms are more likely to preempt the entry of their competitors at lower levels of demand than when interest rates are high. When firms enter simultaneously at these levels of demand, it is a coordination failure. Low interest rates also produce changes in the length of production that are consistent with the ABCT. This provides some support for business cycle theories such as the ABCT, which have been criticized as violating the assumption of rational expectations.

The theory of capital developed by Bohm-Bawerk and Wicksell emphasized the roundabout nature of the production process. The basic insight is that production necessarily involves time. One element of the production process is to determine the period of production, or the length of time from the start of production to its completion. Bohm-Bawerk and Wicksell emphasized the role of the interest rate in determining the period of production. In this paper, I develop an option games model of the decision to invest. Two firms have an opportunity to enter a market, but production takes time. Firms face a two-dimensional decision. Along one dimension, they determine the period of production and the prospective profit therefrom. Along another dimension, they determine whether or not they want to enter the market given the amount of time it will take to start generating revenue from production. Within this option games approach, the period of production can be understood as an endogenous time-to-build and I argue that this framework provides a tool for evaluating the claims of Bohm-Bawerk and Wicksell against the backdrop of competition and uncertainty. I evaluate the period of production decision and the option to enter decision when the real interest rate changes. I show that investment coordination failures are more likely to occur at lower levels of profitability when real interest rates are low. I conclude by discussing the implications of low interest rates for boom-bust investment cycles.

One way to interpret Adam Smith’s Wealth of Nations is as a critique of and rebuttal to what he called the “mercantile system” or today what we would call mercantilism. One critique that Smith made in the book is that mercantilists had an incorrect notion of wealth. In Smith’s view, mercantilists confused money and wealth. According to Smith, this misconception led many mercantilists to see trade surpluses as desirable because it was a way to accumulate gold (money) and therefore make the country richer. As it turns out, this is likely a straw man of Smith’s own construction.

I have recently been reading Mercantilism Reimagined and Carl Wennerlind has an interesting chapter on 17th century views on money in England. Here are some highlights:

J.D. Gould’s work in the Journal of Economic History suggests that to understand the literature on money and trade during the 1620s, one needs to understand the circumstances in which the writers were writing. He argues that this writing must be understood in the context of a significant downturn in economic activity that was largely blamed on a shortage of money. It is unclear whether this was due to an undervalued sterling or incorrect mint ratios, but a trade surplus was seen as a way to correct this shortage. In other words, these writers were not advocating trade surpluses for their own sake, but rather to replenish the money stock.

Smith’s attacks were on these writers of the 1620s, but he either ignored or was ignorant of a literature that emerged in the 1640s and 1650s associated with a group known as the Hartlib circle.

Members of this group thought that the expansion of scientific knowledge would lead to permanent expansions in economic activity. This therefore required an expanding money supply to prevent deflation and other problems with insufficient liquidity.

At least two writers within the Hartlib circle denied that the value of money came from the commodity itself (recall that gold and silver were money at this time). Wennerlind quotes Sir Cheney Culpeper, for example, as writing that “Money it self is nothing else but a kind of securitie which men receive upon parting with their commodities, as a ground of hope or assurance that they shall be repayed in some other commoditie.”

Culpeper advocated for parliament to create a law that would allow a bill of credit to be transferred from one person to another rather than waiting for repayment.

Another Hartlibian, William Potter, had a much more ambitious proposal that called for tradesmen to set up a firm and print bills that could be borrowed with sufficient collateral. The tradesmen would agree to accept these bills in exchange for their production. At any time, a bill holder could request that it be redeemed. At that point, a bond would be issued that had to be paid by the borrower of the bill within 6 months. Since the bills were backed by collateral, the only threat to the ability to redeem a bill was a sudden decline in the value of the collateral — although Potter argued that insurance companies could be used to insure against such outcomes.

Wennerlind argues that both the Bank of England and the South Sea Company were the outgrowth of Hartlib ideas about money and credit.

The fundamental point here is that there was an influential group of individuals writing in the 1640s and 1650s who were either ignored by Adam Smith or whose existence he simply did not know of. Either way, the omission is important. One would hardly consider the views of the Hartlibians mercantilist. This group viewed scientific advancement as the key to economic prosperity, not trade surpluses and/or the accumulation of money. Culpeper, as evidenced by his quote, did not confuse money with wealth. His quote is consistent with a Kiyotaki-Wright model of money. Similarly, Potter clearly viewed credit and collateral as important for trade and prosperity (perhaps too much so; he predicted that under his plan the English would be 500,000 times wealthier in less than half a century — that’s quite the multiplier!).

In short, this raises questions about the prevalence of mercantilist views in the time before Adam Smith. The critique by Smith that previous writers confused money and wealth might simply be a straw man.

Throughout his career, Earl Thompson often argued that we needed a more Pascalian theory of political economy. His argument was based on the following quote from French mathematician Blaise Pascal: “The heart has its reasons of which reason knows nothing.”

Based on this idea, Thompson developed a theory of what he called “effective democracy.” The central idea behind effective democracy was a sort of “wisdom of crowds” argument. Namely, he argued that the collective decision-making that takes place through the electoral process is very often efficient – even in ways that are not immediately obvious to economists.

Economists who are reading this are likely already rolling their eyes at this idea. Economists tend to think of collective decision-making as difficult. When the social benefits of a particular good exceed the private benefits, the market will tend to under provide the good. When the social costs associated with a good exceed the private costs, markets will tend to overprovide the good. If individuals cannot be excluded from using a particular good or service, the good will tend to be under-provided or over-consumed. Principles of economics textbooks are filled with examples of these sorts of scenarios and the optimal policy response. Yet, when we look at the world, there are many instances in which democracies fail to adopt the appropriate policy responses.

Economists are also likely rolling their eyes because voters often have very different opinions on issues than economists. For example, economists tend to think that free trade is a net benefit to society. The general public is less inclined to believe that statement.

What made Thompson’s work interesting, however, is that he often argued that democracies tend to understand externalities and collective action problems better than economists realize. For example, he noted that we don’t see factories at the end of a neighborhood. Why not? Well, we typically don’t see factories at the end of a neighborhood because of zoning restrictions. But why zoning restrictions? Why not just have Pigouvian taxation to internalize the social costs? In general, economists don’t tend to advocate quantity regulations, so why do they occur?

What Thompson argued is that Pigouvian taxation is insufficient. A factory imposes a social cost beyond the private cost (a negative externality) because it creates pollution (and possibly even because it is not fun to look at). Given this additional social cost, standard economic theory would suggest imposing a welfare-improving Pigouvian tax on the factory. This would force the factory to internalize the cost associated with the pollution, thereby giving society the optimal amount of pollution. What Thompson pointed out is that this tax is inadequate. People in society might not just want to reduce pollution, they might want to limit their proximity to the pollution. A Pigouvian tax doesn’t solve this latter problem. To understand why, consider the following. Suppose there is a neighborhood that is not yet completed. Society imposes a Pigouvian tax to limit pollution. A company decides to open a factory in town and wants to put it in this near-complete neighborhood. The people who live in the neighborhood do not want the unsightly, noisy, smelly, polluting factory next to their homes. However, even if the Pigouvian tax bill would result in losses, the company has the incentive to purchase the land in the neighborhood and tell the neighborhood that it intends to build the factory unless the individuals in the neighborhood agree to buy the land back. As a result, democratic societies have adopted zoning restrictions to prevent factories from being built in neighborhoods. (As anecdotal evidence, something similar happened in my own neighborhood in Mississippi, where in certain parts of the state the word “zoning” is considered profane. So perhaps effective democracy hasn’t yet reached Mississippi.)
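The hold-up logic behind this argument can be sketched with payoffs. All of the numbers below are my own illustrative assumptions, not figures from Thompson’s work:

```python
# Illustrative payoffs for the factory hold-up problem (all figures hypothetical).
land_price = 100.0         # what the firm pays for the neighborhood lot
factory_profit = -20.0     # firm's profit from operating the factory AFTER the Pigouvian tax
neighborhood_harm = 500.0  # residents' disutility from living next to the factory

# The Pigouvian tax makes building unprofitable, yet the threat still has teeth:
# the firm prefers selling the land back at price B whenever B - land_price > factory_profit.
firm_reservation = land_price + factory_profit  # firm sells for any B above 80

# Residents will pay any buyback price below the harm they thereby avoid.
residents_reservation = neighborhood_harm       # residents pay any B below 500

# Any price in (80, 500) is mutually agreeable; e.g., splitting the difference:
buyback_price = (firm_reservation + residents_reservation) / 2
firm_gain = buyback_price - land_price

print(buyback_price)  # 290.0
print(firm_gain)      # 190.0 -- the firm profits from a threat it would lose money carrying out
```

The point of the sketch is that the tax fixes the pollution margin but not the proximity margin: as long as residents’ harm exceeds the firm’s loss from building, the threat to build extracts a transfer, and zoning removes the threat entirely.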

Thompson had many other examples of what he called effective democratic institutions. He argued, for example, that the lives of individuals tend to produce positive externalities for their friends and family and that this can explain why we subsidize health insurance and have costly safety regulations in the workplace, that the Interstate Commerce Act of 1887 was an efficient democratic response to the transaction costs of complex state regulations and corresponding local lawsuits for firms (especially railroads), and that Workmen’s Compensation Laws were democratically efficient responses to the significant transaction costs associated with the slew of private lawsuits brought by workers against firms.

Whether or not one accepts Thompson’s arguments, they are unique in the sense that they provide efficiency-based arguments for policies that, in general, economists see as inefficient. It is easy to follow Thompson’s intellectual development. He first began by developing his theory of effective democracy. His theory was motivated by the Pascal quote above. Namely, that democracies tend to produce efficient policies even if the constituents of that democracy have a hard time articulating why the policies are efficient. He then went out in search of empirical evidence that supported his view. In doing so, he would examine policies that economists often considered inefficient and he would try to understand why an effective democracy would adopt such a policy. In other words, he would ask: what characteristics would have to exist in order for an economist to consider the policy efficient? This is in sharp contrast to the typical way that economists examine policy, which is by starting with a basic model and determining whether the policy is efficient within that model.

I am writing about this because I believe that there is a critical element to Thompson’s analysis that should be incorporated into political economy – regardless of whether one believes that Thompson’s effective democracy theory is correct. The critical element is the presumption that there is some underlying reason that a particular policy emerged and that the policy might be an efficient democratic response. In other words, the working assumption when any policy or institution is analyzed is that the policy or institution was designed as an efficient response to some problem. Note that this doesn’t mean that economists should always conclude that the policies and institutions are efficient. The tools used by economists are the precise tools needed to determine whether something is indeed an efficient response to the problem. Thus, rather than start with a generic standard model and consider whether the policy is efficient in that context, perhaps economists should ask themselves: what would have to be true for this policy to be considered a constrained efficient response? In some cases this will be difficult to do – and that in and of itself might indicate the inefficiency of the policy. Other times, however, certain conditions might emerge that could justify a particular policy. These conditions would then generate testable hypotheses.

A Pascalian approach would hopefully lead to more humility among economists. For example, the minimum wage is a very popular policy despite the standard economic arguments against it. But why does the minimum wage exist? Even if one believes that the disemployment effects are small enough for the benefits to exceed the costs, this still raises the question of why the minimum wage is chosen over other attempts to help low-wage workers, such as the Earned Income Tax Credit. Economists typically explain away the existence of the minimum wage as a way for politicians to signal that they care about low-wage workers without bearing the cost. But this argument is rather weak. If there is a better alternative, wouldn’t the public eventually realize this? At the very least, wouldn’t the signal sent by the politician eventually be seen for exactly what it is? All too often, economists simply conclude that the general public just needs to learn more economics (how convenient a conclusion for economists to reach). My brief sketch of a theory of why the minimum wage exists (here) was an attempt to approach the topic from this Pascalian perspective.

Most recently, a seeming majority of economists (as well as financial and political pundits) expressed absolute shock at the decision of U.K. voters to leave the European Union. As a result, many have concluded that those who voted to leave did so because they don’t understand the costs (again, the argument is that the dullards just need to learn economics). Others have concluded that the decision to leave is just a manifestation of xenophobia. But perhaps economists are wrong about the costs associated with leaving. Or perhaps economists have miscalculated the long-run viability of the European experiment. Or perhaps individuals place values on things that are often left out of standard cost-benefit analysis because they’re hard to measure or hard to identify. Of course it is also possible that those who supported the decision to leave are indeed economically ignorant bigots. But even if this is the case, shouldn’t we fall back on this conclusion only after all other possible explanations have been exhausted?

A Pascalian view of political economy takes as given that we have imperfect knowledge of the complex nature of economic and social interactions. Studying the emergence of policies and institutions under the presumption that they were designed to efficiently deal with a particular problem forces economists to think hard about why the policies and institutions exist. And the tools at any economist’s disposal are up to the task.

Rather than seeing ourselves as the wise elders passing down advice and judgment to those who fail to understand price theory, let’s be humble. Let’s take our craft seriously. And let’s realize that we might be somewhat ignorant of the complex nature through which democracies create policies and institutions.

A paper that I wrote with Alexander Salter entitled, “A Theory of Why the Ruthless Revolt” is now forthcoming in Economics & Politics. Here is the abstract:

We examine whether ruthless members of society are more likely to revolt against an existing government. The decision of whether to participate can be analyzed in the same way as the decision to exercise an option. We consider this decision when there are two groups in society: the ruthless and average citizens. We assume that the ruthless differ from the average citizens because they invest in fighting technology and therefore face a lower cost of participation. The participation decision then captures two important (and conflicting) incentives. The first is that, since participation is costly, there is value in waiting to participate. The second is that there is value in being the first-mover and capturing a greater share of the “spoils of war” if the revolution is successful. Our model generates the following implications. First, since participation is costly, there is some positive threshold for the net benefit. Second, if the ruthless do not have a significant cost advantage, then one cannot predict, a priori, that the ruthless lead the revolt. Third, when the ruthless have a significant cost advantage, they have a lower threshold and always enter the conflict first. Finally, existing regimes can delay revolution among one or both groups by increasing the cost of participation.

There has been much recent discussion within the econo-blogosphere about the usefulness (or lack thereof) of “Econ 101.” This discussion seems to have started with Noah Smith’s Bloomberg column, in which he suggests that most of what you learn in Econ 101 is wrong. Mark Thoma then took this a bit further and argued that the problem with Econ 101 is ideological. In particular, Thoma argues that Econ 101 has a conservative bias. Both of these arguments rely on either a mischaracterization of Econ 101 or a really poor teaching of the subject.

Noah Smith’s dislike of Econ 101 seems to come from the discussion of the minimum wage. His basic argument is that Econ 101 says that the minimum wage increases unemployment. However, he argues that

That’s theory. Reality, it turns out, is very different. In the last two decades, empirical economists have looked at a large number of minimum wage hikes, and concluded that in most cases, the immediate effect on employment is very small.

This is a bizarre argument in a number of respects. First, Noah seems to move the goal posts. The theory is wrong because the magnitude of these effects is small? The prediction is about direction, not magnitude. Second, David Neumark and William Wascher’s survey of the literature suggests that there are indeed disemployment effects associated with the minimum wage and that these results are strongest when researchers have examined low-skilled workers.

Setting the evidence aside, suppose that Noah is correct and the discussion of the minimum wage in Econ 101 is empirically invalid. Even in this case, the idea that Econ 101 is fundamentally flawed is without basis. When I teach students about price controls, I am careful to note the difference between positive and normative statements. For example, many students tend to see price controls as a “bad” thing. When I teach students about price controls, however, I am quick to point out that saying something is “bad” is a normative statement. In other words, “bad” implies that things should be different. What “should be” is normative. The only positive (“what is”) statement that we can make about price controls is that they reduce efficiency. Whether or not this is a good or a bad thing depends on factors that are beyond an Econ 101 course — and I provide some examples of these factors.

Further, emphasizing the effects on efficiency and the difference between positive and normative statements gives students a more complete picture of both the effects of price controls as well as why they might exist. In fact, it is precisely this lesson about efficiency and allocation that is an essential part of what students should learn in Econ 101.

For example, it is common for economists to discuss rent control when they discuss price ceilings. When societies put a binding maximum price on rent, this creates excess demand. However, one would not test whether this is a useful description of reality by examining the effects of rent control on homelessness. On the contrary, economists emphasize that in the absence of the price mechanism, other allocation mechanisms must substitute for price. Non-price rationing comes in a variety of forms: quality reduction, nepotism, discrimination, etc.

Similar arguments can be made for the minimum wage. For example, the basic point is that the minimum wage creates a scenario in which the quantity of labor demanded is less than the quantity supplied. The ultimate outcome could come in a variety of forms. This could lead to the standard account of higher unemployment. Alternatively, it could simply cause a reduction in hours worked. Finally, in the case in which the firm faces some constraint on employment (at least in the short to medium term), there is another way that firms can adjust. Standard Econ 101, for example, suggests that the nominal wage should equal the marginal revenue product of the worker. If the nominal wage is forced higher, there are two ways to restore that equality: reduce labor (which raises the marginal product) or increase the price of the product.
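These adjustment margins can be made concrete with a stylized competitive-firm example. The sketch below assumes a Cobb-Douglas production function and parameter values chosen purely for illustration:

```python
# Stylized firm: output = A * L**alpha, sold at price p (all parameters hypothetical).
A, alpha, p = 1.0, 0.5, 10.0

def labor_demand(wage, price=p):
    # The firm hires until wage = marginal revenue product:
    #   w = price * A * alpha * L**(alpha - 1)
    # Solving for L gives the labor demand curve.
    return (wage / (price * A * alpha)) ** (1.0 / (alpha - 1.0))

# Margin 1: hold the product price fixed and shed labor.
print(round(labor_demand(1.00), 6))  # 25.0 workers at a wage of 1.00
print(round(labor_demand(1.25), 6))  # 16.0 workers after a 25% increase in the wage floor

# Margin 2: hold employment fixed and raise the product price instead.
L = 25.0
price_needed = 1.25 / (A * alpha * L ** (alpha - 1.0))
print(round(price_needed, 6))  # 12.5 -- a 25% price increase restores w = MRP at old employment
```

Either margin restores the equality between the wage and the marginal revenue product; which one we observe in practice is exactly the empirical question the surrounding text describes.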

The value of Econ 101 is the very process of thinking through these possible effects. What effect we actually observe is an empirical question, but it is of secondary importance to teaching students how to logically think through these sorts of examples.

Noah’s view of Econ 101, however, seems to come from his belief that economists want Econ 101 to be as simple as possible. And his argument is that this is misguided because simple often dispenses with the important. Mark Thoma, on the other hand, makes the argument that Econ 101 has a conservative bias:

The conservative bias in economics begins with the baseline theoretical model, what is often called “Economics 101.” This model of perfect competition describes a world that agrees with Republican ideology. In this model, there is no role for government intervention in the economy beyond setting the institutional structure for free markets to operate. There is nothing government can do to improve the ability of market to provide the goods and services people desire at the lowest possible price, or to help markets respond to shocks.

I think this is both wrong about Econ 101 and a strange view of conservatism.

First, I am not a conservative. However, it seems to me that many conservatives like government intervention. A number of conservatives think that child tax credits are a good idea and that marriage should be encouraged through subsidization. For these sorts of things to be justified on economic grounds requires that they believe that children and marriage generate positive externalities for society. While it is true that Republicans have been particularly obstructionist, Republican does not equal conservative. In addition, obstructionism might not have as much to do with economic beliefs as it does with political motivations about who gets the credit, the lobbying of special interest groups, the desire to imperil the image of the competing party, etc. — regardless of the rhetoric.

Which brings me to my second point. If you are a student who only learned the perfectly competitive model in Econ 101, then you should politely ask for a refund. Econ 101 routinely includes the discussion of externalities, public goods, monopoly, oligopoly, etc. All of these topics address issues that the competitive market model is ill-equipped to explain. And it is hard to argue that any of these topics have any sort of ideological bias.