They use the same shady economic methodology to promote their policies.

If you follow the news, you’re familiar with “IMPLAN”, albeit indirectly. It’s the software package underlying the studies that pro sports teams, among others clamoring for public favors, use to claim that each new stadium will generate several gazillion dollars for the local economy—supposedly justifying a massive public outlay. Here’s a study using IMPLAN to justify a new Sacramento Kings stadium; here’s another that looks at the proposed Santa Clara stadium for the 49ers and another that attempts to justify a new stadium for the A’s. There are studies looking at the impact of the Mavericks’ American Airlines Center, the Packers’ Lambeau Field, and Oriole Park. And, of course, there are countless others: whenever someone wants to make preposterous claims about the benefits of his pet project, he’ll inevitably turn to IMPLAN or a similar package.

Artist's rendition of the new 49ers stadium proposed for Santa Clara, whose economic impact has been studied using the same highly reliable methodology now applied by Rick Perry's campaign.

There’s an obvious element of pseudoscience to these studies. They use “input-output” models that painstakingly track the path of spending through the economy—a worthy goal, though perhaps an overambitious one. But they fail entirely to model the supply side of the economy, effectively assuming that there is unlimited capacity, and that each additional dollar of “spending” (magically generated by the new stadium) will become an additional dollar of economic activity—even more, in fact, after you account for the multiplier.

Strangely enough, Rick Perry’s campaign is using the same model to analyze his tax plan, in a context where it makes even less sense.

As James Pethokoukis explains, the Rick Perry presidential campaign has contracted with John Dunham and Associates to run a revenue analysis of Perry’s new tax plan. The impact of the plan depends on your choice of baseline policy: it raises $4.7 trillion less than the CBO baseline for 2014-2020 under conventional, static scoring, and $1.7 trillion less under “dynamic scoring”. Relative to the CBO’s arguably more realistic alternative baseline, the plan does better. But regardless of your preferred baseline, it’s clear that the plausibility of Dunham’s “dynamic scoring” model is key: it provides an additional $3 trillion over only seven years!

It’s troubling, then, to learn that the Perry campaign’s idea of “dynamic scoring” bears absolutely no relation to what most economists mean by the term. In fact, the Dunham model more closely resembles the shady estimates for the 49ers stadium than any accepted methodology in public finance.

The idea behind dynamic scoring—as economists generally understand the concept—is that we should account for how the incentives created by the tax system affect the economy, and how those effects might feed back into revenue. An income tax cut, for instance, might lead to higher taxable income and a new stream of tax revenue—though certainly not by enough to fully offset the initial revenue loss, as Art Laffer once claimed. In theory, capital tax cuts may lead to even larger offsetting movements in revenue, though still not enough to recover the loss completely. Greg Mankiw and Matthew Weinzierl provide a short guide here.

Dynamic scoring is controversial: many Democrats believe that in practice it’s a gimmick that obscures the revenue losses from tax cuts. But in principle, it’s hard to deny that dynamic scoring would be the ideal way to evaluate the effects of tax policy: taxes do have real effects, and those effects eventually find their way back into the tax base. The challenge is that the relevant magnitudes are extremely uncertain, and it’s hard to calibrate a model that realistically accounts for the effects of new policy. Moreover, if the overall effect on revenue is negative, to be complete you need a model of how other tax and spending policies will eventually adjust to close the additional deficit—a very difficult task indeed, one that practitioners usually ignore. Some proceed nevertheless; some think it’s better to avoid the issue until we have more accurate models.

But none of this matters to the Perry analysis, because it’s completely unrelated. It doesn’t look at the effect of taxes on incentives at all. Instead, it simply feeds the increased personal income from tax cuts into the IMPLAN model and churns out the same kinds of estimates we typically see for sports stadiums. If you think I’m kidding, read the document:

In order to better understand the effects of the Perry tax proposal on the national economy, a dynamic scoring exercise was conducted by JDA. JDA used an input-output model of the US economy to estimate the true revenue effect of personal income tax proposals, including the feedback effects of taxes on national income.

The dynamic analysis used in this model was based on tax savings (or tax increases) for various income groups in each of the 7 years between 2014 and 2020. These savings were run through the IMPLAN input-output model as increases to income for each group and the resulting change in GDP was fed back through the model for subsequent years. This led to higher GDP growth estimates for each year beginning with 2014 (see Table 6). Based on this analysis, GDP is expected to grow faster than forecast by the CBO, reaching $26.5 trillion by FY 2020 – a 16 percent increase.

Needless to say, this description is a little hazy, but it’s pretty clear what’s going on: they look at tax savings as increases in “income” to each group and feed them through the IMPLAN input-output model, which vastly multiplies the initial impulse and leaves us with an utterly implausible estimate for improvement in GDP. (Sixteen percent? Are they kidding?) There’s no recognition that long-run output is determined by supply constraints, not demand; in fact, this is a completely demand-side analysis trying to pass itself off as supply-side dynamic scoring. Rather bizarre for a Republican candidate, particularly one as hostile to demand-side policy as Rick Perry!
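To see concretely why this method inflates estimates so badly, here is a toy sketch of the feedback loop the description implies. Every number and parameter here (the marginal propensity to consume, the multiplier structure, the size of the tax savings) is my own hypothetical illustration—this is not JDA’s actual model, just the naive mechanics their description suggests:

```python
# Toy illustration of a demand-side feedback loop with no supply constraint.
# All parameters are hypothetical; this is NOT JDA's actual model.

def naive_dynamic_gdp(baseline_gdp, annual_tax_savings, mpc=0.8, years=7):
    """Feed tax savings into spending each year, recycling the resulting
    GDP gain into the next year's 'income' -- with nothing to cap it."""
    gdp = baseline_gdp
    path = []
    for _ in range(years):
        impulse = annual_tax_savings * mpc
        # Simple Keynesian spending multiplier: 1 / (1 - mpc)
        gdp += impulse / (1 - mpc)
        path.append(gdp)
    return path

# Hypothetical: $15 trillion baseline GDP, $0.5 trillion in annual tax savings
path = naive_dynamic_gdp(baseline_gdp=15.0, annual_tax_savings=0.5)
print(path[-1])  # GDP balloons year after year; nothing ever caps it
```

Because the model never checks the “extra income” against the economy’s productive capacity, the implied GDP gain grows without bound in the number of years you run it.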

Now, to be clear, there is a place for demand modeling and multipliers like those in the IMPLAN model: when we’re in a demand-constrained recession and monetary policy has reached its limits, tax cuts may provide economic stimulus by boosting aggregate demand, not just improving supply-side incentives. (Though there’s a debate about that.) But this is explicitly a short-to-medium term phenomenon, one that only matters (if at all) in a zero lower bound recession. No one—not even the most fanatical Keynesian—claims that such multipliers provide a foundation for long-term analysis of public finance. And certainly no one is crazy enough to think that the demand-side effects of a tax cut can boost GDP by sixteen percent, as the Perry analysis claims.

In fact, this model makes even less sense in the context of federal tax policy than in its usual, already dubious applications. When we’re looking at a stadium, at least we’re confining ourselves to a particular region: consumers flocking to a stadium can’t boost the productive capacity of the US economy as a whole, but they might encourage labor and capital to relocate around the stadium, delivering economic expansion to the region in question. But this doesn’t apply to the US as a whole: we only have so much labor and capital. Granted, if the model looked at the supply side of Perry’s plan, it might demonstrate how improved incentives lead to an expansion in labor and capital supply, thus increasing potential economic output. That is, however, what the model explicitly does not do: it ignores supply considerations completely, instead assuming that supply constraints are irrelevant and that the income from tax cuts will forever ripple throughout the economy and prompt a demand-led expansion that would put the Clinton era to shame.

I never thought I’d see the day when I had to lecture a Republican presidential candidate on the importance of supply-side analysis, or the dangers of overexuberant demand-side logic. Apparently that day has come!

The truth, of course, is that neither Rick Perry nor his staff have any idea of the analysis behind their numbers. Instead, they hired a consulting firm that specializes in using IMPLAN to create exaggerated estimates for the effect of particular industries (“Meat! Responsible for 5 trillion jobs!”) in order to please its lobbyist clients. The firm evidently knows nothing about tax analysis; it has no credentialed public finance economists on its staff and no experience in analyzing tax policy. When asked to conduct a study, it turned to the only game it knew: IMPLAN, which just happens to be an absurd way to analyze national fiscal policy.

But hey—cut them some slack! It’s not like they’re evaluating the key economic proposal from a major presidential candidate or anything.

Nice to see a good take-down of the IMPLAN modelling approach. Those of us who do sports economics and urban economics seriously are almost constantly having to push back against those kinds of studies. The single most disturbing aspect of the IMPLAN model for local economic analysis is the wildly unreasonable values that it assigns to multiplier effects (compared, for example, with the BEA’s Regional Input-Output Modeling System). IMPLAN is exactly what you describe it as, a “model” designed to generate large impact numbers to please a client who wants to lobby someone.


I’ve received some skeptical feedback on my last post about how money is just another form of debt, particularly its implications for the effectiveness of a “helicopter drop”. This topic deserves more attention: for reasons I don’t understand, some very smart observers regard the helicopter drop as one of monetary policy’s most potent tools.

What’s wrong with these claims? First, let’s be precise: there are two ways to do a helicopter drop.

Option one: the Treasury and Fed coordinate. The Treasury uses bonds to raise money for a tax rebate, while the Fed immediately buys those bonds. This is just a fiscal transfer plus an open-market operation. Is either component particularly effective? Certainly the open-market operation doesn’t do much: in the current environment, exchanging reserves for short-maturity T-bills is meaningless. Trading reserves for longer-maturity Treasury securities, as in QE2, probably has a minor effect, but the Treasury could achieve the same effect by issuing short-maturity debt itself. Adding the Fed to the picture accomplishes nothing.

The case for this type of helicopter drop, then, is really no different from the case for traditional fiscal transfers during a recession—the Fed’s participation is irrelevant and unnecessary.

To be fair, depending on the Fed’s long-term objectives, there may be a monetary side to the policy. As Gauti Eggertsson once argued, a large debt load can serve as a useful commitment device to generate expectations of future inflation. If the Fed cares about the government’s overall budget, it may be tempted to tolerate inflation to eat away at the real value of the debt—and if everyone expects more inflation, the liquidity trap becomes less severe.

But there are also plenty of caveats. First, since you need a fiscal transfer large enough to materially affect the government’s long-term budget, the scope of the transfer must be enormous. When the long-term budget picture is already so questionable, it’s far from clear that this is a wise choice. Second, there’s little evidence that the Fed sets policy with the Treasury’s debt problems in mind. In practice, the Fed seems dedicated to pursuing its interpretation of the statutory mandate for price stability and full employment. No one at an FOMC meeting has ever suggested inflating away the debt, or even anything close.

And regardless, the Fed’s direct participation still doesn’t matter: trading T-bills for reserves when the policy is enacted has nothing to do with the longer-term decision to tolerate a higher level of inflation.

In the alternative kind of helicopter drop, the Treasury doesn’t issue any new debt: instead, the Fed somehow directly distributes money to households without obtaining any assets in return. This creates a hole in the Fed’s balance sheet, which has traditionally held assets (Treasuries, MBS, etc.) to back its liabilities (money). What happens then? If the hole is small enough, very little: the Fed will simply recapitalize using the profits it otherwise remits to the Treasury. Over time, Treasury will need to issue slightly more debt (since it’s receiving less money from the Fed), and in effect the transfer will turn out to be debt-financed. This is really no different from the first scenario.

What if the hole is large enough that it’s not clear the Fed can patch it using profits—in other words, if there’s a risk that the Fed is actually insolvent? This is murkier territory. First, it’s not clear that the Fed can ever really go broke: as Tyler Cowen points out, it always has the option to print a bunch of money and buy something valuable. Printing a few trillion and stocking up on equities (or even high-yield debt) will probably do the trick. Alternatively, in a crisis it can run to Congress. Both these possibilities seem far more likely than the notion that an undercapitalized Fed will somehow be forced to allow higher inflation.

In any case, a direct helicopter drop by the Fed only affects the future path of monetary policy if it puts the Fed’s balance sheet in peril. Otherwise, there’s no reason to think that the FOMC will make decisions any differently. (If it’s not externally constrained, why stop targeting its mandate?) And this is a very dangerous game to play: if you deliberately sabotage the central bank, it’s hard to know what will happen.

Ultimately, I agree with Scott Sumner: it’s bizarre to use a helicopter drop to create inflation expectations when you haven’t tried the much easier route of saying you want inflation. This is doubly strange when you realize that a helicopter drop only “works” if it irresponsibly endangers the Fed’s balance sheet. If you want inflation, say it. If you want to write more checks to households, tell the Treasury to do it. The “helicopter drop” is just a strange mishmash of fiscal and monetary policy that adds nothing.

It’s useful to think about the precise difference between the “New Keynesian” (Woodford) and “New Monetarist” (Williamson) effects of monetary policy. In New Keynesian models, monetary policy is important primarily because it affects the real interest rate, shaping patterns of consumption and investment across time. If interest rates are high, it’s expensive to buy a car or build a factory, and even ordinary consumption becomes less attractive compared to the return from saving. If interest rates are too high, consumers spend less than the economy can produce, and we see a recession. A good example of this type of model is in Eggertsson and Woodford’s classic piece on the liquidity trap.

In New Monetarist models, monetary policy is important for a completely different reason. Certain kinds of “decentralized” consumption require you to hold money balances; by manipulating the relative rate of return on money, the Fed makes this consumption more or less attractive. When money is expensive, buyers consume less in “decentralized” markets, and we see a downturn. A good example of this type of model is in Williamson’s forthcoming AER article, which builds upon the canonical piece by Lagos and Wright.

It’s silly to deny the existence of either effect: in a qualitative sense, they both hold. But it’s also important to know the magnitudes, and that’s where my skepticism comes in.

Consider the traditional instrument of monetary policy: the federal funds rate. Suppose that the Fed announces a surprise rate hike of 1%, to last exactly one year. What happens?

First of all, even though the Fed only controls the nominal rate, we can expect to see a real rate increase of at least 1% as well. Why? Since the rate increase will last only one year, the Fed’s long-term policy rule remains the same, and long-term inflation expectations stay anchored at roughly the same level. The only way that we can get a real rate increase of less than 1%, then, is if there’s a temporary uptick in inflation—and in virtually any model, that’s associated with an expansion, not a contraction. Certainly this won’t result from an increase in the policy rate. At best, inflation will stay roughly constant; at worst, it’ll fall.

A one year real rate increase of 1% means that all consumption or investment today becomes 1% more expensive, relative to consumption or investment a year from now (which is pinned down by the same fundamentals that existed before the shock). The effect won’t be even across the economy: durable goods will fall much more than, say, food. But to summarize the situation, we can say that all $15 trillion of GDP become 1% more expensive relative to next year. This is the New Keynesian effect.

The New Monetarist effect, on the other hand, is driven by the nominal interest rate. A 1% increase in the federal funds rate pushes down the annual yield of non-interest-paying base money, relative to other forms of liquidity, by 1%. Right now, paper currency is the only type of base money that doesn’t pay interest, and there’s a total of $1 trillion in circulation. Taking the midpoint of various estimates, let’s say that roughly half of that is held in the United States. Furthermore, of that $500 billion, let’s be exceedingly generous and say that all of it is being used to facilitate legitimate economic activity. Then a 1% increase in the federal funds rate increases the implicit cost of holding this money by $5 billion. This is the New Monetarist effect.

You don’t need to be an economist to see that in our calculations, the New Keynesian effect is vastly larger than the New Monetarist effect: ($15 trillion)*1% = $150 billion versus ($500 billion)*1% = $5 billion. That’s a factor of 30!
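The arithmetic behind that factor of 30 is simple enough to check in a few lines, using the figures from the text:

```python
# Back-of-the-envelope comparison from the text (figures in billions of dollars).
gdp = 15_000              # US GDP: $15 trillion
us_held_currency = 500    # non-interest-paying base money held in the US
rate_hike = 0.01          # surprise 1% rate increase, lasting one year

# NK effect: all of today's GDP becomes 1% dearer relative to next year
new_keynesian_effect = gdp * rate_hike
# NM effect: the implicit cost of holding US-held currency rises by 1%
new_monetarist_effect = us_held_currency * rate_hike

print(new_keynesian_effect)   # 150.0 -> $150 billion
print(new_monetarist_effect)  # 5.0   -> $5 billion
print(new_keynesian_effect / new_monetarist_effect)  # 30.0
```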

Granted, these back-of-the-envelope calculations don’t explicitly tell us what the economic effect will be. That’s a much more complicated calculation, and it depends on the precise mechanics of the model. But they certainly are suggestive, and unless the New Monetarists have some trick up their sleeves, it’s awfully hard to see how the New Monetarist effect can be nearly as important as the New Keynesian effect at business-cycle frequencies.

In fact, many obvious modifications of the model only widen the spread between New Keynesian and New Monetarist effects. Some sectors of the economy are much more responsive to costs (including interest rates) than others. When the Fed raises rates, we expect a large decline in the demand for cars, but a much smaller change (if any) in the demand for food at home. That’s because cars are durable goods that provide value over time—their exact date of production isn’t so important, and it can easily be shifted around in response to the cost of capital. Consumers’ ability to “intertemporally substitute” basic sustenance, on the other hand, is virtually nil.

But where is the New Monetarist effect least relevant? In precisely the cases where spending is most flexible: fixed investment, large durable goods purchases, and other transactions that are virtually never made with cash. Our comparison above, therefore, actually exaggerates the extent to which the New Monetarist effect makes a difference.

The gap also becomes wider if we alter our thought experiment. Let’s say that the 1% rate hike lasts for two years rather than one. Then the New Keynesian effect becomes almost twice as large: all else equal, the economy two years from now remains pinned down by fundamentals, and consumption and investment today is now 2% more expensive relative to then. The New Monetarist effect, on the other hand, remains the same: it depends on the current cost of holding money, not the full path of interest rates.
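A quick sketch makes this asymmetry explicit (same hypothetical figures as before, in billions): the New Keynesian effect scales with the length of the hike, while the New Monetarist effect depends only on the current rate:

```python
# How the two effects scale with the duration of a 1% rate hike (billions).
gdp = 15_000              # US GDP: $15 trillion
us_held_currency = 500    # non-interest-paying base money held in the US
rate_hike = 0.01

for years in (1, 2):
    # NK effect: spending today is (rate_hike * years)% dearer than at the
    # horizon, which is pinned down by unchanged fundamentals
    nk = gdp * rate_hike * years
    # NM effect: only the current cost of holding money matters
    nm = us_held_currency * rate_hike
    print(years, nk, nm)
```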

Of course, a few caveats are in order. My analysis above is concerned with the Fed’s impact on the business cycle: the short term, not the long term. As we stretch our time horizon, the Fed’s ability to impact the economy via the New Keynesian effect diminishes, while the New Monetarist effect remains roughly the same. If we’re talking about the federal funds rate, I still don’t think that the New Monetarist effect is very important, for essentially the same reasons that I’ve dismissed the Friedman rule. Nevertheless, the comparison in this post doesn’t apply.

More importantly, I’ve limited the analysis in this post to conventional monetary policy—changes in the federal funds rate. As we’ve seen over the past several years, this is not the only form of monetary policy. Moreover, a great deal of liquidity creation takes place outside the Fed: at banks, or the Treasury. Modelling this activity is potentially very important, and I think that the New Monetarist program is quite promising when viewed in the right light.

But when it’s applied to changes in the central bank’s interest rate—which is still the channel for monetary policy in the vast majority of countries, the vast majority of the time—New Monetarism simply doesn’t offer much. As even the most casual back-of-the-envelope calculation will convincingly demonstrate, the New Monetarist effect is tiny compared to the New Keynesian effect, and there’s no reason to think that it’s essential for understanding the implications of monetary policy.


Following Thomas Sargent’s recent Nobel Prize, I came across this excellent excerpt from a 1989 interview:

The essential job of the Fed from a macroeconomic point of view is to manage the government’s portfolio of debts. That’s all it does. It doesn’t have the power to tax. The Fed is like a portfolio manager who manages a portfolio made up wholly of debts—it determines how much of its portfolio is in the form of money, which doesn’t cost the government any interest, how much is in the form of T-bills and how much is in 30-year bonds. The Fed continually manages this portfolio. But it doesn’t determine the size.

Exactly. The Fed can trade money for bonds, but this doesn’t change the overall level of government debt—just its composition.

This is important for understanding the fallacy in common arguments for “helicopter drops”. You often hear people saying roughly the following:

The Fed has the power to create money and hand it to consumers, stimulating the economy. Normally, the problem with this policy would be inflation, but clearly the dominant risk today is deflation, not inflation—so what’s the downside?

I agree that inflation is low on the list of important risks, but this is nevertheless a deeply flawed argument. Holding more debt in the form of money now is not an inflation risk, but the money doesn’t magically disappear after a few years. It’s still out there, and it’s still on the Fed’s balance sheet. It’s still debt.

Suppose that the Fed creates $1 trillion out of thin air and sends every American an equal share. For a while, this will be fine. Assuming that the intervention doesn’t drastically change the demand for currency, the new money will be held mainly in the form of reserves. The Fed will pay 0.25% on these reserves—not a big deal. So what’s the problem?

Again, the money doesn’t go away—the Fed still has an enormous liability on its books. With so much money in circulation, the marginal investor won’t be willing to pay a premium to hold money. The federal funds rate will therefore be roughly the same as the interest rate paid on reserves, which will also be roughly equal to the rate the Treasury pays on T-bills. In other words, holding debt as money won’t be any cheaper than simply holding it in the form of short-term bonds. The government can’t escape the cost of financing its liabilities. Giving money to households via a helicopter drop is fundamentally the same as giving them money via an act of Congress, with all the usual benefits (improved aggregate demand in the short term) and costs (burdensome future taxation to pay back the debt).

Admittedly, it’s possible that the increase in debt load will cajole the Fed into pursuing easier monetary policy in the future. The more nominal debt you’re trying to finance, the more tempting it is to push down interest rates and spur inflation. (In fact, Sargent and Wallace’s famous paper deals with an extreme case of this phenomenon, where the central bank is forced to make up for the fiscal authority’s inadequacies.) If you’re trying to create expectations of future inflation, this is arguably a good thing.

But it’s not clear why a helicopter drop should provoke such a change in incentives, unless it’s of truly overwhelming magnitude. Excluding intragovernmental holdings, the public debt is currently over $10 trillion. Even a $1 trillion helicopter drop would only add 10%. Is a 10% increase in debt enough to dramatically change the Fed’s incentives in the future? It’s possible, but I’m skeptical. Historically, we’ve seen much larger swings in the government’s balance sheet, and the Fed’s response has been minimal at best.
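The arithmetic here is trivial, but worth making explicit (figures in trillions, as in the text):

```python
# A helicopter drop as a share of outstanding public debt (trillions of dollars).
public_debt = 10.0       # debt held by the public, excluding intragovernmental
helicopter_drop = 1.0    # a $1 trillion drop -- already an enormous intervention

increase = helicopter_drop / public_debt
print(f"{increase:.0%}")  # 10%
```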

Bottom line? Money is a form of debt. Whether an operation’s short-term financing comes from “bonds” or “money” makes no difference; the cost is the same, and the usual tradeoffs of fiscal policy remain.


In some cases, maybe. But not in understanding the effect of conventional monetary policy, at least to any significant degree.

First things first: To be clear about these issues, we need to be specific about the type of “money” and “frictions” we’re discussing. There are, after all, many assets that are sometimes labeled “money”. First, there’s “base money”, the paper currency and electronic reserves issued by the Fed. Then there are all the different kinds of “money” created by banks, both traditional and shadow: transactions accounts, savings accounts, money market funds, repo, and so on. And if that’s not enough, the Treasury creates “money” too: Treasury bills are so liquid and bereft of nominal risk that they’re essentially as good as bank money. (Often they’re held in money market funds, which add an additional layer of convenience, but Treasury does all the heavy lifting in creating short-term liquidity.)

When all these assets are given the blanket title of “money”, you can’t have any sensible discussion. The properties distinguishing $1000 in cash from $1000 in a money market fund are very different from the properties that, in turn, distinguish $1000 in the money market fund from $1000 in an S&P index fund. So let’s confine our discussion to the narrowest possible definition of money: currency and reserves issued by the Fed.

Does this restrict us to some kind of meaningless special case? Not at all. In normal times, the Fed implements monetary policy by adjusting the federal funds rate (and its expected path). This is the spread between the interest rate on currency (zero) and loans in the federal funds market—in addition to T-bills, commercial paper, and many other assets that end up with essentially the same rate. In other words, it’s the spread between base money and a much broader set of money-like instruments. If monetary frictions matter in understanding the impact of a certain shift in the federal funds rate, their effect must boil down to the difference between base money and “money” more broadly. Base money must be useful in addressing monetary frictions in a way that “money” in general is not, and this usefulness must have nontrivial macroeconomic consequences.

Does it? I find that exceedingly hard to believe. Paper currency, which in normal times comprises the vast majority of base money, simply isn’t that important to the macroeconomy—at least not at the current margin. (Quick question: can you think of any cash transactions you would stop making if interest rates went up to 4%, which makes you lose $4 a year for every $100 in your pocket? I didn’t think so. Even harder question: are there any transactions you would stop making, period, because of this cost—rather than simply shifting to some non-cash form of payment? Again, I didn’t think so.)

That leaves us with reserves. Before the crisis made reserves costless, required reserves amounted to about $40 billion—that’s 10% of the roughly $400 billion in checking accounts subject to the requirement. That’s compared to $150 trillion in total financial assets—or, if we want to avoid double-counting, $50 trillion in financial assets held by households. The impact of a 1% increase in the federal funds rate is to increase the implicit cost of holding those reserves by 1%—that’s $400 million. But to a first approximation, the 1% increase also changes the expected rate of return on all financial assets by 1%. If we use the household total of $50 trillion, that’s a $500 billion effect. The direct effect of interest rates is over a thousand times as large as the secondary effect of making checking accounts more expensive. In other words, the effect where monetary frictions come into play—because a certain kind of money is made more expensive—is vanishingly small in comparison to the standard effect in New Keynesian models. And if the Fed sets the federal funds rate using interest on reserves, the former effect disappears entirely.

If this isn’t enough to convince you, consider the following: the role of monetary frictions is key to understanding what matters in monetary policy. Scott Sumner and other market monetarists doggedly insist that the true stance of monetary policy is determined by the expected future path of nominal variables, not just whatever the interest rate or monetary base happens to be today. I completely agree! We use different languages—I think that it’s better to talk about expectations in terms of interest rate rules, while they like to talk about nominal GDP and quantities of money—but we share the fundamental understanding that expectations are more important than current levels.

If you think that frictions are essential to understanding the effects of monetary policy, on the other hand, you have to think that the present matters much more. Why? The extent to which frictions are a tax on economic activity is determined by the current cost of holding money rather than less liquid assets. (After all, if money pays as much as all other assets for the next few weeks, you can hold all your wealth as money, and frictions won’t be much of a problem in the near-term!) Yes, it’s possible that the future trajectory of monetary policy will affect your capital investment decisions—if extreme frictions in 5 years will make it difficult to sell your products, you won’t want to build the factory today—but the current cost of money (and therefore the current burden of frictions) has a vastly disproportionate influence on your actions.

This result pops out of virtually any model where there is no nominal rigidity and the only impact of money comes through frictions. For instance, if you parse Proposition 2 in Chari, Christiano, and Kehoe 1991, you’ll see that the proof for optimality of the Friedman Rule comes entirely through static considerations—setting R=1 so that a certain first-order condition is satisfied at each point in time. Admittedly, if you’re deviating from this “optimum” and setting R > 1, the future trajectory of policy matters, but in strange ways that don’t do a very good job of matching intuition: tight monetary policy in the future depresses capital investment today, yet all else equal it actually increases current consumption. (Tight future policy makes investing in capital less worthwhile, so you choose to consume more today instead.) I don’t think this is what market monetarists have in mind, to say the least.

Bottom line: if you say that expectations rather than current levels matter in monetary policy, you can’t think that monetary frictions are very important. This isn’t an argument against monetary frictions, of course—the model should drive policy conclusions, not the other way around—but it is a useful check for mental consistency.

To a large extent, I think this issue is confusing because under conventional policy, monetary frictions must matter to some degree—otherwise, the Fed couldn’t convince anyone to earn a lesser rate on money than other assets, and it wouldn’t be able to manage interest rates using open-market operations. But “some degree” can be extremely small in practice, and the details of how the Fed manages rates aren’t necessarily relevant to the effects of those rates, in much the same way that the mechanics of your gas pedal are peripheral to the consequences of driving quickly. Meanwhile, the Fed can implement policy when there are no frictions at all. Even when the market is saturated with money, it can set rates by adjusting interest on reserves. (In fact, it’s doing that right now.)

All in all, I see very little practical role for monetary frictions in understanding the impact of conventional monetary policy. The slight changes in cost of holding paper currency or maintaining a checking account simply don’t have serious macroeconomic consequences. The notion that we need a comprehensive model of these frictions to understand monetary policy is the kind of idea that’s plausible in theory but dead wrong in practice—just like the Friedman rule.

(Unconventional policy like QE is a different story, but let’s save that for another post.)

Did you know that Greg Mankiw and Larry Summers once wrote a paper showing that tax cuts are probably contractionary?

Neither did I.

Of course, in an effort to be sensationalist, I’m being unfair to Mankiw and Summers. I doubt either of them has ever actually believed that tax cuts depress the economy. Mankiw, after all, has some innovative ideas about how paycheck-to-paycheck consumers might make tax cuts effective as a tool to boost aggregate demand. As high-ranking economic advisers, both Mankiw and Summers presided over large tax cuts intended as stimulus.

In this paper, we re-examine the standard analysis of the short-run effect of a personal tax cut. If consumer spending generates more money demand than other components of GNP, then tax cuts may, by increasing the demand for money, depress aggregate demand. We examine a variety of evidence and conclude that the necessary condition for contractionary tax cuts is probably satisfied for the U.S. economy. (emphasis mine)

Mankiw and Summers, you see, were following up on a long literature that used the IS-LM model to analyze the effects of fiscal policy. In that literature, debt-financed transfers stimulated the economy: with more money in their pockets, consumers spent more. (Needless to say, this literature didn’t bother with Ricardian equivalence—but that’s another story.)

Now, there was some feedback from the “LM curve”. As consumption rose, there was greater demand for the (fixed) quantity of money, leading to higher interest rates and a partially offsetting drop in consumption and output. But this could only be a partial offset. After all, interest rates only rose because output rose: if output stagnated or fell, there’d be no dampening effect from interest rates, and nothing to offset the positive effect from the transfer, meaning that output would have to rise after all. (Contradiction!)

In IS-LM lingo, an upward movement in the “IS” curve would inevitably boost the value of “Y”.

Mankiw and Summers challenged this result with a simple observation. Maybe money demand doesn’t depend on output in the aggregate—instead, it depends separately on different components of output, like consumption and investment. In particular, Mankiw and Summers argued that money demand was influenced mainly by consumption—households hold a lot of money for consumption purposes, but they don’t need nearly as much for durable goods purchases, and businesses don’t use much cash for investment.

This slight modification of the IS-LM model makes it possible for tax cuts to be contractionary. Here’s the logic: tax cuts boost consumption, which dramatically increases money demand and forces up interest rates. Higher interest rates put such a damper on investment that the overall movement in output is negative.
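To make the mechanism concrete, here's a toy linear version of the Mankiw-Summers setup (my own illustrative equations and parameters, not theirs): consumption, rather than total output, drives money demand, so a tax cut can push rates up enough to crowd out more investment than the consumption it adds.

```python
# Toy linear IS-LM with the Mankiw-Summers twist: money demand depends on
# consumption C rather than total output Y. All numbers are illustrative.
#   Goods market:  Y = C + I + G,  C = c0 + c*(Y - T),  I = i0 - b*r
#   Money market:  M = k*C - h*r   (fixed money stock M)

def equilibrium(T, M=100.0, c0=20.0, c=0.8, i0=50.0, b=1.0, k=2.0, h=1.0, G=30.0):
    slope = 1.0 - b * k / h   # net effect of C on Y once crowding-out is included
    Y = ((c0 - c * T) * slope + i0 + b * M / h + G) / (1.0 - c * slope)
    C = c0 + c * (Y - T)
    r = (k * C - M) / h
    return Y, C, r

Y_hi_tax, _, _ = equilibrium(T=40.0)
Y_lo_tax, _, _ = equilibrium(T=30.0)   # a tax cut
print(Y_lo_tax < Y_hi_tax)  # True: with b*k > h, the tax cut is contractionary
```

With b\*k > h, consumption feeds money demand so strongly that the induced rise in rates destroys more investment than the tax cut creates in consumption; shrink k (make money demand depend only weakly on consumption) and the usual expansionary result returns.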

Interesting story. But it misses the obvious question: why does the Fed stand by, complacently, and keep the money supply exactly the same as such clearly unintended consequences play themselves out? And assuming the Fed would never do anything so crazy, why do we care about a thought experiment where it does? If the Fed simply follows an interest rate rule, the paper’s entire chain of reasoning is meaningless.

To be fair, Mankiw and Summers recognized these issues:

Second, our analysis considers the effect of tax cuts assuming a constant path of some monetary aggregate. Depending on the Fed’s reaction function, a wide range of alternate outcomes is possible.

But, for some reason, they still found it to be a worthwhile exercise:

Our assumption that the money stock is held constant in the face of tax changes, however, is a natural and conventional benchmark

A natural benchmark? Really? A constant money stock is natural? If this is your framework for policy analysis, you might as well conclude that the War on Drugs is the most contractionary economic program in the United States: after all, it creates a tremendous demand for cash in illicit transactions, one that quite plausibly swamps any short-term variation due to fiscal policy. If Mexican drug lords could safely wire their money around, there wouldn’t be any need for so many $100 bills. Holding the money stock constant, interest rates would plummet and we’d experience a massive investment boom.

But I shouldn’t be too hard on circa-1986 Mankiw and Summers: after all, they were prisoners of their time. A constant money stock in the face of policy changes has never been a “natural” benchmark, but it certainly was conventional. Everybody used it. The Fed even toyed for a few years with its own form of monetarism—the only policy rule under which the Mankiw-Summers result (and all its IS-LM precursors) might have had a grain of truth.

This is one of those historical episodes that makes you realize how far economics has come. Somehow, in 1986, it seemed perfectly natural to write a paper on the obscure properties of the money demand function—even to two economists as sharp as Mankiw and Summers. (They don’t come much sharper than that.) Today, thanks to Michael Woodford and fellow travelers, we realize that the money part of monetary policy isn’t really so important, and that the perverse feedbacks of the old model are little more than intellectual curiosities—unless, of course, you have a central bank crazy enough to implement a money supply rule, which we fortunately do not.

And this is liberating! Macroeconomics is hard enough without having to worry about how every single policy might interact with money demand. (For instance, in extreme cases we need to discuss changes in liquidity demand; a related concept, but one with very different policy implications.) Let’s be glad that the era of money demand is over—hopefully for good.

Paul Krugman defends IS-LM as a pedagogical device on the grounds that it’s part of “the minimal model that has goods, bonds, and money”. Greg Mankiw circa 2006 does much the same, favoring the IS-LM model “because it keeps the student focused on the important connections between the money supply, interest rates, and economic activity, whereas the IS-MP model leaves some of that in the background”.

But do the “important connections” in the model bear any correspondence to reality? Not really—and understanding why not is a great deal more interesting than any attempt to muddle through outdated diagrams.

As I pointed out last week, the “LM curve” represents a version of monetary policy that disappeared decades ago: a target for the money supply. Given a particular value for the money supply, higher output must be accompanied by higher nominal interest rates, which offset the increase in money demand that tends to accompany a larger economy. We’re left with an upward-sloping curve in (i,Y) space—that’s LM.

Now that the Fed uses interest rates to implement monetary policy, does this make any sense? Excerpting his textbook, Greg Mankiw claims that the LM mechanism is still a useful way to understand how central banks do business. After all, with a few exceptions they still implement interest rate targets by using open-market operations to adjust the supply of reserves. In this light, we can say that the Fed is moving the LM curve to achieve its desired interest rate. Right?

Not quite. For LM to be useful in understanding the implementation of monetary policy, it can’t be just a long-term relationship—that’s merely the near-obvious statement that all else equal, a larger economy will eventually need more money. It needs to be valid in the short term as well, the horizon over which the nitty-gritty of monetary implementation takes place. And there’s no reason why that should be true.

Indeed, there’s plenty of reason to think exactly the opposite—that at high frequencies, declines in output lead to increases in money demand. Anyone who’s read a newspaper over the last few years has surely come across the notion of a flight to liquidity. When the economy dips, there’s an increase in demand for liquid assets that vastly outweighs whatever tiny drop you’d expect in transactions demand for money. In practice, LM probably slopes the wrong way. (This is also the difficulty with Brad DeLong’s argument that LM applies to quantitative easing—QE tries to change the spread between long rates and the expected path of short rates, but there’s no reason to assume that spread has any particular relationship with output, much less a positive one.)

This isn’t to say, of course, that we should force undergraduates to scribble downward-sloping LM curves. Of course not. Rather, the exact relationship between “M”, “Y”, and “i” is so complicated and time-contingent that we shouldn’t waste time trying to model it at all. As far as I know, the guys at the New York Fed who actually implement interest rate targets don’t rely on some hyper-complicated model of the relationship between reserve demand and 132 macro variables. Instead, they inject reserves into the system when rates are above target, and take them out when rates are below target. It’s a pretty mechanical process, but it works, and you don’t need any more than supply and demand to understand why.
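That mechanical process can be caricatured in a few lines (a hypothetical demand curve for reserves and made-up numbers, nothing like the desk's actual procedure): inject when the rate trades above target, drain when it trades below, and the rate converges without any structural model of money demand.

```python
# Crude sketch of the feedback rule described above. The demand curve for
# reserves and all parameters are hypothetical.

def funds_rate(reserves):
    """Stand-in inverse demand for reserves: more reserves, lower rate."""
    return max(0.0, 10.0 - 0.01 * reserves)

target = 4.0
reserves = 100.0
for _ in range(200):
    rate = funds_rate(reserves)
    reserves += 5.0 * (rate - target)   # inject if above target, drain if below

print(abs(funds_rate(reserves) - target) < 0.01)  # True: rate pinned to target
```

Notice that the loop never needs to know the demand curve's slope or intercept; it just nudges supply in the direction that closes the gap, which is the whole point of the "supply and demand" explanation above.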

There’s a broader question here: what mechanisms do you really need in a macro model? For decades, monetary economists painstakingly hashed out functions for “money demand”, and spent untold amounts of econometric energy trying to estimate them. You’d see horrendously tedious papers exploring how the effects of government policy X depended on the exact specification of the money demand function. Even as late as 1999, one prominent monetary economist worried that innovations in financial markets would turn the central bank into an “army with only a signal corps”, as they brought down the demand for government-issued money.

As Mike Woodford pointed out a decade ago, none of this actually matters. Central banks today (at least the ones in developed countries) only care about money demand to the extent that it affects their ability to control interest rates, and this remains perfectly feasible when money demand is small or even zero. The messy regulatory and technical issues that determine banks’ demand for reserves on the fed funds market have virtually nothing to do with the effects of monetary policy on the economy. Obsessing over money demand is a waste of time.

When you think about it, this is fairly obvious. The “LM” curve embeds two claims about the demand for money: that it increases with output and decreases with interest rates. But there are countless other influences on the demand for money (at least base money created by the Fed), many of which are just as important in the short term. How many $100 bills do drug dealers need to evade notice? Are paper dollars still popular in countries with underdeveloped banking systems? How many ATMs has Bank of America built? Do gas stations demand payment in cash?

If you seriously believed that modeling money demand was important, you’d be working overtime to build a model with all these features. Sure, you’d probably have a “transactions demand” block like everybody else, but you’d also be surveying coke dealers to keep abreast of changes in their cash management. The fact that no one actually deems a survey of coke dealers necessary to understand monetary policy—even when their effect on money demand is quite plausibly larger than the effect from most other economic activity put together—is powerful evidence that no one really thinks the details of money demand matter.

And that’s the great thing about economic modeling: you don’t have to include every conceivable, small-bore mechanism. You can’t! Instead, you need to focus on what matters—and as the economics profession has finally come to realize, the precise characteristics of money demand just don’t make much difference. LM is irrelevant.

With all the recent discussion of IS-LM, I can’t help but repeat a longstanding question of mine: what on earth is “LM” still doing in the diagram?

The “IS” curve is logical enough: by encouraging investment (not to mention spending more generally), lower interest rates lead to higher output. Sure, there are some flaws. We should really be looking at the full future path of interest rates (which is what matters for spending and investment decisions), not today’s interest rate in isolation. We should also emphasize the difference between nominal and real interest rates—so that if the vertical axis denotes nominal interest rates, inflation will shift the IS curve (which depends on real rates) upward. But as a starting point, this isn’t really so bad: the fact that higher real interest rates, all else equal, push down output is probably the most fundamental observation in macroeconomics.

But the “LM” curve? It’s implicitly describing a monetary policy rule that disappeared decades ago. Here’s the story: the central bank has a target for the nominal money supply. Due to sticky prices, in the short run this corresponds to a target for real money balances as well. Generally, higher real output (“Y”) will increase the demand for real money balances, while higher nominal interest rates (“i”) decrease it. The set of possible equilibrium pairs (Y,i), therefore, has positive slope: when high Y is elevating demand for real money, i has to rise as well to bring demand back into line with the fixed supply.

Fair enough. But central banks today don’t target the nominal money supply: in the short run, they target nominal interest rates directly. In this light, a more sensible “LM” curve would be horizontal. Now, admittedly central banks try to operate according to policy rules, under which the response of interest rates to output (or, more accurately, deviations of output from its potential level) is generally positive. If we reinterpret LM as a monetary policy rule, the upward slope makes a little more sense. In the past few decades, however, the most important feature in central banks’ policy rules has been the response to inflation, not output; the runaway inflation of the 1970s was blamed on an overeager response to the (misperceived) output gap, and for better or worse no one wanted to repeat the same mistake twice. Meanwhile, although the US has never officially joined the bandwagon, many countries now operate under an inflation targeting framework, in which responding to inflation is the key feature of the policy rule. In this environment, depicting policy as a relationship between “Y” and “i” misses what’s really going on—better to abandon the upward-sloping LM curve altogether and use a simple horizontal line to depict the current policy rate.

I’m not alone in this sentiment. David Romer wrote an entire piece for the JEP in 2000 called Keynesian Macroeconomics without the LM Curve. (As the title suggests, he shares my feelings on the matter.) Tyler Cowen puts this at #6 on his list of grievances. It’s a pretty obvious point—yet, for reasons I don’t entirely understand, we still print thousands of undergraduate textbooks a year with LM front and center.

This is nothing, of course, compared to the abomination that is the AD/AS model, also included in undergraduate textbooks. AD slopes down for the same outdated reason that LM slopes up: given a constant money supply rule, lower prices imply higher real money balances and therefore lower real interest rates, which lead to higher demand. (It can also be justified using real balance effects, which are quantitatively irrelevant, or fixed exchange rates, which only exist in a few cases.) This has absolutely nothing to do with monetary policy as it’s currently implemented. Yet the simple AS and AD curves, made appealing by the apparent (but false) analogy to ordinary supply and demand, lurk somewhere in the minds of countless former economics students. This leads to all kinds of bad intuition—like the notion that sticky prices are problematic because they prevent the adjustment to equilibrium on the AD/AS diagram. (Wrong. Under current Fed policy, the price decrease -> lower interest rate -> improvement in demand mechanism is no longer operative, unless deflation combines with the Taylor rule to force a policy that the Fed should have chosen anyway. Certainly this is no use at the zero lower bound, where price flexibility is actually harmful, because it leads to more deflation and higher real interest rates.)

Somehow, the economics profession hasn’t quite completed the transition to a world where money supply is no longer the target of choice. In research papers, of course, the change happened years ago—but intuition and the hallowed undergraduate canon are much slower to change. Meanwhile, we’re left with LM, the strange vestige of an earlier era.

Yesterday, I discussed how the central problem with “deleveraging” is that monetary policy fails to accommodate it, not that it’s inherently destructive on its own. One common reaction is the following: “how can monetary policy make a difference when consumers can’t borrow any more?” After all, monetary policy works through interest rates, right? If households are at their borrowing limits, how will anyone’s behavior change?

There are several answers. First, to talk about households in general as overleveraged and pinned up against credit constraints is to seriously exaggerate: some are, but many are not. In the aggregate, the assets of American households are still far higher than their liabilities—in fact, as a quick glance at the Federal Reserve Flow of Funds tables will demonstrate, the situation isn’t so much worse than it was pre-crash:

Bad? Of course. But just as in 2006, most of America’s wealth is held by households whose assets vastly exceed their liabilities.

And this is even after I’ve excluded plenty of assets*: in the Flow of Funds table, I’ve taken out both lines 6-7 (nonprofits’ equipment and software and consumer durable goods) and lines 27-30 (life insurance reserves, pension reserves, equity in noncorporate business, and “miscellaneous assets”), because they’re arguably less liquid, while leaving liabilities the same—in other words, I’m counting car loans as liabilities while excluding cars as assets. Yet even this calculation, designed to provide the least favorable picture of household balance sheets possible, shows that aggregate net worth is still well above zero, and aggregate leverage isn’t as high as you might imagine.

This is not to deny that many households have negative net worth. Many do, and that’s a problem. Presumably the positive net worth that shows up in these aggregate statistics is disproportionately held by the top 10% of families, and the other 90% is in far worse shape. But the top 10%’s disproportionate share of assets is matched by its disproportionate share of spending, and therefore disproportionate influence on the macroeconomy. Even if in the short term lower interest rates do nothing more than provoke a spending spree among the top 10%, they’ll be worthwhile from a macroeconomic perspective.

Of course, lower rates are more effective than that. Even households drowning in debt tend to have some assets: a house, and maybe a 401(k) or IRA. All else equal, low interest rates place upward pressure on home prices (since they bring down the cost of financing for those who can obtain it) and make both equities and long-term bonds much more valuable (since lower rates increase the discounted value of an asset’s payout). This can actually help fix household balance sheets: it brings them out from underwater on their mortgages (or, at least, makes them less underwater than they otherwise would be) and increases the value of their other financial assets. In this light, it’s entirely conceivable that crippled balance sheets make monetary policy more effective, not less. Although the “wealth effect” from higher equity and bond prices matters most for the richest Americans, it’s useful for a much broader group.
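The discounting channel is just present-value arithmetic. Here's a made-up example, a 30-year stream of $100 payouts valued at two different rates:

```python
# Lower discount rates raise the value of long-lived assets.
def present_value(payment, rate, years):
    return sum(payment / (1 + rate) ** t for t in range(1, years + 1))

pv_at_5 = present_value(100.0, 0.05, 30)
pv_at_3 = present_value(100.0, 0.03, 30)
print(pv_at_3 > pv_at_5)  # True: the same payouts are worth more at lower rates
```

The longer-lived the asset, the bigger the effect, which is why equities, long bonds, and houses are the natural channels for this mechanism.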

And why are we just talking about households? Household spending isn’t all that matters; cyclical swings in investment by businesses are also a very important part of any recession. The dominant form of business in the United States is corporate, and most corporations aren’t facing any serious credit constraints. If it were really necessary, many could pay for investment through retained earnings alone, and most have access to reasonably liquid public debt markets. Further decreases in their cost of capital, or equivalently increases in Tobin’s q, can only increase the incentive to invest.

All in all, there’s no reason why monetary policy should be any less effective now than ever before. Yes, life is much more difficult at the zero lower bound, but the Fed can still commit to lower rates in the near future, which if credible does nearly as much as lower rates today. (Increases real estate and equity values, improving household balance sheets? Check. Makes durable goods purchases more attractive? Check. Improves corporate incentives to invest by increasing equity values and decreasing the cost of debt financing? Check.) There’s nothing special about a “balance sheet recession” that negates the value of monetary policy—indeed, if we read our vintage Bernanke, we’ll understand that monetary policy may be more important than ever.

*I can’t exclude all assets of nonprofit organizations, which are bundled together with households in the Flow of Funds data. This is fine for two reasons: (1) many nonprofit organizations have an economic function similar to households, acting as final buyers of goods and services and (2) back in 2000, when the Fed last published figures on nonprofits separately, their assets were insignificant compared to assets held by households; I see no evidence of a sufficiently dramatic upswing since then.

Yes, the concept is important. In fact, it’s probably central to understanding why we’re in such a rut. But almost everyone talking about it fails to understand why it matters, and why it’s intimately related to monetary policy.

Sure, most people know the basic idea: during the crisis, consumers and businesses experienced an enormous hit to net worth, and now they want to improve their balance sheets. To do so, they spend less—but since lower spending means lower income somewhere else in the economy, in the aggregate balance sheets barely improve at all. The economy is depressed as it gradually returns to the correct level of leverage, and we experience the “long, painful” process of deleveraging. (In the words of, well, every blogger and amateur econ pundit in the world.)

Great story. Too bad it ignores everything else we know about macroeconomics.

After all, why should the desire to spend less and save more hurt the economy? If I want to save, I hand my money over to someone who wants to borrow or invest it. Net saving is channeled into productive investment. If consumers want to save more, we’ll see lower consumption but an investment boom—hardly a disaster for the economy.

Sure, you say, but maybe no one wants to invest this money. Won’t an increase in savings then mean that the economy crashes? After all, the money isn’t being spent on consumption, and it has nowhere else to go.

Although this sounds plausible, it doesn’t really make sense. At the micro level, economists don’t worry about weak demand causing a supply glut: instead, they correctly say that prices will adjust to clear the market. The same is true for macro as well. There’s a price—the real interest rate—that determines willingness to save and invest. Like all prices, this price has a market-clearing level: at a sufficiently low real interest rate, the supply and demand for savings will equate, and consumers’ desire to save will translate into an investment boom, not an economic downturn.
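In a linear sketch (schedules and numbers entirely made up), the loanable-funds market clears the same way any other market does:

```python
# Desired saving rises with the real rate r; desired investment falls with it.
#   S(r) = s0 + s1*r,  I(r) = i0 - i1*r  =>  r* = (i0 - s0) / (s1 + i1)
def clearing_rate(s0, s1=100.0, i0=10.0, i1=150.0):
    return (i0 - s0) / (s1 + i1)

def investment(r, i0=10.0, i1=150.0):
    return i0 - i1 * r

r_before = clearing_rate(s0=2.0)
r_after = clearing_rate(s0=5.0)   # households try to save more at every rate
print(r_after < r_before)                          # True: the rate falls...
print(investment(r_after) > investment(r_before))  # True: ...and investment rises
```

A shift toward thrift moves the market-clearing rate down and investment up; nothing in the arithmetic forces output to fall, which is exactly the point of the paragraph above. The trouble starts only when the actual rate gets stuck above r*.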

So yes, deleveraging can be very bad for the economy. But this is only because monetary policy doesn’t adjust enough to match the market.

In failing to understand this core logic, most commentary about “deleveraging” is rather bizarre. At some level, it’s the same cluelessness that we once saw from central planners: they’d trip over themselves in the complexity of fixing a shortage in one market or a glut in another, never quite realizing that the price mechanism would do their work for them. Right now, historically low inflation expectations and below-potential output are prima facie evidence that real interest rates are too high. That’s what every macro model tells us is associated with contractionary policy by the Fed. Yet we see pundits lost in all kinds of complicated, small-bore proposals to stimulate the economy—when the fundamental, overriding dilemma is getting the price (in this case, the interest rate) right.

Note: High on the list of people who do understand deleveraging are Gauti Eggertsson and Paul Krugman. It’s even obvious from the title of their paper: “deleveraging” comes right before “the liquidity trap”, i.e. the zero lower bound making it difficult for the Fed to properly accommodate the effects of deleveraging.