Friday, October 28, 2011

Markets seemed to like it yesterday, not so much today. You can find news and discussion of the plan everywhere. I liked Salmon's takes here and here.

Like many others I'm skeptical that it will work. Interestingly enough I'm currently reading Paul Blustein's very good And the Money Kept Rolling In (and Out) about the IMF's relationship with Argentina around the turn of the millennium. The parallels between Argentina and Greece are striking -- I'm not the first to notice this -- but the parallels between the IMF and EU actions are also notable. Let's run down some of them.

1. At the time Argentina was on a convertibility system with the peso pegged one-to-one to the US dollar. This is functionally very similar to the European common currency, where "Greek" euros are pegged one-to-one with "German" euros. Both systems were adopted for similar reasons: national authorities were not able to credibly commit to stable monetary policy, which led to a lot of economic volatility, investment risk, and concomitant slow growth. In both cases macroeconomic adjustment is impossible through the exchange rate, which leaves internal devaluation (i.e. austerity) and/or debt default as the only remaining options.

2. Both policies worked well for about a decade. Because of that, the Argentine and Greek governments were able to borrow at low interest rates. And because of that, governments were fairly casual about fiscal probity. While public deficits were not extreme, they were politically entrenched. When growth began to slow, lower tax revenues led to a growing debt burden. Interest rate spreads widened as investors began to believe that neither economy would be able to grow fast enough to manage its debt. This, of course, can become a self-fulfilling prophecy. Both governments were voted out of office, and both new governments instituted fiscal reforms. In both cases these were insufficient to close the budget gap. In both cases the cost of incurring new debt, or of servicing old debt, became prohibitive.

3. In Argentina, the IMF began disbursing relatively small amounts of money in the hope that external financing would reassure bond markets. In other words, the IMF hoped that it was a liquidity crunch, not a solvency crisis. In that situation a tide-over loan can buy time for the economy to get some growth back. The EU did the same thing with the introduction of the EFSF. But the underlying economic numbers didn't improve, and bond markets continued to believe that the issue was solvency, not liquidity.

4. Politics intervenes. In Argentina, the US (and other key IMF members) were hesitant to offer additional financing as they had done during the Tequila Crisis. In particular, John Taylor -- then at the Treasury Department -- didn't want to throw US funds into the pot. Neither did Glenn Hubbard of the CEA or Paul O'Neill, the Treasury Secretary. In Europe, many members were reluctant to commit more funds. In both cases policymakers tried to figure out how to leverage already-appropriated funds to have a greater effect, but nobody bought it in either case. (Ken Rogoff, at the IMF during the Argentina crisis, quipped "After one strips out all the window dressing, there is no way to make $6 billion of liquidity worth more than $6 billion in liquidity. But there are many creative ways to make it less.")

5. Then come the "voluntary" private sector haircuts, coupled with additional public funds. These are intended to do a few things: extend the timeframe that indebted countries have to consolidate fiscally, reestablish growth, force the private sector to bear some of the costs of bailouts, and thus prevent default in a politically palatable way. In both cases the initial market reaction was positive, but in Argentina the effect was short-lived and I expect that to be the case with Greece as well. Barry Eichengreen described the Argentina situation thus: "The realization had dawned that the IMF package offered no magic formula for getting growth going again. And without growth, it is hard to see how political support for paying the foreign debt can be sustained." This sounds a lot like Greece, no?

6. A corollary of the private sector haircuts, as well as extended financing from international institutions, is that the country actually becomes more indebted rather than less. This happens in two ways. The private sector demands some form of compensation in exchange for voluntarily altering the terms of their debt contracts; and the new financing from international institutions also tacks onto the principal. Additionally, it's more difficult to default on IMF/EFSF loans than private sector loans. Given that the optimistic scenario is that this deal will reduce Greece's debt load to 120% of GDP, and the fact that the fundamental problems -- low growth + high debt in a fixed-currency system -- have not been resolved, there is little reason to be optimistic about the outcome.

Ultimately Argentina's internal adjustment plans were undermined by domestic politics. Voters simply got sick of extreme austerity and revolted. That meant a debt default and the abandonment of the convertibility system. It's hard not to imagine a similar scenario playing out in Greece in the coming months.

Even if it doesn't, there's still Spain and Italy looming on the horizon.

Layna Mosley, one of our IR profs, has a very good op-ed in the NY Times that summarizes some of her recent research on the effect of international trade on labor rights.* Many would be surprised by the results of this work. Gist:

There is, however, a more general way in which trade agreements — and the economic ties they generate — benefit workers in developing nations. As Colombia and Panama expand their trade relationships with the United States, workers stand to gain more than just the job creation and higher wages that often come with expanded trade. Research I conducted over the last several years with the political scientists Brian Greenhill and Aseem Prakash suggests that trade with developed nations helps developing countries expand labor rights themselves.

Why? International trade gives producers incentives to meet the standards of their export markets. When developing nations export more to countries with better labor standards, their labor rights laws and practices tend to improve. Our findings, which are based on newly collected measures of labor rights around the world, demonstrate a “California effect” on workers’ rights, in which exporting nations are influenced by the labor rights conditions that prevail in their main trading partners.

Read the whole thing.

*I guess Layna didn't want to get shown up by her husband, UNC Prof Andy Reynolds, who recently had an op-ed on Libya in the News and Observer.

Tuesday, October 25, 2011

If I am reading this right, then Suzanne Mettler believes that any tax rate lower than 100% is "indirect social policy":

Conversely, many of our most costly forms of social provision today camouflage government’s role as a provider of social benefits. They do this by channeling benefits through the tax code, as does the Home Mortgage Interest Deduction, for example, and/or through subsidies to private organizations, such as employer-provided health insurance benefits, for which recipients are not required to pay taxes. These latter, indirect policies are the ones I call the “submerged state.” It is easy for citizens to miss government’s role in these policies, and to assume that only the market is at work.

Now as it happens I am against deductions and all other loopholes as a matter of course. But I do not define "things not taxed" as the government acting as "provider of social benefits". Mettler, it seems, does:

The difference between the direct and indirect social welfare policies, however, is illusory. From an accounting perspective, they are the same thing: both impose costs on federal spending and add to federal deficits.

This is only true if you start from the assumption that 100% of national income belongs to the government, which then distributes according to a mix of "indirect" tax breaks and "direct" spending programs. I.e., it's only true if you believe that there is an implicit 100% tax rate, from which the government deducts differing percentages based on a number of criteria. In that case a tax deduction and a spending program are the same thing.

But if you start from the assumption that 100% of national income does not belong to the government then this makes no sense at all. If I buy a house and the government does not tax a small portion of the amount I pay for it -- I still pay taxes on the purchase of the home... I just get to deduct the interest -- I am hardly "getting" anything from the government. I'm just not having something taken away.

And in that case the only things that add to federal deficits are spending increases. Think of a scenario: there is a tax rate of 0% and no federal spending. The government decides to pass a child tax credit equal to 5% of taxes owed per child. Does this add to the federal deficit? No, because 5% of 0% is 0%. Under this scenario the child tax credit is meaningless. Now suppose the government instead keeps taxes at 0% but spends an equivalent amount per child. The deficit goes up.
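That scenario is simple enough to put in toy-arithmetic form. A minimal sketch, with purely illustrative numbers (the income figure and 5% rate are hypothetical, chosen only to mirror the example above):

```python
# Deficit = spending minus revenue. With a 0% tax rate, a tax credit
# changes nothing, while a spending program of the same nominal size
# adds to the deficit.
def deficit(spending, revenue):
    return spending - revenue

income = 100_000
taxes_owed = income * 0.0          # the scenario's 0% tax rate
credit = 0.05 * taxes_owed         # a 5% credit against zero taxes is zero

# The tax credit: no spending, and revenue falls from zero to... zero.
print(deficit(0, taxes_owed - credit))   # 0.0 -- the credit is meaningless

# A spending program of the same nominal size, taxes unchanged at 0%.
spending = 0.05 * income
print(deficit(spending, taxes_owed))     # 5000.0 -- the deficit goes up
```

The asymmetry is the whole point: the credit only "costs" anything if you assume the revenue it offsets belonged to the government in the first place.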

More:

In most cases, tax breaks distribute resources by permitting people to pay less in taxes, rather than by paying out dollars or providing services. That aspect of their design makes it easy to construe them as tax cuts rather than as social provision. But this, too, is a false distinction. “Social tax expenditures” assist people with particular circumstances, granting them resources to which others are not entitled. This is in stark contrast to across-the-board tax cuts to all Americans.

That's true. People without children cannot claim child tax credits. And the child tax credit is a politically-motivated tax exclusion designed to encourage better care for children (or, perhaps, having more of them). But it is qualitatively not the same as a spending program that is also intended to benefit children. The latter redistributes income from some people to other people. The former does not. The latter increases the budget deficit. The former does not. See? It's a very important distinction.

Note also that according to Mettler's definition a progressive tax code is a government "social provision" because the tax does not apply equally to everyone.

You could make the same argument about some of the other programs Mettler discusses. The GI Bill could be considered part of the compensation contract that the military makes with service members. I know my siblings in the military (there are currently five of them) think of it that way. Many wouldn't join the military just for the salary; the fact that part or all of their college education is included in the package is what tips the scales.

If you look at the famous table that Mettler produced in her paper, the programs that people tend not to think of as government programs are things in which there is no redistribution happening. All of them are tax credits (except for student loans, which are usually paid back with interest). The things that people do think are government programs are the things in which there is redistribution happening. Mettler recognizes this:

In short, the fact that citizens often fail to recognize these policies as government social provision is attributable not to some fault of citizens, but rather to the characteristics of the policies themselves.

Precisely. And the characteristic of the policy is what is important. A policy of not taxing 100% of income is not a government intervention. It is the absence of a government intervention. A policy of not taxing mortgage interest is similarly a lack of taxing mortgage interest. People have different attitudes about whether these are "social provisions" from the government because they have different definitions of what constitutes a social provision from the government. Remember the outcry when Obama started talking about "tax expenditures" a few months back? That's because people immediately recognized what he was talking about: he wanted to raise taxes and redistribute the proceeds to others. So the phenomenon Mettler is describing is at least partially about semantics, not just cognitive dissonance.

I do not write this to disparage Mettler's research program, which I find very interesting and valuable. And the phenomenon she describes certainly happens sometimes. But I see the tendency to use the accounting that Mettler uses, which presupposes that 100% of national income belongs to the government, among a lot of political scientists. I've never understood it. It goes against all of the common principles pertaining to the nature and role of liberal government in a democratic society.

Friday, October 21, 2011

John Quiggin has a post at Crooked Timber that asks us to think about opportunity costs when considering how to allocate resources from the public purse. The title is, I think, not only needlessly provocative -- "Has the U.S. Defense Department Killed One Million Americans Since 2001?" -- but also clearly wrong. If anyone can be considered guilty of that offense it would be the Congress and Presidents Bush and Obama rather than DoD, but setting that aside he makes a valid point:

I’ve spent the day at a workshop on benefit-cost analysis where a lot of discussion is on valuing policies that reduce risks to life of various kinds. US policy, for better or worse, is focused on the idea of Value of a Statistical Life. Typically a policy that reduces risks of death will be approved if the cost per life saved is below $5million, and not otherwise. (There are similar numbers applied to publicly funded health care services, prescription drugs and so on, usually per year of life saved).

The numbers are quite striking. The ‘peacetime’ defense budget is around $500 billion a year, and the various wars of choice have cost around $250 billion a year for the last decade (very round numbers here). Allocated to domestic risk reduction, that money would save 150 000 American lives a year.

So, since 9/11, US defense spending has been chosen in preference to measures that would have saved 1.5 million American lives. That’s not a hypothetical number – it’s 1.5 million people who are now dead but who could have been saved. I think it’s fair to say that those people were killed by the Defense Department, or, more precisely, by the allocation of scarce life-saving resources to that Department.
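Quiggin's back-of-the-envelope arithmetic checks out; here it is laid out explicitly, using his own round numbers (these are his figures from the quote above, not independent estimates):

```python
# Quiggin's round numbers: ~$750B/year of defense spending, divided by
# the ~$5M cost-per-life-saved threshold used in US regulatory policy.
peacetime_budget = 500e9   # annual 'peacetime' defense budget, in dollars
war_spending = 250e9       # annual cost of the wars of choice
vsl_threshold = 5e6        # Value of a Statistical Life cutoff

lives_per_year = (peacetime_budget + war_spending) / vsl_threshold
print(lives_per_year)       # 150000.0 lives per year
print(lives_per_year * 10)  # 1500000.0 -- the 1.5 million figure over a decade
```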

As happens on the best of days at CT, many of the comments are well worth reading. Leigh Caldwell made a few good points: 1) the marginal cost curve is not only not flat, it likely slopes upwards pretty quickly (so the nth life saved is much more expensive than the n-50,000th life saved); 2) to the extent that defense spending represents a global public good at all it likely saves some lives, and perhaps quite a large number. 'soru' notes:

US casualties in WWII were approximately half a million. Baseline assumption of planners, calibrated from the first half of the 20C, is that there is 40% chance of a conflict on that scale in any given decade.

So reducing that to a negligible chance saves ~200,000 US lives per decade, which is in the right order of magnitude. So by the time you account for the possibility of a worse (or just less successful) conflict, and the points mentioned above about the incidental benefits of defence spending, increasing marginal costs of other measures and so on, it starts to make sense on its own terms.
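soru's counter-arithmetic is an expected-value calculation, and it can be checked the same way. The inputs (WWII-scale casualties, a 40% chance per decade) are the comment's own assumptions, not established facts:

```python
# Expected US deaths avoided per decade if deterrence spending reduces
# the chance of a WWII-scale war from 40% to roughly zero.
ww2_us_deaths = 500_000   # approximate US deaths in WWII, per the comment
p_major_war = 0.4         # assumed baseline chance per decade of such a war

expected_deaths_avoided = p_major_war * ww2_us_deaths
print(expected_deaths_avoided)  # 200000.0 -- the same order of magnitude
                                # as Quiggin's 150,000 lives-per-year figure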

Others referenced deterrence spending by India and Pakistan, and other scenarios whereby military spending may have made conflict sufficiently costly that it does not occur when it otherwise might have. When we teach Poli 150 -- Intro to International Politics -- we generally say that all military spending is inefficient in that it does not generate prosperity, but is nevertheless necessary to protect the prosperity that has been generated by other means. The implication is that, while military spending would be wasteful in a world with no threats, there exist threats and so some military spending is not wasteful. I think everyone involved in the discussion agrees that the US is currently spending too much on the military, but I also believe that cutting spending by something like 90% (or, as Quiggin mentions, 100%) would probably have some nasty knock-on effects.

As soru says, we're talking in probabilities here, and the differences don't have to be all that large for large amounts of military spending to be rational on its own terms. Moreover, you want to over-insure against tail risks with extremely negative potential consequences. Obviously counterfactuals are hard, but it is pretty clearly the case that spending zero percent of national income on defense would embolden others to attack, and spending one hundred percent of national income on defense is not only overkill but defeats the whole point of defense spending. So there is a margin where spending more on defense will "save lives" via deterrence, and there is some margin where increased spending will have a very small or null effect, and could thus profitably be diverted to other uses. The question then should be: where is the relevant margin?

I asked Phil Arena to write a post on what he thought were the best models addressing the question. He cited Slantchev (as I thought he might):

This is doing no justice to Slantchev's arguments, but to grossly oversimplify matters, he essentially argues that states can typically prevent war if they are willing to endure sufficiently large costs.

Military preeminence is a stated intention of American military spending, at least partially for this reason. Phil also referenced some of his own work in progress:

However, as I demonstrate in a working paper (a new version of which I will be posting sometime next month), there might be reason to expect that the effect of military spending on the probability of war depends in part on the baseline distribution of capabilities. I identify results where the challenger risks war with the defender if and only if the defender engages in costly military buildups, yet the defender is nonetheless willing to adopt such a strategy. And for that matter, defense spending by would-be aggressors may similarly either embolden them or allow them to extract such large concessions even when playing it safe that the overall risk of war will go down. Put simply, I think there is very good reason to assume that the relationship between defense spending and the probability of war is non-monotonic.

Which is precisely the relationship I described above. We're in Laffer-curve territory here, so it's no use belaboring the point. But it isn't difficult to see how a lot of US military spending -- perhaps even more than we now do -- could be cost-effective in the way that Quiggin describes if it were deployed with the intention of saving as many lives as possible. Tens of thousands of people die in horrible civil conflicts every year, most of which the US does not intervene in. Some of the US' interventions likely have saved many lives at relatively low cost (e.g. Kosovo and Libya) although the counterfactuals are, as always, very difficult.

But as Phil notes, the goal of most military spending is not saving lives at cost-effective margins. (This gets back to where Quiggin's wrong to point the finger at DoD rather than the legislature and executive.) Thus, criticizing DoD for not maximizing lives-saved is like criticizing the Yankees for not winning the Super Bowl: the DoD is playing a different game. Hell, the legislative and executive branches are playing a different game.

Quiggin should be criticizing the government for playing baseball rather than football. But if the government is guilty of massively misallocating resources in the same way over long time horizons then, as Phil hints, doesn't it make more sense to work towards reducing the ability of the government to misallocate resources rather than trying to force it to do something that it is clearly not well-constructed for? That's what I take away from Quiggin's post: shrink the government. If you don't, it'll spend the money on things that increase the capabilities of the government. I'm pretty sure this is not what Quiggin believes, but since this post isn't coupled with any theory of politics that allows the transference of military funds to other programs that do improve public health, it's hard to take his post any other way.

One last point: Phil notes that IR has a very poor understanding of the way that whole systems work, as opposed to bilateral crisis bargaining. In other words, it isn't necessarily reasonable to infer from a model of a dyadic interaction to the broader system. I think this is especially true when talking about a superpower.

Wednesday, October 19, 2011

Jeffrey Frankel summarizes some of his recent research on how some currencies become internationalized, and speculates on the implications of his work (and that of others like Eichengreen) for the rise of the renminbi as an international currency. He highlights two previous occasions of change in reserve currencies: the shift from the pound sterling to the dollar in the 1930s-1940s, and the rise of the yen and German mark during the 1970s-1980s. (He mentions the rise of the euro since 1999, but the euro mostly just replaced the mark.) Frankel suggests that shifts in global reserve currencies come about when the following conditions are met:

1. A large volume of trade in the rising country.

2. Creditor status for the rising country.

3. Perceptions that the currency of the rising country will maintain its value.

4. Deep, liquid, open financial markets in the rising country.

China definitely meets the first two criteria, probably meets the third, and definitely does not yet meet the fourth although it has taken some baby steps in that direction over the past few years. But I'd add another criterion to the four above: a reserve currency becomes a reserve currency when it's already a reserve currency. In other words, the attractiveness of a currency is partly about the national characteristics that Frankel describes and partly about network externalities. The usefulness of a reserve currency is largely a function of who else uses it as a medium of exchange. If lots of other folks use it, it makes sense for you to use it too. If few others use it, there's not much point in your using it. Therefore, currency usage is determined by a preferential attachment decision rule: currencies that attract a lot of usage by foreign entities tend to attract even more usage; currencies that attract little usage tend to continue to attract little usage.*
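That preferential-attachment decision rule is easy to simulate. A minimal sketch, with stylized starting shares that are purely illustrative (not actual reserve-currency data):

```python
# Preferential attachment: each new adopter picks a currency with
# probability proportional to its current user base, so an early
# lead compounds rather than eroding.
import random

random.seed(1)  # fixed seed so the run is reproducible

usage = {"dollar": 60, "euro": 25, "yen": 10, "renminbi": 5}  # illustrative user counts
currencies = list(usage)

for _ in range(10_000):  # 10,000 new adopters choose a currency
    pick = random.choices(currencies, weights=[usage[c] for c in currencies])[0]
    usage[pick] += 1

total = sum(usage.values())
print({c: round(usage[c] / total, 2) for c in currencies})
print(max(usage, key=usage.get))  # the early leader almost always stays on top
```

Run it with different seeds and the final shares wobble, but the initial leader essentially never loses its lead, which is the lock-in point: usage begets usage.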

For this reason, the reserve status of a currency tends to be pretty durable. As long as the currency network remains intact, central nodes tend to remain central nodes and peripheral nodes tend to remain peripheral nodes. It generally takes a major crisis for the network to disintegrate, thus it generally takes a major crisis for a displacement to take place even if Frankel's four conditions are otherwise met. The shift from the pound sterling (under the gold standard) to the dollar required two world wars and a global depression. These extreme events obliterated the network that had emerged during the "first age of globalization" (large pdf), and allowed the formation of the post-war reserve system based on the dollar.

In other words, I believe Frankel's criteria are necessary for a shift in global reserve currency, but are not sufficient. Previous systemic changes required major crises. There have been few of these; in the past half-century perhaps only a partial one: the dollar's status weakened a bit with the collapse of the Bretton Woods system of exchange rates pegged to the dollar, which was in turn pegged to gold. This led to the emergence of the German mark and Japanese yen as important currencies, though clearly still subordinate to the dollar. The dollar's centrality remained through the "Bretton Woods II" system, in which industrializing economies exported capital to industrialized countries (particularly the United States) to facilitate growth and employment through exportation.

This arrangement actually reinforced the dollar's central role in the international monetary system, but eventually culminated in the subprime crisis. Was this shock large enough to further the process of reserve diversification by disrupting the currency network? There are reasons to think so. The initial financial shock was probably the largest in history, and it emanated from the country at the center of the system: the United States. In hierarchical complex networks, major systemic change generally requires a large attack on the central nodes. For these reasons many expected -- and some hoped -- that the subprime crisis would lead to a reorganization of the international monetary system and the rise of a new reserve currency. Some hoped that the Chinese yuan would increase in global importance, others that special drawing rights at the IMF -- which are comprised of a basket of currencies -- would emerge as a central international reserve asset. Still others anticipated a greater role for the euro; while the European debt crisis now makes this look exceedingly unlikely, it never had much of a chance as Kathleen McNamara wrote in a 2008 article in RIPE.

But there are also reasons to be skeptical about a shift away from dollar dominance. Unlike the British in the interwar period, the US has proactively worked to stabilize the international monetary system. The Fed opened currency swap lines with almost every important central bank in the world from 2008-2010, ensuring that the financial system had ample liquidity. For the most part these actions have been successful, and as of year-end 2010 the dollar was as central to the international exchange network as it had been in 1998.** In other words, the US's actions prevented the network from unraveling the way it did during the interwar years.

In addition, there is no other country that is well-positioned to take over. The Chinese yuan is not yet convertible. The Japanese and Swiss economies are not large or dynamic enough. The euro is not stable enough. And that's... about it. So despite major upheavals in recent years, I do not expect major shifts in the international monetary system any time in the near future.

*Credit to Thomas for first making this point to me.

**I took a look at the data in a previous post, from which the picture at the top of this post is taken.

Monday, October 17, 2011

Vikash Yadav supports the Occupy Wall Street movement, but wants to see more protests directed at international institutions like the Basel Committee on Banking Supervision. Although he doesn't say this, I suspect that he is not aware that the BCBS is nothing more than a talk shop. Its members are the (domestic) central bankers and finance ministers from industrial economies. As such, it doesn't make a whole lot of sense to protest against the administrative staff, which are the only ones actually located in Basel. It makes a lot more sense to protest against the domestic authorities that meet under the auspices of the BCBS to set international regulatory policy. Which is pretty much what the #OWS movement is doing.

Sunday, October 16, 2011

I came across the Occupy Prishtina Tumblr the other day. While I find the rapid spread of the #OWS movement across the globe to be interesting, it also got me thinking. Kosovo is the poorest country in Europe. According to the CIA World Factbook, about 15% of its GDP comes from remittances, and another 7.5% comes from aid and foreign donors. Kosovo has 40% unemployment and per capita income is under $3,000/year. And, of course, they are perpetually threatened by their stronger, richer, larger neighbor -- Serbia.

Almost anyone in Kosovo would gladly trade places with almost anyone camping out in Zuccotti Park. This is illustrative of the fact that the "We are the 99%" sentiment very much depends on which sample you are looking at. If you make $39,000/year you are below the US mean and median, and thus firmly within the domestic 99%. But an income of $39,000/year puts you in the top 1% of the global income distribution.* The American middle class may have stagnated in recent decades, but it is still a much better life than that of 99% of the world's population.

Not including my tuition remission, the stipend UNC gives me for teaching is about $15,000/year. I am occasionally able to supplement that by another grand or two through various sources. While that is above the poverty line, I obviously don't lead an extravagant life. And yet I am in the top 8% of the world's income earners, with expectations of doing far better within a year or two.

Of course the top 1% of Americans are even more elite. I'm not trying to diminish that fact, and I know that the "We are the 99%" is a slogan intended to amplify a broader political point. But I'd like to encourage folks to think globally. There are few people in the US whose lives are not within the world's top 10-15%.

*Yes that is PPP-adjusted.

UPDATE: Suzy Khimm and Scott Sumner make similar points. Khimm's numbers are slightly different from mine; I'm not sure why, but the takeaway is the same. I think this portrayal by Sumner drives it home:

Now let’s start down through Dante’s seven circles of Hell:

1. The US is much richer than Mexico. So much so that millions of Mexicans will risk the horrors of human trafficking into the US to get crummy jobs picking tomatoes all day in the hot sun.
2. China in 2011 is still considerably poorer than Mexico. The Chinese take much greater risks to get here.
3. China today is so much richer than China in 1997 that it’s like a different planet. The changes (even in rural areas) are massive.
4. The China of 1997 seemed like paradise compared to the China of the 1970s. Throughout Hessler’s book, people keep talking about how horrible things were during that decade and how prosperous they are now (1997 in Sichuan!)
5. The China of the 1970s was nowhere near as bad as during 1959-61, when 30 million starved to death.
It’s fine to worry about income inequality in the US. I also worry about this issue. But it’s important to keep in mind that there is much more to life than income inequality, and much more to the world than the US. In the grand scheme of things, tinkering with government programs to help the poor, pitiful, beleaguered American middle class isn’t likely to make much difference, at least from a utilitarian perspective. We need to broaden our outlook.

Wednesday, October 12, 2011

I believe I wrote about this year-old post by John Hempton a while ago, but it's worth revisiting. I like the way he thinks about the effect of financial regulation:

In the UK banks were allowed to lever themselves to a silly extent (similar over-leverage occurs in their life insurance companies). Overleverage as a policy was the defining character of Northern Rock.

Individually it makes sense for banks to lever up. However competition was intense - and collectively it was insane. Northern Rock was levered 60 times or so - but to mortgages that were really thin margin. Their spreads were about 40bps.

What I suspect is happening is all the banks are standing on tippy-toes. It is individually rational - collectively insane because competition kills the benefit of all that extra leverage. Margins in the UK - the most over-levered market on the planet - fell further than anywhere else.

Of course competition was good for borrowers - at least for a while. Lower spreads meant cheaper finance - but not dramatically cheaper. Spreads of 150bps on mortgages levered 15 times is about as profitable as spreads of 40bps levered 60 times. Competition might drive spreads down by 110bps - at the risk to the whole banking system.
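Hempton's comparison is easy to verify: a bank's gross return on equity is roughly its spread times its leverage. This is a deliberate simplification that ignores funding and operating costs, but it reproduces his numbers:

```python
# Gross ROE ~ spread x leverage. Hempton's two cases: fat margins at
# modest leverage vs. razor-thin margins at Northern Rock-style leverage.
def gross_roe(spread_bps, leverage):
    return spread_bps / 10_000 * leverage  # convert basis points to a fraction

print(f"{gross_roe(150, 15):.1%}")  # 22.5% -- 150bps spread, levered 15x
print(f"{gross_roe(40, 60):.1%}")   # 24.0% -- 40bps spread, levered 60x
```

Roughly the same profitability, as Hempton says, but the second bank is carrying four times the leverage to get there, which is the systemic-risk point.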

In most of the popular discourse and academic literature banks are presumed to be opposed to regulation, because it corrects market inefficiencies by forcing firms to internalize negative externalities*. The public choice school** argues that there are times when this is not the case -- when incumbent firms can use regulatory structures to collect rents -- so there is no reason to start from the assumption that regulations will be welfare-enhancing.

Hempton is proposing something else: thinking of financial markets as creating a prisoner's dilemma for banks. In this case, banks would be better off if they were able to collude. They'd be able to maintain fairly large spreads, and thus profits, without taking on inordinate risk. The "cooperative" outcome is Pareto-optimal (for the banks at least). But it isn't individually rational. If all the other banks are maintaining higher standards, you can capture quite a lot of market share by "defecting" -- levering up, in this example. To do this you will have to accept lower margins, but profits will still increase if you increase volume enough.

Of course what is individually rational for one firm is individually rational for all firms, so just like in a prisoner's dilemma everyone "defects", driving down margins without capturing any more market share. In this scenario, bankers would prefer regulations like minimum capital adequacy and limits on leverage, not because they bestow rents (at least not only for that reason), but because they change the structure of the strategic interaction. Firms can now attain Pareto-improving outcomes, earning decent profits at fatter margins without taking on so much risk. So, ceteris paribus, in this situation firms should actually prefer to be regulated so long as everyone else is regulated too.
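Hempton's arithmetic makes the payoff structure easy to check. A minimal sketch (return on equity approximated as spread times leverage, ignoring funding costs and credit losses; the numbers are his, the simplification is mine):

```python
# Crude approximation: return on equity ~= spread x leverage.
# Ignores funding costs, credit losses, and capital requirements.
def roe(spread_bps, leverage):
    return spread_bps / 10_000 * leverage

cooperate = roe(150, 15)  # fat margins, modest leverage
defect = roe(40, 60)      # thin margins, Northern Rock-style leverage

print(f"cooperate: {cooperate:.1%}")  # 22.5%
print(f"defect:    {defect:.1%}")     # 24.0%
```

Defecting earns only slightly more than cooperating while quadrupling leverage, which is exactly why the cooperative outcome is Pareto-preferred among the banks, and why it is so fragile.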

And, in fact, in the wake of the financial crisis every banker said they supported re-regulation of the financial sector so long as it affected everybody. But here's the kicker: the same dynamic that makes regulation Pareto-improving also makes regulatory avoidance very lucrative. If you can figure out a way to arbitrage the regulation, you can capture more market share at a slightly lower margin, thus boosting profits. In terms of the prisoner's dilemma, you can profit by defecting while everyone else cooperates. The rise of the shadow banking system is best understood in this light.

Meanwhile, I'm not as perplexed as Drezner is by recent developments in domestic and international regulatory regimes. First, Basel III went basically the way previous rounds went. This process isn't as simple as "bank preferences are communicated to national governments"; it also includes the preferences of voters and policy elites. Second, the majority of the Dodd-Frank rules haven't been written yet, much less implemented, so it's far too soon to say that bankers have "lost" in any meaningful way. Third, there are very real concerns that the EU won't be able to begin enforcing Basel III any time soon, which could potentially affect the competitiveness of US banks (this is what Jamie Dimon was talking about when he called Basel III "anti-American"). Fourth, Dodd-Frank contains dozens of provisions on top of Basel III, some of which could affect international competitiveness (although most won't).

Lastly, I think Drezner is making too much of the fact that the banks aren't getting everything they want. They are still getting quite a lot -- Dodd-Frank implementation is slow, underfunded, and every GOP candidate vows to repeal or otherwise castrate it -- but they never get everything they want. Finance is one of the most heavily-regulated industries in the country. They routinely lose political fights. The influence of bankers on politics is real, but it is quite often over-stated.

*A line of thought that began with Pigou, who wrote more about taxation than regulation, but the fundamental principle is the same.

Tuesday, October 11, 2011

In Poli 150 -- Intro to International Politics -- we are transitioning from studying the politics of the global security apparatus to studying the politics of the global economy. So I was pleased to see a current case with some local flavor described on the front page of the NYT's web site:

There are still a few textile mills in the Carolina piedmont, making futuristic fabrics that cover soldiers’ helmets and the roofs of commercial buildings.

There is also a new threat on the horizon. A proposed free trade agreement with South Korea, which the House and Senate are scheduled to consider this week, would open the American market to a manufacturing powerhouse that has its own high-technology textile industry. ...

“We are very much in favor of global trade, but we’re just not about having agreements that are unfair to the U.S. textile industry,” said Allen E. Gant Jr., chief executive of Glen Raven, a family-owned company that employs 1,500 people in the United States. “The U.S. needs every single job that we can get.” ...

Economists generally argue that free trade agreements benefit all participating countries by creating a larger market for goods and services. But that benefit derives in part from the movement of some activities to the lower-cost countries. In other words, even if the deal is good for the United States as a whole, it is likely to create clear losers.

The North Carolina textile industry was hit pretty hard by NAFTA, and the new FTAs will likely batter the sector further. I like the article -- written by the excellent Binyamin Appelbaum -- because it neatly lays out the politics of the deal: the country as a whole will benefit, but smaller groups will suffer. Those groups have a strong incentive to lobby the government for protection. Sometimes they will be successful, sometimes they won't. The Obama administration renegotiated parts of the pact it inherited from the Bush administration under pressure from autoworkers' unions and other powerful groups. Textile workers evidently don't have the same sway.

The article also goes a long way towards describing preference formation and aggregation. Allen Gant reveals more than he realizes in that short statement. What he's saying is that he supports market exchange except when his firm is the one facing new competition. And he's saying that in this case trade would hurt both capital and labor in the textile industry. This statement illustrates materialist conceptions of trade politics, and supports the view that attitudes over trade fall along sectoral, rather than factoral, lines. It's a nice little article, and I'll be giving it to my students.

Bueno de Mesquita and Smith continue their guest-posting at the Monkey Cage with this, which basically describes how time inconsistency problems can affect politics. (Michael Lewis recently wrote a case study of this process in California which, for all its faults, is better than his essays on Europe.) Before I get into my criticism, let me say that I value their work on selectorate theory, even though I think it has some problems. I value it for a few reasons: because I think the dynamics they are trying to model (basically a formalization of a strand of public choice econ) are very important for the study of politics even if their theory isn't a finished product, and because they provide a central theoretical paradigm for other scholars to use as a foil for their own research.

Anyway, enough throat-clearing. As this post, and the titles of their books on selectorate theory -- The Logic of Political Survival and The Dictator's Handbook -- make clear, theirs is a theory of comparative politics, not international relations. Nothing wrong with that, but it leaves them susceptible to problems that scholars who primarily operate in one subfield often have when crossing over into another. Specifically, they tend to make assumptions that seem reasonable but are nevertheless highly contentious. This post illustrates one of them. BdM and Smith write:

It is certainly true that bankers and businessmen wrote loans they suspected would not be repaid; they buried debt on the balance sheet; and, in extreme cases, committed outright fraud. These actions, and hundreds of others like them, provide rewards today, accruing costs that must be paid in the future. Why run up so much debt? Easy, paying costs in the future is someone else’s problem. It certainly won’t be the executive’s problem if she does not survive at the top today. Business leaders happily – and smartly – mortgage their firm’s future to ensure that they retain control now. Lavish payments even when performance is poor is the norm, not the exception.

As much as politicians chide business leaders, they too love debt. It lets them buy loyalty now. Repayment is some future leader’s problem. Politicians hate to pay as they go. They love to make expensive promises that they don’t have to fund. For instance, politicians love to pay public sector employees with modest wages and fantastic defined pension benefits. This means less to pay today on their watch and more to pay on someone else’s tomorrow. What could be better!

On the one hand this seems more or less incontestable: politicians would prefer to buy support now and have someone else pay later. As they theorize later in the post, autocracies are more prone towards debt accumulation than democracies. I'm sure this is true in many contexts, and Oatley has some recent research that backs it up, but the example they introduce as illustrative of the broader phenomenon -- the actions of bankers -- gives us reason to question their narrative. Bankers acted the way they did in part because of policies that rewarded this behavior. The financial sector profited enormously from the legal and regulatory structures in the United States and around the world over the past few decades, and there was always an expectation that the government would support them in times of trouble. The Fed made that guarantee via the Greenspan/Bernanke "put", and the fiscal authorities did as well, setting a clear precedent of interventionism through a series of fiscal interventions from the 1980s through the 2000s.

In democracies the "selectorate" -- the group whose support politicians must maintain to stay in office -- is assumed by BdM/Smith to be 50% of the population, or near that number. And yet the most energized political movements on both sides of the ideological spectrum right now, the #OccupyWallStreet and Tea Party groups, both formed in large part in reaction to policies that benefited bankers over the masses. These policies were enacted by both major political parties in two different presidential administrations, and do not benefit 50% of the population. They benefit a very small minority of it, hence the "We are the 99%" slogan of #OWS. In general the economy of the US and other industrialized democracies has become much more unequal over the past several decades, and there has been little or no movement towards redistribution to mitigate the trend. A more progressive tax system would definitely benefit more than 50% of the population, yet the government finds it very difficult to pass a several percentage point marginal tax increase on millionaires.

Not only can selectorate theory not explain this, it would predict the opposite. More generally it would expect the majority of the population in an increasingly-unequal society to favor highly redistributive policies, and it would expect politicians to respond to this demand. BdM/Smith might be on stronger footing if they adopted the Rajan thesis that the government countered stagnating median wages by expanding credit access, but even this would depend on the claim that 50% would prefer more credit to more income. This seems dubious.

It seems more likely, to me, that the idealized account of democracy that selectorate theory provides is incomplete. The strength of selectorate theory is that it can accommodate more complex accounts of preference creation and aggregation relatively painlessly, but the weakness of the BdM/Smith application of selectorate theory is that they seldom take the more complicated steps that are necessary to reach conclusions that match our empirical understanding of the world. That leaves us in a place where selectorate theory is potentially valuable if used with care, but the primary proponents of selectorate theory -- in emphasizing parsimony over accuracy -- end up reaching conclusions that are pretty clearly wrong.

On the Network Topology of Variance Decompositions: Measuring the Connectedness of Financial Firms Francis X. Diebold, Kamil Yilmaz
NBER Working Paper No. 17490
We propose several connectedness measures built from pieces of variance decompositions, and we argue that they provide natural and insightful measures of connectedness among financial asset returns and volatilities. We also show that variance decompositions define weighted, directed networks, so that our connectedness measures are intimately-related to key measures of connectedness used in the network literature. Building on these insights, we track both average and daily time-varying connectedness of major U.S. financial institutions' stock return volatilities in recent years, including during the financial crisis of 2007-2008.

This is important work, and I know that several regulators and central banks (including the Bank of England) are starting to take this sort of modeling -- weighted, directed networks -- very seriously. When you're trying to track sources of systemic weakness you really need to know what the system looks like. The problem isn't just "too big to fail", it's also about which firms are tightly connected to many other firms. These two will often correlate, but not always and not perfectly, so knowing the difference is important.
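The basic construction is simple to sketch, even if the estimation isn't. In the Diebold-Yilmaz setup the (i, j) entry of a forecast-error variance decomposition gives the share of firm i's variance attributable to shocks to firm j, and the off-diagonal entries define a weighted, directed network. A toy illustration with made-up numbers (the paper estimates the decomposition from a VAR on volatilities; that step is omitted here):

```python
import numpy as np

# Hypothetical 3-firm variance decomposition matrix: row i gives the
# share of firm i's forecast-error variance attributable to shocks
# from each firm j, so each row sums to 1. The numbers are invented.
D = np.array([
    [0.80, 0.15, 0.05],
    [0.10, 0.70, 0.20],
    [0.25, 0.25, 0.50],
])

off_diag = D - np.diag(np.diag(D))
from_others = off_diag.sum(axis=1)   # spillover each firm receives
to_others = off_diag.sum(axis=0)     # spillover each firm transmits
total = off_diag.sum() / D.shape[0]  # total connectedness index

print(from_others)  # firm 3 receives the most spillover
print(to_others)    # firm 3 transmits the least
print(total)
```

In this fabricated example the third firm receives the most spillover from others while transmitting the least -- exactly the kind of asymmetry a "too connected to fail" measure is meant to surface.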

The Stock Market Crash of 2008 Caused the Great Recession: Theory and Evidence Roger Farmer
NBER Working Paper No. 17479
This paper argues that the stock market crash of 2008, triggered by a collapse in house prices, caused the Great Recession. The paper has three parts. First, it provides evidence of a high correlation between the value of the stock market and the unemployment rate in U.S. data since 1929. Second, it compares a new model of the economy developed in recent papers and books by Farmer, with a classical model and with a textbook Keynesian approach. Third, it provides evidence that fiscal stimulus will not permanently restore full employment. In Farmer's model, as in the Keynesian model, employment is demand determined. But aggregate demand depends on wealth, not on income.

I think some of this gets to my confusion about Keynesianism from a few days back. I think the last sentence particularly drives at what I was saying before: if the monetary multiplier is low because of expectations, then how can the fiscal multiplier be high under the same set of expectations? It makes more sense (to me) for behavior to be conditioned by wealth more than income, particularly if the income is temporary. I clearly need to become more familiar with Farmer's work.

And here's a near-complete preprint of Herb Gintis' most recent book, The Bounds of Reason: Game Theory and the Unification of the Behavioral Sciences. Via one of Phil Arena's commenters.

Cain replied, “I’m ready for the ‘gotcha’ questions and they’re already starting to come. And when they ask me, ‘Who is the president of Ubeki-beki-beki-beki-stan-stan?’ I’m going to say you know, ‘I don’t know. Do you know?’ And then I’m going to say, ‘How’s that going to create one job?’ I want to focus on the top priorities of this country. That’s what leaders do.

“They make sure that the nation is focused on the critical issues with critical solutions,” Cain said. “Knowing who is the head of some of these small insignificant states around the world, I don’t think that is something that is critical to focusing on national security and getting this economy going. When I get ready to go visit that country, I’ll know who it is, but until then, I want to focus on the big issues that we need to solve.”

I’ll give most people a pass for being ignorant about Central Asia. It doesn’t even surprise me that someone aspiring to be the US President would be ignorant of the region. But to be so epically ill-informed about a supply route that both the Bush and Obama administrations thought important that runs right through a country he so bluntly calls “small” and “insignificant,” well… that’s just funny. (And not for nothin’ but surely at least one American job has to have been created by GM’s factory in Uzbekistan.)

Smirking aside, I actually think Cain's pretty much correct about this. There is a supply route that runs through Uzbekistan, but it's hardly essential for the US to pursue its geopolitical interests. And while the GM factory may have created a job or two, the total number won't be much higher than that. In any case, that sort of "gotcha" question has no bearing on the presidential race.

Cain is obviously all but irrelevant, but what's more worrying is that no major political candidate seems to have much understanding of, or interest in, international political and economic dynamics. Secretary Clinton may be the furthest along the curve, and she has a new article in Foreign Policy that looks like it's dedicated to the topic. I haven't had time to read it in full yet, so perhaps I'll have more to say later. But if it contains sense it will be the exception.

Similarly, for someone who says that the Obama administration is "undermining one’s allies (p. 3)," in contrast to you, who will "reassure our allies (p. 13)," you don't actually talk about America's treaty allies much at all. True, you do talk about expanding America's allies to include India and Indonesia. Mexico gets some face time. Israel gets a lot of face time. On the other hand, NATO is not mentioned once in this entire document. Neither is the European Union. Japan and South Korea get perfunctory treatment at best. Turkey is a major treaty ally but you treat it like a pariah state. For someone who's claiming that the U.S. will reassure its major allies, you didn't talk about them much at all. This is a really important problem, because Japan and Europe have been crucial allies in a lot of major American initiatives -- and they're getting weaker. Even in discussing new possible allies, I'm kind of gobsmacked that Brazil is never discussed.

Another big problem is that your approach to China is so shot full of contradictions that I don't know where to begin. ...

If the section on China is contradictory, then your discussion of Pakistan is worse. ...

One final point, should you choose to revise this draft strategy -- you need to prioritize the threats you discuss in the paper. You list a bunch of them -- rising authoritarian states, transnational violence, failing states, and rogue states. If you have to prioritize, which threats merit greater attention? This should actually be pretty easy, since you absurdly overhype the threats posed by some of these countries (Venezuela, Cuba and Russia in particular).

But I suspect that these inconsistencies are exactly the message that Romney intended to convey. After all, he's not trying to convince the FP wonkosphere that he's got a consistent grand strategy that would get an exceptional grade in a graduate class. He's trying to convince GOP voters to nominate him for the presidency. And what do those voters want to hear? It seems likely that they want hyped-up threats from Cold War baddies, a blank check for Israel, and mixed feelings on China. I doubt they care much about Brazil, and they take Japan and Europe for granted. And, as Drezner noted previously, they (presumably) want someone who has thought about foreign policy for more than 15 seconds and has some coherent vision for how it should be conducted. Unlike Perry. Romney's tossing them the red meat that he hopes will convince primary voters that he's more serious and knowledgeable about foreign policy than his competitors.

In other words, the audience matters. Romney's audience is the contemporary GOP, which is endlessly hawkish but only in some directions. He's signaling to them as hard as he can that he's hawkish in those directions too, but no others. Everything in the document makes sense when viewed in that light, even if it doesn't make sense as an actual policy platform.

I'm not trying to knock Drezner at all; his job is to take these kinds of policy statements at face value and evaluate them. He is obviously aware of Romney's motivations. But it's worth remembering the context.

Sunday, October 9, 2011

Some of this is already old news, but there were some developments on trade over the past week.

-- The US looks set to ratify FTAs with Colombia, South Korea, and Panama. I've been pondering a longer post about the value of FTAs, which I'll try to get to in the future. For now it's just worth noting that these deals are pretty small beer.

-- Obama's going after China for violating WTO rules by not reporting subsidies -- 200 of them, apparently -- some of which are probably WTO-illegal. I think this is important. China's trade policies are incredibly distorting, and the global economy needs a rebalancing. Adjustment is occurring, but perhaps not quickly enough. Going through the WTO is much better than risking a trade war by unilaterally imposing tariffs in response to currency manipulation.

The process by which countries accede to the World Trade Organization (WTO) has become the subject of considerable debate. This article takes a closer look at what determines the concessions the institution requires of an entrant. In other words, who gets a good deal, and who does not? I argue that given the institutional design of accession proceedings and the resulting suspension of reciprocity, accession terms are driven by the domestic export interests of existing members. As a result, relatively greater liberalization will be imposed on those entrants that have more valuable market access to offer upon accession, something that appears to be in opposition to expectations during multilateral trade rounds, where market access functions as a bargaining chit. The empirical evidence supports these assertions. Looking at eighteen recent entrants at the six-digit product level, I find that controlling for a host of country-specific variables, as well as the applied protection rates on a given product prior to accession, the more a country has to offer, the more it is required to give. Moreover, I show how more democratic countries, in spite of their greater overall depth of integration, exhibit greater resistance to adjustment in key industries than do nondemocracies. Finally, I demonstrate that wealth exhibits a curvilinear effect. On the one hand, institutionalized norms lead members to exercise observable restraint vis-à-vis the poorest countries. On the other hand, the richest countries have the greatest bargaining expertise, and thus obtain better terms. The outcome, as I show using a semi-parametric analysis, is that middle-income countries end up with the most stringent terms, and have to make the greatest relative adjustments to their trade regimes.

Saturday, October 8, 2011

I like this post by Derek Thompson on inequality in the US. It's not polemical, instead presenting important facts in easy-to-understand graphs and charts. Most of it I knew previously, but not this:

When you add it all up, we have a country with steep divisions between rich and poor and a tax code that for all of its problems is progressive (although it has been more progressive in recent years). Here are two more graphs to take you home: the first shows share of income by quintile and the second shows share of federal income taxes by quintile. What you'll see is that income inequality is behind tax burden inequality.

The whole post is worth reading, and the accompanying graphs are enlightening.

Thursday, October 6, 2011

This is well outside of my normal purview, but since the assassinations of Anwar al-Awlaki and Samir Khan, both American citizens, I've been thinking a good bit -- not for the first time, thank you -- about the wide space between our conception of the checks and balances on the capricious exercise of government power, and the reality of it. People talked about this some during the Bush administration, but I'm now of the opinion that the primary difference between the Bush administration and others in recent history is the former's simple brazenness: Bush would come right out and say "I'm the decider", and he'd have his lawyers so obviously butcher the tradition of law to rationalize his policies, and his Vice-President would declare himself a member of neither the executive nor legislative branches and therefore not subject to any oversight from anyone ever, and his ambassador to the UN openly advocated abolishing the UN... Bush just didn't give a damn. He'd practically dare anyone to do anything to slap his wrists. Every other president before or since has at least pretended to uphold the law. Bush said that wasn't necessary, since any action taken by the president was ipso facto permissible.

In other words, the difference between the Bush administration and other administrations is that the former was brazen. But that may be the only difference. The Bush administration may have lied to get the US into a war -- or been selective in their disclosure of the known facts, depending on how charitable your interpretation is -- but the Reagan administration lied about the existence of a war. I'm not going to go down the list, but it's now indisputable that every presidential administration since World War II* has used the tools of war at its own discretion, with essentially no accountability. Alright, I know this is not news.

But this is, to me at least. It seems that the Obama administration convenes a panel of NSC folks who determine who the US will attempt to assassinate. Without legal mandate or any oversight, of course. Here's how Reuters puts it:

There is no public record of the operations or decisions of the panel, which is a subset of the White House's National Security Council, several current and former officials said. Neither is there any law establishing its existence or setting out the rules by which it is supposed to operate.

This is, of course, completely obscene. There may be a case to be made that the president needs the authority to attack high-level targets at short notice, and use lethal force if that's the best or only option. I would hope that decisions involving the use of lethal force would be reviewable by some sort of oversight committee, but I could understand the argument in favor of such a policy. But as far as I can tell the president does not possess that power in any established legal sense. That makes this panel as it exists today no more than the District of Columbia's branch of Murder, Inc. but with the full resources of the United States government at their disposal. Legal immunity too.

I don't see how this is an acceptable state of affairs.

*And possibly before, although my knowledge of American history is murkier before then.

Wednesday, October 5, 2011

We study the pricing of political uncertainty in a general equilibrium model of government policy choice. We find that political uncertainty commands a risk premium whose magnitude is larger in poorer economic conditions. Political uncertainty reduces the value of the implicit put protection that the government provides to the market. It also makes stocks more volatile and more correlated when the economy is weak. In addition, we find that government policies cannot be judged by the stock market response to their announcement. Announcements of deeper reforms tend to elicit less favorable stock market reactions.

Does short-term debt increase vulnerability to financial crisis, or does short-term debt reflect -- rather than cause -- the incipient crisis? We study the role that short-term debt played in the collapse of the East Asian financial sector in 1997-1998. We alleviate concerns about the endogeneity of short-term debt by using long-term debt obligations that matured during the crisis. We find that debt obligations issued at least three years before the crisis had a negative, albeit sometimes insignificant, effect on the probability of failure. Our results are consistent with the view that short-term debt reflects, rather than causes, distress in financial institutions.

Tuesday, October 4, 2011

Ben Bernanke isn’t the most important central banker in the world. Jean-Claude Trichet is.

That's... crazy. Europe is certainly important, but the dollar is still the world's reserve currency, and Bernanke manages it. Plus, the Fed is responsible for overseeing the US financial system, which is central to the global financial system in a way no European countries are, separately or taken together. Additionally, the ECB isn't (technically, legally) supposed to have all that much to do with the European financial system; regulatory authority still resides with national governments. Trichet faces constraints that Bernanke doesn't face, which limits his influence, but even if that weren't true he'd still be less important.

To illustrate: During the crisis, the Fed routinely provided liquidity support for foreign firms, most of which were in Europe. Has the ECB done anything similar for US firms? During the crisis the Fed opened up swap lines with every major central bank in the world. Did the ECB do anything similar? Not outside of the eurozone, as far as I can tell.

(Side note: Yglesias notes that the EU is a larger economy than the US. Which is true. But Trichet only controls monetary policy in the eurozone, not the entire EU, and eurozone GDP is roughly 75% of US GDP.)

I think I asked this question back in '09 or early '10, but I didn't get a satisfactory answer so I'll ask it again. I am certain that there is a simple answer to it, but I haven't yet seen it. I know some Keynesian economists occasionally read this blog, so I'm hoping they'll set me straight.

As far as I can tell, the whole Keynesian framework depends on the existence of a liquidity trap. Without it, as Krugman keeps repeating, normal rules of macroeconomics apply: trade is good, monetary policy is effective, etc. But in Depression Economics all that is turned upside down. The rules of the game change because of the constraint imposed by the liquidity trap. Normal macroeconomics doesn't work.

In the Keynesian framework monetary policy is ineffective at the zero lower bound because people (banks, businesses, households) hoard cash. Thus there is a decrease in aggregate demand, economic activity slows, unemployment increases, etc. I get all of that. Here's the leap that I can't make: why isn't that true of fiscal policy as well? If I use monetary policy to give people money and they hoard it, why would they not hoard money if I use fiscal policy to give them cash?* The whole idea of Depression Economics depends on a model of mass psychology -- what Keynes called "Animal Spirits"** -- that would seemingly apply universally to all public policy intended to stimulate demand. I see no reason why businesses or households would respond to cheap/free money from the monetary authorities by not hiring, but respond to cheap/free money from the fiscal authorities by hiring.

In other words, if the monetary multiplier is small because of the hoarding impulse derived from animal spirits, then the fiscal multiplier should be no greater and probably smaller, for a few reasons. Cash transfers on the fiscal side only move the money once, and then it should be hoarded in the same way as cash from monetary policy (which is also moved once). But fiscal policy also incurs new debt, which must be serviced. That imposes real future costs in the form of interest -- which is admittedly quite small or even negative in the present environment -- and fiscal drag from future taxation, both of which can be anticipated. Tack on some waste/corruption/deadweight loss and it's hard to see how fiscal policy would be more effective than monetary policy at the zero lower bound or anywhere else. Even at a high discount rate monetary policy can always be cheaper than fiscal policy, so it should seemingly have a higher multiplier.
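To make the symmetry concrete: if every recipient hoards the same fraction of whatever cash arrives, a one-time injection generates the same geometric series of spending regardless of whether it came from the central bank or the treasury. A back-of-envelope sketch (the hoarding rate is an illustrative parameter, not an estimate of anything):

```python
def cumulative_spending(injection, hoard_rate, rounds=1000):
    """Total spending generated by a one-time cash injection when each
    recipient hoards `hoard_rate` of the cash and spends the rest onward."""
    total, cash = 0.0, injection
    for _ in range(rounds):
        spent = cash * (1 - hoard_rate)
        total += spent
        cash = spent  # spent cash becomes someone else's income
    return total

# With heavy hoarding (h = 0.8), $100 of cash -- monetary or fiscal --
# generates only (1-h)/h * 100 = $25 of spending.
print(round(cumulative_spending(100, 0.8), 2))  # 25.0
```

On this naive accounting the origin of the cash is irrelevant, which is the puzzle as I see it.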

I freely admit my ignorance and stupidity in this matter. I understand that economics often makes no sense until someone explains it to you, and my economics education effectively ended with my undergrad major. So I'm asking someone to explain it to me: why do animal spirits negate monetary policy at the zero lower bound but not fiscal policy? I'm guessing it has something to do with financial intermediaries, but then doesn't that require an additional, separate assumption about psychology?

*The closing scene in the HBO adaptation of Sorkin's Too Big to Fail has Paulson muttering to one of his deputies something like "We gave the banks the cash; now they better spend it and get the economy moving". I'm sure that's apocryphal, but the whole point is that they didn't. They hoarded it, as a Keynesian would expect from monetary policy, but not from fiscal policy. Paulson, of course, was most concerned with the fiscal intervention.

**While I'm here, there's something else I don't understand: Why is it that Keynesians smirk at assertions that businesses aren't hiring because of "uncertainty" when their entire underlying model depends on precisely that claim? Partisans on the right surely miss part of the story when they attribute this uncertainty only to Obama's policies -- I agree with Summers when he said that the biggest uncertainty is over the entries on the order books, i.e. aggregate demand -- but the uncertainty that matters is over expected profits. One part of that equation is revenue, the other part is costs, including regulatory and tax costs. Decreasing uncertainty over the former (in a positive direction) increases confidence and thus investment, but so does decreasing uncertainty over the latter (in a positive direction). Both sides seem to be right and wrong. Or, rather, incomplete without the other.

A bunch of smart people (Downes, Fearon, Nye, others) discuss in print whether/why regime change doesn't "work". Perhaps of interest to some readers. I'll just say that regime change "works" almost every time we attempt it, in that we usually succeed in changing whatever regime we want to change. What doesn't "work" is subsequently establishing a liberal democracy that kowtows to our every wish.* Which seems like another matter, distinct from "regime change" and requiring its own term. But I don't get to make the rules.

*The wish is seldom defined. Our own democracy never fulfills our every wish, which I believe is a true statement no matter how you define "our".

Monday, October 3, 2011

That would be this one, which tells me in the headline and subtitle that North Carolina "grooms its best students to be good teachers". I strongly suspect that this is not true empirically, and I certainly hope it is not. Teaching requires basic competence in the subject material plus the ability to lesson-plan effectively and communicate well. While this is not an easy job relative to many other tasks, it's not on the same level of difficulty as, oh, say, developing new medical procedures, inventing new technologies, or devising and testing new theories of human interaction.* Given that, I'd rather our best students focus on the most difficult tasks and/or those with the highest social benefit, while our capable-but-definitely-not-the-best students focus on getting first graders to color inside the lines or getting eighth graders to dissect a frog without vomiting.**

*The inclusion of the latter is me puffing out my chest, in case you couldn't tell.

**Not sure if I have those activities assigned to the proper grades, because I skipped 8th grade and never dissected a frog, so I assume that's when it happens.