February 28, 2006

Why the Spatial Mismatch?

Back in 1968, economist John Kain put forward the "spatial mismatch hypothesis," arguing that African-Americans suffer from a particular form of labor market discrimination—blue-collar jobs were moving to the suburbs while blacks were increasingly concentrated in the inner city—that limits their employment options. More recently, William Julius Wilson discussed the same phenomenon in his 1996 book, When Work Disappears. (Not everyone agrees with the hypothesis, of course—see this link for an overview of the debate. Here's a convincing paper supporting the hypothesis.)

It's not too difficult to see how this would happen, for the most part. Racial segregation in residential areas is a fixture of the American landscape. Douglas Massey and Nancy Denton's 1993 book, American Apartheid, explores this in more detail, showing how white neighborhoods continue to discriminate against minorities; when blacks try to move to white areas, they are often told that the lots are sold, or quoted inflated prices, while banks continue to discriminate in their lending practices. (For a more historical view, see this old post on Richard Nixon's "suburban strategy" to thwart racial integration in the suburbs.)

So black suburbanization has proceeded slowly over the years. What else would we expect? Meanwhile, according to a recent GAO study, over three-fourths of all jobs are now located in suburban areas. That makes transportation an issue for those living in the inner city. It also means that urban workers will have less access to good employment networks. Since firms have classically recruited through local ads or word of mouth, blacks who live in inner cities, away from where the jobs are, are at a disadvantage. (The internet may change this somewhat, but I have no idea how much.)

Anyway, none of that is all that surprising. But there's one other question I've always wondered about: Why do so many firms move to the suburbs—and away from inner cities—in the first place? William Julius Wilson writes about what happens when work disappears, but not why all the work is disappearing. Are businesses making race-neutral decisions about where to locate—and it just so happens that the "best" place for them to locate is where all the white people are (so to speak)?

Probably not. There's an interesting 1998 paper by John Iceland and David R. Harris, economists at the University of Michigan, which I just came across, that asks exactly that question. And what they found was that "firms in neighborhoods with a growing proportion of African-Americans are more likely to express relocation intentions," at least in Boston and Los Angeles, even after controlling for a variety of side factors (the proportion of employees who are black, the race of the firm's supervisor, the firm's sector, and changing incomes and population in the neighborhood). So in at least those two cities, businesses really are fleeing black people. And the end result of those decisions—whether they're conscious or unconscious—is the structural discrimination described above.

Oddly enough, in Detroit and Atlanta the same effect doesn't hold—firms don't seem to flee the neighborhood when blacks start moving in. Iceland and Harris wondered whether this was because both cities have had long histories of racial conflict and "balkanization," such that firms sensitive to race have already moved far away from African-American neighborhoods, whereas that process is only just beginning in Boston and Los Angeles. Hard to say. But it seems like an important paper, and I wonder if anyone has followed up on it in the last seven years. Looking around the internet, it doesn't seem so.

February 22, 2006

Follow the VAT

In the Los Angeles Times today, Amira Hass points out that Palestinians are currently being robbed of their rightful tax revenue by the Israeli Cabinet:

At the ports, Palestinian importers are required to pay the Israeli authorities the value-added tax of 17%, as well as whatever custom taxes are due on goods that come in on their way to the West Bank or Gaza. These transactions (along with direct Palestinian transactions with Israeli firms and merchants) last year yielded revenues of $711 million.

But whose revenues are they?

To judge by the actions of the Israeli Cabinet on Sunday, the money belongs to Israel. The Cabinet announced that it was going to withhold Palestinian tax and customs revenues, at least for the moment, as a response to Hamas' electoral victory. Until the money is released — if it is released — the Israeli treasury will earn the interest.

It's a real problem; under the Oslo Accords, the revenue belongs to the Palestinian Authority—it's not Israel's money to withhold. But as it happens, I don't think Hass goes anywhere near far enough. Without getting into too much finger-pointing, it seems fairly straightforward that the tax and tariff system set up by Israel has always been inherently devastating to the Palestinians, regardless of whether the Israeli Cabinet is withholding funds or not. The entire system's a travesty.

After 1967, of course, Israeli markets were integrated with the much smaller economies of the West Bank and Gaza Strip. Firms in the smaller economies were run out of business as they were forced to compete, unprotected, against more advanced Israeli industries. Israel also put in place a customs union in 1967 that increased tariffs fourfold in the combined territories—primarily to protect Israeli industry. The net effect was that the Palestinian territories were forced to shift trade away from other Arab states and towards Israel, and Palestinian industries lost their competitive edge in international trade as factors of production became more costly.

Now that didn't have to be a terrible thing for the Palestinians, necessarily—after all, the occupied territories also theoretically could "benefit" economically, somewhat, from new opportunities to work in and trade with Israel. (Among other things, Palestinians working in Israel saw a quick rise in money income.) According to economist Fadle Naqib, "In the first decade [after 1967], Palestinian GDP per capita grew from nine percent of that of Israel to fifteen percent."

But after that initial decade, the ratio started declining, for several reasons. First, Palestinians in the West Bank and Gaza increasingly lost control of their natural resources—especially land and water (by 1987, settlers were using about 50 percent of the water in the West Bank, despite being only 10 percent of the population)—which both hurt Palestinian agriculture and limited industrial expansion in the territories. Second, there were a variety of trade restrictions (Palestinians weren't allowed to import many advanced technologies, for instance) and licensing restrictions on business activities imposed by the Israeli military administration. Many of these placed heavy limitations on what Palestinians could import and export. Third was the underfunding of vital infrastructure in the occupied territories, which devastated the Palestinian economy.

Fourth was the VAT system, whose funds were only transferred to the Palestinians themselves after Oslo in 1993. In essence, an explicit system was created in which the Palestinian economy became entirely dependent on Israel. (90 percent of all Palestinian imports come from Israel; no accident, this.) Naqib estimates that the net effect of these resource transfers to Israel can amount to about 15 percent of Palestinian GNP in any given year.

Now, some of these things changed in 1993, but in practice less than one might suppose. Some of the restrictions on Palestinian exports were removed, and the Palestinian Authority now has (very) limited ability to set its own tariffs, but in practice Israel still controls Palestinian trade. Funds from the VAT, meanwhile, are technically transferred to the PA (at least they were until last month). But this doesn't stop the resource transfers to Israel. Another study noted that, in practice, most Palestinian wholesalers and firms must use Israeli traders to import from the rest of the world, and the taxes collected on these traders often go to Israel rather than the Palestinians. The study estimated that because of this practice, Palestinians forgo about one-third of all tax revenue, or 3 percent of GDP.

Add to that the fact that the PA ran the territories in the most corrupt manner possible, as well as the effects of the intifada and the Israeli response, and the fact that the "security barrier" has gutted the territories, and it's no surprise that the Palestinian economy has imploded—unemployment is now as high as 50 percent, and foreign aid is the only thing preventing outright collapse. As for day-to-day life, Richard Ben Cramer described the effects of all these restrictions pretty well in How Israel Lost:

Let's take the case of a Palestinian who is partner to no one, save perhaps his wife: he is a head of household, who has provided for his family, as honor requires, by his labor—it doesn’t matter what labor: say, he ran a shop in Gaza. He's done well enough to have a house, and now, in the fullness of time, his son must wed. Except the son doesn't have a job—what job? So the father must construct an addition to the house. Honor requires no less. And the construction will require, let's say, some gravel. And the gravel must come from Israel—that's quite a business in Israel.

Except this Israeli gravel will cost our hard-pressed shopkeeper twice what it costs in Israel. Because there's only one place to buy Israeli gravel—a legal monopoly run by and for the princes of the PA. And, of course, the new rooms will require some lumber—obtainable only from a similar monopoly (original provenance—a supplier in Israel)… and cement (same story)… and the steel bars to reinforce the cement… Our shopkeeper will pay maybe double the price for every item he'll need. Or, to put it another way, his every occasion of normal need will be twisted to provide commerce for Israel, and pure profit (it adds up to billions each year) for the Tunisians [in the Palestinian Authority].

Right. Lots of blame to go around. Still, while Amira Hass has identified a real scandal—Israel is currently withholding $711 million that belongs to the Palestinians—it's arguably the least offensive part of the economic set-up at work here.

Insurgents detonated bombs inside one of Iraq's holiest Shiite shrines Wednesday, destroying its golden dome and triggering more than 60 reprisal attacks on Sunni mosques. The president warned that extremists were pushing the country toward civil war, as many Shiites lashed out at the United States as partly to blame.

Swopa rounds up evidence—okay, more like the barest of hints—that the Samarra bombing could have been the work of militant Shiites looking to provoke some serious sectarian warfare. It's not impossible, I guess. Quite obviously a lot of different groups in Iraq have a lot of different motives for edging the country closer to civil war, and it's getting easier and easier by the day. But it seems more likely that, as Swopa says, al-Qaeda in Iraq did this, in order to keep "Iraqi Sunnis out of nonviolent politics by pushing the Shiites further against the wall."

Meanwhile, it was only a few days ago that Ambassador Zalmay Khalilzad threatened to withdraw aid from the Shiite-dominated Iraqi government if it insisted on engaging in sectarian warfare with the Sunnis. And now top Shiite leaders are blaming Khalilzad for encouraging the insurgents with that statement all while… engaging in sectarian warfare. So what will the U.S. do? President Bush sounds like he's planning to back the Shiite government and oppose the "terrorists" while calling for "restraint" on all sides. But this doesn't seem like the sort of thing you can really finesse in this way. Juan Cole says this is an "apocalyptic day." Seems right.

Kevin Drum says he doesn't see why the sale of operations of six American ports to Dubai Ports World, a shipping company owned by the United Arab Emirates, is such a security risk. After all, the company wouldn't even be handling port security in those ports; the Coast Guard and U.S. Customs and Border Protection would. Plus, over 30 percent of port terminals in this country are already operated by foreign companies anyway. And DPW already does this sort of thing in ports all over the world, and other countries seem okay with that. Okay, I'll buy that.

But The Nation's John Nichols, meanwhile, asks an interesting question: Why are ports run by corporations at all? Shouldn't this sort of vital national infrastructure be operated and run by the government? Well, my understanding here is that ports are run by the government, mostly: port operations (i.e., moving ships in and out of terminals) are handled by corporations, true, but the regulatory apparatus (i.e., security, customs, licensing, etc.) is handled by the state, and all major U.S. ports are owned by public port authorities, which oversee development, construction, port policies, etc. Occasionally there's a move afoot to completely privatize the ports, but the current model remains. It's also the model in 90 percent of the world's major ports.

There's one major exception, though: In 1983 the United Kingdom under Margaret Thatcher completely privatized the Associated British Ports. According to a World Bank report, the results were mixed, and "sale of either port land or regulatory duties and responsibilities to the private sector has a particularly dubious connection with improved port efficiency." It's also true that the fire sale decimated union representation in the docks (every port in Britain today is non-union), although the Bank seemed to see that as a good thing, naturally.

The World Bank study argues that partial privatization of ports—that is, the current U.S. model—is the way to go, with regulatory functions staying in the hands of the state and operations handled by private companies. (The land itself can be sold to corporations, but it doesn't seem like this has as big an effect on investment and development as one might think.)

According to the World Bank, in 1999 only 7 of the world's top 100 ports were completely state-owned anyway, mainly in Israel, Singapore, and South Africa, and these ports are all going to start partially privatizing soon enough. It's not clear that there's any significance one way or the other; privately-run ports aren't always that much more efficient, but it's also not clear that they're that much more unsafe. (They are almost certainly worse for unions, which in my mind is a major downside.) Still, I'm not sure Nichols has spotted the scandal he thinks he has. I'd be happy to hear otherwise, though.

Back to the UAE scandal. It also seems like the administration has broken the law by not doing a mandatory 45-day review of the DPW deal. It would be nice if we lived in an age where illegal activities on the part of the president actually mattered, but apparently we don't. Admittedly, it's not entirely unpleasant watching the president get attacked by Republicans hopped up on the very war-on-terror hysteria he helped create. Although I'd rather we didn't have the hysteria in the first place, I guess.

I'd also be curious to know if there's anything to David Sirota's suggestion that the DPW deal was the Bush administration's way of buttering up the UAE for further free trade talks. And why is the president ready to use the first veto of his administration to protect the UAE? It seems shady enough that he doesn't really warrant a pass here.

And while we're at it, I'd also like to know if the country really needs to spend billions more dollars on port security, as Democrats always seem to suggest. Do we really think that some doomsday device will come in through the crates? How likely is this, really? Are there better ways to spend the money? (Like on serious non-proliferation efforts, perhaps?) Okay, no more questions.

UPDATE: In comments, serial catowner makes the case that we shouldn't be doing business with the UAE at all, for various reasons (it's a loathsome regime, for starters). That seems far more compelling to me than the idea that foreign operators might harm our port security; although on the latter, a few hearings wouldn't hurt...

February 21, 2006

So Much for Oceans

At long, long last it's done. We just put our most recent issue—on oceans—up on the Mother Jones site, and I'd encourage everyone to give it a look. It isn't the sexiest topic around, perhaps, but it's pretty damn important. If you have time for only one article, read Julia Whitty's cover story, "The Fate of the Ocean," which argues that we've pretty much trashed our seas, which are rapidly approaching the point of no return: "Science now recognizes that the ocean is not just a pretty vista or a distant horizon but the vital circulatory, respiratory, and reproductive organs of our planet, and that these biological systems are suffering." Fun times for planet Earth.

Okay, time to read this Francis Fukuyama piece everyone's talking about. Let's see: Bush is incompetent? Right. The neoconservatives overreached on Iraq? Sure, sure. It's time to be more realistic about stuff, without being amoral? Fine, whatever. Ah, here we go. Fukuyama gets into how to go about promoting democracy:

If we are serious about the good governance agenda, we have to shift our focus to the reform, reorganization and proper financing of those institutions of the United States government that actually promote democracy, development and the rule of law around the world, organizations like the State Department, U.S.A.I.D., the National Endowment for Democracy and the like. The United States has played an often decisive role in helping along many recent democratic transitions, including in the Philippines in 1986; South Korea and Taiwan in 1987; Chile in 1988; Poland and Hungary in 1989; Serbia in 2000; Georgia in 2003; and Ukraine in 2004-5.

But the overarching lesson that emerges from these cases is that the United States does not get to decide when and where democracy comes about. By definition, outsiders can't "impose" democracy on a country that doesn't want it; demand for democracy and reform must be domestic. Democracy promotion is therefore a long-term and opportunistic process that has to await the gradual ripening of political and economic conditions to be effective.

So it seems like Fukuyama would prefer to have the United States "promote" democracy around the world by doing what it did in, say, Ukraine, Georgia, and Serbia, rather than the "invade and liberate" model that, one assumes, was discredited long ago.

But the three examples cited above all involved very assertive steps on the part of the United States—we may not have been "impos[ing] democracy on a country that doesn't want it" (whatever that means), but the United States certainly gave a serious shove. The Clinton administration spent about $25 million on civil society groups in Serbia to oppose Milosevic, groups that eventually led the "Bulldozer Revolution" in 2000. Those Serbian groups, especially the student group Otpor, in turn linked up with the Georgian civil-society groups that helped lead the Rose Revolution—groups that were in turn funded heavily by the Soros Foundation. (Indeed, Otpor now travels the world teaching civil society groups how to organize democratic revolutions.)

Ukrainian opposition groups, meanwhile, got some $65 million from the United States in the two years prior to the Orange Revolution, and were helped by third-party NGOs such as the National Endowment for Democracy and Freedom House, both of which are very active—at times, quite aggressive—in promoting democratic change by nonviolent means. So is Peter Ackerman's International Center for Nonviolent Conflict, which, as nicely detailed by Franklin Foer in the New Republic, has been very active in encouraging democratic revolution around the world, played no small part in the Rose Revolution, and has been eagerly embraced by a State Department looking for a Fukuyama-esque alternative to Iraq-style regime change.

That appears to be more or less the sort of thing Fukuyama has in mind, although he leaves himself a lot of wiggle room. But is this the "right" way to go about promoting democracy? I'm still skeptical. It's certainly true that democratic transitions often depend to a large extent on mass movements from below. But there's a question of whether the Ukrainian-Georgian-Serbian model is appropriate for all countries.

After all, all of those revolutions could easily have ended horrifically, resulting in either a crackdown by the ruling regime or, in Ukraine's case, even war. That they didn't seems lucky above all; but promoting democratic change in this fashion elsewhere could easily lead to a lot of bloodshed. The 2005 revolution in Kyrgyzstan, after all, where the State Department had spent some $12.2 million funding "pro-democracy projects," ended with violence in the streets and no real meaningful change. The Karimov regime in Uzbekistan, meanwhile, responded with crackdowns next door.

Nor is it obvious that the most recent—and ostensibly "successful"—examples of U.S.-backed democratic revolution will turn out well in the end. Serbia has shown signs of backsliding over the years, having elected Kostunica back into power two years ago and faring somewhat poorly on several governance measures. (Recent reports suggest the country may get axed from EU consideration.) Georgia, as a recent New Republic article by Charles Kupchan noted, may be "reverting to tyranny." And Ukraine is going through its own internal strife that may or may not end well. Meanwhile, having the State Department or USAID fund NGOs that push too aggressively for democracy in dictatorships could lead to their expulsion, which could in turn make democratization even more difficult. (Already the Soros Foundation has been banned from Belarus, Russia, and Uzbekistan for this very reason.)

Fukuyama would probably agree that this sort of thing is always treacherous and note, as he wrote in his Times piece, that, "Democracy promotion is… a long-term and opportunistic process that has to await the gradual ripening of political and economic conditions to be effective." But that leaves the debate over how fast the U.S. should go, and how hard it should push. Was something like the push we gave in Georgia too hard? Not hard enough? Fukuyama just says that this should all be done "effectively." But that doesn't help us much.

February 16, 2006

Dreams Are Cool

Random dreaming-related fact of the day:

People in small tribal societies tend to have the greatest proportion of physical aggression in their dreams, with the highest reports among the Yir Yiront, an Australian Aboriginal group for whom 92 percent of dream interactions were aggressive—defined to include everything from aggressive feelings to nasty remarks to physical attacks on possessions. Among industrialized societies studied, however, Americans ranked highest for aggression in dreams, with scores of 50 percent for U.S. males (34 percent for females) versus 29 percent for Swiss men and 32 percent for Dutch men.

What it all means, exactly, I don't know, but that comes from Andrea Rock's book, The Mind at Night: The New Science of How and Why We Dream, and the figures were based on the first-ever systematic study of dream content among people the world over, begun in the 1980s. The passage quoted below is also pretty nifty—it's about a scientist who lost his sight at the age of twenty-five and has studied dreaming among blind people ever since:

He says dreams also play a crucial role in his [i.e., the blind guy's] ability to navigate new surroundings and to imprint visual imagery. If he has to learn the route to his dentist's new office, he eventually has what he calls a consolidation dream, in which all of the auditory and sensory data he's absorbed on the first couple of trips is pulled together to give him a mental picture of the new place or route. Only after he's "seen" his way in the consolidation dream can he negotiate the route as if he were in his own home...

Such dreams can also help consolidate other kinds of new visual imagery. "When my daughter gets her hair cut short, I will Braille it, appreciate it, comment on it. However, the next time she crosses my path or I think about her, in my spontaneous waking image she will still be wearing long hair. Once I dream of her in her new hairdo, that is, once I have seen it, she will appear to me in it pretty much consistently from that time on." He says others who have been blinded at some point after early childhood have reported similar dream experiences.

It's not a big secret that the developing world suffered a major slowdown in growth starting around 1980, right about the time that the IMF began leveraging Third World debt to force poor countries into adopting its preferred mix of neoliberal policies: devaluation, "free" trade, privatization, deregulation. Among developing countries, per capita income growth plummeted from 3 percent annually in the "bad" old protectionist days of 1960-1980 down to 1.5 percent in 1980-2000. (For the poorest group of countries, things were even worse—per capita GDP growth plummeted from 1.9 percent annually in 1960-1980 to negative 0.5 percent in the heady globalization decades.)

As I say, that's all well known—at least among critics of globalization. What's less well known is what this all means for a more obscure aspect of the world over the past few decades: namely, the breathtaking rise of urban poverty. The "structural adjustment programs" forced on scores of countries in Africa, Latin America, and Asia during the 1980s had a number of very specific effects: Agricultural subsidies were slashed, causing the rural poor to flee to the cities, while public sector firms were privatized, increasing unemployment. That plus the combination of a collapse in manufacturing, a drop in real wages, and cuts in social services and infrastructure spending all pointed in the same direction: Urban populations expanded rapidly, and urban slums proliferated without end.

In a nutshell, all of that is the context for Mike Davis' new and much-recommended book, Planet of Slums, which examines the rise in urban poverty across the world. Davis looks at the numbers in a hundred different ways, and they're all equally stark. The urban population in the Third World is growing rapidly, and will double to nearly 4 billion within the next generation. Most of this growth will occur not in the biggest cities, but in medium-sized cities, which are ill-equipped to deal with this sort of population explosion. Among other things, the developing world is inevitably going to see the rise of new "megacities," with populations over 8 million, and "hypercities," with more than 20 million people. Projections have Jakarta, Dhaka, and Karachi all reaching over 25 million by 2025, and no one really knows if this is anywhere close to sustainable.

The swelling ranks of the urban poor in these new megacities will increasingly be consigned to slums—already, there were 921 million slum-dwellers in 2001, a number equal to the entire world population back when Engels started poking around Manchester in the 1840s. There are 250,000 slums around the world today, and by 2030 or 2040 there will be around two billion slum-dwellers living on this earth, people who live in areas with few, if any, utilities—in Nairobi the poor rely on "flying toilets" (crapping in a plastic bag)—people who are barely subsisting in the "informal" economy, breeding disease and dying at an alarming rate, plagued by crime, and often forced into quasi-feudal dependencies by local officials. (Local officials will often threaten to raze an entire district if bribes aren't paid.)

So that's the world in 2030 or 2040. Davis cites a major report on slums by the Global Urban Observatory, "Slums of the World," which, atypically for a UN report, places the blame squarely on decades of neoliberalism, rather than "bad governance" by the poor countries in question:

The primary direction of both national and international interventions during the last twenty years has actually increased urban poverty and slums… [I]nstead of being a focus for growth and prosperity, the cities have become a dumping ground for a surplus population working in unskilled, unprotected and low-wage informal service industries and trade…. The rise of [this] informal sector is… a direct result of liberalization.

Davis estimates that a billion people worldwide work in the "informal sector"—a group that doesn't match up exactly with the slum-dwelling population, but overlaps heavily. Judging from his description, these workers certainly aren't the mini-entrepreneurs described by right-leaning economists like Hernando de Soto, ready to break out if only they lived in a society with property rights. (For another critique of de Soto, see here.) Most of these workers are what Davis calls the "active unemployed," those without jobs who must somehow find a way to subsist—or starve—in a ruthless slum environment, prone to exploitation at every turn.

So what's the upshot of all of this? Davis is describing, first and foremost, an unparalleled human catastrophe. Few cities in the developing world are at all prepared for this sort of urban population explosion, and as Davis says, "who can imagine any plausible scenario, under neoliberal auspices, that would reintegrate [those in the slums and informal sectors] as productive workers and mass consumers?" The 2 billion slum-dweller scenario is on its way; there doesn't seem to be much anyone can do about it.

Back during the industrial revolution, European cities were able to handle the influx of city-dwellers moderately well because mass immigration to America, Oceania, and Siberia "prevented the rise of mega-Dublins" and alleviated some of the pressure on the urban populations there. But today, with immigration increasingly restricted in the rich world, that's not really an option for countries in sub-Saharan Africa or Central Asia. (Most of today's "First World" countries also didn't have to liberalize anywhere near as fast as developing countries are forced to today.)

Nor is it very likely that the billions ejected from the formal economy will ever organize themselves as a class, demanding justice. Self-consuming communal violence, rather than solidarity, tends to be the major trend in most slums. At least today, urban problems in the developing world are most often addressed—when they are addressed—from above, by populist leaders like Hugo Chavez, who can often count on support from the urban poor. With the explosion of slums in the next few decades, we can expect the rise of more populist leaders around the world, many of them far less benign than Chavez—for instance, the fascist Shiv Sena movement in Mumbai.

Davis thinks that, in the absence of a Left (if there even is one anymore) in Third World urban areas, the slum class will eventually turn in increasing numbers to populist Islam and Pentecostalism. In the past, according to religious scholar Hugh McLeod, increasing urbanization has usually come alongside working-class detachment from the church (Glasgow and New York are two major exceptions). That trend will almost certainly change with the rise of a two-billion strong global slum-dwelling class in the decades ahead, as the urban poor increasingly turn to mass religion, and against "Western civilization," such as it is. Really worthwhile book, I'd say.

February 10, 2006

In Praise of Loose Lips

Always nice to see the CIA Director getting down and dirty. In the Times today, Porter Goss takes to berating people who leak national security secrets to the press:

As a member of Congress in 1998, I sponsored the Intelligence Community Whistleblower Protection Act to ensure that current or former employees could petition Congress, after raising concerns within their respective agency, consistent with the need to protect classified information. Exercising one's rights under this act is an appropriate and responsible way to bring questionable practices to the attention of those in Congress charged with oversight of intelligence agencies. And it works....

On the other hand, those who choose to bypass the law and go straight to the press are not noble, honorable or patriotic. Nor are they whistleblowers. Instead they are committing a criminal act that potentially places American lives at risk. It is unconscionable to compromise national security information and then seek protection as a whistleblower to forestall punishment.

Unconscionable, eh? Now admittedly I tend towards the extremes when it comes to thinking about classified information and national security secrets—"leak early, and leak often" is the motto 'round these parts—but even from a more "reasonable" angle, Goss' position seems pretty shoddy.

The Whistleblower Protection Act does provide some channels for whistleblowers to complain to Congress, true, but it's still very incomplete. Russell Tice, the guy who's supposedly privy to a bunch of "illegal actions" taken by the NSA in its domestic spying program, is not allowed to testify before Congress, because even the members of the House and Senate Intelligence Committees, who are technically supposed to oversee this stuff, don't have the necessary clearance. If the executive branch is allowed to possess secret information that no one else can have, then it becomes a bit difficult to find an "appropriate and responsible way" of bringing illegal activities to Congress' attention, no?

Meanwhile, the normal channels for reporting wrongdoing that Goss prefers have become much more treacherous since 1999, when a federal court ruled that whistleblowers can be protected from retaliation only if there is irrefutable evidence of wrongdoing—a high bar to meet (the wrongdoer basically has to cop to illegal activity). According to the Government Accountability Project, prior to the court ruling, 36 percent of whistleblowers who went to the Merit Systems Protection Board won their case on the merits; post-ruling, it's 7 percent. Presumably, many potential whistleblowers have been deterred from using "legal" avenues to report wrongdoing. So that leaves the press.

Is it ignoble or dishonorable to leak classified information to the press? Even I would agree that some national security secrets need to be kept under wraps (I'm less enamored of the idea that "covert" CIA agents toppling governments and fueling crack epidemics deserve special protection, but whatever). But what counts as a real secret here? Classification of "national security secrets" has always been arbitrary and subjective. The CIA still classifies the intelligence budget total from 1947, despite the fact that budget figures for, say, 1998 have been declassified. Similar examples are everywhere. Is this the sort of whimsy that should be backed by criminal prosecution? I'd say no.

(Moreover, if ever it became a felony to leak classified information—as Congress proposed in 2000, only to be vetoed by Bill Clinton—then the executive branch, which can classify and declassify things at will, would also have the power to create or dissolve at will the conditions for felony prosecution. Surely more than a few people can see an abuse-of-power problem lurking here.)

Under the current administration, moreover, the executive branch has revised the rules on classification, allowing documents to be classified "even in cases of significant doubt." In 2004, classifications rose 25 percent, at a cost of some $6.5 billion. Was this all due to national security? Hardly—a record number of agencies are keeping things secret, including Health and Human Services and Agriculture. Agriculture. The administration has also decided to ignore the "Seven Member Rule," under which, if seven members of the House Government Reform Committee request information from the executive branch, they get it. No longer. In many cases, the members simply wanted to look at adjusted census records—and couldn't even get that. When the rules become this capricious, I'd argue that whistleblowers have less of a duty to respect government secrecy.

The last thing to say is that leaking doesn't always put "lives at risk," as Goss has it—sometimes it does a world of good. Here's an example everyone can agree on: In 2002, press reports on the classified Nuclear Posture Review—in which the administration was considering the use of nuclear weapons—proved pretty damn important, since the Pentagon hadn't even bothered to create an unclassified report on the subject, as it usually does. Some administration officials, reported the Los Angeles Times, even welcomed the leak and "said privately that a national debate on nuclear strategy might be healthy." To put it mildly, yes. If Goss had had his way, there might have been no "healthy" debate at all.

At any rate, those are more scattered observations than specific recommendations. Very few people would agree with me that we should declassify nearly everything, but it seems uncontroversial enough to say that current whistleblower protections are inadequate and the current classification system is hardly the sort of thing that should inspire as much reverence and awe as Porter Goss is demanding.

February 08, 2006

Grown-Ups, At Last

This is genuinely exciting news (there's so little these days…). It looks like Sweden is preparing a plan to become an "oil-free" economy by 2020:

The attempt by the country of 9 million people to become the world's first practically oil-free economy is being planned by a committee of industrialists, academics, farmers, car makers, civil servants and others, who will report to parliament in several months.

The intention, the Swedish government said yesterday, is to replace all fossil fuels with renewables before climate change destroys economies and growing oil scarcity leads to huge new price rises.

Sweden has a decent head start—about 26 percent of its energy already comes from renewable sources (the EU average is 6 percent)—and plans to meet its goal by using biofuels, along with wave and wind power, to generate the needed electricity, rather than building new nuclear plants; existing nuclear plants already supply half of the country's electricity.

The Volvos, meanwhile, will all run on hydrogen. Or at least that's the plan, though granted, lots of smart people think hydrogen-run cars are easier said than done. Joseph Romm, a former Energy Department official under Clinton and the author of The Hype About Hydrogen, has leveled a number of criticisms along this front—for one, a hydrogen-powered economy can end up using more total energy because all of that hydrogen needs to be transported around to filling stations, and it's harder to ship than gasoline. And a relatively recent study by Argonne National Laboratory estimated that installing the vast infrastructure to equip 40 percent of American vehicles to run on hydrogen would cost $500 billion or more. Obviously Sweden's not as big as the United States, but that's a lot of money, and it will be interesting to see whether the Swedes can pull this all off.

Now the obvious question: Why can't the United States do something like this? There are major differences between us and Sweden, sure: the latter is much smaller, uses less oil, has an abundance of rivers, more nuclear power plants, and less sprawl. That all makes things much easier. And, according to Prime Minister Goran Persson, Sweden's farms and forests are more conducive to generating biofuel than America's. But as I've pointed out before, it's physically impossible to power the whole world—or even more than a small portion—with biofuel, and the United States would have to find its own mix of renewable resources no matter what (most likely involving a heavy dose of solar). So Sweden's not, in a strict sense, a "model" here.

Still, this is what a grown-up approach to energy policy looks like. Nothing mind-blowing. Nothing impossible. All you need is a government willing to act. The contrast between the Swedes and an administration that backtracks from even modest statements on ending our oil addiction—and then lays off 32 workers at the National Renewable Energy Lab because of a $28 million budget shortfall there—pretty much speaks for itself. Lucky us.

Torsten Persson and Guido Tabellini ask some questions about democracy and economic development in a new NBER paper (an earlier version here):

Does democracy promote economic development? We review recent attempts to address this question, which exploit the within-country variation associated with historical transitions in and out of democracy. The answer is positive, but depends – in a subtle way – on the details of democratic reforms.

First, democratizations and economic liberalizations in isolation each induce growth accelerations, but countries liberalizing their economy before extending political rights do better than those carrying out the opposite sequence. Second, different forms of democratic government and different electoral systems lead to different fiscal and trade policies: this might explain why new presidential democracies grow faster than new parliamentary democracies. Third, it is important to distinguish between expected and actual political reforms: expectations of regime change have an independent effect on growth, and taking expectations into account helps identify a stronger growth effect of democracy.

Huh. This debate sometimes seems a bit muddled to me—when you start looking at things on the microscopic level, it's quite rare that any country will carry out all of its economic reforms (which, to NBER, tends to mean "liberalize early and liberalize often"), finish that up, and only then work on extending political rights. Usually these things happen in tandem. Usually they're interrelated. Little movements towards one might lead to little movements towards the other. Both "democracy" and "development" are complicated processes, and it's not always clear whether the question of "which should come first?" is as helpful as it seems. Still, it's an interesting set of findings, no doubt.

In other NBER news, Joseph Stiglitz and Sergio Godoy offer new evidence that rapid liberalization isn't all it's cracked up to be:

In particular, we look at whether speed of privatization, legal institutions or initial conditions are more important in explaining the growth of the transition countries in the years since the end of the Cold War. In the mid 90s a large empirical literature attempted to relate growth to policy measures. A standard conclusion of this literature was the faster countries privatized and liberalized, the better. We now have more data, so we can check whether these conclusions are still valid… Our results suggest that, contrary to the earlier literature, the speed of privatization is negatively associated with growth, but it confirms the result of the few earlier studies that have found that legal institutions are very important.

February 07, 2006

In the fashion world, there are only three influential critics (of those writing in English): Suzy Menkes of the International Herald Tribune, Cathy Horyn of the New York Times, and Bridget Foley of Women's Wear Daily. While other major newspapers cover fashion in a service-oriented way—that is, they suggest what to buy—the writers are not critiquing fashion as they might film or books.

I'm not sure I buy this, though. In the Style section of yesterday's New York Times, I was reading Horyn's "Finally, Girl Designers Who Want to Have Fun," where our oligarch approved of the fact that fashion designers are finally letting their hair down and making their clothes fun. (I know, right?) One useful trick, we learn, is taking a "Peter Pan collar" or a "sack dress" and "subverting them just enough so that they don't seem goody." That seems like sound advice. And I would call this "critiquing fashion," sure, sure.

But then in the next column over we have Eric Wilson's "American Style is Coming Back, but Will Men Pay the Price?," which also seems to critique fashion "as one might a film or book": "His jeans with a low waist, baggy crotch and long, tight legs — already the silhouette of the season, ripped from the body of Pete Doherty, the troubled English rocker — were made palatable in finer fabrics with the texture of oilcloth." I don't see what Wilson's not doing here that Horyn, apparently, is. Why does she make the triumvirate but not him? Maybe he's an oligarch in training? It's all very mysterious.

The Oakland Tribune reported today that lab officials in California are "excited" by the prospect of "designing a new H-bomb, the first of probably several new nuclear explosives on the drawing boards." This threw me for a loop at first—"Hang on, new nuclear weapons? Who said this was okay, again?"—but I think I get what's going on. (Although correct me if I'm wrong.)

It's no secret that the Bush administration has long wanted to develop new types of nukes, including those entirely frivolous "bunker-busters," for god knows what purpose. In Congress, on the other hand, sensible folks such as Rep. David Hobson (R-OH) have instead called for a "thoughtful and open debate on the role of nuclear weapons," and have opposed adding new weapons to existing stockpiles. Good luck with that, right? But in 2005 Hobson introduced the Reliable Replacement Warhead (RRW) program as a means of finding a middle ground here.

RRW was supposed to allow scientists to "refurbish" our existing nuclear stockpiles and make them more reliable "without developing a new weapon that would require underground testing to verify the design." Even the "refurbishing" is a bit questionable: our warheads are already plenty reliable, and even warheads labeled "unreliable" by experts can still inflict all the death and destruction anyone could hope for. The "stockpile stewardship" program set up under the Clinton administration in the mid-1990s has never found any problems with the viability of the U.S. arsenal. (See this Bulletin article for more on this.) Still, RRW would channel the energies of the nuclear establishment away from the task of dreaming up new nuclear weapons and into something relatively harmless. That's useful.

Anyway, it wasn't long before Energy Department officials decided to co-opt and expand upon Hobson's RRW idea, and many administration officials now seem to see it as a means of creating an infrastructure that can eventually churn out new weapons if necessary. All of a sudden, everyone had a different interpretation of what the program actually entailed. Last April, Everet Beckner, deputy administrator of the National Nuclear Security Administration, told the Tribune that building new warheads "was not the primary objective [of RRW], but [it] would be a fortuitous associated event." Oh, fortuitous. Right.

That July, as reported by the Bulletin of Atomic Scientists, the Energy Department was presenting plans before Congress for a completely overhauled nuclear stockpile that would use the RRW program to get there. The department's report "envisions a stockpile to meet an evolving or changing threat environment" and recommends that "a new version of RRW" be implemented to "form the basis of the sustainable stockpile of the future."

Now the new explosives currently being "designed" are still, as I understand it, intended to renovate existing stockpiles, and aren't brand-new weapons. (In fact, Sen. Pete Domenici explicitly prohibited any funds for the purpose of implementing the recommendations in the Energy Department report.) But the RRW program has slowly and subtly been morphing into a program intended to build new nuclear weapons—despite the fact that this was clearly not Hobson's original goal. And the Bush administration is continuing to push it in that direction, and presumably hopes it will continue to morph in the future. So that's something to watch.

More to the point, the overarching assumption here is that we somehow need all these new nuclear weapons. For what, no one can say. It's pretty clear that nuclear "deterrence" hasn't stopped North Korea or Iran from going nuclear—or prevented 9/11, for that matter; and the United States' insistence on augmenting its own arsenal almost certainly undermines nonproliferation efforts. The administration's desire for "low-yield" nukes—weapons that could conceivably be deployed on the battlefield, and lower the threshold for use—seems completely insane, although Congress seems to have put an end to that little fantasy for now.

In related news today: "State Department political appointees have sidelined career weapons experts who don't share their animosity to arms control agreements and have placed less experienced political operatives in key slots." Okay, then.

In the grander scheme of things, it probably isn't the soundest of decisions to boost defense spending up to even more obscene levels, as the president proposed in his 2007 budget yesterday. But then, who knows, maybe the economy needs it. Last week, the Economic Policy Institute put out one of those "ironic in an Alanis Morissette sort of way" reports estimating that between FY2001 and FY2005, defense spending created 1.5 million additional private sector jobs in the United States. Some might call it pork. Some might call it socialism. Either way, it's hardly anything new in this country.

Disney, which seems to be grabbing all the headlines these days, is one of my favorite socialist success stories of the past fifty years. Walt Disney, as the tale goes, revolutionized the animation business by creating a Ford-style production line in the 1930s, with animators confined to mundane, repetitive tasks in order to churn out all those cartoons so quickly. In the 1940s, the animators went on strike over their dismal working conditions, and Disney—who had little in common with his father, a passionate socialist—fought back: hiring scabs, using private guards to attack the picketers, bringing in a mobster to negotiate a deal, and eventually, long after he had lost the union battle, becoming embittered and serving as an informer for the FBI against uncouth Communists in Hollywood during the McCarthy era.

Quite the free marketeer, that one. Or at least he was until his company started flailing in the 1940s and Disney had to rely on the government dole, mostly in the form of defense contracts, to keep his business afloat—at one point, federal funds paid for nearly 90 percent of his studio's work. The company made propaganda, military training films, and airplane nose art for the Department of War, and later worked with postwar administrations to provide further government agitprop promoting American technology, space travel, and nuclear technology. (That included the 1958 classic, "Our Friend the Atom," teaching kids in the classroom to "duck… and cover" in the event of a nuclear attack.)

At any rate, Disney's story is hardly unique—all sorts of modern corporations got where they are because of military socialism, especially the automobile and oil industries. As EPI showed, the American economy is addicted to it: since 2001 a little under half of the 3.4 million new jobs created have been paid for by the Pentagon (and another 1.3 million have been created by non-defense discretionary spending; more socialism!). It's not a huge surprise that the Pentagon's latest Quadrennial Defense Review called for virtually no major cuts in spending, or that members of Congress routinely ignore calls to close bases or kill weapons programs. Where else will the jobs come from?

In economics, the usual Keynesian line is that pretty much any sort of government spending can help prime the pump. But that doesn't distinguish between different types of spending. After World War II, a variety of American policymakers worried about sinking into another depression, and believed that only wartime socialism—and not Roosevelt's domestic programs—had saved the American economy previously. "One of the first things we must realize is that in the 1930s we never did find the answer to full employment," said New Dealer Chester Bowles. "Only the defense program in 1940 put our people to work, and only the war and the cold war that followed have kept them at work."

Since 1950, when the Korean War caused the federal tax burden to leap from 14.4 percent (and falling) up to 19 percent in just under two years—and then remain at that level for most of the Cold War—policymakers seem to have been thinking like Bowles, regardless of whether he was right or not. (Certainly some economists deny that the defense program "put our people to work": for a right-wing view that the domestic economy in the 1940s wasn't quite as prosperous as the history books remember, see Robert Higgs' "Wartime Prosperity? A Reassessment of the U.S. Economy in the 1940s.")

One contrary view came from the late Seymour Melman, an economist and ardent pacifist, who argued in his 1974 book, The Permanent War Economy, that military spending hurts the economy in the long run by diverting research and development toward war and away from more productive and wholesome purposes. Certainly military spending has led to some marvelous innovations—the internet, whose precursor was built by the Defense Advanced Research Projects Agency, is one—but compared to what? Military research dollars surely aren't the only way we can invent fancy computers.

And the other question is whether even more jobs could be created if some of those Pentagon dollars were shifted to direct spending on housing and infrastructure. That seems quite likely, and it was Melman's view—as, for instance, he argued in a 2003 Counterpunch essay. If we're going to have a socialist system here in America—and already we have a Federal Reserve Chairman who perhaps exercises as much control over the U.S. economy as GOSPLAN ever did in the Soviet Union—we may as well do it right. The other upside is that not building all those fancy weapons will give us even less excuse to use them. And then there's the fact that, regardless of the benefits of military socialism, we can't keep paying for all this empire with piles of debt forever...

February 03, 2006

All Work and No Play

Are Americans overworked? Almost certainly. But could they possibly have more leisure time than they had in decades past? Possibly, according to a new study from the Federal Reserve Bank of Boston, as written up by a byline-less writer in the Economist:

In fact, most of the official numbers have shown that American toil has not changed that much over the past few decades. Americans may put in longer hours at the office than other countries, but that is because average hours in the workplace in other rich countries have dropped sharply. In America, official studies tend to show women working more and men less, but the average working week has been fairly constant. …

Messrs Aguiar and Hurst think that the hours spent at your employer's are too narrow a definition of work. Americans also spend lots of time shopping, cooking, running errands and keeping house. These chores are among the main reasons why people say they are so overstretched (especially working women with children).

However, Messrs Aguiar and Hurst show that Americans actually spend much less time doing them than they did 40 years ago. There has been a revolution in the household economy. Appliances, home delivery, the internet, 24-hour shopping, and more varied and affordable domestic services have increased flexibility and freed up people's time.

That's all very interesting, and three (genuine) cheers for technological advances. But the quoted bit about how "the average working week has been fairly constant" came as a surprise, so I opened my trusty State of Working America: 2004-2005 to see what they had to say about this. Indeed, it's sort of true: according to EPI's analysis of the CPS data, a graph of "average weekly hours" among Americans shows little upward movement between 1975 and 2002—at their peak in 2000, average weekly hours were only 3.1 percent above their 1975 level. In that sense, the average working week has remained constant.

But that figure can be misleading, says EPI. "[T]he primary factor driving the flat trend in average hours is the entry of more women into the labor force over this period. Since women are more likely to work part time, their hours worked per week lowers the average of weekly hours, despite the fact that family members are clearly spending more time in the paid labor market." As an alternative to the workweek figure, the book graphs average annual hours worked by all families. That number is up 11 percent since 1975. And it's up even more for middle-income families.
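EPI's point is a classic composition effect: a per-worker average can hold flat (or even fall) while total hours per family rise, once more part-time second earners enter the labor force. Here's a minimal sketch with made-up numbers (the 40- and 25-hour figures are hypothetical illustrations, not EPI's data):

```python
# Composition effect: flat (or falling) average workweek per worker,
# rising total hours per family. All numbers are hypothetical.

# "Before": 100 single-earner families, each earner working 40 hrs/week.
workers_before = [40.0] * 100
avg_before = sum(workers_before) / len(workers_before)       # hrs per worker
family_before = sum(workers_before) / 100                    # hrs per family

# "After": same 100 families, but 50 of them add a part-time second
# earner working 25 hrs/week.
workers_after = [40.0] * 100 + [25.0] * 50
avg_after = sum(workers_after) / len(workers_after)          # hrs per worker
family_after = sum(workers_after) / 100                      # hrs per family

print(avg_before, family_before)   # 40.0 hrs/worker, 40.0 hrs/family
print(avg_after, family_after)     # 35.0 hrs/worker, 52.5 hrs/family
```

The average workweek per worker actually *drops* from 40 to 35 hours, even though the typical family is now putting in 52.5 hours a week in the paid labor market—which is why EPI prefers annual hours per family as the measure.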

So while workweeks, on average, are about as long as they were in the 1970s—with men working less and women more—families as a whole seem to be working quite a bit more. Elizabeth Warren and Amelia Warren Tyagi get into this in their excellent book, The Two-Income Trap, noting that many families, in response to the rising cost of health care and of housing near decent schools, are increasingly sending two earners into the workplace to keep up. And that trend seems to account, in part, for the rise in bankruptcies—so long as families are entirely dependent on two incomes, and both parents are working all the time, they have no safety net if, say, one person loses his or her job.

Also, not all reductions in workweeks are equal. According to EPI, between 2000 and 2003 the middle quintile of earners saw a 2 percent drop in real family income, in large part because of a 4.6 percent drop in hours worked. I would assume that not all of this drop was voluntary—many people either couldn't find full-time employment, or else corporations have become increasingly adept at "managing" their employees' time to limit the number of hours they have to pay for. McDonald's has perfected the art of telling employees who arrive in the morning to wait around idly in the restaurant, without punching in, until customers start showing up. That's obviously not an increase in leisure, and McDonald's is hardly unique at this trick.

Last fall, Adam Isacson wondered what we're actually getting for the $4.7 billion we've spent over the past six years fighting drug wars in Colombia. Not a whole lot, it seems.

As a policy to curtail the drug supply, "Plan Colombia" has been a total failure. Cocaine and heroin are as cheap and pure as ever in the United States, and coca production in Colombia has been holding steady over the past two years, despite the fact that the government has been dumping herbicides anywhere it can reach. Coca growers, who depend on the crop for their livelihood, have become more innovative in response to the aerial spraying—cultivating smaller, harder-to-detect plots; developing new strains that grow more quickly; and planting in the shade.

And so long as the growers have no other options—alternative development programs are under-funded and reach only a small fraction of rural Colombians—they'll keep on innovating. Not to mention the fact that all that constant spraying makes growers more sympathetic to guerilla groups. Seeing as how the United States is currently in the middle of following a similar strategy to fight opium production in Afghanistan, these seem like lessons worth learning. Doubtful it will happen, though.

Now in 2002, the Bush administration expanded military aid to the Bogota government to help it fight FARC, one of the two leftist insurgency groups. Colombia has seen a few crucial security improvements over the past few years—under President Uribe, kidnappings have dropped by 57 percent, massacres by 71 percent, and murders by 31 percent (at least according to "official" figures). On the other hand, this can't count as an American success; most U.S. military and police aid simply doesn't go towards protecting civilians, so it's hard to credit "Plan Colombia" here. (Isaacson instead credits President Uribe's decisions both to redeploy troops in population centers and to induce the right-leaning paramilitaries to agree to a ceasefire.) And then there's the downside to all that military aid:

Since Plan Colombia's inception, Colombia's attorney general has demonstrated markedly less will to prosecute cases of human rights abuse by the military. The State Department's last human rights certification memo named only thirty-one military personnel… currently under indictment for human rights abuses or support of paramilitaries. This impunity undercuts much of the well-publicized improvements that the Colombian armed forces have made to their human rights training. If a soldier knows he stands almost no chance of punishment for committing an abuse, will the mere knowledge that the crime is wrong consistently prevent him from committing it?

Some numbers suggest that the Colombian security forces really are acting with more impunity. The share of abuses committed by those forces rose from 5 percent in the late 1990s to 7.8 percent in 2003. Killings and disappearances of human rights activists rose from 29 in 2001 and 2002 to 33 in 2003. Some 340 people were tortured in 2002-03, up from 242 in the previous twelve months.

Nor has Plan Colombia improved the prospects for peace talks; indeed, there's some evidence that the specter of U.S. military aid may have helped scuttle the moderately promising negotiations in 2000 between the government and FARC and ELN by fostering mistrust among the guerrilla groups. And the counterinsurgency campaign isn't proving very effective either: in part because over 80 percent of the U.S. aid is of a military nature, the Colombian government has had the same trouble that coalition forces have had in Iraq—they can seize rebel-held territories, but they can't hold them once they withdraw. Not only that, but should the conflict in Colombia start escalating rapidly, the U.S. could find itself committing more and more resources to a major war in its backyard. Already some American policymakers have been linking FARC to Hugo Chavez in Venezuela, which should make for fun times.

Here's one comprehensive suggestion for a change in strategy in Colombia: less military aid, and more support for institutions that bolster human rights; an end to crop spraying, and more support for alternative development strategies in neglected rural areas. At home, drug treatment programs would be a much cheaper and genuinely effective means of reducing the demand for drugs, but treatment's fraction of the $12 billion the federal government spends each year on the "war on drugs" hasn't changed in a long while. Budget cuts this year focused on hacking up health care for the poor rather than taking even the slightest look at a billion-dollar foreign policy adventure that doesn't seem to be achieving much of anything.