Noahpinion

Sunday, August 02, 2015

I was struck by Cornel West's negative reaction to Ta-Nehisi Coates' new book, Between the World and Me. This line in particular caught my attention:

Coates can grow and mature, but without an analysis of capitalist wealth inequality, gender domination, homophobic degradation, Imperial occupation (all concrete forms of plunder) and collective fightback (not just personal struggle) Coates will remain a mere darling of White and Black Neo-liberals, paralyzed by their Obama worship[.]

I've seen a bit of this idea among humanities folks before - the idea that the only way that racial minorities will win true freedom is with a revolution that overthrows capitalism.

I kind of think that this idea is a trap that helps keep minorities down.

First of all, I agree with Jamelle Bouie that racial disparities in America - and everywhere, really - are about a lot more than class. Attempts to define the struggle of black people for social equality as simply one more case of the eternal Marxian struggle of the proletariat against the capitalist overclass fundamentally miss a lot of the important reasons why black people struggle in America. It's not just because they're poor and capitalism hurts the poor. (This is also the glum conclusion of the protagonist in the novel Invisible Man, who joins a communist-type organization called the Brotherhood, only to realize that racism can't really be understood through the lens of class conflict.)

But also, taking a historical perspective, I doubt that the strategy of anti-capitalism will do anything to help minorities. The example I'm thinking of is my own ancestors: Jews in Europe. Now, Jews were not a racial minority per se, but in an age when religion was mostly inherited, they were somewhat similar to one. European Jews were persecuted for millennia - regularly attacked and massacred, excluded from many types of economic activity, kept from holding political power, etc.

European Jews mostly responded to this with nonviolence. Instead of defending themselves from regular attacks, they routinely fled. Instead of trying to overthrow the government, they isolated themselves in secluded communities. Instead of trying to redistribute wealth to themselves by militant force, they engaged in commerce, attempting to get rich in industries like banking and jewelry.

This approach - an early version of what you might call a "model minority" strategy - seems not to have worked very well, at least for a long time. Many Jews got rich - so much so that Jews developed a stereotype as being wealthy - but the massacres and exclusion continued in many places.

Some European Jews took a different tack at the beginning of the modern age. They signed on to the new international communist/socialist movement that was sweeping the continent. Some Jews, like Marx, even helped define the movement. Eventually, this movement turned into violent anti-imperialist revolution in Russia. Many Jews, like Leon Trotsky, joined the Russian Revolution and helped successfully overthrow imperialism.

Unfortunately, this didn't really work either. Jews continued to suffer extreme and often violent discrimination and exclusion in the Soviet Union. The leftist gambit failed - it turned out that social inequality was about a lot more than the imperialist system. It seems pretty clear that a similar thing would happen with black people in America if we ever experienced our own version of the Russian Revolution. Cornel West's anti-capitalist "fightback" would be a disaster for black people.

So what did eventually work for Jews? Moving to tolerant societies. In the Netherlands and England, discrimination still existed, but there were no massacres. Eventually, as societies became richer and more democratic, even social exclusion was reduced. Britain even had a Jewish prime minister - Benjamin Disraeli. In the modern day, many Jews moved to the United States, where anti-semitism was never more severe than the various other frictions between ethnic and religious groups.

And within these tolerant societies, Jews (mostly) didn't try to overthrow capitalism - they worked within the system, doing essentially the same thing they had done in medieval Europe. But with the advent of modern capitalism, this strategy bore a lot more fruit than before. Jews have, overall, flourished economically in the U.S. without suffering the discrimination and violence that used to accompany it.

Obviously, direct application of the "model minority" solution is not going to work for African Americans, since this country for historical reasons has entrenched discrimination against black people in a way that it doesn't have against Jews (or Asians, or Hispanics, or Italians, etc.).

But anti-capitalist revolt is not going to work either. It's a seductive mirage that will only destroy those who chase after it. Capitalism isn't a cuddly, friendly system, but its destruction tends to lead to things far more baleful.

Leftism, as a philosophy and worldview, has suffered enormous setbacks in the past few decades, because communist countries both A) collapsed, and B) were revealed as being nightmarish to live in. A natural strategy for proponents of hardcore leftism - at least, those who choose not to do the sensible thing and moderate their views - would seem to be to try to co-opt oppressed racial minorities, telling them that their social exclusion is due to capitalism.

But ultimately, hopping on the radical leftist boat will hurt minorities. And I suspect that leftists in the humanities are doing minorities no favors by trying to convince them that radical leftism is their only hope, when in fact it is a self-defeating strategy.

So what will work? If history is any guide, the only option is to increase tolerance. I don't pretend to know how to increase tolerance. For immigrant groups, it seems to naturally fade over time, especially if those groups 1) organize to fight discriminatory policy, and 2) make a bunch of money. For African-Americans, intolerance seems much more entrenched. I don't pretend to know how to get rid of it, but I am pretty sure that a militant overthrow of capitalism would make things much, much worse.

Tuesday, July 28, 2015

Here is a news article about a Malcolm Gladwell speech. This news article is of great interest to me, since it suggests that it's not actually very hard to build a lucrative career going around and giving knowledgeable-sounding speeches about concepts, technologies, or companies that are in the news. I could do that job. Dear readers, you know I could do that job.

A more minor reason that this article is of interest to me is that it gives me a chance to do a snarky point-by-point refutation, which is something I have to do periodically or else go (more) insane. So let's go through and count some of the silly things that Malcolm Gladwell is quoted in this article as having said.

Last night futurist, journalist, prognosticator, and author Malcolm Gladwell told pretty much the most data-driven marketing technologist crowd imaginable that data is not their salvation.

Well that particular piece of data won't tell you, but maybe others could. For example, you could use regional/national variation in the time that countries got smartphone service, and compare Snapchat uptake among age-matched cohorts.

Of course, that is a different piece of data than the one Gladwell cited. Does Gladwell think it is a significant, penetrating insight to point out that for different questions, you may need different data sets? When Gladwell calls data a "curse", is he using the word "curse" to mean "something that you might need more than one of in order to be omniscient"?

Anyway:

Developmental change, in Gladwell’s story, is behavior that occurs as people age...Generational change, on the other hand, is different. That’s behavior that belongs to a generation, a cohort that grows up and continues the behavior...The question is whether Snapchat-style behavior is developmental or [generational].

“In the answer to that question is the answer to whether Snapchat will be around in 10 years,” Gladwell said.

No, that will most certainly not tell us whether Snapchat will be around in 10 years. For example, suppose Snapchat is "developmental", so that young people like it more than old people. Well, there is a constant new supply of young people. But suppose instead that Snapchat is "generational", so that people who grow up with it like it. Well, why wouldn't new generations like growing up with it just as much as old generations did? So even if we answer Gladwell's question, it does not, in fact, tell us much about the future of Snapchat.

Next, Gladwell tells us about the "Facebook Problem":

“Facebook is at the stage that the telephone was at when they thought the phone was not for gossiping — it’s in its infancy,” Gladwell said...

The diffusion of new technologies always takes longer than we would assume, Gladwell said. The first telephone exchange was launched in 1878, but only took off in the 1920s. The VCR was created in the 1960s in England, but didn’t reach its tipping point until the 1980s...

Technologies that are both innovative and complicated, like Facebook, take even longer to really emerge.

Except that this doesn't apply to Facebook, because everyone already uses Facebook. Yes, there was a period in time when social networks - Friendster, Myspace - were not widely used. That era is now in the past. People may find new ways to use Facebook, but it's not in its infancy - it has already experienced near-universal uptake. Discussing when Facebook might "really emerge" is like discussing when television might "really emerge".

Finally, Gladwell tells us about the "Airbnb Problem":

The sharing economy, featuring companies like Airbnb, Uber/Lyft, even eBay, rely on trust...

And yet, if you look at recent polls of trust and trustworthiness, people’s — and especially millennials — trust is at an all-time low. Out of ten American “institutions,” including church, Congress, the presidency, and others, millennials only trust two: the military and science...

That’s conflicting data. And what the data can’t tell us is how both can be true, Gladwell said...“So which is right? Do people not trust others, as the polls say … or are they lying to the surveys?”

So is it a contradiction if people trust the clocks on their cell phones but distrust Vladimir Putin? Is it a contradiction if people trust their neighbors but distrust the mafia? Are data contradictory whenever they show differing levels of aggregate trust in different people, institutions, or objects? And in general, why should trust in institutions be correlated with trust in other individuals?

What really startles me is that people trust Malcolm Gladwell to say useful things at marketing conferences.

Anyway, generating such jaw-dropping nonsense must get tiring, so Gladwell falls back on some good old tried-and-true incorrect facts:

[Gladwell said there has been] a massive shift in American society over the past few decades: a huge reduction in violent crime. For example, New York City had over 2,000 murders in 1990. Last year it was 300. In the same time frame, the overall violent crime index has gone down from 2,500 per 100,000 people to 500.

“That means that there is an entire generation of people growing up today not just with Internet and mobile phones … but also growing up who have never known on a personal, visceral level what crime is,” Gladwell said.

Baby boomers, who had very personal experiences of crime, were given powerful evidence that they should not trust.

Except here is a chart of U.S. homicide rates:

You'll see that when Baby Boomers were young (under 20), there was even less homicide (and other crime) than when Millennials were under 20. Oops.

Also, Gladwell's statement that young people don't know "what crime is" ignores the fact that U.S. crime rates are still many times what they are in other countries. It's just an obviously false statement.

Also, just to be complete I should note that if Gladwell were right, regions that experienced much less of a crime spike in the 70s and 80s should have higher Airbnb use among Baby Boomers. But I think we've seen very high uptake in, say, Northern California and the Pacific Northwest, where the crime boom was much less severe. However, rigorous analysis (with yes...gasp...DATA!) would be able to answer this question more definitively.

Folks, there are many important cautions to be made about the use of Big Data. These are not they.

Now here is a presentation by Jesse Bricker, Jacob Krimmel, and Claudia Sahm of the FRB, using survey expectations data to explain the housing bubble. Pretty neat stuff.

The question is why asset price crashes affect the real economy. One possibility is that they precipitate a regime shift in household savings behavior, causing households to abruptly start focusing on "balance sheet repair", also known as "deleveraging". There is some evidence that this kind of thing does in fact happen; the question is why. One possibility is that asset price crashes shift people's economic expectations.

If people have extrapolative expectations, then rising asset prices can be a signal of rising fundamentals, which people may mistake for a long-lived, structural trend. If asset prices then crash, people may then make a similar mistake in the opposite direction, forecasting a long-term continuing slump. Note that this explanation is my interpretation, not that of Bricker, Krimmel, and Sahm, though they seem to be after something similar.
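To make the extrapolation mechanism concrete, here is a toy sketch (my own illustration, not the BKS model): agents forecast next-period growth by scaling up the most recent realized growth, so a boom breeds optimism and a crash breeds pessimism. The weight parameter and the price paths are made up for illustration.

```python
# Toy extrapolative-expectations rule (illustrative only, not from BKS):
# forecasted growth = weight * most recent realized growth, with the
# long-run growth component assumed to be zero for simplicity.

def extrapolative_forecast(prices, weight=0.7):
    """Forecast next-period growth by extrapolating the last observed growth rate."""
    last_growth = prices[-1] / prices[-2] - 1
    return weight * last_growth

boom = [100, 110, 121]      # two periods of +10% growth
crash = [121, 108.9, 98.0]  # two periods of roughly -10% growth

print(extrapolative_forecast(boom))   # positive: extrapolates the boom onward
print(extrapolative_forecast(crash))  # negative: extrapolates the slump onward
```

The point is just that the same mechanical rule produces over-optimism on the way up and over-pessimism on the way down, which is exactly the pattern a survey of expectations could pick up.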

Anyway, BKS look at survey data on economic expectations, from the Survey of Consumer Finances. Those surveys are pretty accurate on a lot of measures - expectations generally end up matching aggregates, such as income growth. So BKS assume that they're accurate for measuring expectations as well.

What BKS find is that people who had more positive expectations increased their spending more - in other words, the expectations people reported in surveys correlated with the actions they actually took. Here is a graph from their presentation:

Interestingly, the expectations-based spending gap was bigger in 2003 than usual, and biggest in 2006, offering hope that this kind of data can be of use as a bubble detection method in the future. Consistent with this, they also find that survey-based income growth expectations predicted the ZIP codes in which housing prices would rise (and then crash) the most:

BKS find that households with more positive expectations spent more in response to rising house prices before the crash...but not after. That's an interesting discrepancy that might cast doubt on the idea that irrationally pessimistic expectations are the cause of deleveraging.

Anyway, this is one more interesting piece of research investigating the possibility of irrational expectations having major macroeconomic effects. It's not a slam-dunk case that extrapolative expectations cause bubbles, and I'd like to see a lot more investigation of extrapolative expectations as a cause of deleveraging. But it certainly seems to confirm that survey expectations measures are measuring something real and important and predictive. If I were a hedge fund, I'd want to get my hands on some SCF-type expectations data. And in the future, this offers some promise that irrational-expectations models may augment our understanding of financial business cycles.

Thursday, July 23, 2015

For most of the time that I was growing up, the struggle of nerds for acceptance in mainstream American society was an important part of both my life and the culture I was surrounded by. Movies like Revenge of the Nerds depicted nerds - people who liked to use their brains - as downtrodden outsiders, struggling against a dominant culture ruled by "jocks" and other anti-intellectuals. That struggle dovetailed with the idea that human capital - the ability to do math, program computers, and otherwise use your brain - was going to be crucial to the economy of the future, and that America's anti-intellectual culture was in danger of holding it back. The struggle of the nerd was not just a struggle for inclusion, it was a struggle for the nation's future.

So it's been a sobering and unpleasant experience for me to see the degree to which America's intellectual culture has seemed to turn against nerds in recent years. Mainstream culture has accepted nerds more and more, but this has turned nerds from outsiders into insiders, which means they've lost their cred as downtrodden rebels. That in turn has given rise to a number of problems within nerd-dom - problems which intellectuals have been justified (if over-enthusiastic) in calling out. Here are three of the main lightning rods, and why I think they matter:

1. The Rise of the Ubernerds

The American economy's turn toward higher-value-added activities, and the general advance of technology, have changed the composition of who gets rich. In the past - and here come some huge simplifications - people got rich by organizing other people. That required relationship-building skills, first and foremost. Andrew Carnegie and Leland Stanford were undoubtedly very smart people, but their success came from their ability to deal with others, not from direct application of their intelligence to technical problems. In recent years, that has obviously started to change, with the high proportion of new super-rich people who got rich by creating tech companies. When Bill Gates, a bespectacled computer nerd, claimed the title of Richest Person on Earth, it was clear something had changed.

But when nerds are winners, it's hard to argue that they're a disadvantaged group of outsiders. America has always measured success by money, and now being a nerd makes you get rich. Coders and scientists have great salaries, and more importantly, they have positions of authority within corporate hierarchical organizations. Nerds are no longer despised brain-slaves toiling in the basement without recognition, as in years past (e.g. the 1980s Goldman Sachs culture described in Emanuel Derman's My Life As A Quant). Nerds now have equity.

When you're rich, you lose your cred as a downtrodden group. What's more, some nerds seem to realize this, adopting many of the "bro" culture elements that in the past were the exclusive province of relationship-building backslapping frat-boy CEOs. Tri-Lambs, we hardly knew ye.

And as you might expect given these trends, a few of these Ubernerds are starting to look pretty arrogant. You have techno-libertarians reading Ayn Rand and styling themselves as modern John Galts. You have the occasional techie spewing disgust against the homeless. You have the occasional rich tech businessman suggesting that the rich should get more votes than the poor. And it's not just puffed-up rhetoric, either - sometimes it comes with real power. Witness the ease with which Uber just crushed an attack by Bill De Blasio, one of the most powerful local politicians in the country. Power is scary.

Now that nerds have the option to become rich, arrogant overlords, their status as a pariah group is effectively over, along with any sympathy that generated.

2. The Unbearable Whiteness of Nerd-dom

In recent years, we've started to see a breakdown in "intersectionality" - the idea that all outsider groups should fight for each other (oddly, this is happening just as the word "intersectionality" is starting to come into common use). This, I suppose, is only natural, since disadvantaged outsider groups have gained enormously in power (even if they have not yet achieved parity with heterosexual white males). When a diverse coalition of rebels wins big gains, there tends to be increased friction within the coalition. Not only that, but the progress made by some groups, like gays and women and Hispanics, is starkly contrasted with the lack of progress made by blacks, many of whom remain trapped in horrible neighborhoods, gulag-style prisons, and under the thumb of brutal police regimes - not to mention still being poor overall and suffering disproportionately from the Great Recession.

Compared to the travails of blacks, the problems of nerds seem like chump change. Nerds get ignored by girls in high school; black people are getting shot in the street. Not exactly comparable problems. And at some point, people noticed that there are relatively few black nerds; the flood of money to American nerd-dom is doing a lot more for white people than it is for black people. That can make it seem like nerd-dom is entrenching the black-white racial disparity.

In the old days, this was more likely to be ignored. At the end of Revenge of the Nerds, the jocks are about to physically assault the nerds, but the nerds are saved by the intercession of their black fraternity brothers - a memorable fantasy of successful intersectionality between outsider groups. Now, with intersectionality breaking down, the idea of nerds and black people as natural allies is swiftly vanishing.

In addition, we've seen the arrival en masse of a new racial group - Asians. Because Asian immigrants are disproportionately admitted for their technical skills, they are massively over-represented within the nerd community. But within that community, they seem to suffer discrimination. Asian tech workers are regularly bypassed for executive jobs in the tech industry, in favor of white co-workers with less technical skill. That naturally tends to make many Asians feel as if nerd-dom isn't working for them - at least, not as much as it is for white nerds.

So when nerd-dom seems like it's only working for white people, that's going to make an increasingly non-white America less keen on the culture.

3. Nerds and Sex

If nerd-dom disproportionately benefits whites, the gender disparity is even more alarming. Sexist behavior is rampant in the tech industry, as it usually is in male-dominated fields. A well-publicized low point was reached in 2013 with Titstare, a joke about an app that let men stare at women's boobs. Even the protagonists of Revenge of the Nerds were frat boys who were mainly interested in women as sex objects.

A few prominent nerds have made some attempts to fight this by encouraging more women to enter the tech industry - this would be good, since a larger number of women would mean a more welcoming environment for yet more women, as well as disapproval for public displays of sexism. But these efforts are struggling against the tide - very few women go into engineering. In some technical fields, like biology and neuroscience, there have been big strides - women are coming to dominate bio-nerd-dom. But engineering and software still seem like a man's world.

Worse, a subset of male nerds - not a majority but enough to get noticed - harbors an attitude of bitter sexual resentment. Many nerds grew up in the time when being a nerd conferred low social status, and suffered sexual rejection in high school or even later. Some of these men, predictably, became misogynistic as a result. Others simply developed unhealthy attitudes toward sex - witness the travails of MIT computer science professor Scott Aaronson, who expressed a yearning for a time when society would grant him a wife, and bitterly blamed feminism for his fear of romance.

This sexual resentment boiled over in hideously spectacular form with the coming of GamerGate. GamerGate began with a male nerd's bitter rant over his poor treatment by a nerdy girlfriend. The man quickly became a cause celebre for nerdy male gamers, many of whom feel like they have been ill-treated by the female sex, and many of whom are angry at what they perceive to be the intrusion of feminism into "their" gaming culture. The outpouring of male gamer anger quickly became a roach motel for the most vicious right-wing trolls on the internet. Severe harassment of female game designers and journalists evaporated much of the reservoir of cultural sympathy for geeks - and, since geeks and nerds overlap, for nerds as well.

If the Revenge of the Nerds means revenge on women, it's something we can do without.

So the past quarter century has brought the fruition of a childhood dream of mine - the accession of nerds to the mainstream of American society. But as with all successful revolutions, the victory has been bittersweet. I wanted an America that loves intellectual pursuits, and doesn't stuff people in trashcans for preferring math to football. What I didn't want was for nerd-dom to become an exclusive smoke-filled backroom for rich white men to sit around tossing around Ayn Rand quotes and calling themselves the ubermenschen. I didn't want Asian nerds to be sent to the basement to occupy the dingy desks once reserved for nerds of all races. I wanted the nerd to get the girl, sure, but I also wanted the nerd to be the girl. I wanted gamers to spend their time attacking Sephiroth or GLaDOS instead of attacking women on Twitter.

I believe that most nerds out there are good people. They just want to do their thing, same as they always did. Most of them are liberal types, who don't hate women, who would love to see diversity in their fields. It's time for them to stand up and take nerd culture back from the high-profile jerks who are getting all the air time, and open the gates to all those people who are still on the outside looking in.

Sunday, July 19, 2015

I am extremely flattered that John Cochrane considers me "the other Smith" - an honor I certainly don't deserve, with all the other econoSmiths out there. But unfortunately, John was not a big fan of a post I wrote for Bloomberg View, entitled "Growth Fantasy of Tax Cuts and Small Government". Now, remember that I don't get to choose (or even approve) my Bloomberg View post titles, and they are almost always clickbaity - in fact, I don't think tax cuts and small government are always a growth "fantasy". Personally, I'd recommend tax cuts for Europe, smaller government for both Europe and Japan, and corporate tax cuts for the U.S. And I'm also worried that the U.S. has become over-regulated in a number of areas. Personally, I believe that some of the policies Cochrane suggested in his earlier post - for example, an end to agriculture subsidies, more immigration, and the removal of many occupational licenses - would be good for growth, though others (like school vouchers) would reduce it. So the title Bloomberg View gave my post did not reflect the nuance of my personal views. Nor should it be expected to.

My post was about expressing skepticism. We always hear big promises from rightward-leaning economists about the growth-enhancing powers of tax cuts and deregulation, but these policies often fail to deliver. In my post, I noted a prominent example of this - the ALEC rankings. It's not the only example. The Bush tax cuts were followed by a decade of extremely disappointing growth. The Clinton-era financial deregulations don't seem to have done us any good. Why should we trust more of these promises? Why should we keep trusting the people who keep making the promises? Instead, I say we should look to serious, evidence-based analyses by economists who are neither knee-jerk free-marketers nor knee-jerk interventionists.

Fortunately - and here let me go off on a brief tangent - those sober, middle-of-the-road economists comprise a solid majority. Here's an excerpt from the abstract of a 2007 paper by economists Daniel Klein and Charlotta Stern, entitled "Is There a Free-Market Economist In the House?":

We surveyed American Economic Association members and asked their views on 18 specific forms of government activism. We find that [only] about 8 percent of AEA members can be considered supporters of free-market principles, and that less than 3 percent may be called strong supporters. The data are broken down by voting behavior (Democratic or Republican). Even the average Republican AEA member is “middle-of-the-road,” not free-market.

So most economists are supporters of a mixed economy. Now granted, some of those economists may support government intervention for reasons not related to economic efficiency. But some of the questions in the survey are pretty clearly not just about redistribution - for example, one question asks about government production of education. There is also other evidence. The Chicago Booth IGM Forum poll of top economists found that 73 percent either agree or strongly agree with the following statement:

Because the US has underspent on new projects, maintenance, or both, the federal government has an opportunity to increase average incomes by spending more on roads, railways, bridges and airports.

Only 3 percent of surveyed top economists disagreed with this statement. 3 percent! For reference, about 26 percent of Americans say that the sun goes around the Earth. So there is overwhelming consensus among top economists that more government provision of public goods would boost economic efficiency. Hardline free-marketers are a small but loud and influential minority.

My surprise in reading Noah is that he provided no alternative numbers and no alternative policies. Well, if you don't think Free Market Nirvana will have 4% growth, at least for a decade as we remove all the level inefficiencies, how much do you think it will produce, and how solid is that evidence?

As for numbers, I don't really feel I need to produce an alternative to a number that was made up as a political talking point. Why 4 percent? Why not 5? Why not 8? Why not 782 percent? Where do we get the number for how good we can expect Free Market Nirvana to be? Is it from the sum of point estimates from a bunch of different meta-analyses of research on various free-market policies? No. It was something Jeb Bush tossed out in a conference call because it was "a nice round number", after James Glassman had suggested "3 or 3.5".

You want me to give you an alternative number, using the same rigorous methodology? Sure, how about 3.1. Wait, no. 3.3. There we go. 3.3 sounds good. Rolls off the tongue.

(Ed Prescott, after conducting his own quantitative analysis along with Robert Lucas, politely told Jeb that the economy's "maintainable" rate of growth was only 3 percent. But Prescott is undoubtedly just an incorrigible pessimist.)

As for justifying the 4 percent number by appealing to past growth, that doesn't make a lot of sense, since our current opportunities for policy-based efficiency improvements will be different from the ones in the past. But we can do that exercise, sure. Here is 10-year trailing annualized real U.S. GDP growth:

In an earlier post, Cochrane presented a similar graph using 10-year arithmetic means. I think the geometric mean is pretty obviously the right notion of "average growth". But even if you just eyeball the yearly growth graph, it's obvious that we haven't hit 4% very often since the 1960s. We briefly spiked above it in the early 80s, lingered above it in the late 90s, and never even hit it in the 2000s.
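For readers who want the distinction spelled out: annualized growth over a decade is the geometric mean of the yearly gross growth rates, and when growth is volatile it sits below the arithmetic mean of the yearly percentages. A quick sketch with made-up yearly rates (not actual GDP data):

```python
# Geometric vs. arithmetic mean growth (hypothetical rates, not GDP data).

def arithmetic_mean_growth(rates):
    """Plain average of yearly growth rates."""
    return sum(rates) / len(rates)

def geometric_mean_growth(rates):
    """Annualized growth: compound the gross rates, then take the n-th root."""
    product = 1.0
    for r in rates:
        product *= 1 + r
    return product ** (1 / len(rates)) - 1

rates = [0.06, -0.02, 0.05, 0.00, 0.04]  # five hypothetical years
print(arithmetic_mean_growth(rates))  # about 0.026
print(geometric_mean_growth(rates))   # slightly lower, about 0.0255
```

The gap is small with these tame numbers, but it widens with volatility, which is why the choice of mean matters when summarizing a bumpy decade.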

So...maybe Free Market Nirvana would get us there. Maybe. But it seems to me that even beneficial types of market liberalization, like every other policy improvement, are subject to diminishing returns - the low-hanging fruit gets picked first. The Kennedy tax cuts seemed to produce a burst of growth, the Reagan tax cuts a more modest bump, and the Bush tax cuts basically nothing. There's no reason to think the same isn't true of deregulation.

And keep in mind that we're facing slower population growth than in the 80s and 90s, and slower productivity growth than in the 90s and 00s. In other words, Free Market Nirvana Version 2.0 will have to do a lot more than Version 1.0 did, when a priori we should probably expect it to do less.

More deeply, Noah suggests no alternative policies. He does not claim that more government wage controls, unions, stricter labor laws (Uber drivers must be employees) heavier and more politicized regulation, cartelizing more industries beyond health and finance, raising taxes to confiscatory levels, larger welfare state, boondoggle public works and so on -- the alternative path in the current policy debate -- will get us back to 4% growth.

To this I reply:

1. There's no logical reason to assume that anything can get us "back" to 4 percent growth, short of a massive increase in immigration (which is politically impossible, thanks to the GOP, but which I would dearly love to see, especially if it were heavily tilted toward high-skilled immigration).

2. John waves away the idea of public goods provision being a potential source of growth, which seems like it would put him in the 3 percent minority in that IGM poll.

3. If I had to suggest alternative policies to the ones John suggests (and keep in mind I like some of his), I'd recommend A) spending more on road, rail, water, electrical, and broadband infrastructure while dramatically cutting costs, B) spending more on basic and applied research, C) reforming the patent system, especially with regards to software and business-process patents, and D) reforming urban land-use policy, especially by means of land value taxes or close equivalents.

So there you go.

John then accuses me of pessimism:

So, one must only conclude that Noah -- and others voicing the same it's-not-possible complaint -- believes 4% growth is not possible. 2% or less is the new normal. Sustained growth, of the sort that made us all healthier and wealthier, if not wiser, than our grandparents, is a thing of the past.

But my post is all about skepticism. I am not certain that the 4 percent number, pulled out of Jeb's posterior, is unachievable in theory. I merely recognize that free-market "structural reformers" have made many growth promises in the past that have not been kept. And this makes me heavily distrust their current promises, and long for verification by less ideological folks than the people at the conservative think tanks that feed Jeb his talking points.

Also, why is 2% per year such a crappy outcome for living standards? Given that a lot of our headline growth slowdown since the 60s has been due to lower population growth, the per capita slowdown has been much milder, and it's per capita growth that we really care about when we're talking about living standards. Total factor productivity - which should include the effects of government policy - is growing faster than it did during Reagan's term in office, and I expect it to continue doing so...so why is John calling me a growth pessimist??

John then says that I'm denying a useful role for economists in society:

If the absolute best economic policies anyone can imagine -- and, again, Noah offered no alternatives -- cannot return us to 4% growth and sustain that growth, why bother being economists?

I kind of see it the other way around. If economists have already figured out The Answer to Everything - if it's just a bullet-point list of simple free-market policies - then why don't we pack up and go home? Why sit there analyzing repeated games, or making DSGE models, or finding natural experiments to analyze, when it's all been figured out and boiled down to a simple canon?

John continues:

They do not call us the "dismal science" because we think the current world is close to the best of all possible ones...("Dismal" only refers to the fact that good economics respects budget constraints.)

Actually, the name "dismal science" was coined by a guy making a case for reintroducing slavery in order to regulate the labor market! But anyway, budget constraints seem to me to be exactly the reason that we can't just pull numerical growth targets out of thin air and assume that we can always hit them if we just have the will to do so.

John closes with a potted ideological history of the American economy:

Noah's tired pot-shot has been going on a long time. In 1980 Ronald Reagan announced some pretty radical growth-oriented policies, at least by the standards of the time. (Not much new since Adam Smith, of course.) The standard liberal commentators made the standard objections: voodoo economics, numbers don't add up, it will take generations of unemployment to lower inflation, the debt will explode, and so forth. (Plus, the Soviet Union will be there forever, we might as well get along.) Reagan offered optimism; won, malaise ended, we won the cold war, and there was an economic boom.

Well, you know, personally I do think Reagan's liberalizations, on balance, were good for the economy. They didn't actually get us to 4 percent growth, even with favorable demographic tailwinds. They didn't end the TFP stagnation (at least not until the mid-1990s, if you want to posit 14-year lags). They luckily coincided with Volcker's taming of inflation. They indeed caused a massive runup in the national debt. But I think that on balance they were necessary and good, and I hope Japan emulates them to some degree in the near future.

But like I said, Reagan probably picked a lot of the low-hanging fruit. Should we expect further government-slashing to do much more than it did for Reagan, as it would have to in order to get us to sustained 4 percent growth? Or should we expect it to do much less? I'm willing to be convinced by evidence, but my prior is on "much less."

So to sum up, I think we - by which I mean Americans - have repeatedly bought into the promises of free-market ideologues, outlier economists, and conservative think tanks, even as their chosen policies have yielded conspicuously diminishing returns since the early 80s. I'm just saying that before we buy any more such promises, we had better have some solid evidence in favor. The burden of proof is on the free-market ideologues at this point, and "Reagan Reagan rah rah rah" is not proof.

Tuesday, July 14, 2015

A couple posts ago I wrote about equilibrium asset-pricing models, and how the finance industry has basically decided not to use them. I was emailed by a finance industry quant with 25 years of experience working on factor models (at some of the top firms), and we had a long discussion about modeling approaches. He agreed to let me post a redacted version of some of his comments, so here they are.

On the historical use of equilibrium asset pricing models in industry:

"Actually I think there has been historically quite a lot of interest in models based on macro factors that would lend themselves relatively naturally to interpretation in terms of equilibrium valuation models. The problem is that they don't tend to work very well empirically...Clients asked for it, and other firms were offering models with these types of factors, but those models just didn't do a good job of explaining a substantial fraction of asset returns...

The context I'm most familiar with is interest rates. I worked on these models in the late '80s, early '90s (figuring out how to do practical implementations of Heath-Jarrow-Morton, which is a pure factor model). As far as I know, there is absolutely no equilibrium framework that says anything useful about yield curves. Cox-Ingersoll-Ross dressed up the presentation of their model with some equilibrium argument, but its predictions of yield curve shapes were very far from realistic. People have done multi-factor versions that probably give more reasonable shapes, but AFAIK nobody even waves their hands towards an equilibrium justification any more. I think the view is that, sure, there's probably some fundamental explanation, but it's far too complicated to be practical. A key point here is that if your chosen equilibrium said there were large mispricings in the market, you'd be much likelier to mistrust the model than you would to trade on it."

On why industry people are skeptical about equilibrium models:

"[W]hen you [Noah] talk about equilibrium models you're really focused on the link between consumers and asset values. But as an industry person, I think that link looks very weak, except maybe over intervals long compared to any timescale for decision making by market participants. Everyone in the finance food chain -- from the individual investing for retirement to investment advisor to investment management firm to pension consultant ... all the way to governments and corporations financing their spending -- have short term incentives that seem like they would dominate theoretical long term valuation considerations. Behavioral biases and agency effects are huge at every step of the way. Easier for an end user to forget about those kinds of models and just stick to empirics."

On why it's too risky to try to use equilibrium models for trading:

"I think the big picture here is just that relative value approaches based on factors leave you with little net risk (most of the time) by construction. They're in some sense "localized" to a small subset of factors and assets whose interrelationships can be modeled fairly reliably and with few assumptions...

[E]quilibrium models are much more ambitious [so] they're also much harder to get right enough to be useful on time scales not subject to large risks. Plus they usually seem to have unobservable or hard-to-observe components. For example, any model of interest rates is going to have to have some place for a market price of interest rate risk, or something that plays the same role as in the CCAPM. Even a small error in that estimate is going to produce a huge discrepancy to observed yields. Now how long can you wait to be right?...[Equilibrium asset-pricing models] are making something...like a cosmological grand claim about a necessary relationship that must be satisfied in the large, rather than merely an empirically observed relationship or one driven by relatively straightforward no-arbitrage condition among highly similar assets."

Basically, this shows that industry people didn't just take a look at equilibrium models and say "Hahaha no way!" They understood how the models work, they tried to use them, they thought deeply about the content of the models, and they also thought deeply about the risks of using them. Equilibrium models are not newfangled gadgets that haven't been tried yet; they are something that has been around for a good long while and - unlike their cousins, the factor models - have not yet passed the rigorous test of industrial applicability.

Sunday, July 12, 2015

I was trying to think of a good metaphor for Mike Woodford's role in the macro theory world. Dumbledore? But then I'd have to make someone be Voldemort, and I'm not that big of a jerk. Maybe Ed Witten? But far fewer people know who Witten is than know who Woodford is. I give up. Woodford is Woodford. And right now, at this moment in time, Woodford certainly seems like the most influential person in business-cycle theory. Maybe the most dominant influence since Robert Lucas.

One big challenge to the paradigm Woodford has built - which has won near-universal adoption at central banks - is the Neo-Fisherian idea. This is the idea that holding interest rates low for a very long time will either A) make the economy explode, or B) eventually cause persistently low inflation. Looking at the experience of Japan since 1990, (B) doesn't seem so crazy. John Cochrane explained the Neo-Fisherian idea in an epic blog post back in November, and the idea is supported by a more formal model by Schmitt-Grohe and Uribe. It's a big challenge to the Woodford paradigm because 1) the core of the idea is pretty simple, 2) it seems to fit with recent Japanese and possibly American experience, and 3) it says that central banks working in the Woodford paradigm are achieving the exact opposite of what they intend to achieve.

At the recent NBER Summer Institute, Woodford struck back. Actually that makes it sound too confrontational, since Woodford is the consummate nice guy. What he actually did was to address Cochrane's arguments directly, and give some reasons why he thinks they don't apply.

The question of whether interest rates affect inflation in a Woodfordian way or a Neo-Fisherian way depends on whether people's expectations are infinitely rational. Woodford's new idea - which will certainly be a working paper soon - is that people don't adjust their expectations to infinite order. He essentially puts bounded rationality into macro. He posits a rule by which expectations converge to rational expectations.

So to all you guys who ask "When will behavioral economics have a big impact on macro?" The answer is: Right now. It just did. Behavioral macro is now a reality. (Well, really it was a reality with learning models like Evans and Honkapohja, or even Sargent, but Woodford is using it to think about policy in real time, for big stakes, and his presentation will undoubtedly be influential).

Anyway, I don't understand everything about the new bounded-rationality Woodford model, but from reading his slides, here's what seems to be happening. A permanent interest rate peg ends up making the economy explode. When the peg begins, people think it's a temporary peg. As it continues, people never quite believe it's permanent, but their estimation of its duration keeps getting longer. This makes expectations of the eventual interest rate (infinitely far in the future) diverge, so the economy basically explodes.

So that's the theory, anyway. It's not clear how well this theory applies to Japan, or to other economies that have had very low interest rates for a while now. It's also not clear how well the macro world will accept a behavioral theory as the workhorse model for monetary policy. I guess we'll see, especially after the paper comes out and people (hopefully) start to fit it to data! In the meantime, expect a response from Cochrane. Should be interesting to watch.

This is a particularly important voice, as it seemed to me that standard New-Keynesian models produce the new-Fisherian result. i = r + E[π] is a steady state in all models. In old-Keynesian models, it was an unstable steady state, so an interest rate peg leads to explosive inflation or deflation. But in new-Keynesian models, an interest rate peg is the stable/indeterminate case. There are too many equilibria, but if you raise interest rates, inflation always ends up rising to meet the higher interest rate.

What I can glean from the slides is that Schmidt and Woodford agree: Yes, this is what happens in rational expectations or perfect foresight versions of the new-Keynesian model. But if you add learning mechanisms, it goes away...

[I]f one has to resort to learning and non-rational expectations to get rid of a result, the battle is half won.

Actually, I'm not sure that this is what Woodford is saying. It's hard to tell from looking at his slides, but it looks like he's saying that if we restrict ourselves to stable paths, the Neo-Fisherian result holds in a rational expectations equilibrium. But the indeterminacy in rational-expectations New Keynesian models might also allow for explosive paths, of the kind that Cochrane calls "old-Keynesian". In fact, I'm fairly certain this is the case, since the "rational bubble" literature shows that explosive paths for inflation are a fairly general result in rational expectations models. I think what Woodford is saying is that if you even slightly relax the assumption of perfect rational expectations, there's no way to get a stable path with a permanent interest-rate peg.
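To make the stability distinction concrete, here's a tiny numerical toy (my own illustrative parameters and dynamics, not García-Schmidt and Woodford's actual model or Cochrane's): under the Fisher relation i = r + E[π], a peg at i pins down a candidate steady-state inflation rate of i − r, and the whole fight is over whether deviations from that steady state shrink or grow over time.

```python
# Toy sketch (illustrative, not any published model) of the two cases
# for a permanent interest-rate peg i, given the Fisher relation i = r + E[pi].
r, i = 0.02, 0.04          # assumed real rate and pegged nominal rate
fisher_pi = i - r          # steady-state inflation consistent with the peg

def old_keynesian(pi0, feedback=0.5, T=20):
    """Unstable case: deviations from the Fisherian steady state grow
    by a factor (1 + feedback) each period, so the peg explodes."""
    pi, path = pi0, [pi0]
    for _ in range(T):
        pi = fisher_pi + (1 + feedback) * (pi - fisher_pi)
        path.append(pi)
    return path

def new_keynesian_stable(pi0, rho=0.5, T=20):
    """Stable (Neo-Fisherian) case: deviations decay by rho, so inflation
    converges to i - r. Raise the peg, and inflation eventually rises."""
    pi, path = pi0, [pi0]
    for _ in range(T):
        pi = fisher_pi + rho * (pi - fisher_pi)
        path.append(pi)
    return path

print(old_keynesian(0.01)[-1])        # blows up away from 2%
print(new_keynesian_stable(0.01)[-1]) # converges to 2%
```

The toy makes the stakes obvious: which branch you're on determines whether a long low peg eventually delivers low inflation (Neo-Fisherian) or a spiral.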

Also: I removed a paragraph making a comparison between Woodford and Nakamura/Steinsson/McKay. Though the problem is similar in the two cases (one is about forward guidance in the infinite future while the other is about belief in a permanent interest rate peg), the two papers use fundamentally different techniques. So the analogy is not a great one, and wasn't that important to the post anyway.

Also: Nick Rowe has a great simplified explanation of the Woodford model. I'm pretty sure he's right, and this is what's going on. It's really impressive how Nick is able to capture monetary econ in these little simplified models...that's a skill I never learned and don't possess. Also, I agree with Nick that if your model isn't robust to an infinitesimal departure from rational expectations, you should be worried.

Also: Scott Sumner comments. He says the problem is the New Keynesian model itself.

Also: Brad DeLong comments. He hypothesizes that Neo-Fisherianism is basically just a face-saving way for economists who predicted QE-->inflation back in 2011 to admit their worldview was wrong without admitting that Paul Krugman etc. were right.

Friday, July 10, 2015

A while ago I talked about the fact that DSGE macro models haven't seen industry applications, and that this means people don't yet trust them to give a good conditional forecast of the effect of economic policy changes on macro variables. How about asset pricing models from the finance literature? How do these pass the market test?

There are basically three things you might use an asset pricing model for in the real world. The first is to measure risk. The second is to beat the market, if you believe that the model represents a long-term equilibrium from which assets will only deviate for a short time. The third is to arrive at a fair starting price when brokering deals between two parties.

There are two main classes of asset pricing models in the finance literature. The first class is equilibrium models. These are basically economic models that treat asset prices as solutions to consumers' utility maximization problem (and, sometimes, firms' profit maximization problem). The most famous of these is the CCAPM. Modern extensions of that include Bansal and Yaron's long-run risk model, Campbell and Cochrane's habit formation model, and various models based on "rare disasters." Though these days they are usually pitched as solutions to time-series puzzles like the equity premium, excess volatility, and volatility clustering, they are fundamentally models of risk and of the cross-section of expected returns.

The second main class of models is factor models, such as the Fama-French 3-factor model, the Carhart 4-factor model, and various macroeconomic factor models. These models are not based on behavioral assumptions like utility or profit maximization. They only require no-arbitrage conditions, and some assumptions about the structure of the market. Though they may be motivated by economics, they are not economic models, in the modern sense of the neoclassical, maximization-based econ that we are all used to.

Factor models can end up looking a lot like equilibrium models - factor models give you factor betas and equilibrium models give you consumption betas and stuff like that. But the methods they use to derive the betas are much different, and the substance of the conclusions is almost always very different.
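For concreteness, here's roughly what estimating a factor model looks like in the simplest case: regress an asset's excess returns on the factors to get its betas. This sketch uses simulated data and illustrative factor names (a Fama-French-style market, size, and value factor), not real returns:

```python
import numpy as np

# Hedged sketch: estimating factor betas by OLS.
# Simulated data, not real returns; factor names are illustrative.
rng = np.random.default_rng(0)
T = 240                                      # 20 years of monthly data
factors = rng.normal(0, 0.04, size=(T, 3))   # "MKT", "SMB", "HML" (simulated)
true_beta = np.array([1.1, 0.4, -0.2])
alpha = 0.001
excess_ret = alpha + factors @ true_beta + rng.normal(0, 0.02, size=T)

# Regress excess returns on a constant plus the factors.
X = np.column_stack([np.ones(T), factors])
coef, *_ = np.linalg.lstsq(X, excess_ret, rcond=None)
est_alpha, est_betas = coef[0], coef[1:]
print("alpha:", round(est_alpha, 4))
print("betas:", np.round(est_betas, 2))
```

Note that nothing in this procedure requires a utility function or an optimization problem, which is exactly the point: you pick factors, run the regression, and backtest. An equilibrium model would instead have to derive the betas from assumptions about consumer or firm behavior.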

I don't have a comprehensive data set of the finance industry, so don't regard my info as complete, but I've talked to a fairly large number of industry people, and they all say the same thing. Factor models are very common, even ubiquitous, in the industry. Equilibrium models are essentially never used by anybody.

The most obvious reason for this is that equilibrium models are harder to make. You have to make a bunch of assumptions, solve an optimization problem, then estimate the thing to see how well it works. With factor models you just skip the first two steps, slap down some factors, and estimate it. As you might expect, the ease of generating factor models leads to a lot of false positives in the academic literature. But in the finance industry, a lot of effort goes into exhaustive backtesting and robustness-checking of factor models, which cuts down massively on false positives (though overfitting is of course still a danger). You have companies that exist just to make, test, and sell factor models.

So is the disuse of equilibrium asset-pricing models just a function of the effort that goes into making them? Maybe. But you'd think that eventually, academics - who are more than willing to put in that effort, and who in fact make oodles of such models - would come up with a really good one that industry would adopt. It's no harder to use an equilibrium model than it is to use a factor model.

But they don't. No one I've talked to or heard about uses consumption betas of any kind, including ones based on habit formation or Epstein-Zin preferences. Despite decades of high-powered research, equilibrium models are not yet passing the market test.

I assume that this is because when you test these things, factor models come out ahead. I hope that someone is using information criteria or some other defense against overfitting when these comparisons are made. If they are, then what it means is that the best factor models are more empirically successful than the best equilibrium models. Which in turn implies that equilibrium asset-pricing models are, basically, misspecified.

Why would equilibrium models be misspecified? They might be focusing on fitting time-series facts and neglecting the cross-section. Or there might simply be something very wrong with the workhorse assumptions economists use to describe consumer behavior at the aggregate level - something so deep that modifications like Epstein-Zin or habit formation can't fix it. Tests of the basic consumption Euler equation make me suspect that that is the case. If economists find that wrong thing and fix it, it could reap big dividends not just for asset pricing models, but for macroeconomics as well.

Updates

Two caveats to this post! First of all, the story isn't over yet. New models are coming out all the time - for example, a commenter points me to this model from 2014, which is a hybrid of a typical factor model and a production-based theory of firm investment behavior. Looking over it, it looks pretty good. It also agrees with my prior that firm behavior is a lot more important and a lot easier to explain than consumer behavior. But the main point is that industry could start using equilibrium models - even consumption-based equilibrium models - at any time.

The second caveat is that the notion of what it means to "use" a model is a lot more subtle than I'm making it out to be. For example, you could find factors by data mining, then use an equilibrium model to convince yourself that the factors will be stable in the future (in fact, that's exactly how you might use the model I linked to above). To my knowledge this kind of use is very rare so far, and I've heard stories of people in industry trying and failing to do just this. But if people do start doing it, it won't look like people ditching factor models for equilibrium models. It will look like people using equilibrium models to check their factor models. Another way you could use equilibrium models is to search for new ideas for factors. But since the set of available macro variables is too limited, it's highly likely that data-mining will find the factors long before equilibrium modelers do.

Sunday, July 05, 2015

Sorry, folks, one more sentimental, overwrought, non-economics-related post. I promise I'll try my best not to turn this from an econ blog into a sentimental, sappy blog about nationalism and culture and stuff like that. But anyway, I just had to do a follow-up to my last post.

In my last post, I wrote about what I see as the fundamental core of the United States as a nation - the concept of equality. Ta-Nehisi Coates, in a much longer, more personal, and better-written essay than mine, expresses a very different vision of what America is all about. I encourage you to read his piece from start to finish (don't skim).

To be black in the Baltimore of my youth was to be naked before the elements of the world, before all the guns, fists, knives, crack, rape, and disease. The law did not protect us. And now, in your time, the law has become an excuse for stopping and frisking you, which is to say, for furthering the assault on your body. But a society that protects some people through a safety net of schools, government-backed home loans, and ancestral wealth but can only protect you with the club of criminal justice has either failed at enforcing its good intentions or has succeeded at something much darker...

Before I could escape [the ghetto], I had to survive, and this could only mean a clash with the streets, by which I mean not just physical blocks, nor simply the people packed into them, but the array of lethal puzzles and strange perils which seem to rise up from the asphalt itself. The streets transform every ordinary day into a series of trick questions, and every incorrect answer risks a beat-down, a shooting, or a pregnancy. No one survives unscathed. When I was your age, fully one-third of my brain was concerned with who I was walking to school with, our precise number, the manner of our walk, the number of times I smiled, who or what I smiled at, who offered a pound and who did not—all of which is to say that I practiced the culture of the streets, a culture concerned chiefly with securing the body...

This America that Coates sees is very real. How can his America be the same America that I believe in so strongly?

Let me try to explain. I am not as eloquent as Coates, so I'll use my typical plain-spoken, spluttering style, with visual aids.

Because I grew up Jewish, my parents made me read about the Holocaust at a very young age. I read the testimonials, I saw the pictures. It really freaked me out. I was scared. I remember walking down the street with my dad at the age of 7, and asking him: "Is there anyone like the Nazis around today?" He said "Well, the Soviet Union is a little like the Nazis." That scared me. "What if the Soviet Union comes to get us?", I asked. And his answer was: "We have the most powerful military in the world. And we have tens of thousands of nuclear weapons. There's no way they can get us."

And that was it. Suddenly I felt safe from those cosmic threats, those vast and distant hosts of evil. I had the ultimate backup.

I had this on my side:

I may be too weak to defend myself against an army of bad guys, but behind me stands what you see in the picture above. That is a picture of America coming to protect the weak and forsaken. That is a picture of the ultimate backup.

What about black Americans? Do they have that same backup? Well, they used to, at least once in a while. In the Civil War they did. And in the Civil Rights era they did. Check out this:

That's what I'm talkin' about!! That is the 101st Airborne Division - the same guys from D-Day! - protecting some black kids from assholes who want to stop them from going to school. And the guy who ordered them to do it was the same guy from D-Day - Dwight D. Eisenhower! And how about when the assholes tried to stop the black kids from going to school? This!

Damn straight! 'MURICA!!

But here's the thing - this only happens once in a while. Most of the time, large chunks of black America - not the whole thing, not all African-Americans, but vast swathes - are either left to rot in the anarchy Coates describes, or actively oppressed by police, courts, and prisons.

Don't believe me? What's this, then?

WHAT THE F******************* IS THAT S***?!!!

Or this?!!!

These are peaceful protesters being menaced by militarized police!! These are not racist individuals making personal choices to be mean to black people! This is an institutional, organized, official act of government oppression of a peaceful American populace! Where's the 101st Airborne when you need them???!!

It's not just black people who get this treatment these days, of course - I saw the same scene at anti-Iraq War protests in 2003. But for a lot of black Americans, this is their only contact with the police, with the armed might of America, except for getting arrested and thrown in hellish prisons.

We lock up millions of people in this nation - far more even than authoritarian nations like China and Russia. It's our own Gulag Archipelago. In these prisons most inmates are raped and beaten. All of them are forced to do slave labor. After they are released, they can never vote in American elections again. And they have extreme trouble ever finding jobs again. They are, in short, tortured, enslaved, and then reduced to permanent second-class citizens.

Black Americans are far more vulnerable to this bullshit than other Americans. Racial profiling means they are far more likely to get arrested (and are killed by police at much higher rates). Racist sentencing means they are far more likely to be given prison terms, and longer ones. And the militarized response of police to protests in Ferguson and Baltimore makes it clear to black communities that America is not on their side - that the police are not simply trying to protect poor black communities from a few bad apples, but simply trying to suppress and contain poor black communities and keep them cordoned off from the rest of America.

This is exactly what Coates talks about in his essay. It's real. It's true. You can see it on YouTube.

For white Americans, there is at least the idea that this is temporary, that America will correct these abuses, right the wrongs, and return to the correct path. For Asian or Hispanic Americans, there is probably less certainty (remember those internment camps?), but still an overall feeling of positivity. But when black Americans look back on American history, what do they see? Centuries of racial enslavement. Another century of lynchings, segregation, and KKK terrorism. And essentially zero lag between the end of segregation and the beginning of the prison gulags.

For me, growing up, America represented the ultimate backup, the ultimate protector. Many black Americans - not all, of course, but many - feel utterly forsaken by that protector. They are lost, abandoned in a violent, frightening world, with no backup, no sheltering presence, no cavalry to ride to the rescue when the chips are down. Maybe just cavalry to ride over your face.

To me this is unacceptable. Unacceptable! Those are our people - our countrymen! America should be the protector of the weak, not the oppressor - and when the weak are Americans themselves, the imperative is a million times stronger! No American should ever feel forsaken by the powers that be. No American should ever feel that the system is against him or her.

To me the plight of American black people is not about justice. It's not about righting the wrongs of history. It's about giving black Americans the certainty that America is a country that is always - ALWAYS - on their side, that will always fight for them, not against them. It's about backup.

I'm a nationalist. I firmly believe that the fundamental essence of America is the belief in human equality and freedom, and that this essence will continue to win out over the darker tendencies that afflict our nation and all nations. But this essence is not a magical force that rights all wrongs while we sit on our couches and scarf Doritos and Mountain Dew. My relatives weren't in the D-Day picture above, but they were in the overall effort, flying planes or fighting in Asia or doing cryptography. The army of backup doesn't exist unless a bunch of us join up and volunteer.

And to our credit, lots of us are joining up and volunteering. The protests after the killing of Eric Garner were enormous. But we need more than protesters - we need police departments, judges, and legislators to use their power to stop the madness. Stop the police militarization, stop the racial profiling, stop the racist sentencing, and stop mass incarceration.

If we can save France from Hitler, if we can save China from the Japanese Empire, if we can save Europe from the Soviet Union, we can save our own people from our own system.

Saturday, July 04, 2015

(Warning: This post does not contain economics, and does contain nationalism.)

The 4th of July isn't the anniversary of the Constitution, it's the anniversary of the Declaration of Independence. The Constitution set up our government, and Americans tend to revere it as the immortal symbol and spirit of our country, much as Japanese people traditionally saw the Emperor. But in many ways, the Declaration of Independence is the more important document. By declaring when it's OK for a populace to abolish their government and form a new one, the Declaration necessarily appeals to powers that are higher than any Constitution, king, or emperor. The powers it appeals to are 1) People power, 2) God, and 3) moral ideals.

The ideals of the Declaration are many, but the basic ones are laid out in the second sentence:

We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.

This is the sentence everyone can quote, and with good reason. This sentence is the core idea of the United States as a nation-state.

When you think about it, it's pretty incredible that people - especially people in the 18th century - would put a moral statement as their reason for founding a new nation. This sentence makes it clear that the United States was not founded as just a group of people and the plot of land they live on. It was a nation created to advance a set of moral ideals.

Of the ideals in that sentence, the most dramatic, the most memorable, and the most astonishingly radical is "all men are created equal". I feel like if the idea of the United States of America were stripped down to its very essence, these are the five words that would remain.

What the heck does "all men are created equal" even mean? Does it mean that they're equal in some measurable characteristic, like height? Obviously not. In fact, "equal" obviously doesn't mean - can't mean - anything that can be proven or disproven with facts. The founders could not possibly be using the words "self-evident truth" to mean "measurable fact", so they must be using them to mean something else. "All men are created equal" must be a prescriptive statement - a statement of opinion, a codified expression of emotion.

It's a statement about how society ought to treat people. It says, basically, that society shouldn't treat people differently based on the conditions of their birth. There's really nothing else it could mean.

"All men are created equal"! What an astounding thing to say! What an astounding thing for a bunch of slaveholders to say! And what an astounding thing to base a country on! Centuries later, people from other countries would still boggle and balk at it. It's a statement that turns almost everything about traditional human society on its head.

Now, lots of people make sweeping moral statements that they or their descendants completely ignore. But this one has proven to be astonishingly powerful down through the centuries. Again and again "all men are created equal" is something Americans have proven willing to fight for. We fought the Civil War to enforce the ideal of "all men are created equal" (no, slavery was not the only reason the Union fought, but it was really what the conflict was all about). George Washington knew from Day 1 that this was going to happen. In 1797 he told a guest that "I can clearly foresee that nothing but the rooting out of slavery can perpetuate the existence of our union, by consolidating it in a common bond of principal (sic)."

The rest of American history involves a long list of episodes in which people fought for the principle of "all men are created equal". It wasn't too long before it became obvious that "men" meant women as well - the women's rights and feminist movements clearly spring from the idea that "all [people] are created equal". The civil rights movement was another such battle, and just recently, the triumph of gay marriage was yet another. The fight against police racism is an ongoing example. The U.S. is not always the first to implement policies that confer greater equality (though we're usually one of the first), but I can't think of another country where the cause of equality arouses as much popular feeling.

The American ideal of "freedom" has traditionally been conflated with this ideal of "equality" - in fact, some early drafts of the Declaration said "all men are born equally free." The idea is that a society that treats people differently due to their circumstances of birth is not a free society. Later, Communism adopted the language of "equality", using the term to mean something very different (and leading to interminable high-school debates over "equality of opportunity vs. equality of outcome"). But the notion of "equality" as "freedom from social discrimination based on the circumstances of birth" lives on, and now that Communism is effectively dead, the word "equality" is starting to make a comeback - e.g. in the marriage equality movement.

In fact, I don't think it's a stretch to say that every distinctly American ideal springs from "all men are created equal". Democracy, the rule of law, human rights and civil liberties - these all follow from that one animating idea. If "all men are created equal", they all deserve to have a voice in government, they all deserve equal treatment under the law, and they all deserve the same rights and protections. Even capitalism itself relies partly on the notion that everyone should have an equal chance to participate in the economy - that there should be no hereditary rentier class pulling the levers.

The American idea of "all men are created equal" is a radical ideology of incredible power, and it has influenced other nations tremendously. It inspired the French Revolution. It inspired the thinking of the Meiji Restoration reformers. It reshaped much of Europe and Asia after the World Wars, inspiring people there to remake their societies in America's image.

This is why I'm a nationalist at heart. I'm not a jingoist who thinks America always does the right thing. I'm not a blood-and-soil nationalist who thinks of America as a race plus a plot of land. I believe in those five words - "all men are created equal". It's an ambiguous statement, an emotional statement, a religious statement. It's also one of the most powerful and transformative and positive ideas the world has ever known.

That's why we should celebrate July 4th. When our founders wrote those five words, everything changed. Once you say "all men are created equal," once you throw down that gauntlet, you can never take it back. As long as America exists - and even if it dies - "all men are created equal" is here to stay, animating the hearts and minds of billions. In terms of human history, there was Before July 4th, 1776, and there was After July 4th, 1776. I'm damn glad I live in the After.

Tuesday, June 30, 2015

Recently, a California court ruled that Uber has to treat its drivers as employees, with all the regulatory costs that entails. Most people think that this will hamper Uber a bit but not kill it. But a few, like Megan McArdle, think that the ruling spells Uber's demise. What if McArdle is right? What do we conclude?

First of all, it's important to point out that Uber might die for reasons totally unrelated to the California decision. Companies die all the time for reasons that have nothing to do with regulation. Financial statements show Uber taking a pretty big loss at some point in the recent past, which might mean that competition has been a lot stiffer than expected. So if Uber dies, disentangling causality will be very difficult.

But IF the California ruling, and others like it, are what put a stake through Uber's heart, then I think we conclude two things:

1. Uber wasn't actually that amazing of an idea.

2. Our labor regulation is too stringent.

Why do we conclude #1? Because lots of business models absorb the cost of labor regulations and keep right on turning a profit. Wal-Mart does it. McDonald's does it. If your idea can't even clear that hurdle, it wasn't really creating that much value.

Why do we conclude #2? Because Uber is providing lots of people with work. Many people who would not otherwise be driving taxis are now becoming Uber drivers. That they are choosing to do this means that Uber is good for labor markets. In the interests of improving our labor markets, we should reduce regulations that keep people from doing jobs they'd be willing to do, as long as those jobs are safe and meet other minimum standards of quality (such as paying overtime). Assuming that Uber driving is a safe job that meets minimum standards of quality - which I'm willing to assume - we don't want to regulate the job out of existence.

I suspect that neither (1) nor (2) is true. I suspect that Uber actually creates more than a tiny sliver of value, with its network effect and its circumvention of the local monopoly of taxicabs. And I also suspect that American labor regulations are not so onerous that they are putting large numbers of people out of a job.

Thus, I predict that the California ruling will not kill Uber. Uber may still die of other causes, but I don't think that being forced to call its employees "employees" will do it in.