
Perils of Transit Journalism II: Judge Transit By Its Goals, Not Yours

In the last post I took on the problem of poor data analysis in transit journalism, specifically how easy it is to create totally arbitrary stories out of performance trends. This post is about another routine mistake in transit journalism, which is to make false assumptions about what transit investments are trying to do. The most common example is to assume that short-term ridership is the only valid measure of transit’s success.

For this series, I’m taking examples from an example-rich Los Angeles Times article on the alleged “accelerating” decline of transit ridership in Los Angeles. The article, by Laura Nelson and Dan Weikel, is provoking a lot of commentary, but it’s not unusual in the assumptions it’s making.

Here’s the lede again:

For almost a decade, transit ridership has declined across Southern California despite enormous and costly efforts by top transportation officials to entice people out of their cars and onto buses and trains.

The Los Angeles County Metropolitan Transportation Authority, the region’s largest carrier, lost more than 10% of its boardings from 2006 to 2015, a decline that appears to be accelerating. Despite a $9-billion investment in new light rail and subway lines, Metro now has fewer boardings than it did three decades ago, when buses were the county’s only transit option.

Long Term Infrastructure Is Not About Short Term Ridership

Here and throughout the article, Nelson and Weikel insistently pair the ridership drop with the $9 billion cost of an infrastructure investment program, giving the reader the impression that they must be connected.

This is like saying that your crops failed because you didn’t have a harvest the day after you planted them.

In this case, the “$9 billion investment” in infrastructure has nothing to do with the short term ridership being discussed here. The rapid transit program is designed for long-term ridership growth and city-shaping effects. One of the key things these lines do, for example, is make denser development viable, which enables more people to live and work where transit is excellent and therefore rely on it more. Nelson and Weikel quote Metro’s CEO making this point, but they don’t really explain why long-term investment works.

So how would you assess the value of these lines? You’d look not just at ridership but at what’s happening in real estate development along them. You’d look at what the trends are in demand for housing and jobs near good transit, and extrapolate to show the benefits — for livability and in terms of long-term ridership potential — of meeting that demand. And you’d see how similar investments in similar places have paid off in the past.

In a Tweet yesterday, Nelson said:

The ‘it’s too early to evaluate’ response (a favorite of Metro’s) has always struck me as self-inoculating.

Yes, it can strike me that way too, which is why great transit agencies put a lot of effort into explaining this issue. But that doesn’t make it wrong.

It takes a while, but great transit investments (and they’re not all great) do pay off when they’re done well. And when they’re not done well, it’s not ridership that tells you this, but rather contradictions in the plan — like putting stations in places where dense development is illegal or impossible, or building in too many sources of delay while claiming that the line will be fast and reliable.

It’s easy for people to take pot-shots at transit projects by arbitrarily focusing on one outcome. On a recent Seattle Times panel I was on, for example, Brian Mistele of the traffic analysis firm Inrix repeatedly criticized a Seattle area light rail line for not reducing congestion on the adjacent freeway, as though that were its purpose. Another common example of this mistake is to criticize bus services for low ridership without asking if ridership is even their goal; many bus services exist for non-ridership purposes.

So remember: If you’re going to imply that something is failing, you have to understand what it’s actually trying to do, and show it’s failing at that.

Compare Ridership to the Service Offered

A broader point here is that ridership, and especially ridership trends, are meaningless unless they are compared to the service offered to achieve them. This article gives the appearance of doing that by making a false comparison between short term ridership and long term investments. This echoes the common fallacy that transit ridership is generated by infrastructure.

In fact, transit ridership comes from operating service. Infrastructure is mostly a way to make that service more efficient and attractive, but its impact on ridership is indirect, while the impact of service is direct.

So the most important frame of reference for ridership is the quantity of service being operated, not capital dollars being spent. This is why the article (and transit agency databases in general) should be showing productivity (ridership per unit of service provided). This “bang for buck” measure is the only way to tell whether transit is succeeding given how much service is being offered.
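To make the distinction concrete, here is a minimal sketch (the route names and numbers are invented for illustration, not drawn from any agency’s data) of why raw ridership and productivity can rank the same services differently:

```python
# Hypothetical figures: two bus routes compared on raw boardings
# versus productivity (boardings per hour of service operated).
routes = {
    "Route A": {"boardings": 12000, "service_hours": 400},
    "Route B": {"boardings": 6000, "service_hours": 120},
}

for name, r in routes.items():
    productivity = r["boardings"] / r["service_hours"]
    print(f"{name}: {r['boardings']} boardings, "
          f"{productivity:.0f} boardings per service hour")
# Route A carries more total riders, but Route B gets far more
# ridership out of each hour of service it is given.
```

By the raw-ridership measure Route A looks twice as successful; by the “bang for buck” measure it is the weaker performer.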

3 Responses to Perils of Transit Journalism II: Judge Transit By Its Goals, Not Yours

“Which Way, LA?” (a long-running but now defunct podcast on LA issues) had a number of transportation folks on a couple weeks ago talking about this exact same issue. I was shocked that no one called the host out for his single-minded focus on short-term ridership. Long-term expectations for gasoline prices were not mentioned, nor was the necessity of planning or the land use transportation connection. Proper pricing was mentioned at some point, but if that’s the best we can do as a profession we’re definitely not going to win the PR battle. Your point about actually operated service is also well taken. Getting these ideas into the public debate is an ongoing challenge.

I think this problem is common for many complex problems. There is a natural tendency towards simple solutions and simple measurements. Short term transit ridership is but one way to simplify the discussion, but it is a very crude, and often incorrect measurement of the value of a system.

I will say, though, that if the goal is to “build the city of the future”, or otherwise channel growth to those areas, then it ought to be spelled out before you actually build the thing. Because, quite frankly, many cities won’t care. If you can get a federal grant for a great new transit line that will channel growth to the middle of Detroit, then hurrah. But many cities, especially west coast cities, couldn’t care less. They want a way to get from one place to the other, and nothing more.

But ridership data alone often fails. My worst pet peeve is when they focus on the ridership of the new system, ignoring the possibility that people just switched from the old. Yes, that new streetcar carries 10,000 people a day, but the old bus carried the same amount (before you killed it).

Even when you talk about the entire system you have to be careful. There are a lot of people who simply prefer driving. I know you just picked a random example, and it happened to be L.A., but a city known for its freeways and car culture in general (e.g. “Drivin’ down your freeway”, “L.A. Woman”) probably has a lot of people like that. You can build the nicest, most convenient transit system in the world and they will still drive. Likewise you have the opposite: people who will wait a very long time to catch the next bus, only to spend a huge amount of time stuck in traffic. Should we ignore them and try to get more of the first group? Of course not.

We should measure a system improvement by how much time is saved on a trip using transit, multiplied by how often that trip is made (whether that trip is actually made by transit or not). Often that corresponds with overall transit ridership, but sometimes it doesn’t.

I did some quick math to compare ridership to the service offered, as Jarrett suggests.

Between 2009 and 2013 (arbitrary years, I know) the amount of service LA Metro was able to offer declined from 7.5 million hours to 6.7 million hours.

Over the same period, boardings (a measure of ridership) also declined.

Ridership per service hour is the real measure of “bang for buck,” not total ridership. And we can get a sense of that by dividing total boardings by service hours.

From 2009 to 2013, boardings per service hour stayed roughly flat, but actually increased slightly, from 51.4 in 2009 to 52.4 in 2013.
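These figures can be checked with a quick sketch. Only the service hours and the boardings-per-hour rates are quoted above; the total boardings here are implied by multiplying them together:

```python
# Figures quoted in the comment: LA Metro annual service hours and
# boardings per service hour, 2009 vs. 2013.
hours_2009, hours_2013 = 7.5e6, 6.7e6
rate_2009, rate_2013 = 51.4, 52.4  # boardings per service hour

# Implied total boardings (service offered times productivity).
boardings_2009 = hours_2009 * rate_2009  # ~385.5 million
boardings_2013 = hours_2013 * rate_2013  # ~351.1 million

service_change = (hours_2013 - hours_2009) / hours_2009
ridership_change = (boardings_2013 - boardings_2009) / boardings_2009
print(f"service: {service_change:+.1%}, ridership: {ridership_change:+.1%}")
```

The arithmetic bears out the comment’s point: service shrank by roughly 11%, ridership by roughly 9%, so ridership fell by less than the service cut, which is exactly what flat-to-slightly-rising productivity implies.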

I’ll gladly accept the criticism that these are arbitrary starting and ending years, and if anyone would like to look further into the past, or look at 2014 data, please do!

But the basic point is, “bang for the buck” has not gone down. Looking at ridership, alone, as though it were a measure of Angelenos desires or needs for transit, misses the point. You can’t ride transit service that isn’t being provided!


The Author

Since 1991 I've been a consulting transit planner, helping to design transit networks and policies for a huge range of communities. My goal here is to start conversations about how transit works, and how we can use it to create better cities and towns. Read more.