4. Major Findings and Conclusions

This chapter summarizes the major findings and conclusions from the fourth year of the Mobility Monitoring Program. To date, this Program has gathered and analyzed archived traffic detector data from 2000 through 2003. However, this fourth year provided the first opportunity to analyze more than two years of annual trends for more than 20 cities. Thus the findings are mainly focused on this Program’s key objectives of tracking nationwide estimates of traffic congestion and reliability.

This chapter is presented in the form of “frequently asked questions” and as such departs from the normal tradition of a technical report. Our intention is to make the most desired information readily accessible and understandable, as opposed to making readers slog through numerous pages of data tables and charts that simply summarize the data but provide no interpretation or message. The frequently asked questions that are answered in this chapter are as follows:

Are traffic congestion and/or travel reliability getting worse?

How does reliability relate to average congestion levels?

How are the performance measures different, and what does each tell me?

Are Traffic Congestion and/or Travel Reliability Getting Worse?

The short answer is yes, it appears that traffic congestion and travel reliability have gotten worse since 2000, the earliest data gathered for the Mobility Monitoring Program (Table 8). The congestion trends in this study differ among cities but the national estimates generally agree with those reported in the Urban Mobility Report. The four years (2000 through 2003) of archived detector data in the Mobility Monitoring Program point to an overall national trend of steady growth in traffic congestion and decline in travel reliability.

There are, however, several footnotes and caveats that must be highlighted and considered in this same discussion of worsening traffic congestion and travel reliability.

The measurement system is changing every year — Ideally, trend analyses would be based on a stable measurement system. However, the growth in the number of cities as well as the growth in ITS deployment has produced a congestion measurement system that has grown substantially in its first four years. The national estimates in Table 8 address some of the measurement system change by using the same 20 cities that were able to provide data from 2001 to 2003, while dropping year 2000 estimates that included only 10 cities. However, even in the 20 cities considered in this table, freeway lane-mile coverage increased by 11 percent between 2001 and 2003, whereas the total VMT measured in these 20 cities increased by 32 percent over the same time period. This trend is likely to continue as ITS deployment progresses across the nation.

Several cities appear to have suspect data — Some charts in the city reports for certain cities show odd or unexpected trends. For example, the data from several cities indicate that congestion and delay are 3 to 4 times worse in the evening peak period than in the morning peak period, or vice versa. Other charts indicate low vehicle speeds during times of typically light traffic (such as the early morning). Although most performance measures and their trends fall within the range of possibility, the trends in several cities are nearly outside the range of probability. In other words, it is possible but doubtful that some of these trends truly exist, and therefore the data in those cities should be considered suspect.

The national estimates include only freeways where traffic detector data have been collected for operations purposes and then archived — Traffic detector data are typically collected on the most congested freeways, which may not be a representative sample of areawide freeway conditions. Several cities do have significant freeway coverage—for example, 14 of the 29 cities have more than 50 percent of their freeway lane-miles instrumented with traffic sensors. Across all 29 cities, the average percent coverage of freeway lane-miles is 53 percent (Table 4). Therefore, the actual performance measure values in Table 8 may be slightly high because they reflect the half of the freeway system that is most congested. The trend values are also affected by this coverage issue, as some of the outlying freeway sections that have the fastest congestion growth may not be instrumented with traffic sensors yet.

Table 8. National Congestion and Reliability Trends
Mobility Monitoring Program (includes a sample of freeways in 20 cities¹)

Measure | 2001 | 2002 | 2003 | Change 2001-2002 | Change 2002-2003 | Change 2001-2003

Further analyses were performed to test the hypothesis that the 11 percent increase in freeway coverage between 2001 and 2003 affects the trends presented in Table 8. In these analyses, comparisons were made only between those freeway sections that were collecting data from 2001 through 2003, thereby eliminating the effect of increasing freeway coverage. The results of this analysis are shown in Table 9.

We make the following observations from the results in Table 9 (which keeps the freeway coverage constant):

With regard to average peak period congestion level (travel time index), more freeway sections are getting better than getting worse, but only by a small margin (4 percent).

With regard to peak period travel reliability (buffer index), more freeway sections are getting better than getting worse, but only by a small margin (3 percent).

With regard to total delay experienced at all times of the day, both weekday and weekend, more freeway sections are getting worse than getting better, by a relatively large amount (12 percent).

Table 9. Trends (2001-2003) in Congestion and Reliability at the Freeway Section Level

National Estimate (20 cities): Number (%) of freeway sections

Measure            | Getting worse¹ | No significant change² | Getting better³
Travel Time Index  | 151 (31%)      | 168 (34%)              | 174 (35%)
Buffer Index       | 174 (38%)      |  99 (21%)              | 188 (41%)
Total Daily Delay  | 241 (49%)      |  70 (14%)              | 182 (37%)

Notes:
¹ Getting worse means that, from 2001 to 2003, 1) the travel time index increased by 3 or more points; 2) the buffer index increased by 3 percentage-points or more; or 3) the delay increased by 10 percent or more.
² No significant change means that, from 2001 to 2003, 1) the travel time index changed by less than 3 points; 2) the buffer index changed by less than 3 percentage-points; or 3) the delay changed by less than 10 percent.
³ Getting better means that, from 2001 to 2003, 1) the travel time index decreased by 3 or more points; 2) the buffer index decreased by 3 percentage-points or more; or 3) the delay decreased by 10 percent or more.
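The classification rules in the table notes can be expressed as a small function. This is an illustrative sketch, not code from the Program; the threshold values are taken directly from the note definitions.

```python
def classify(change, threshold):
    """Classify a 2001-2003 change in a measure as getting worse,
    getting better, or no significant change, using a symmetric
    threshold: 3 points for the travel time index, 3 percentage-points
    for the buffer index, or 10 percent for total daily delay."""
    if change >= threshold:
        return "getting worse"
    if change <= -threshold:
        return "getting better"
    return "no significant change"

# Illustrative changes (not Program data):
classify(4, 3)     # travel time index up 4 points: getting worse
classify(-2, 3)    # buffer index down 2 percentage-points: no significant change
classify(-15, 10)  # delay down 15 percent: getting better
```

Applying such a rule to each instrumented freeway section, for each measure separately, yields section counts like those in Table 9.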

A possible explanation for the different outcomes for the different performance measures in Table 9 is that peak period congestion and reliability may be getting better, but delay during other times of the weekday and on weekends has grown considerably. There are small differences (3 and 4 percent) between the percentages getting better and worse for both the travel time index and buffer index, so “no significant change” is also within the margin of error for this type of analysis. Economic conditions between 2001 and 2003 also could have contributed to slightly lower congestion levels.

Table 10 provides detailed results for all cities for all available years of data. This table indicates that the congestion, delay, and reliability trends differ among the cities. For the travel time index, several cities show relatively stable results between 2000 and 2003, such as Detroit, Phoenix, and Seattle. Other cities' travel time index values have grown considerably, such as Atlanta and Houston. Still other cities have shown an up-and-down fluctuation over the past three to four years, such as Cincinnati, Los Angeles, and Minneapolis-St. Paul. The buffer index values in Table 10 typically exhibit less fluctuation than the travel time index values, as buffer index values for most cities range between 10 and 20 percent.

Figure 4 illustrates the necessity of keeping the measurement coverage (number of cities and miles of coverage) relatively constant when examining congestion and reliability trends. This figure shows the national day-to-day and rolling averages (30-day) for the travel time and planning time indexes from 2000 through 2003. The planning time index is shown here with the travel time index because it has the same units (the buffer index is reported as a percentage) and displays the near-worst case travel time index values. Several observations can be made:

There is a significant amount of day-to-day fluctuation in both the congestion and reliability measures, indicating the importance of including day-to-day reliability as a performance measure.

Because the chart includes all cities and all available data, it shows the effects of adding freeway coverage, as the trend lines show an abrupt change at the beginning of 2001 when 11 additional cities were added. Also, the trend lines dropped in 2003 when six cities were added (Baltimore, Dallas, El Paso, Orange County (CA), Riverside-San Bernardino (CA), and San Francisco).
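The 30-day rolling averages plotted in Figure 4 can be computed with a simple trailing window. The sketch below uses invented daily travel time index values purely for illustration.

```python
def rolling_average(values, window=30):
    """Trailing rolling mean; early days average whatever
    history is available until a full window accumulates."""
    out = []
    for i in range(len(values)):
        start = max(0, i - window + 1)
        chunk = values[start:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# Hypothetical day-to-day travel time index values:
daily_tti = [1.20, 1.35, 1.18, 1.42, 1.25, 1.31]
smoothed = rolling_average(daily_tti, window=3)
```

The smoothed series damps the day-to-day fluctuation, which is why shifts that remain visible even in the rolling average (such as the addition of new cities) stand out.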

How Does Reliability Relate to Average Congestion Levels?

The travel time index values reported through the Mobility Monitoring Program are peak period averages for all non-holiday weekdays. As such, the travel time index values represent average traffic congestion levels when considering weekday traffic. The reliability measures (buffer index and planning time index) represent how the travel time index varies between weekdays. A travel time index that is consistently high (does not vary much during the weekdays) should have a low buffer index. Conversely, a travel time index that varies considerably during the weekdays should have a higher buffer index value. But does the congestion level have a relationship with reliability?
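Using the standard definitions of these measures (average travel time over free-flow travel time for the travel time index; the extra margin between the 95th percentile and average travel time for the buffer index; the 95th percentile over free-flow travel time for the planning time index), the relationship can be sketched as follows. The sample travel times are invented for illustration.

```python
def percentile(values, p):
    """Nearest-rank percentile (a simple approximation)."""
    ordered = sorted(values)
    k = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[k]

def measures(travel_times, free_flow_time):
    """Compute (travel time index, buffer index %, planning time index)
    from a set of weekday peak-period travel times, in minutes."""
    avg = sum(travel_times) / len(travel_times)
    p95 = percentile(travel_times, 95)
    tti = avg / free_flow_time              # average congestion level
    buffer_index = (p95 - avg) / avg * 100  # day-to-day reliability, in percent
    pti = p95 / free_flow_time              # near-worst case congestion
    return tti, buffer_index, pti

# Hypothetical peak-period travel times on a 10-minute free-flow trip:
tti, bi, pti = measures([10.0, 11.0, 12.0, 13.0, 20.0], 10.0)
```

Note how a single bad day (the 20-minute trip) raises the buffer and planning time indexes much more than the travel time index, which is the distinction the text describes.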

In analyzing the archived data, we have found a fairly consistent relationship between congestion and reliability levels. That is, when the travel time index increases, the buffer index also increases by a corresponding increment. Figure 5 shows an example of the relationship between congestion and reliability levels as seen in the 2003 data. The figure shows travel time index and buffer index values for three cities. A simple regression line has been drawn for the data from each city.

Figure 5 shows that, for comparable congestion levels (travel time index = 1.40), each of the three example cities would have different reliability levels. For example, City 1 has a predicted “best fit” buffer index value of 47 percent, whereas City 3 has a predicted “best fit” buffer index of 72 percent.
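The “best fit” lines in Figure 5 are simple linear regressions of the buffer index on the travel time index. A minimal ordinary least-squares sketch follows; the section-level data pairs are invented for illustration and are not the Figure 5 data.

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for y = m*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# Hypothetical (travel time index, buffer index in percent) pairs for one city:
tti = [1.10, 1.20, 1.30, 1.45, 1.55]
bi = [12.0, 22.0, 33.0, 48.0, 58.0]
m, b = fit_line(tti, bi)
predicted_bi = m * 1.40 + b  # predicted reliability at a travel time index of 1.40
```

Fitting a separate line per city, then evaluating each at the same travel time index, is how cities at comparable congestion levels can be compared on reliability.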

This preliminary finding is important because it implies that it may be possible to improve the reliability of travel even if the congestion level remains the same. We are only beginning to explore the relationship between average congestion levels and reliability. There are numerous factors that affect the reliability of travel, and future analyses will attempt to better understand the relationship of these factors to congestion and reliability levels:

Inclement weather;

Work zones;

Traffic incidents and incident management practices;

Availability of alternate routes;

Level of traveler information services;

Level of ITS deployment and operations activities; and

“Aggressiveness” of traffic management and operations activities.

How are the Performance Measures Different, and What Does Each Tell Me?

There are cases when a single performance measure can be used in a mobility analysis, but most situations can benefit from more than one measure. Mobility measures such as the travel time index and reliability measures such as the buffer index are related, but they identify different elements of performance. In many cases, the various measures identify different trends. For example, Table 11 shows the trends for several performance measures for three cities: Cincinnati, Houston, and Pittsburgh.

The following observations are made regarding the trends in Table 11:

Cincinnati shows declines in congestion levels but worsening reliability conditions. This goes against the general trend seen in most areas, but the changes are small in both cases. A number of factors could have degraded the reliability while the average congestion level remained about the same. Delay per 1000 VMT declined significantly, due largely to the rise in peak period VMT.

Houston shows an increasing travel time index and decreasing delay per 1000 VMT. This could be the result of increases in travel outside of the normally congested times and road sections, since the travel time index measures peak period conditions only and the delay measure reflects total daily delay. While congestion levels can increase, travel on uncongested road sections and at uncongested times of day can increase faster, because congested sections typically carry less traffic per lane than freeway sections with moderate traffic congestion.

Pittsburgh saw an increase in congested travel from 2002 to 2003, but a decrease in the travel time index. This may reflect the fact that the definition of congested travel is binary—either a section is congested or it is not congested. The travel time index, conversely, is a continuous measure that reflects average peak period conditions.

Table 11. Different Performance Measures May Reveal Changes in Different Elements of Performance

Cincinnati, OH-KY
Measure                              | 2003 | 2002 | Change | 2001 | Change
Travel Time Index                    | 1.29 | 1.30 | -1% ↓  | 1.33 | -3% ↓
Planning Time Index                  | 1.61 | 1.56 | +3% ↑  | 1.63 | -2% ↓
Buffer Index                         | 21%  | 18%  | +3% ↑  | 20%  | +1% ↑
% Congested Travel                   | 67%  | 71%  | -4% ↓  | 81%  | -14% ↓
Total Delay (veh-hours) per 1000 VMT | 4.09 | 5.03 | -19% ↓ | 5.55 | -26% ↓

Houston, TX
Measure                              | 2003 | 2002 | Change | 2001 | Change
Travel Time Index                    | 1.29 | 1.22 | +6% ↑  | 1.11 | +17% ↑
Planning Time Index                  | 1.71 | 1.55 | +10% ↑ | 1.40 | +22% ↑
Buffer Index                         | 26%  | 22%  | +4% ↑  | 19%  | +7% ↑
% Congested Travel                   | 27%  | 30%  | -3% ↓  | 24%  | +3% ↑
Total Delay (veh-hours) per 1000 VMT | 3.34 | 3.78 | -12% ↓ | 3.40 | -2% ↓

Pittsburgh, PA
Measure                              | 2003 | 2002 | Change | 2001 | Change
Travel Time Index                    | 1.20 | 1.23 | -2% ↓  | 1.16 | +3% ↑
Planning Time Index                  | 1.43 | 1.47 | -3% ↓  | 1.31 | +9% ↑
Buffer Index                         | 16%  | 16%  | 0% —   | 10%  | +6% ↑
% Congested Travel                   | 55%  | 50%  | +5% ↑  | 49%  | +6% ↑
Total Delay (veh-hours) per 1000 VMT | 3.72 | 4.05 | -8% ↓  | 3.30 | +13% ↑

Note: Arrows represent relative change from a comparison of the current year (2003) to previous years (2001 and 2002).
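The delay measure in Table 11 normalizes total vehicle-hours of delay by travel volume, so it can fall even while peak period congestion rises if off-peak VMT grows quickly. A sketch of that normalization (the input numbers are invented):

```python
def delay_per_1000_vmt(total_delay_veh_hours, total_vmt):
    """Vehicle-hours of delay per 1,000 vehicle-miles traveled."""
    return total_delay_veh_hours / (total_vmt / 1000.0)

# Hypothetical: 40,900 vehicle-hours of delay over 10 million VMT
rate = delay_per_1000_vmt(40900.0, 10000000.0)  # -> 4.09
```

Because the denominator counts all VMT, growth in uncongested travel dilutes the rate even when delay itself is flat or rising, which is the pattern noted above for Houston.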

These examples demonstrate the following key principles for performance measures:

There is no single best performance measure for all issues/problems.

Each performance measure represents different dimensions of the issue/problem.

Performance monitoring programs should include an interpretation that addresses the fact that different performance measures reveal different aspects of the issue/problem.