

DRAFT for COMMENT

Governance Performance Reporting: Who Cares?

Call it what you will – performance management, managing for results, data-driven decision making – it's what good government managers aspire to these days. We true believers are convinced that reliable, timely data on inputs, activities, outputs and outcomes is essential for effective and efficient governance. And we believe that public accountability demands that this performance data be publicly available. But who actually uses the vast amount of information that is produced? And does it actually enhance government performance?

Drawing on my 20 years of performance reporting for six public jurisdictions, and on advice provided to many others, I examine usage by five stakeholder groups – government managers, elected officials, the general public, issue advocates and community leaders. My conclusion: reliable, periodic public performance reporting is helpful, but not sufficient, to improve government results.

Government managers run the engine rooms of government performance improvement. As such, one would expect them to be heavy users of performance reports, and, to a certain extent, they are. Managers can be divided into two relevant categories – agency heads and operations directors.

In my experience, most agency heads find public performance reporting an obligation that is of little use to them. While I was managing publication of the venerable New York City Mayor's Management Report (MMR), an agency head once told me, "No one gives a s%%t about the MMR." He and his peers want actionable, current data, not a compilation of what happened last year. As a result, earnestly crafted performance measures are often ignored or, at best, given lip service at many executive sessions. Perhaps the one positive effect of public performance reporting for agency heads is that the mere existence of the report forces them to look at the data before publication or risk public embarrassment.

Agency heads, more than any other stakeholder, will attempt to shape the way data is reported to cast as positive a light as possible on their agency. In New York City, I could count on a last-minute call from certain agency heads to my boss, the head of the Mayor's Office of Operations, or, in some cases, directly to the deputy mayor for operations demanding changes in the placement of a chart or the 'tone' of a paragraph in a draft MMR. While nothing was ever deleted, more than once I reshaped paragraphs and moved charts to satisfy an agency head.

Operations managers, on the other hand, are inherently data people, and performance measures often reflect directly on their own personal performance. The street cleaning manager of the New York City Sanitation Department would call my office wanting to know where the data was if my monthly published street cleanliness rating was even a day late. Why? Because, in this instance, the data was useful to him. He regularly deployed cleaning crews to areas with low ratings that month. However, neither he nor his superiors were at all interested in altering the way resources were deployed when an analysis of the data demonstrated that ratings in a significant percentage of cleaning districts would not suffer at all if cleaning frequency were cut in half. Why? Too much trouble.

Elected officials should, in an ideal world, be using performance data to guide the ship of state toward better results. Like managers, this group can be divided into two categories – executive and legislative branches.

My number one maxim of performance reporting is, "If the chief elected official is interested, everyone is interested." The opposite also holds true. In my experience, chief executives, such as mayors and governors, tend to lose interest, over the long term, in performance reporting, especially if the data is not going their way. In 2003, the newly elected governor of Oregon chose food insecurity (a more inclusive surrogate for hunger) from among the state-government-generated indicators (called Oregon Benchmarks) as his top human services priority. Having experienced hunger as an orphaned boy, he was determined to improve Oregon's abysmal ranking as one of the hungriest states in the nation. Four years later, after concerted efforts by the governor and his allies, Oregon remained one of the hungriest states in the nation. While the governor lost none of his fervor for hunger reduction, Oregon's hunger ranking disappeared from his rhetoric after a few years of poor performance.

Data fatigue can similarly affect chief elected executives. In New York City, year upon year of relentless performance data, more than 1,500 indicators reported as required by law, appeared to lessen my mayor's legendary passion for data to the point that, in his final years in office, a new issue of his own Mayor's Management Report merited little more than a perfunctory press release.

Every once in a while, I've come across a legislator, city councilor, or member of parliament who was genuinely interested in performance reporting. In Oregon I was lucky to have influential members in both the House and the Senate who genuinely wanted to use Oregon Benchmark data to hold state agencies accountable for achieving results. A law was passed requiring that agencies develop performance measures showing how they contributed to the high-level outcomes represented by the Oregon Benchmarks. Agencies had to report their performance as part of the budget review process and they had to issue detailed performance reports analyzing their results. Unfortunately, a decade later the legislature had eliminated all funding for Benchmark reporting and the entire process had been allowed to atrophy.

In New York City, two City Council committee chairs expressed interest in better using the performance measures found in the Mayor's Management Report to improve agency performance. Unfortunately, the hardball politics of New York City government dictated that Mayor's office employees could not actively support these discussions, so little happened. Like the mayor, the council leadership appeared to have lost interest in performance reporting, not even bothering to hold the yearly hearings on the MMR mandated by the City Charter.

In an ideal world, the interested general public should have at least passing knowledge of a jurisdiction's performance reporting. They are, after all, the avowed audience for developing a public performance reporting approach. And engaging them lends legitimacy to the entire exercise. In my experience, the interested public is happy to know that a trustworthy set of measures is generated for their use, but few will actually take the time to study the information unless engaged in a formal review process.

Public awareness of government performance reports is almost an oxymoron. In Oregon, a biennial survey showed that a robust one in five Oregon adults had heard of the Oregon Benchmarks (many pats on the back), but that was because the measures themselves, including the name, were part of the state's educational attainment standards. In New York City, the web-based performance reports issued by the Mayor's office receive about 20,000 visits per year, which is not bad until you consider that New York has 8.5 million residents. (In comparison, the City's 311 system, which allows citizens to request city government to address a problem ranging from noise to broken signs to leaking fire hydrants, generated over 400,000 requests in 2013.)

Issue advocates, like operations managers, are data junkies. They want data on their particular issue and they want lots of it. In the early days of the current wave of performance reporting, say 20 years ago, they were enthusiastic users of information generated by government performance reports. Back then, the first, and often the only, place to turn for data was a government performance report. In Oregon the state's largest newspaper would devote substantial column inches to a new Benchmark report because we were providing new information on the state's well-being. Today, New York City is required to post all of its machine-readable data, including everything that goes into the Mayor's Management Report, on an open data website – meaning advocates can do their own analysis of measures of their own choosing. (In 2013, the City's open data website received more than 1 million hits.)

While public performance reports are still important to this group – first to validate the importance of their issues and second to put an official imprimatur on a set of data for the ages – the reports themselves are often of secondary importance compared to unformatted data that is theirs for the taking, either from individual agencies or from jurisdiction-wide open data sites.

Opinion leaders are my personal favorite audience for government performance information. Unlike most other stakeholder groups, these self-declared civic improvers usually have no stake in what the data says. And they often have the clout needed to make things happen.

In every jurisdiction I’ve worked, engaging community opinion leaders has been a high priority. In Oregon, I regularly made presentations to organized groups of opinion leaders like Rotary and children’s advocacy groups. In South Australia, where I was director of community engagement for strategic planning in the premier’s office, I organized the first-ever statewide ‘congress’ of opinion leaders where state government officials, with great trepidation, received their marching orders regarding strategic priorities. In New York City, I attempted, with limited success, to engage the City’s 59 community boards in acting on local-area data generated by the Mayor’s Office. (Not unlike the relationship with the City Council, the generally-held view in the Mayor’s office was that community boards were adversaries not partners.)

So, what have I learned?

LESSON # 1 – Performance reports must be trustworthy.

Some form of validation is needed for a purely government report to be taken seriously. In Oregon the Oregon Progress Board, a state entity, was made up of widely recognized civic leaders who served this function. In New York City, the report is issued by the mayor with oversight provided by the independent city auditor. In South Australia, an independent expert advisory group, called the audit committee, approved all performance reports before publication. The Columbia River Gorge Commission, an agency set up by the U.S. Congress to look after the Gorge, used an empowered community leaders' group to oversee the reporting process.

LESSON # 2 – The information provided must matter to the boss.

Many jurisdictions go out of their way to create performance measures and reports that will 'transcend' political administrations. In my experience, that approach only works if the process has a built-in 'refresh' button that allows each new administration to put its stamp on the measures and the underlying vision the measures support. If it doesn't matter to the chief elected official, it doesn't matter.

LESSON # 3 – Cater to the user groups that value the information.

In Oregon, county public officials hungered for performance information that showed how their county compared to Oregon's other 35 counties, so I developed special reports and travelled the state extensively to give them the information they needed. For years, their vocal support in the legislature protected the Progress Board from funding cuts in very lean years. Civic improvers are, as a rule, the influential group most interested in performance data, both government-specific and community-wide. Generally, performance information targeted to some entity with a personal affiliation – an agency, a city or county, an issue – generates the greatest interest.

LESSON #4 – Public performance reports should support a robust data analytics capability.

In New York City, the Mayor's Management Report was the 'Grande Dame' of management improvement, but the data analytics unit, drawing upon myriad data sources, was the driver for solving complex management challenges faced by the City. Performance reporting can identify the what – chronic storm-related sewer overflows in particular areas – but good data analytics is required to answer the why – illegal dumping of restaurant brown grease into storm drains. At the very least, any data-focused performance report should contain commentary explaining the data presented.

LESSON #5 – Data must be understandable.

For the last few years, I've administered a 'data literacy' test to audiences I address. I ask the audience to interpret a simple scatter graph (individual data points plotted against the X and Y axes). Even audiences made up of professional performance reporters rarely score above 50% when asked to answer a less-than-intuitive question. Those of us in the business must constantly go the extra mile to assure that data is understandable.

LESSON #6 – Just because you can measure it doesn't mean you can manage it.

We've all heard the performance credo, "If you can't measure it, you can't manage it," which I personally find a gross simplification of how management occurs. Lots of good management occurs in the absence of measures. On the other hand, just because we can measure hunger, or job creation, or water usage, or murders doesn't mean we can bend those indicators to our collective will. Good measures are better than no measures, but they are far from the solution to persistent societal issues addressed by governments.

######

From 2011 to 2014, Jeff Tryens was the deputy director for performance management in the New York City Mayor’s Office of Operations. Before that he served as executive director of Community Engagement for South Australia’s Strategic Plan in the Premier’s office. For ten years, he served as executive director of the Oregon Progress Board. He has also helped the Columbia River Gorge Commission, the Portland area Metro Council and Portland Parks and Recreation develop performance reports. He currently consults on performance management from Sisters, Oregon.

Jeff is a graduate of Harvard’s Kennedy School of Government and the Massachusetts Institute of Technology. He has twice received awards from the American Society for Public Administration for career excellence.