“You can’t manage what you don’t measure” is often attributed to business guru Peter Drucker. But Oxford professor Christopher Hood raises a provocative question: Does “management by numbers” actually lead to improved performance?

Dr. Hood, a well-respected professor of public management, poses this question in a recent article in Public Administration Review, and his answer is: it depends.

He posits that what the numbers are used for, and the operational culture of the organizations in which they are used, will influence the effectiveness of any “management by numbers” strategy.

Dr. Hood says that research into performance management is extending “beyond the strictly technical qualities of performance numbers” to examine the “effects they have in different social contexts,” such as why similar targets produce different results in different organizations, as was found in the United Kingdom’s healthcare system.

He says there are three commonly used kinds of performance numbers: targets, rankings, and “intelligence.” Each works better or worse in different kinds of organizations. While he writes based on his observations of the British government, each of the three approaches has been used in the U.S. government as well, so there may be some commonalities.

The “Targets” Approach.

Targets involve “using numbers to set and monitor minimum thresholds of performance,” such as carbon dioxide emission reduction targets. Proponents of targets argue that “they can provide powerful incentives for performance improvement.” But critics argue that “targets can have an unintentionally negative effect on performance” by creating “threshold effects” (only meet the target, don’t attempt to exceed it), “ratchet effects” (don’t try to meet the target or it will get raised), or “output distortion” (focus only on the target, at the expense of other important values).

Hood says the output distortion effect led to claims that “patients tended to be kept in ambulances parked outside hospitals until emergency rooms . . . were ready to see them . . . because there was no target applying to the amount of time a patient might spend in an ambulance,” though there were targets on how long a patient should wait to be seen by a doctor once inside the hospital.

The Bush Administration’s President’s Management Agenda was heavily driven by the use of targets, with both positive and negative effects. It wasn’t clear which outweighed the other, however, and nothing was quite as dramatic as the British experience.

The “Rankings” Approach.

Rankings “use numbers to compare performance of different units, such as individuals, work groups, organizations, cities, or countries.” They are powerful ways to attract media and senior-level attention. Proponents say they motivate those being ranked to improve their performance, and that this approach overcomes the “ratchet” and “threshold” effects of targets. But critics note that rankings can produce a strong “output distortion” effect, just as targets can. In the education world, this would include teachers “teaching to the test” if schools are being ranked, or even fraud and cheating, such as teachers changing students’ answers after the test is over.

The “Intelligence” Approach.

“’Intelligence’ means using numbers for the purpose of background information for policy development, management intervention, or user choice” rather than for setting targets or rankings. Hood says this is a well-established approach in both the public and private sectors. Its increased use in recent years has come through greater transparency, giving users of public services greater choice in areas such as schools, health care, or housing. But this approach requires that users be capable of making such choices.

Proponents say that this approach allows evidence-based decision-making and focuses attention on progress. Hood says the late quality guru W. Edwards Deming, who founded the total quality management movement, thought this was “the best way to produce continuous improvement in production” without creating the distortions that target and ranking systems can produce. Critics say that such a system of measurement allows performance information to be ignored, since it would be used only if managers found it useful. The critics think that only systems with carrots and sticks produce predictable results.

The Obama Administration has largely focused on this approach with its emphasis on assessing progress in its quarterly reviews of agency priority goals, not the achievement of targets or scoring and ranking agencies on their performance.

While there are proponents and critics of each approach, Hood observes that the effectiveness of each can vary depending on the operating cultures of different organizations. Targets, he says, seem to be more effective in organizations with a hierarchical tradition, clear priorities, and a sense of common purpose; meeting response times in call centers may be a good use of targets. In contrast, rankings can be more effective in organizations with a strong culture of individual effort and a basic urge to compete; rankings of states, for instance, seemed to generate some friendly competition among governors.

This leads to Dr. Hood’s conclusion that no single approach works across the board, and that choosing among performance measurement strategies is “just as delicate a balancing act as many other types of management.”

John M. Kamensky is a Senior Research Fellow for the IBM Center for the Business of Government. He previously served as deputy director of Vice President Gore's National Partnership for Reinventing Government, a special assistant at the Office of Management and Budget, and as an assistant director at the Government Accountability Office. He is a fellow of the National Academy of Public Administration and received a Masters in Public Affairs from the Lyndon B. Johnson School of Public Affairs at the University of Texas at Austin.
