The Crime Numbers Game: Management by Manipulation

Near the start of this book’s critique of the New York Police Department’s (NYPD) Compstat program, Drs. Eterno and Silverman invoke Campbell’s Law. In 1976, methodologist Donald T. Campbell warned that “The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.” The authors set out to prove that the NYPD engages in various forms of statistical legerdemain to hold down the rates of serious crime, including routinely downgrading offenses from index to non-index status and attempting to discourage the public from reporting certain crimes. While the authors agree that serious crime has declined very substantially in New York City during the past twenty years, because of these alleged statistical manipulations they feel impelled to “strongly question the extent of this decrease.”

Both authors have longstanding ties to the NYPD. Eterno retired as a captain and now teaches at Molloy College. Silverman is an emeritus professor at John Jay College and the author of a 2001 book that conveyed a generally favorable view of the department’s management changes of the 1990s. Their book is in direct conflict with Franklin Zimring’s recent study of the NYPD, which I reviewed in the March issue of this journal. Zimring portrayed Compstat as a key component in the NYPD’s highly successful strategy, which he credits with sharply reducing crime since the 1990s. (I will discuss these authors’ mutual critiques below.)

Since its launch in 1994, Compstat (originally short for “compare stats”) has evolved into a centralized planning and research unit that sets performance standards – in the form of numerical crime reduction targets – for the NYPD and its subunits. At bi-weekly strategy meetings, the unit’s managers review the latest crime data and assign credit or blame for the results. Compstat has been widely replicated: at least one-third of the nation’s 515 largest police departments have adopted similar programs, as have many major agencies abroad.

Eterno and Silverman offer several different types of evidence to support their critique of Compstat. These include: a survey of retired NYPD field commanders; statements issued by union officials representing patrol officers and sergeants; a leaked internal NYPD report describing manipulation of statistics in one precinct; comparisons of trends in index versus non-index crimes, and of index crimes versus hospital records; undercover recordings of supervisors’ instructions to patrol officers; and reports of abuses in other agencies that have adopted the Compstat approach.

The retirees’ survey was conducted in 2008 among 491 retired members of the Captains Endowment Association, and it was supplemented by in-depth interviews with more than 30 of the respondents. The 309 respondents who had served during the Compstat era reported much less management emphasis on maintaining integrity in crime statistics than their counterparts who had worked in prior years. Among the 160 commanders who had served during the Compstat era and who were aware of post-incident changes in crime reports, 78% said they regarded at least some of those changes as unethical. Compared to respondents from the previous era, those who served during the Compstat era also reported greater pressures from management to downgrade crimes from index to non-index categories. Eterno and Silverman conclude: “Overall, the obvious pattern throughout these data is that there are pressures generated by the leadership of the NYPD emanating from Compstat, which lead to unethical manipulation of crime reports.”

(This summer, in their blog and in a recent New York Times interview, Eterno and Silverman released preliminary findings from a survey they conducted this year among 1,962 retired NYPD officers of all ranks. About half of the 871 respondents who had served during the Compstat era said they had “personal knowledge of crime report manipulation,” and more than 80% of these retirees said they knew of three or more instances “in which officers or their superiors rewrote a crime report to downgrade the offense or intentionally failed to take a complaint alleging a crime.”)

The pressures reported in the retirees’ surveys closely mirror complaints in statements issued by union officials. In a 2004 article, the recording secretary of the Patrolmen’s Benevolent Association (PBA) alleged that managers respond to Compstat pressure to “fake a crime decrease” by failing to file some crime reports, misclassifying felonies as misdemeanors, undervaluing property losses so that an incident is not listed as a felony, and reporting a series of crimes as a single event. The PBA official added: “A particularly insidious way to fudge the numbers is to make it difficult or impossible for people to report crimes – in other words, make the victims feel like criminals so they walk away just to spare themselves further pain and suffering.” In a 2004 press release issued jointly with the Sergeants Benevolent Association, the PBA charged that downgrading of offenses “is a truth that is widely known by members of the department….” And in a 2009 statement, the PBA’s treasurer deplored “a culture of knee-jerk felony-to-misdemeanor downgrades in an effort to improve their showing at those famously stressful Compstat meetings at One Police Plaza. Those numbers represent the dark side of Compstat.”

A retired detective told the Village Voice in 2010 about an extreme example of crime downgrading. As summarized by the authors, the detective said that during an interrogation of a rape suspect he discovered six previous apartment rapes involving the same offender that had been categorized as lesser crimes, mostly criminal trespasses. The detective explained how paperwork was doctored by patrol supervisors: “They look to eliminate certain elements in the narrative. One word or two words can make the change to a misdemeanor.”

Data on non-index crimes committed during the years 2000-2009 are another key source of evidence for Eterno and Silverman. (Until a lawsuit was filed by the New York Times in 2010, these NYPD misdemeanor data were not available to the public.) The authors call attention to the large differences between similar types of index and non-index crimes, including: a 70.7% increase in misdemeanor criminal trespasses versus a 41% decrease in felony burglaries; a 5% drop in misdemeanor sex crimes versus a 38% decrease in felony rapes; and a 9% decline in misdemeanor assaults versus a 42% drop in felony assaults. The authors believe these differences indicate manipulation because trends in similar types of index and non-index crimes would be expected to be roughly parallel.

For additional evidence of impropriety, Eterno and Silverman compare Compstat data to hospital data on assaults and drug-related crimes. The NYPD reported nearly a 50% drop in assaults in the years 1999-2006, but hospital records for these years showed a 90% increase in emergency room visits for assaults, a 129% surge in ER visits for firearms assaults, and a 15% rise in hospitalizations for assault-related injuries. “Absolutely none of the hospital data showed the marked decrease in assaults that the NYPD claims. These data are in stark contrast to the NYPD’s and clearly are evidence of manipulation,” the authors claim. They also cite the disparity between NYPD-reported decreases in drug crimes (a 22.5% decrease in drug use felonies between 2000 and 2009, and a 32% decrease in drug-related misdemeanors between 2002 and 2006) and a 14% increase in the proportion of hospitalizations that were drug-related between 1999 and 2006, according to data from the city’s Department of Health and Mental Hygiene.

What the authors call their strongest evidence of NYPD manipulation comes from three audio tapes secretly recorded in two precincts, in which supervisors are heard issuing instructions on how to minimize the number of victim reports of serious crimes. In one tape, a supervisor instructs officers not to take robbery reports unless the victim is willing to come to the station house to speak with detectives. In another tape, officers are instructed not to take robbery reports from victims if they anticipate that the district attorney will decline to prosecute. The authors comment: “Why do supervisors instruct police officers to question the veracity of robbery victims? It is not because they are interested in fighting crime but because robbery is a number that will be reflected at Compstat meetings. It will make the commander look bad.” An internal report ordered by NYPD Commissioner Raymond Kelly in response to publicity about one of the undercover tapes contained further revelations. The Village Voice obtained a leaked copy of the report in which the department’s own investigators confirmed that “crimes are being improperly reported in order to avoid index-crime classifications, [which is] indicative of a concerted effort to deliberately underreport crime in the 81st precinct.” Eterno and Silverman note that at least four commanders and seven other NYPD officers have been convicted of manipulation of crime statistics, and at least four more cases were pending. They also note that manipulation of crime reports to produce better numbers has been extensively documented in other departments that have adopted Compstat-like performance management systems, including agencies in Atlanta, Baltimore, Dallas and New Orleans as well as policing agencies in the U.K., Australia and France.

Zimring has responded to Eterno and Silverman’s findings, first in a brief appendix to his book and more recently in an interview with the New York Times. Zimring finds their evidence of manipulation to be unconvincing in comparison to the strong correlations he obtained in comparing Compstat data on four serious crimes (homicide, robbery, auto theft and burglary) to outside data from insurance claims, health statistics and victim surveys. Zimring told the newspaper that there is “some underreporting, and there is some downgrading in every police force that I know of,” but those distortions are too small to substantially discredit NYPD statistics for index crimes. In turn, Eterno and Silverman criticize Zimring’s “flawed and incomplete arguments,” such as his reliance on evidence from the medical examiner’s office about murder and non-negligent homicide cases. That office, they assert, is not independent of influence from the NYPD. Further, in answer to Zimring’s use of insurance industry data showing a decline in auto theft claims, the authors report that since 1999 victims of auto theft in New York City have been questioned extensively and required to go back to the scene for a full investigation. This policy, they say, has succeeded in discouraging many victims from filing claims. And they fault Zimring for failing to note the disparity (noted above) between NYPD data showing large decreases in drug complaints versus hospital data showing no such decreases.

Eterno and Silverman’s critique gains credibility from their reliance on multiple and diverse types of evidence. They have clearly demonstrated that data manipulation has occurred at different times and multiple places in the Compstat system. But the debate will continue as to whether they’ve conclusively disproven the NYPD’s and Zimring’s contention that the manipulation is the work of a few “bad apples” looking to advance their careers.

In January 2011, responding to media exposés and complaints from numerous critics, Commissioner Kelly appointed a panel of three former federal prosecutors to investigate the allegations of data manipulation under Compstat. The NYPD’s chief spokesman said that the commissioner appointed this Crime Reporting Review Committee because “there’s been a lot of false, or unfair, accusations against the Police Department….” The panel was given three to six months to complete its report, but so far no report or statement has been issued, according to the City Council’s Public Safety Committee. In December 2011, one of the committee’s three members died and no replacement was named. Last winter, when asked by a local television reporter why no report had been issued after 13 months, Kelly replied, “I can’t control the pace of the work of an independent committee.”

Richard Allinson is the former editor and publisher of Criminal Justice Press.
