E-gov leaders look for a measure of satisfaction

As the 25 e-government projects move into the third phase of their lifecycle—usage—the Office of Management and Budget is asking agencies to measure customer satisfaction.

While measuring customer satisfaction is not a new concept, the way agencies gauge whether the Quicksilver initiatives meet their goals will be more consistent in 2007.

In its annual report to Congress on the benefits of e-government, OMB released the first set of measures for 18 of 25 Quicksilver projects focusing on three areas: customer satisfaction, adoption and participation, and usage (GCN.com, Quickfind 724).

“We are trying to measure what success means,” said Karen Evans, OMB’s administrator for e-government and IT, during a meeting held by the Treasury Department’s Federal Consulting Group on the results of the 2006 Customer Satisfaction Survey. “We want measures that show results. We want to increase the usage of the 25 initiatives.”

For customer satisfaction, Evans said agencies pulled the metrics directly from the American Customer Satisfaction Index, which is put together by the University of Michigan and ForeSee Results of Ann Arbor, Mich.

ForeSee measures satisfaction by surveying randomly selected site visitors and grading the responses on a 100-point scale.

The real question is how agencies are serving citizens better—and how they are able to track that, said Andrew Ciafardini, OMB’s Government-to-Citizen portfolio manager.

“One of the most important things about these metrics is that they are actionable,” he added. “That is one of the guiding principles we built into the performance measures: making sure they are there to do something about them and make initiatives better.”

The projects have received mixed reviews from citizens, businesses and federal users. Under the E-Training initiative, for instance, almost every major agency is using a learning management system or has hired one of the E-Training providers, said Norm Enger, Office of Personnel Management’s director of the Human Resources Line of Business Program Management Office.

But under E-Travel, only 25 percent of agencies have fully deployed one of three available systems, OMB said in its report.

GovBenefits.gov and GovLoans.gov also experienced low citizen payoff, with only 35 percent of the visitors transferred to agency-specific benefits sites. Recreation One-Stop found slightly more success, as 55 percent of all reservations for national parks in the fourth quarter of 2006 were made through the site.

Evans said projects for too long were focused on output measures such as Web site visits, but that the new metrics provide outcome-oriented goals.

“The metrics will put clear focus on uptake, so that mature projects will shift their energy to get the good capabilities that are in place used more,” said Keith Thurston, an assistant deputy associate administrator in the General Services Administration’s Office of Governmentwide Policy. “[The metrics] are better in that they give external-view metrics about utilization that a high-level executive would ask.”

In part, Evans credits OMB’s deputy director for management, Clay Johnson, for asking outcome-oriented questions that her staff or the agency project managers could not readily answer.

Johnson’s questions became a key factor in how OMB and agencies developed new, consistent measures for each project, Ciafardini said.

“Part of our guiding principles in this: Put it in context, so when you are measuring metrics, what is the universe it is out of rather than just saying you want to get to 100,000 hits a month on a Web site,” he said. “How many should you be getting, and what is the universe of people who could use the Web site? The other thing is putting it into plain language. How can we make sure people can clearly understand them from how you list them? So we don’t get into technical jargon.”