Getting the Truth Into Workplace Surveys

THERE'S NO DOUBT that companies can benefit from workplace surveys and questionnaires. A GTE survey in the mid-1990s, for example, revealed that the performance of its different billing operations, as measured by the accuracy of bills sent out, was closely tied to the leadership style of the unit managers. Units whose managers exercised a relatively high degree of control made more mistakes than units with more autonomous workforces. By encouraging changes in leadership style through training sessions, discussion groups, and videos, GTE was able to improve overall billing accuracy by 22% in the year following the survey and another 24% the year after.

Unfortunately, not all assessments produce such useful information, and some of the failures are spectacular. In 1997, for instance, United Parcel Service was hit by a costly strike just ten months after receiving impressive marks on its regular annual survey on worker morale. Although the survey had found that overall employee satisfaction was very high, it had failed to uncover bitter complaints about the proliferation of part-time jobs within the company, a central issue during the strike. In still other cases, the questionnaires themselves cause the company's problems. Dayton Hudson Corporation, one of the nation's largest retailers, reached an out-of-court settlement with a group of employees who had won an injunction against the company's use of a standardized personality test that employees had viewed as an invasion of privacy.

What makes the difference between a good workplace survey and a bad one? The difference, quite simply, is careful and informed design. And it's an unfortunate truth that too many managers and HR professionals have fallen behind advances in survey design. Although the last decade has brought dramatic changes in the field and seen a fivefold increase in the number of publications describing survey results in corporations, many managers still apply design principles formulated 40 or so years ago.

In this article, we'll explore some of the more glaring failures in design and provide 16 guidelines to help companies improve their workplace surveys. These guidelines are based on peer-reviewed research from education and the behavioral sciences, general knowledge in the field of survey design, and our company's experience designing and revising assessments for large corporations. Managers can use these rules either as a primer for developing their own questionnaires or as a reference to assess the quality of work they commission. These recommendations are not intended to serve as absolute rules. But applied judiciously, they will increase response rates and popular support along with accuracy and usefulness. Two years ago, International Truck and Engine Corporation (hereafter called "International") revised its annual workplace survey using our guidelines and saw a leap in the response rate from 33% to 66% of the workforce. These guidelines - and the problems they address - fall into five areas: content, format, language, measurement, and administration.

Guidelines for Content

1. Ask questions about observable behavior rather than thoughts or motives. Many surveys, particularly those designed to assess performance or leadership skill, ask respondents to speculate about the character traits or ideas of other individuals. Our recent work with Duke Energy's Talent Management Group, for example, showed that the working notes for a leadership assessment asked respondents to rate the extent to which their project leader "understands the business and the marketplace." Another question asked respondents to rate the person's ability to "think globally."

While interest in the answers to those questions is understandable, the company is unlikely to obtain the answers by asking the questions directly. For a start, the results of such opinion-based questions are too easy to dispute. Leaders whose understanding of the marketplace was criticized could quite reasonably argue that they understood the company's customers and market better than the respondents imagined. More important, though, the responses to such questions are often biased by associations about the person being evaluated. For example, a substantial body of research shows that people with symmetrical faces, babyish facial features, and large eyes are often perceived to be relatively honest. Indeed, inferences based on appearance are remarkably common, as the prevalence of stereotypes suggests.

The best way around these problems is to ask questions about specific, observable behavior and let respondents draw on their own firsthand experience. This minimizes the potential for distortion. Referring again to the Duke Energy assessment, we revised the question on understanding the marketplace so that it asked respondents to estimate how often the leader "resolves complaints from customers quickly and thoroughly." Although the change did not completely remove the subjectivity of the evaluation - raters and leaders might disagree about what constitutes quick and thorough resolution - at least responses could be tied to discrete events and behaviors that could be tabulated, analyzed, and discussed.

2. Include some items that can be independently verified. Clearly, if there is no relation between survey responses and verifiable facts, something is amiss. Conversely, verifiable responses allow you to reach conclusions about the survey's validity, which is particularly important if the survey measures something new or unusual. For example, we formulated a customized 360-degree assessment tool to evaluate leadership skill at the technology services company EDS. In order to be sure that the test results were valid, we asked (among other validity checks) if the leader "establishes loyal and enduring relationships" with colleagues and staff; we then compared these scores with objective measures, such as staff retention data, from the leader's unit. The high correlation of these measures, along with others, allowed us to prove the assessment's validity when we reported the results and claimed that the survey actually measured what it was designed to measure. In other assessments, we frequently also ask respondents to rate the profitability of their units, which we can then compare with actual profits.

In another case, we designed an anonymous skill assessment for the training department of one of the nation's largest vehicle manufacturers and found that 76% of the engineers believed their skills were above the company average. At most 50% of any group can be above the median, of course - and for a roughly symmetric distribution of skills, much the same holds for the average - so the survey showed how far employee perceptions about this aspect of their work were out of step with reality. The results were invaluable for promoting enrollment in the company's voluntary training program, because few people could argue with the conclusion that 26% of the respondents - nearly 8,000 engineers - had a mistakenly favorable view of their skills.
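The arithmetic behind that finding can be stated as a simple invariant: whatever the distribution of skill, no more than half of a group can sit strictly above the group median. A brief sketch, using made-up self-ratings rather than the manufacturer's data:

```python
# Hypothetical illustration of the "above average" arithmetic.
from statistics import median

def share_above(values, cutoff):
    """Fraction of values strictly greater than the cutoff."""
    return sum(v > cutoff for v in values) / len(values)

# Invented self-ratings on a 1-10 skill scale for a small group
self_ratings = [8, 9, 7, 8, 9, 8, 7, 9, 6, 8]

# However the ratings are distributed, at most half can exceed the median;
# a survey reporting 76% "above average" therefore signals inflated self-views
print(share_above(self_ratings, median(self_ratings)))
```

Comparing each respondent's self-rating against the verifiable group benchmark is what made the 76% figure so hard to argue with.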

In addition to posing questions with verifiable answers, asking qualitative questions in a quantitative survey, although counterintuitive, can provide a way to validate the results. In an employee survey we analyzed for EDS in 2000, we engaged independent, objective readers to classify the topic and valence (positive, negative, or neutral) of each written comment.
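The comment-classification approach described above can be sketched in a few lines: readers tag each comment's valence, the tags are tallied into a score per unit, and those scores can then be compared with the units' numeric ratings. The tags and units below are invented for illustration.

```python
# Hypothetical sketch of valence classification for open-ended comments.
VALENCE_SCORES = {"positive": 1, "neutral": 0, "negative": -1}

def mean_valence(tags):
    """Average valence of reader-assigned tags for a set of comments."""
    return sum(VALENCE_SCORES[t] for t in tags) / len(tags)

# Reader classifications for two units' written comments (illustrative)
unit_a_tags = ["positive", "positive", "neutral", "negative", "positive"]
unit_b_tags = ["negative", "negative", "neutral", "negative", "positive"]

# If the survey is valid, units with higher numeric satisfaction ratings
# should also show higher mean comment valence
print(mean_valence(unit_a_tags), mean_valence(unit_b_tags))
```

Agreement between independent readers would also need to be checked before trusting the tags themselves.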

Dr. Palmer Morrel-Samuels is a research psychologist with extensive training and experience in statistical analysis and assessment design. He has done a considerable amount of research and applied work on communication, testified to the U.S. Congress on employee motivation and its linkage to objective performance metrics, published several articles on survey design in Harvard Business Review and elsewhere, and holds several patents on methods for administering and analyzing workplace assessments. Dr. Morrel-Samuels currently teaches graduate courses on survey design and research methodology at the University of Michigan.