Political Polling is Getting Better
#38

Within the broad class of commercially motivated critics, whose columns or audio/video clips affect the profits of businesses, movie critics are the best known. Those who seek to be successful movie critics must love movies. Although working a few jibes and pans into a column helps entertain readers, the movie critic's review must generally be positive: a favorable review helps build the film's audience, which in turn keeps the critic's career on the road to success and encourages more films that audiences want.

My own interest as a polling critic and lover of good polling is to serve a function analogous to the movie critic's. I try to encourage politicians and political observers to help produce government policy that works better for the entire public by exploring what the people themselves want from policy. This approach to political polling is called "public-interest polling." I have frequently written columns criticizing commercial pollsters whose need to please sponsors limits the kinds of questions they ask. Politician sponsors assume that what they need from polling is to learn how to frame and contextualize the policies they prefer, the ones they have already chosen for enactment. Another important group of sponsors, the major media, generally want to uncover only what will serve their media interests. I call both the politicians' and the media's approach "commercial polling." Both public-interest and commercial pollsters appropriately use random sampling theory, and departures by either from well-recognized, high-quality polling methods and techniques are negligible as a practical matter.

A little history is required to understand how these two kinds of polling evolved. Random-sample polls, based on statistically sound theory, broke the mainstream news media barrier in 1936 and opened the modern era of professional polling. For a few years, two of the earliest pollsters, George Gallup, Sr., and Elmo Roper, experimented with question design and wording; they were enthusiastic about polling's potential to help politicians understand what the public wanted and needed from government, an approach that much later came to be called "public-interest polling."

By the end of World War II, pollsters under the pressure of market forces had swung over to a different approach to polling that is the basis of virtually all political polling since then, which I have called "commercial polling." Prior to about 10 or 15 years ago, it was not clear to anybody, including me, that there was a big difference in findings between these two kinds of political polling: public-interest and commercial. A few who initiated and supported public-interest polling beginning after World War II slowly discovered that the gap between the two kinds of polling was wider than anyone had imagined. Still, the volume of public-interest polling available to most politicians and observers was negligible to non-existent, compared to the huge volume of commercial polling.

A few words about my own involvement in polling are relevant. In 1946, along with other interpreters in the U.S. Army in Japan, I conducted "man on the street" interviews for Gen. Douglas MacArthur. In the 1980s, I participated in designing, sponsoring and conducting a few political and market-research business polls. I have never been a commercial pollster. Beginning in 1987, I ran the Americans Talk Issues (ATI) Foundation, which through 1999 conducted over 30 surveys, all aimed at uncovering the public interest. It was only when I studied the methods, concepts and findings of the first 28 ATI surveys in preparing the book "Locating Consensus for Democracy – a Ten-Year US Experiment," published in October 1998, that I realized the gap between the two kinds of political polling was huge. The existence and size of the gap is misunderstood or ignored by politicians, pollsters, and the mainstream media to this day.

Now, the good news. The gap is slowly closing. And, much as I would like to take the credit, the gap is closing as a result of the enormous, growing volume of instant, global communication (aka "the Internet") pushing commercial, political pollsters to compete with each other and with public-interest pollsters. Many poll results dealing with different aspects of a major issue within each news cycle reach the broadcasters, who become increasingly content-reliant on well-known polling organizations. Some organizations have their current and archival findings conveniently arranged on their own websites, and have figured out clever ways to get revenue from viewers who need those findings professionally, while requiring payment in a way that does not irritate too much those who think web information should be free. These four are getting some revenue from fees for full access to their websites: (1) Gallup's "Tuesday Briefings," www.gallup.com; (2) Tom Silver's "Polling Report," an aggregator of the polls of others, www.pollingreport.com; (3) Doug Miller's www.globescan.com; and (4) U.K.-based www.yougov.com. Some well-known polling organizations have polling results on their websites at no charge, including (5) Knowledge Systems' collaboration with Steve Kull's University of Maryland "Program on International Policy Attitudes," www.pipa.org; and (6) "The Public Agenda," www.publicagenda.org.

Compared to 10 or 15 years ago, the respectful care and accuracy now given by commercial pollsters to describing their survey findings have led to public acceptance that the polls are usually right and consistent with one another, or that inconsistencies are readily explained. More often now than formerly, significant audiences find credible the pundits' explanations of why policy support is shifting.

The big difference still remaining between public-interest and commercial pollsters is in the findings. Commercial pollsters lack the funding and the experience needed to explore issues in depth and across the wide range of policy choices that makes the public's preferences so revealing. This difference rests on two facts: (1) although in-depth exploration seems to require more questions and raise survey cost, getting the whole story more completely makes public-interest polling a big cost saving over commercial polling in the long run; (2) all of the good techniques for exploring the public's attitudes are still hardly used by commercial pollsters.

These good techniques of public-interest polling include:

(1) Using the "battery." A "battery" is a set of three or more items presented in the same frame, with each item evaluated and rated on the same scale as all the others. Batteries with larger numbers of similar items can be repeated in subsequent surveys, with a few new items added, a few deleted, but most retained unchanged, each item again rated on the same scale. When no major event relevant to the issue has occurred between surveys, the retained items rank in the same order in both the newer and the earlier survey (allowing for small changes in each item's rating due to statistical fluctuations), with each new item slotted into place among the ranked items. To build confidence in the results, the pollster should find that when new items are slight modifications of old ones, the rating change is consistent with what is expected from small wording changes. If one item is the negative of another, then finding that the disapproval rating of one is close to the approval rating of the other gives confidence in the internal consistency of the survey.

An example of a six-item battery is given in Column #36. Definition, characteristics, advantages, and several more examples of batteries can be found by searching on "battery" in Chapter 4 of "Spot-the-Spin."
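The two consistency checks described above can be sketched in code. The following is a minimal illustration, not any pollster's actual procedure; the item names, ratings, and the 5-point tolerance are all invented for the example.

```python
# Sketch of the battery consistency checks described above.
# All item names and ratings are hypothetical, for illustration only.

def rank_order(ratings):
    """Return item names sorted from highest to lowest mean rating."""
    return sorted(ratings, key=ratings.get, reverse=True)

# Mean approval ratings (0-100 scale) for items retained across two surveys.
earlier = {"item A": 72, "item B": 64, "item C": 55, "item D": 41}
later   = {"item A": 74, "item B": 61, "item C": 57, "item D": 43}

# Check 1: absent a major relevant event, retained items should rank
# in the same order in both surveys, despite small rating fluctuations.
stable = rank_order(earlier) == rank_order(later)
print("Rank order stable:", stable)

# Check 2: if one item is the negative of another, its disapproval
# rating should be close to the other item's approval rating.
approval_of_item = 68        # e.g., approval of "adopt policy X"
disapproval_of_negated = 65  # e.g., disapproval of "do not adopt policy X"
consistent = abs(approval_of_item - disapproval_of_negated) <= 5  # assumed tolerance
print("Negation check passes:", consistent)
```

If either check fails, that flags a possible wording problem or a real shift in opinion worth investigating before the new survey is reported.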

(2) Using the debate format, which shows how people change, or don't change, their opinions as more information is provided and the question is re-asked.

An interesting example of the debate format illustrating a little-known phenomenon, the dynamic equilibrium of the public's beliefs, is given in Column #19. Further examples of the variations in the debate format appear in Chapter 4 of "Spot-the-Spin." To find them, search on "debate."
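The tabulation behind a debate-format question can be sketched simply: the same question is asked before and after arguments are presented, and the shift in each response category is computed. The figures below are invented for illustration, not from any actual survey.

```python
# Hypothetical tabulation of a debate-format question (percentages).
# The question is asked, pro and con arguments are read, then it is re-asked.
initial  = {"favor": 48, "oppose": 39, "not sure": 13}  # first asking
informed = {"favor": 55, "oppose": 36, "not sure": 9}   # after the arguments

# Shift in each category between the two askings.
shift = {category: informed[category] - initial[category] for category in initial}
print("Opinion shift after debate:", shift)
```

A small net shift, or a return toward the initial distribution on re-asking, is the sort of pattern that bears on the "dynamic equilibrium" of beliefs mentioned above.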