Thursday, May 10, 2007

Plus or Minus Three Percent is Six Percent of Nail-biting Uncertainty

SENATOR RODOLFO BIAZON was talking to ABS-CBN News anchor Pinky Webb at noontime today and reminded everybody that his 2004 Senate victory, claiming the 12th and last seat contested in that election, was achieved with a razor-thin 6,000-vote margin over Robert Barbers. In an election of 30 million voters, that is one part in 5,000, or 0.02 percent. You can't call a race like that with surveys that have a plus or minus three percent margin of error...like the latest SWS survey of the Senatorial Horse Race in the homestretch...

The most recent PDI-SWS poll measures voter preferences in the Senate race just days ahead of the May midterm elections. The table at left shows how the composition of the so-called Magic 12 candidates has evolved over the last four SWS surveys. You can easily spot the names that have been in the winner's circle from the beginning. However, as Election Day nears, the Undecided begin to decide, and even the Decided can easily change their minds. And since each Social Weather Stations survey of voter preference involves a random sample of just 1,200 respondents, there is a statistical sampling uncertainty or "margin of error," numerically equal to plus or minus the reciprocal of the square root of 1,200, or plus or minus 2.89 percent (about 3 percent), attached to every statistic measured by the survey. For example, when the survey reports that Loren Legarda topped the May 1-2 SWS-PDI survey in the Senate race with 59% of the respondents saying they would vote for her, the pollster and the media are justified in saying that if the elections had been held when the survey was taken, the percentage of voters for Loren would be between 56% and 62%.
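The plus-or-minus-three-percent figure quoted above can be reproduced directly: 1/sqrt(n) is the conservative 95% margin of error for a survey proportion, i.e. the worst case of 1.96 × sqrt(p(1−p)/n) at p = 0.5, rounded up slightly. A minimal sketch of the arithmetic, using the 1,200-respondent sample and the 59% Legarda reading from the survey:

```python
import math

def margin_of_error(n):
    """Conservative 95% margin of error for a sample of n respondents: 1/sqrt(n)."""
    return 1 / math.sqrt(n)

n = 1200
moe = margin_of_error(n)
print(f"Margin of error for n={n}: +/- {moe:.2%}")  # about 2.89 percent

share = 0.59  # Legarda's reported preference in the May 1-2 survey
low, high = share - moe, share + moe
print(f"A 59% reading implies roughly {low:.0%} to {high:.0%}")
```

Note that this is the uncertainty attached to each single percentage; the gap between two candidates is even noisier, which is what makes calling a 0.02-percent race like Biazon vs. Barbers impossible from a poll.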

But I think a sample size of just 1,200 respondents produces statistics that are much too coarse to make firm predictions about who exactly will compose the Magic 12. Although the first six or so places may be said to be quite firmly occupied by their present tenants, there is an awful lot of uncertainty below that statistical level. It would be wrong to say that anyone polling more than 3% below the current No. 12 candidate has no chance of breaking into the circle. Consider the performances in the last period of Juan Miguel Zubiri and Sonia Roco. I note with some ill-disguised gladness that in the May 1-2 survey, it looks like Ed Angara and Joker Arroyo are fighting it out for the last two seats, and both could in fact be displaced by hard-charging candidates from below. Even Antonio Trillanes is within striking distance, in my opinion, and cannot be counted out, along with Migs Zubiri and Sonia Roco.
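The churn near the cutoff can be illustrated with a small Monte Carlo sketch. The preference shares below are purely hypothetical, not actual SWS figures; the point is only that with n = 1,200, candidates a few points apart keep trading the last seats from one simulated survey to the next:

```python
import math
import random

random.seed(7)

# Hypothetical shares for four candidates bunched near the Magic 12 cutoff
# (illustrative numbers only, not actual SWS survey figures)
true_shares = {"No. 11": 0.34, "No. 12": 0.33, "No. 13": 0.31, "No. 14": 0.30}
n = 1200        # respondents per survey
trials = 5000   # simulated surveys

seat_counts = {name: 0 for name in true_shares}
for _ in range(trials):
    # Normal approximation to binomial sampling noise on each measured share
    measured = {name: random.gauss(p, math.sqrt(p * (1 - p) / n))
                for name, p in true_shares.items()}
    # The top two of these four claim the last two Senate seats
    for name in sorted(measured, key=measured.get, reverse=True)[:2]:
        seat_counts[name] += 1

for name, count in seat_counts.items():
    print(f"{name}: claims a seat in {count / trials:.0%} of simulated surveys")
```

Even the hypothetical No. 14, a full four points behind, grabs a seat in a noticeable fraction of the simulated surveys, which is exactly why no one below the line can be counted out.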

There is of course another consideration. No matter how many respondents the pollsters use, there is a natural time limit involved, which means there is no way for the surveys to capture fast-changing developments in voter preferences as election day nears, when the Undecided decide and even the Decided can easily change their minds.

Calling a Presidential Race is actually easier for the pollsters because there is only one winner and not twelve.

A survey is based on a sample of voters, while an election is based on the entire population of voters. The science of statistics is all about using a sample to learn the characteristics of the population from which it is drawn.

Techniques borrowed from marketing, such as the key ideas of random sampling and statistical inference, have been successfully used by public opinion pollsters to track and report upon political contests, such as elections or referendums, as well as to accurately predict their outcomes. Mahar likens the SWS to a kind of listening post or social observatory and compares Public Opinion Polling to the rest of mainstream journalism in order to explain how such empirical social research, which is really a form of "market research," is actually financed.

"Opinion surveys are scientific instruments for mass communication - not for disseminating, but for LISTENING to the masses...A media news company disseminates both unpaid news and paid ads to the people. The ads don't buy the news; they only finance it...SWS listens to the people's voices, on both unpaid and paid topics...The commissioned projects don't buy the election survey standings; they only finance them."

As one of its principal founders, Mahar Mangahas is justifiably proud of the Social Weather™ Stations (SWS) and its scientific accomplishments, which have gained international recognition. He lays down the gauntlet for any would-be public opinion pollster--

"The global litmus test of sample survey quality is ability to predict an election. If it weren't for elections, in fact, there wouldn't be regular demonstrations that the science of statistics works for sample surveys about people's attitudes and intentions."


It cannot be denied that SWS has accurately predicted the winners and the winning margins in numerous national and local elections over the years. Its well-deserved reputation for conducting scientific voter preference polls has been earned the hard way: by conducting randomly sampled surveys.

Public Opinion Polling produces a very special kind of information, quite analogous to the physical weather information of the PAGASA weather bureau, but pertaining to trends in public opinion itself, often in response to national events such as elections or other major news events. Pollsters like SWS have established the notion that Public Opinion itself is a measurable quantity. As such, the surveys and their results have acquired a newsworthiness with commercial, journalistic, social and political value to the pollsters, to others in the Mass Media, as well as to the subjects and objects of the polls. Having assiduously built up a well-deserved scientific reputation over many years of hard work and hitting the statistical bullseye most of the time, the survey results of the SWS are deemed by its subscribers and customers to have diagnostic as well as predictive uses. One can only imagine how many vainglorious ambitions have been saved from folly and penury after seeing the cold hard figures of a privately commissioned SWS survey. Conversely, how much vainglory has been stoked by the same, or disasters wrought, how are we to know? Nonetheless, the professionalism of the SWS has made it into a sustainable business AND a scientific research institution, a "social weather observatory" if Mahar likes that term better...

SWS is a research entrepreneur. It is an enterprising non-profit -- a term used in the Harvard Business Review -- or, if you like, a business-like non-business. It is misleading to term it simply as "a business" or "a company" or "a firm". A good generic term is "institute." SWS is venturesome. It gathers data on topics without earmarked funding. It deliberately focuses on critical gaps in data on meaningful development, even if the topics are un-commissioned - in particular, the data gaps on poverty, hunger, governance, and opinions on important public issues like charter change. SWS will definitely take up anything that could be tested in a referendum or an election.

Now let me make a point about the last statement above. I think that WHEN a pollster like SWS takes up anything that will be tested in a referendum or election, the survey results usually attain the advertised statistical accuracy, and the scientific value of the survey measurements is of the highest quality available. But WHEN any survey, no matter how scientifically conceived and carried out, seeks to measure Public Opinion about something that will NOT be tested in a referendum, an election, or some other similar universally experienced event, the scientific VALUE of the survey measurements, whether diagnostic or predictive, is far less than that of surveys that probe real-world events and reactions.