At RCP’s latest poll page there are all these RT Strategies/CD polls, which all tend to tilt leftward of the other polls. Look at any close race and you will see RT Strategies/CD among the most left-leaning polls in the lot – here, for example, where the race would not even be close if not for the RTS/CD polls. And when you look at the internals, they sometimes have huge biases toward Dems (and sometimes toward Reps). But no matter what race you look at, RTS/CD results stand out. And now I know why. The process used by RTS/CD is not a validated process:

The poll was conducted Oct. 24 to 26 using interactive voice recognition technology over the telephone.

Majority Watch is a project of two polling companies: RT Strategies and Constituent Dynamics.

This is a variant on the Rasmussen process, which I believe uses recordings and touch-tone responses versus voice recognition software. But as we all know, voice recognition is a bit buggy. That is why most of us give up on those response systems and just try to get a human operator. I would take all RTS/CD results with a huge grain of salt. Their statistical error is probably swamped by the error in their methodology and voice recognition system. They even illustrate the problem in this article by comparing their results to a poll taken at about the same time using conventional methods:

In August, the poll had Kellam up by eight points – 51 percent to 43 percent. But then, earlier this month, the poll had Drake up by two points – 48 percent to 46 percent.

The most recent Majority Watch poll was taken about the same time as one commissioned by The Virginian-Pilot, pilotonline.com and WVEC-TV.

That poll, released last week, was conducted by Mason-Dixon Polling and Research Inc. of Washington. It showed that the race was close, with the candidates about even. Drake had 46 percent to Kellam’s 44 percent, with 10 percent undecided and a margin of error of plus or minus 5 percentage points.
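As a rough sanity check on that reported margin of error: for a simple random sample, the standard worst-case margin at 95 percent confidence is z·√(p(1−p)/n) with p = 0.5 and z ≈ 1.96. The Mason-Dixon article does not state its sample size, so the sizes below are illustrative, not taken from the poll. A minimal Python sketch:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case sampling margin of error for a proportion at 95% confidence.

    n: sample size, p: assumed proportion (0.5 is the worst case),
    z: critical value for the confidence level (1.96 for 95%).
    """
    return z * math.sqrt(p * (1 - p) / n)

# Illustrative sample sizes (the poll's actual n is not given in the article):
for n in (400, 500, 1000):
    print(n, round(100 * margin_of_error(n), 1))
# n = 400 -> about +/-4.9 points; n = 500 -> about +/-4.4; n = 1000 -> about +/-3.1
```

Note this formula covers only random sampling error; it says nothing about the methodological error (response rates, voice-recognition misreads) that the post is complaining about.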

RCP shouldn’t even be including polls based on this methodology yet, since there is limited if any validation that it can measure responses accurately.

One Response to “Unverified Polling Practices”

This doesn’t surprise me. The polls have been all over the place in many of the close races. The fundamental problem with RCP is that it averages these polls with no regard for quality or accuracy. We’ll see on Nov. 7 how good these phone methodologies are… I think they’ll be found to be worthless. Could someone please explain how a sample of 500 people who (for whatever reason) bother to respond to a stupid phone poll is somehow representative of a diverse population of millions?!?