With no manufactured outrage to hammer Mitt Romney at the moment, liberal journalists are now eagerly touting a series of polls which appear to show President Obama pulling away from the GOP nominee in several key states.

Unfortunately, these polls rely on samples that are skewed tremendously leftward, with far more Democrats than Republicans, and as such they are unlikely to be good predictors of actual Election Day turnout. Do the pollsters themselves actually believe in their own samples, though? At least one appears not to.

Interviewed last month by conservative talk show host Hugh Hewitt, Peter Brown, assistant director of the Quinnipiac polling operation, was particularly squeamish under tough questioning from Hewitt about sampling in a poll Quinnipiac had released showing Democrats with a 9-percentage-point advantage in the state of Florida.

In the conversation, Brown defended Quinnipiac’s sampling techniques but admitted that he did not believe that Democrats would outnumber Republicans to that degree in Florida come November. Pressed by Hewitt, the pollster said he believed that was a “probably unlikely” scenario. Instead, Brown kept saying that he thought his poll was an accurate snapshot of reality at the time.

“What I believe is what we found,” he insisted while also touting his organization's record of polls closer to actual elections.

Unfortunately, this cavalier attitude toward accuracy is widespread throughout the polling industry. As NewsBusters noted in June, exit polls, which rely on far larger samples than those conducted by Quinnipiac and others, have long been known to oversample Democrats, sometimes drastically. Sadly, the poor track record of many pollsters is something most people barely know anything about. As such, it is one of the media’s “dirty little secrets,” since Americans certainly won’t hear about it from the press.

Despite not believing that Democrats would have a 9-point advantage, Brown defended his organization, claiming that he and his colleagues were not intentionally trying to skew their sample:

“We didn’t set out to oversample Democrats,” he protested. “We did our normal, random digit dial way of calling people. And there were, these are likely voters. They had to pass a screen.”

But what if that screen is simply not enough? The 2012 presidential election is unlikely to have an electorate which is similar to the ones before it. In the 2008 election, young and black voters turned out in record numbers and voted in even higher percentages for Obama. As specific surveys of these two voter groups have shown, however, both are dispirited this time around and are less likely to turn out for Democrats.

This point is particularly crucial given that the electorates in the years since 2008 have skewed much more Republican. It could be argued that these were off-year elections and thus less likely to draw out blue-collar and college-age Democrats, but ultimately no one knows today what the party breakdown will be on November 6.

That’s why it’d be best for pollsters like Peter Brown to double-check their work the way that Scott Rasmussen does, against a running party ID poll, especially considering that, by Brown’s own admission, Quinnipiac’s process for determining who will actually vote is “not a particularly heavy screen.”

A partial transcript of this highly illuminating interview is provided below, courtesy of the Hewitt show. Please see this link for the complete discussion. (Hat tip to Da Tech Guy, who has more on the sampling controversy.)

HUGH HEWITT: Why would you guys run a poll with nine percent more Democrats than Republicans when that percentage advantage, I mean, if you’re trying to tell people how the state is going to go, I don’t think this is particularly helpful, because you’ve oversampled Democrats, right?

PETER BROWN: But we didn’t set out to oversample Democrats. We did our normal, random digit dial way of calling people. And there were, these are likely voters. They had to pass a screen. Because it’s a presidential year, it’s not a particularly heavy screen.

HEWITT: And so if, in fact, you had gotten a hundred Democrats out of a hundred respondents that answered, would you think that poll was reliable?

BROWN: Probably not at 100 out of 100.

HEWITT: Okay, so if it was 75 out of 100…

BROWN: Well, I mean…

HEWITT: I mean, when does it become unreliable? You know you’ve just put your foot on the slope, so I’m going to push you down it. When does it become unreliable?

BROWN: Like the Supreme Court and pornography, you know it when you see it.

HEWITT: Well, a lot of us look at a nine point advantage in Florida, and we say we know that to be the polling equivalent of pornography. Why am I wrong?

BROWN: Because what we found when we made the actual calls is this kind of party ID.

HEWITT: Do you expect Democrats, this is a different question, do you, Peter Brown, expect Democrats to have a nine point registration advantage when the polls close on November 6th in Florida?

BROWN: Well, first, you don’t mean registration.

HEWITT: I mean, yeah, turnout.

BROWN: Do I think…I think it is probably unlikely.

HEWITT: And so what value is this poll if in fact it doesn’t weight for the turnout that’s going to be approximated?

BROWN: Well, you’ll have to judge that. I mean, you know, our record is very good. You know, we do independent polling. We use random digit dial. We use human beings to make our calls. We call cell phones as well as land lines. We follow the protocol that is the professional standard.

HEWITT: As we say, that might be the case, but I don’t know it’s responsive to my question. My question is, should we trust this as an accurate predictor of what will happen? You’ve already told me there…

BROWN: It’s an accurate predictor of what would happen if the election were today.

HEWITT: But that’s, again, I don’t believe that, because today, Democrats wouldn’t turn out by a nine point advantage. I don’t think anyone believes today, if you held the election today, do you think Democrats would turn out nine percentage points higher than Republicans?

BROWN: If the election were today, yeah. What we found is obviously a large Democratic advantage.

HEWITT: I mean, you really think that’s true? I mean, as a professional, you believe that Democrats have a nine point turnout advantage in Florida?

BROWN: Our record has been very good. You know, Hugh, I…

HEWITT: That’s not responsive. It’s just a question. Do you personally, Peter, believe that Democrats enjoy a nine point turnout advantage right now?

BROWN: What I believe is what we found.

Update 17:40. Via Jim Geraghty, I found a National Journal article in which Brown's boss, Doug Schwartz, defends not weighting poll results for party identification via a strawman argument:

"If a pollster weights by party ID, they are substituting their own judgment as to what the electorate is going to look like. It's not scientific," said Doug Schwartz, the director of the Quinnipiac University Polling Institute, which doesn't weight its surveys by party identification. [...]

Schwartz, whose institute conducts polls in battleground states for CBS News and The New York Times, asserts that pollsters who weight according to party identification could miss the sorts of important shifts in the electorate that could be determinative.

"A good example for why pollsters shouldn't weight by party ID is if you look at the 2008 presidential election and compared it to the 2004 presidential election, there was a 7-point change in the party ID gap," Schwartz said. Democrats and Republicans represented equal portions of the 2004 electorate, according to exit polls. But, in 2008, the percentage of the electorate identifying as Democrats increased by 2 percentage points, to 39 percent, while Republicans dropped 5 points, to 32 percent.

"There are more people who want to identify with the Democratic Party right now than the Republican Party," he added.

There are several problems with Schwartz’s argument:

No one is asserting that the party distribution in any poll should exactly match the exit polls conducted in the previous electoral cycle. This is a strawman.

While it is true that the emergence of the Tea Party movement has spurred dissatisfaction with the Republican Party among those most likely to vote for it, such voters are more likely to identify as independent rather than Republican. If that is the case, the D-R split is important, but so is the D-R-I split. Unfortunately, many recent media polls have featured smallish numbers of independents as well as Republicans.

Schwartz claims that it is unscientific to use party identification as a means of determining “what the electorate is going to look like,” and he is correct in that regard. Party identification is a fluid characteristic; people can and do change how they identify from one election to the next. That being said, the means by which a pollster identifies who is a “likely voter” are similarly arbitrary, which is why almost all survey companies keep their methods for doing this a closely guarded secret.

While it is somewhat fluid, party identification can be the differentiator that helps a pollster determine whether or not his or her sample is actually representative. Simply having a large sample that is representative on fixed characteristics like location, income, race, and ethnicity will not necessarily yield more accurate data. This has been demonstrated repeatedly by the many failures of exit polling to accurately predict actual vote totals. If there is indeed a greater willingness among Democrats to respond to pollsters’ questions (which exit polling data indicate is the case), then a truly scientific polling company needs to take this into account.
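To see why the party mix of a sample matters so much, consider a simple hypothetical. All of the numbers below are invented for illustration; they are not Quinnipiac’s actual figures. The sketch assumes made-up support levels within each party group and shows how the same respondents produce a noticeably different topline when the sample’s D+9 mix is reweighted to an assumed even-turnout mix:

```python
# Hypothetical sketch of post-stratification weighting by party ID.
# Every number here is made up for illustration purposes only.

def topline(support, party_mix):
    """Candidate support implied by per-party support levels
    combined with a given party-ID mix (shares sum to 1)."""
    assert abs(sum(party_mix.values()) - 1.0) < 1e-9
    return sum(support[p] * party_mix[p] for p in support)

# Hypothetical support for Candidate A within each party group.
support = {"D": 0.90, "R": 0.07, "I": 0.45}

# Observed sample mix: D+9 (40% D, 31% R, 29% I) -- hypothetical.
sample_mix = {"D": 0.40, "R": 0.31, "I": 0.29}

# Assumed turnout model with Democrats and Republicans even.
even_mix = {"D": 0.355, "R": 0.355, "I": 0.29}

raw = topline(support, sample_mix)   # unweighted result
adj = topline(support, even_mix)     # reweighted to even turnout
print(f"raw: {raw:.3f}, adjusted: {adj:.3f}")
# raw: 0.512, adjusted: 0.475
```

Under these invented numbers, the D+9 sample shows Candidate A at about 51 percent, while the same per-party support levels under an even-turnout assumption show roughly 47.5 percent, nearly a four-point swing driven entirely by the assumed party mix. That gap is the entire dispute between Hewitt and Brown in miniature: the raw numbers may faithfully describe who answered the phone without describing who will vote.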
