Set against this context, it is not unreasonable to ask if the polls could be wrong. But based on the evidence, it would be unreasonable to conclude that the polls are giving us a qualitatively incorrect impression of how the election is shaping up.

To be sure, different pollsters will generate different estimates of the same quantity of interest. Even with large samples (such that sampling error is driven close to zero), we'd see differences across pollsters due to differences in methodology, question wording, question order, response formats, etc.
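The sampling-error component, at least, is easy to quantify. A minimal sketch of the familiar 95 percent margin-of-error calculation for a proportion (the sample sizes here are illustrative, not from any particular poll):

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p estimated from a
    simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Sampling error shrinks as n grows, but never reaches zero...
print(round(100 * margin_of_error(0.5, 800), 1))   # ~3.5 points for n = 800
print(round(100 * margin_of_error(0.5, 3000), 1))  # ~1.8 points for n = 3,000
```

...and, crucially, no amount of extra sample size removes the non-sampling differences (methodology, wording, question order) that produce house effects.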

Current estimates of house effects for the more prolific pollsters in the Pollster database are shown below, in the form of a "caterpillar plot." Each plotted point is our best guess of the house effect of the indicated pollster; the horizontal lines span a 95 percent credible interval, the Bayesian counterpart of a margin of error. For pollsters that appear more often in the data (e.g., Rasmussen) and/or report bigger sample sizes, we're able to estimate the house effect more precisely.

The house effects are estimated subject to a "zero on average" constraint. That is, we assume that averaged across the polling industry, the polls "get it right." This is perhaps a little unsatisfying, but unavoidable. There just isn't a known, objective truth against which to calibrate our modeling. Put differently, we're trying to estimate an unobserved level of voter support at the same time as we're trying to estimate house effects. Assumptions of some kind are required.
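To see how the constraint identifies the house effects, here is a toy sketch with made-up numbers. The real model estimates a level of support that evolves over time jointly with the house effects; this sketch pretends the underlying level is constant, purely to show the "zero on average" step:

```python
from statistics import mean

# Made-up polls: (pollster, reported Obama share of two-party vote).
polls = [("A", 49.0), ("A", 48.5), ("B", 51.0), ("B", 50.5), ("C", 50.0)]

pollsters = sorted({name for name, _ in polls})
grand = mean(share for _, share in polls)
# Raw effect: how far each pollster's average sits from the overall average.
raw = {p: mean(s for n, s in polls if n == p) - grand for p in pollsters}
# "Zero on average" constraint: recenter so the house effects sum to zero,
# i.e., assume the industry as a whole gets it right.
offset = mean(raw.values())
house = {p: e - offset for p, e in raw.items()}
print(house)  # A ~ -1.08, B ~ +0.92, C ~ +0.17
```

The recentering step is the whole point: without it, we could shift every house effect up or down by a constant and compensate by shifting the estimated level of support, and the data could never tell the difference.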

One consequence is that we're not able to directly test the assertion that the polls are collectively biased towards one candidate or the other, at least not until the actual election occurs.

What we can do is assess biases of pollsters relative to one another. There is no "smoking gun" when we examine these relative house effects. As the figure makes clear, there are some big pro-Obama house effects out there, particularly before pollsters started using likely voter filters.

PPP and SurveyUSA were generating RV two-party voting intention estimates that were too favorable to Obama, by just over a percentage point; that gap has narrowed a little since they shifted to filtering for likely voters.

Rasmussen's house effect is precisely estimated because they contribute so many polls to the data set; relative to the industry average, Rasmussen runs about 1.3 points in a pro-Romney direction, on average (less than what many observers think). Gravis, Purple Strategies, Mason-Dixon and ARG also generate numbers that run in a pro-Romney direction, with the Gravis house effect particularly large (about 2 percentage points on average). Gallup's RV numbers are close to the middle of the pack, as are Shaw, Anderson Robbins LV (or RV, for that matter), We Ask America and YouGov.

These house effects simply don't suggest that something is vastly wrong this year. Rasmussen's numbers are more pro-Romney than the industry-wide norm. But only by a point or so.

So try this. Momentarily assume that "Rasmussen is truth." The rest of the industry is then pro-Obama by a little over a percentage point, so give back a point or so on every poll you've seen over the last couple of weeks. North Carolina might flip back into lean-Romney, and Virginia back to toss-up.
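That thought experiment is simple arithmetic. A sketch with hypothetical state-level Obama two-party shares (the numbers below are invented for illustration, not actual poll averages):

```python
# Treat Rasmussen as truth: subtract the rest of the industry's relative
# pro-Obama lean (~1.3 points) from Obama's two-party share in each state.
shares = {"North Carolina": 50.7, "Virginia": 51.3, "Ohio": 52.5}  # hypothetical
shifted = {state: s - 1.3 for state, s in shares.items()}

for state, s in shifted.items():
    # Crude classification: within half a point of 50-50 is a toss-up.
    status = "lean Obama" if s > 50.5 else "lean Romney" if s < 49.5 else "toss-up"
    print(f"{state}: {s:.1f} ({status})")
```

A couple of close states move, but anything sitting at 52 or better for Obama stays on his side of the ledger even after the shift.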

But the big picture remains unchanged: Obama is leading comfortably. That is hard for some to accept, but there just isn't enough bias in the polls to change that conclusion. The "polls are skewed" take on this campaign is a fantasy.

Bill McInturff and Peter Hart conduct polling for NBC and The Wall Street Journal. ORC conducts polling for CNN. Anderson Robbins Research and Shaw & Company Research conduct polling for Fox News.