A bipartisan Lifetime poll released this week made the rounds for showing women "up for grabs." Obama's 11-point lead among women was called "lackluster" because it fell a point short of a majority. To me, this sounds much like the "Obama can't close the deal" Republican talking point. In this critique, Obama should be performing as well as an incumbent even though this is an open-seat race, and if he's not, he must somehow be weaker than McCain, even when McCain is trailing. It strains credulity.

In fact, Obama's lead among women is comparable to past elections, as the national exit polls in the graph below show (for this purpose, 1996 Dole and Perot support are combined). If Obama's support is lower, it's because, with 10% undecided and presumably 3% voting for a third-party candidate (the polling release is unclear), the subtotal of 87% is lower than the 100% in exit polls. If, as the Republican pollster said, Obama is underperforming with 49% (compared to 54% in 2000), then McCain is also underperforming with 38% as opposed to 43%.

A gender gap update

As I wrote last week, Obama's gender gap is currently at the high end of what we've seen in past elections. As one commenter correctly noted, Obama's 10-point gender gap from July 21-27 had indeed increased dramatically since June and was reaching historic highs. But I didn't express alarm because I wasn't convinced the increase would continue. Indeed, an update to our Gallup gender gap graph shows that to be true.

Another commenter wondered what was causing the fluctuation in Obama's gender gap: Obama's support among women, or among men. The chart below shows both Obama's and McCain's support by gender. And, in fact, Obama's support among women is (slightly) the most volatile.

However, by volatile, I mean a fluctuation of four points, compared to a fluctuation of two or three points for the other groupings. Now, four points can obviously mean a lot on Election Day, but this far out, in a national survey (as opposed to battleground state analysis), "slightly more volatile" is as far as I'm willing to go when talking about Obama's support among women. "Lackluster" it most certainly is not.
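The "volatility" I'm describing is simply the spread between each series' highest and lowest readings over the period. A quick sketch of that comparison (the weekly values below are illustrative placeholders, not Gallup's actual figures):

```python
# Volatility here means the gap between the highest and lowest readings
# in each candidate-by-gender series. These weekly values are made-up
# placeholders chosen to mirror the 4-point vs. 2-3-point pattern above.
series = {
    "Obama/women":  [51, 53, 50, 54],
    "Obama/men":    [44, 45, 43, 45],
    "McCain/women": [38, 39, 40, 38],
    "McCain/men":   [45, 47, 46, 44],
}

# Range (max minus min) for each grouping.
fluctuation = {name: max(vals) - min(vals) for name, vals in series.items()}
print(fluctuation)
# {'Obama/women': 4, 'Obama/men': 2, 'McCain/women': 2, 'McCain/men': 3}
```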

Here are a few short but relevant updates to topics covered earlier in the week that I did not want to get lost at the bottom of older posts:

Yesterday, here and in my column, I looked closely at the small percentage (10%) of 18-to-29-year-olds among the "likely voters" in the most recent USA Today/Gallup poll. Later, I also noted the different approach to modeling likely voters taken by the recent Time/AbtSRBI poll that appears to reduce the volatility in these early numbers.

One thing we overlooked in the Time poll: The self-identified "registered" voters included an even smaller percentage of 18-to-29-year-olds (9% - see QF1) than the "likely voters" in the USA Today/Gallup survey (10%), and six points fewer than the self-ID'd registered voters in the Gallup survey (15%).

Earlier in the week, I also pointed to some data from the News Index surveys by the Pew Research Center to make the point that most voters in July are not following the campaign as a jury follows a trial. This passage in the CBS News analysis of their follow-up survey of uncommitted voters makes the point even more clearly:

One possible reason the uncommitted voters haven’t changed much: they’re paying much less attention to the campaign in the last few weeks.

When asked in mid-July how much attention they’d been paying to the 2008 campaign, generally, 45% said they’d paid a lot and just 14% said not much or none. When asked in this poll how much attention they’d been paying in the last few weeks, only 18% reported paying a lot lately.

Finally, University of Wisconsin-Milwaukee political science professor Tom Holbrook was thinking along the same lines as I was on Monday morning regarding the Washington Post-Kaiser-Harvard survey of low wage workers. Apologies to Tom for not linking sooner.

[Editor's Note: We are pleased to add yet another contributor to the Pollster.com lineup. Kristen Soltis is currently the Director of Policy Research for The Winston Group, a Republican affiliated public opinion research and strategic consulting firm in Washington, D.C. Welcome Kristen!]

The debate over party ID and whether or not weighting for party ID is appropriate has raged on for years, with a very thorough treatment by Mark Blumenthal and others that raises good questions about whether or not party ID is stable at the individual level. Recent media polls with wide ranging spreads between Republicans and Democrats make it all the more appropriate to bring this debate back.

Those on the side favoring weighting say that it is important to compare "apples to apples", to see if more people actually are voting for Obama than last month, or if we just happened to get a sample more favorable to him. On the other side, you have folks who view partisan identification as a question response, not a demographic group, and view weighting by party as methodologically unsound.

Though it's controversial, I believe that weighting for party ID is appropriate if done in a manner consistent with historical norms. I fall into the camp that believes party ID is far more static - that voters can change their preferences and the intensity of their partisanship often, but do not as frequently take the step of giving themselves a new party with which to identify. To me, party ID falls somewhere in between "demographic fact" and "variable question response". Preventing wildly fluctuating data outside historical norms provides a better picture of what real movement is occurring in the electorate on questions like the ballot test.
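For readers who want the mechanics, weighting to a party-ID target is a one-variable post-stratification: each respondent's weight is the target share for their party divided by that party's share of the raw sample. A minimal sketch, with made-up counts and a hypothetical target drawn from the historical norms discussed here:

```python
# Hypothetical sketch of party-ID weighting: adjust a sample whose
# partisan mix drifted from a chosen historical target. All numbers
# are illustrative, not taken from any poll discussed in this post.

def party_weights(sample_counts, target_shares):
    """Weight that moves each party's sample share to its target share."""
    n = sum(sample_counts.values())
    return {p: target_shares[p] / (sample_counts[p] / n)
            for p in sample_counts}

# Raw sample with a 10-point Democratic edge; target is a 4-point edge
# (the widest gap in the last five presidential exit polls).
sample = {"Dem": 450, "Rep": 350, "Ind": 200}
target = {"Dem": 0.39, "Rep": 0.35, "Ind": 0.26}

w = party_weights(sample, target)
weighted_n = {p: sample[p] * w[p] for p in sample}

# Weighted counts now reflect the target mix.
total = sum(weighted_n.values())
shares = {p: round(weighted_n[p] / total, 2) for p in weighted_n}
print(shares)  # {'Dem': 0.39, 'Rep': 0.35, 'Ind': 0.26}
```

In practice pollsters who weight by party typically use a rolling average of recent party-ID readings rather than a single fixed target, but the arithmetic is the same.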

On Election Day, the partisan makeup of the electorate is rarely dramatically different from the election four years prior, and the exit polls from the last twenty years corroborate this. The National Election Study at the University of Michigan showed back in the 1960s that party ID was stable at the individual level, though some have dismissed that finding as no longer applicable today. So let's take a look at more modern-day politics, with a time frame of the last twenty years (presidential elections since 1988). Washingtonpost.com has a great, simple table of this exit poll data.

In 1988, Democrats had a three-point party ID advantage over Republicans (38-35). In 1992, Democrats still had a three-point party ID advantage over Republicans (38-35). In 1996, that advantage increased to four - a shift of one point (39-35). In 2000, Democrats were steady, up by four (39-35), and in 2004 they dropped to even (37-37).

During presidential years, over the last five presidential elections, the biggest party ID gap was four points, and the greatest swing was four points as well.

Arguments can certainly be made that in this environment, Democrats should be expected to have a huge partisan shift in their favor. But note that in 2006, when Democrats clearly found enormous success at the ballot box, the advantage in party ID was only three points (38-35). Polls leading up to the election showed party ID gaps as big as eleven points (Newsweek's poll of Oct. 5-6, 2006), and rarely showed gaps of less than +5 for the Democrats.

On Election Day, as measured by the exit polls, the party ID divide was just three points.

Just because people are voting Democratic doesn't mean they are becoming Democrats.

Truth be told, the decision to use weights for party ID has everything to do with whether or not a pollster views party ID as a "response" or a "demographic", and when it is a fairly stable characteristic of the electorate, I feel comfortable placing it on the spectrum closer to "demographic". It's not perfect, to be sure, but I'd rather compare surveys month to month and observe movement by comparing apples to apples.

However, whether or not weighting is used, the partisan makeup of a poll must factor into the understanding of whether the poll is presenting a realistic piece of information. I certainly don't believe all polls must weight for party ID in order to be useful. But regardless of whether the party ID is organic or weighted, it should still look reasonable.

So let's take a current example that I have trouble with. As "bambi" noted in the comments just this morning about the most recent CBS poll (taking quite a bit of heat, and with some calculations that I do disagree with), after weighting for demographics, the difference between Republicans and Democrats nearly doubles. While the unweighted sample has 317 Republicans and 381 Democrats (out of 1,034 adults), the weighted sample has 284 Republicans and 406 Democrats. That changes a six-point spread (31-37) into a twelve-point spread (27-39).
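The spread arithmetic in that example is straightforward; here is a quick sketch using the counts cited above:

```python
# The CBS example's spread arithmetic, using the counts cited above
# (1,034 adults; unweighted 317 R / 381 D, weighted 284 R / 406 D).

def spread(rep, dem, total):
    """Return (rep_pct, dem_pct, dem_minus_rep) in whole points."""
    r = round(100 * rep / total)
    d = round(100 * dem / total)
    return r, d, d - r

print(spread(317, 381, 1034))  # unweighted: (31, 37, 6)
print(spread(284, 406, 1034))  # weighted:   (27, 39, 12)
```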
Truth be told, if a poll shows a six-point party ID spread, I wouldn't immediately dismiss it. Furthermore, the CBS poll is of adults, not registered or likely voters, so that gives it freedom in my opinion to veer a bit outside the norms. I'm not dogmatically tied to historical precedent, though I think it's very instructive in determining what is "reasonable".

But a twelve point spread? Whether this is a blip or what consistently turns up in the numbers, I have incredible difficulty believing that a margin of that magnitude is an accurate reflection of the electorate. A six-point lead is within the realm of possibility given a really great year for Democrats. But a twelve-point spread is simply outside the bounds of history, given that in twenty years of political change and history, the greatest margin has been four.

Case in point, the comment left yesterday by George Mason University political science Professor Michael McDonald about the latest Time/SRBI poll:

Continuing my war on likely voter models...

Here we have 808 "Registered Likely Voters." Q1 reports 100% of the sample is registered and Q2 reports 90% are "definitely" going to vote and 10% "probably." I guess this means that registered likely voters must have to respond affirmatively to being registered and "definitely" or "probably" to voting. This is different from Gallup, which requires likely voters to have a past history of voting and to express an interest in the campaign. There is no indication of weighting in this survey, so who knows what is going on there.

If I am correct, then this two-question likely voter model seems less biased against young voters and less volatile due to changing interest. This may explain the stability since June in this poll compared with the USAToday/Gallup poll.

Mike's theory seemed plausible, so I sent an email to Mark Schulman, CEO of Abt SRBI, the firm that conducts the Time poll. Here is his full response:

Mike, the Time sample is indeed weighted based upon the entire cross-section sample, as are most election surveys. We retain demographics for the entire sample, registered or not, and weight the entire cross-section sample on the usual Census demographic variables. The 100% you cite is the total of self-reported registered voters who are then asked about likelihood to vote. It does not include unregistered screen outs, who skip straight to the weighting demographics. I see that this can cause confusion. I'm glad that you requested this clarification.

You are correct in that we are not currently using past vote in our model. My objective in the pre-convention polling is to be fairly inclusive in the voter model until after the nominating conventions, when the campaigning starts in earnest. We're likely being a bit too inclusive with the light voter screen, but this still improves upon reporting based upon registered voters. Research on models which include "interest in campaign" and related questions finds variability in the composition of the likely voter profile during early campaign period, leading to some volatility in the estimate. This volatility is reduced as the election approaches.

We always tighten the model a notch after the nominating conventions. To be perfectly honest, I don't claim to have all the answers at this point on which approach we will use to tighten the model. I'm concerned about the likely influx of new voters, young voters, newly registered voters, newly activated voters. In 2004, we had an increase in turnout, even with an incumbent whose job rating was still just below 50% at that time. I don't have a fix at the moment on what to expect in 2008. Our plan is to consult with several leading experts in turnout models later this month and then make some decisions on which approach to take on our turnout model and targets. We're not wedded to any one approach. FYI, for internal purposes, we do break out our horse-race data by likelihood to vote to gauge the impact of smaller vs. larger turnouts.

I do wish to emphasize that we should not strictly abide by past turnout percentages reported by the U.S. Census. Our landline telephone universe is smaller than the Census CPS universe because of undercoverage. Therefore, our target turnout number will be higher than Census turnout trend data would suggest.

Thank you again for requesting this clarification.

If all of this detail confuses you, here is the short version: The Gallup Likely voter model, as applied to the last two USA Today/Gallup polls, uses self-reports of past voting and interest in the election to help identify "likely voters" (in addition to questions about registration and intent to vote). In surveys conducted before the conventions, the Time/SRBI poll does not -- it uses only questions about registration and intent to vote.
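To make the contrast concrete, the two screens can be sketched as filters over respondents. The field names and thresholds below are my own illustrative assumptions, not either firm's actual model:

```python
# Illustrative sketch of the two likely-voter screens described above.
# Respondent fields and cutoffs are hypothetical, not the actual
# Time/SRBI or Gallup implementations.

def time_srbi_likely(r):
    """Two-question screen: registered plus definite/probable intent."""
    return r["registered"] and r["intent"] in ("definitely", "probably")

def gallup_style_likely(r):
    """Adds past-vote history and campaign interest to the screen."""
    return (r["registered"]
            and r["intent"] == "definitely"
            and r["voted_before"]
            and r["interest"] >= 7)  # interest on a hypothetical 1-10 scale

# A young, newly registered respondent with modest interest passes the
# looser screen but fails the stricter one.
voter = {"registered": True, "intent": "probably",
         "voted_before": False, "interest": 5}

print(time_srbi_likely(voter), gallup_style_likely(voter))  # True False
```

This illustrates why a past-vote requirement tends to screen out new and young registrants, and why an interest question injects volatility: interest fluctuates week to week, so the composition of "likely voters" shifts with it.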

Update: Although the Time/SRBI poll uses a simplified likely voter model that should produce less volatility, their sample of registered voters managed to include an even smaller percentage of 18-to-29-year-olds (9% - see QF1) than the "likely voters" in the USA Today/Gallup survey (10%) discussed earlier, and six points fewer than the self-ID'd registered voters in the Gallup survey (15%).

My NationalJournal.com column for the week is now online. It revisits the nearly two-week old USA Today Gallup poll that showed a big difference between registered voters and those selected as "likely voters" with a focus on the age of the likely voter pool.

After you read the column, the following data may be of interest. First, notice that while the most recent poll, conducted in late July, showed a net shift of seven points between registered and likely voters, no such gap existed in the poll conducted just a month before. In mid-June, Obama led by six percentage points among both registered and likely voters.

What makes that difference interesting is the additional data generously provided by Jeff Jones of the Gallup organization showing how respondents in different age groups answered the four questions used to identify likely voters. As noted in the column, younger voters tend to score lower on all four questions. Notice that the percentage of 18-29-year-olds who said they had given "quite a lot of thought" to the election plummeted from June (60%) to July (45%). Similarly, the percentage who rated their chances of voting as a 9 or 10 on a 1-10 scale dropped ten points (from 69% to 59%).

Thoughts anyone?

Update: Nate Silver has additional thoughts. Note that the method he describes as "the most logical way to handle" the likely voter problem is, in essence, the way the CBS/New York Times poll will model likely voters in October. Their most recent release provides results for registered voters, but not likely voters.

Also, see the related comments we just posted from Time/SRBI pollster Mark Schulman.

Nick Panagakis is president of Market Shares Corporation, a marketing and public opinion research firm headquartered in Arlington Heights, Ill.

This post is the fourth installment of a dialogue between pollsters David Moore and Nick Panagakis about the best way to measure and report how many voters are "undecided." See their earlier installments here, here and here.

I agree with much of what David Moore says in his response, including his point that the percentage undecided, as currently reported, seems too low. Where we differ is on terminology. The potential for mind-changing is a lot less than you think.

Yes we are "interested in portraying what the electorate is thinking today". Now that general election national polling is underway, we will be interested in finding (needless to say) whether voters did change their minds about the candidate(s) since the last poll asking how they would vote "if the election were held today".

My issue is about reporting results with low conventional undecideds followed by a large number in the 20%+ range who could still change their minds. It's enough to give readers and viewers whiplash.

In my last post on this subject I hypothesized that such high numbers are not "indecision" as implied by "could change their minds". I said some voters willing to decide on a candidate in a poll won't rule out the possibility that some incident or candidate disclosure, however remote, could lead them to vote otherwise.

The ABC Polling Unit provides some validation of this. Their polls have been asking this question of decided voters since 2004: "Would you definitely vote for ___ or is there a chance you could change your mind and vote for someone else?" This has been asked three times since May this year and eight times in 2004, from June 20 to September 26. This year, "could change your mind" has ranged from 25% to 29%, similar to response levels seen in current polls, dissimilar wording notwithstanding. In 2004, "could change your mind" was 28% in the June 20 poll, then steadily declined to 16% in late September.

But unlike other polls, ABC then probed potential mind-changers by asking "Is there a good chance you'll change your mind, or would you say it's pretty unlikely?" So far this year, about half say "pretty unlikely," as did respondents in the June 2004 polls. July to September 2004 showed another pattern: "pretty unlikely" voters began to consistently outnumber "good chance" mind-changing voters by a ratio of 2 to 1. This could mean that two-thirds of possible mind-changing voters in current polls, if asked their chances of doing so, would rate their chances as pretty remote. Should mind-changing as currently presented be part of any story when the chances of it are so slim? I don't think so. I prefer the ABC qualifier.

Another thought: shouldn't there be some analysis to validate such high "could change their mind" numbers? The analysis could compare polls' stated undecideds and "could change their mind" levels with actual candidate-preference changes from poll to poll and with election outcomes.

Another subject. David mentioned the recent CBS poll. According to their release, they had 12% undecided, which seems reasonable to me. If you go to Pollster.com's national summary you will find many polls with much lower undecideds. However, half of Gallup's higher undecideds shown there are actually "neither" responses, which should not be combined with undecideds in that table. Click the Gallup links. Moreover, the "neither" response is not very meaningful. It would be more precise to replace it with "vote for other" and "won't vote," with non-voters excluded from the base for calculation of voter percentages. All for now.

Bear with me. The post that follows links to three seemingly unrelated items that will hopefully add up to a coherent point about our tendency to over-analyze day to day "change" in polls on the presidential race.

The first is Ellen Gamerman's recent Wall Street Journal feature on "whether people are telling the truth" to pollsters. She reviews the steps well-known political pollsters are taking to check for the "Bradley Effect" (sometimes also called the Bradley-Wilder effect), "the idea that some white voters are reluctant to say they support a white candidate over a black candidate." The piece notes that both CBS and ABC News will be checking whether the race of the interviewer has an impact on vote preferences.

The article also includes a useful review of the work being done by academic survey researchers on whether respondents will be more honest on "self administered" surveys (those without a live interviewer). Don't miss the interactive graphic featuring audio commentary from the researchers on examples of other ways that respondents are sometimes less than honest on surveys. While there are some intriguing new findings (see especially those on "good TV" and "M&M's"), the bulk of the research on this subject warns us to watch out for circumstances where respondents tell us "what they expect [the interviewers] want to hear" (as Tim Byers, a researcher at the Colorado School of Public Health, puts it).

That possibility leads me to a second finding from last week's "News Interest Index" survey from the Pew Research Center. The result that caught my eye received no mention in their analysis, largely because it involved a question they have been tracking on a weekly basis since January that showed no meaningful change last week:

[In the past week] Did you follow news about candidates for the 2008 presidential election very closely, fairly closely, not too closely or not at all closely?

So, taking these results at face value, we know that less than a third of Americans are paying "very close" attention to the presidential race. More than a third (36%) say they are following the campaign "not too closely" or "not closely at all." Now consider the 34% who say "fairly closely" in light of what survey researchers tend to take for granted: respondents sometimes tell us things they think we want to hear. In the context of a survey about how much attention people are paying to the news, some respondents may be exaggerating their attentiveness. I would take the "fairly closely" result with a grain of salt.

Next consider the results from the second and third questions asked on the same survey. Fifty-nine percent (59%) said they heard nothing about Barack Obama in the previous week that made them either more or less favorable to Obama, and 62% said they heard nothing about John McCain that changed their view of him.

These data paint a clear picture for me: Most Americans are paying far less attention to news about the campaign than most journalists, pundits and readers of this site. If we assume that all Americans are following the campaign as a jury follows a trial, we are in error.

We too often expect knee-jerk reactions to events of the day; rarely, in fact, do we see them. With few exceptions public opinion proceeds, instead, by a process known as considered judgment: People obtain information as it develops, evaluate it, let it accumulate to the point that it warrants reconsideration of existing attitudes, and at that point re-evaluate and either maintain or change their views.

Attitudes, this means, are far less flighty or reactive to individual events than is commonly assumed; for the most part they are, actually, rational. Obama's trip, like everything else he's doing - and ditto for John McCain - is therefore about building a case, not about changing daily numbers (which, at this stage, are fundamentally silly).

Combine Langer's description with the fact that many Americans are "obtaining" information about the candidates at a glacial pace, and we should be surprised to see much meaningful change in the polling numbers right now, especially those measuring vote preference.

As Mr. Franklin posted on Monday, the polls in 2004 and 2000 began to shift quite a bit right around now--the 100 days out mark. In 1992 Bill Clinton also started rising significantly in mid-July, after the convention. Kerry's summer swoon started just after his convention, Gore's summer rise started just before his. So are you saying the summer movement in those races was not "meaningful change," or that it was, and we should simply be waiting till the conventions grab our attention to see meaningful movement this year?

I meant mostly the latter. The conventions were earlier in previous years. While convention "bumps" may not endure, voters do pay more attention and the information they receive during conventions is important. Nate Silver made a similar observation last night:

Pundits -- including yours truly -- generally exaggerate the speed with which political news reaches a saturation depth in the American electorate. There are a few exceptions -- debates, conventions, and major victories in the primaries can have measurable effects almost immediately, and certainly within the first 48-72 hours. So can DEFCON-2 level controversies like Jeremiah Wright. But most of the things we write about here, or the National Review talks about, or Keith Olbermann talks about, take a long time to penetrate the electorate if they do so at all.

John McCain won the earned media battle last week because the predominant political discourse centered on the issues of race and celebrity--not the economy, the war and George W. Bush. Any time the focus of this election is about something other than the aforementioned three issues it is good for John McCain. Team McCain didn't just knock Obama off-message; it sent his entire campaign bus careening down a back road.

Count me as one of the few analysts who actually thinks the celebrity ad with Paris Hilton and Britney Spears was a good one. Sure, the execution was a bit awkward, but the net-net is that the images stick and they resonate with a good number of swing voters who worry that Obama lacks the substance to be President. The images fit the preconceived notion that some voters already have in their heads. Any time you can tap into these stored perceptions it is that much easier to get your point across. The ad works because it rings true.

So Obama spent the week counterpunching instead of talking about gas prices and the housing slump. And remember the substantive attack points in the "Celebrity" spot: Obama isn't ready, he hasn't accomplished anything, he has no energy plan and he'll raise your taxes. Pretty darn good bogeymen if you ask me.

New Survey Results: Presidential Ballot Test

Recently we conducted a national survey of 850 registered voters. If you're in a hurry: Obama is currently ahead 40% to 35% (we didn't push respondents to make a choice between the two, which is why we have a large "undecided" contingent of around 16%). We think this is a more accurate reflection of the electorate given the early stage of the election.

Cutting the sample to only likely voters (n=647), however, reduces the Obama lead to just two points (40%-38%). This confirms some of the public polling data (and conventional wisdom) that McCain does better in polls of likely voters--and, indeed, perhaps at the polls on Election Day--than he does in polls of all registered voters. While right now that discrepancy is not large enough to suggest a McCain victory, it seems fair to say that a lead of less than five points for Obama in polls of registered voters--whether national or statewide--may not indicate much of an advantage at all.
Looking at the demographic breakdown of these registered voters, John McCain has a six-point lead among men and Obama has a 14-point advantage among women. The gender gap has widened somewhat since a previous national poll we conducted in May, where McCain was +4 among men and Obama was +10 among women (this reflects a normalization of the race along recent Presidential voting patterns).

Given his struggles to woo older voters away from Hillary Clinton, it is somewhat surprising that Obama is in a statistical dead heat with McCain among voters 65 and older (he actually leads among those ages 55 and older). With Obama continuing to carry all voters under 35 by the wide margin that propelled him to his primary victory, it's natural to wonder where McCain's support comes from.

The answer is middle-aged and older men. The only age/gender categories where McCain leads? Men aged 35-54 (McCain +10), 55-64 (McCain +7) and 65 and older (McCain +17). Of course, in the past these cohorts have been the most likely to make it to the polls on Election Day.

For a couple of weeks now we've been talking about this election as a referendum on Barack Obama (rather than a choice between Obama and McCain). While we'd like to have a few more surveys to confirm this, it appears that--despite the groundswell in Democratic support as measured by party identification--12% of registered voter Democrats remain undecided, compared with 9% of Republicans.

The fact that 12% of Democrats have yet to throw their full support behind their party's most appealing candidate since Bill Clinton is stunning. Furthermore, Republicans have also traditionally had the advantage in turning out their own partisans. For example, VNS exit polls in 2000 show that 91% of Republicans voted for George W. Bush, while "only" 86% of Democrats voted for Al Gore. That five-point edge may seem small, but, as we have seen many times, it can swing an election. Another point of interest that may be a surprise to those who feel swamped by the intensity and persistence of the 2008 election coverage: 28% of independents have yet to make up their minds. This thing is a lot closer than people realize.

Obama's Overseas Trip

While it begins to fade from the news media consciousness, we do have some data to share on the impact of Obama's trip to Europe, Afghanistan and the Middle East. According to our recent survey, 75% of registered voters "definitely read, saw or heard something" about his trip. We then asked those respondents whether "learning about Barack Obama's overseas trip to Iraq, Afghanistan and other countries has made you more confident or less confident in his ability to serve as President, or has it had no impact?" The overwhelming majority (56%) claimed that it had had no impact. (A caveat: it sometimes may take days, weeks or even months for voters to "digest" an event like this, and even then they're sometimes reluctant to admit that it had an impact on their attitudes). Twenty-three percent of these voters said Obama's trip made them "more confident" in his ability to serve as President and 18% said that it had made them "less confident." Among likely voters, the impact of the trip was roughly the same. Of course, the majority of those who claimed the trip had instilled greater confidence in Obama were Democrats. Among undecided voters, 66% said the trip had no impact and just six percent said it had made them "more confident."

We also presented respondents with two statements about Obama's trip and asked them which one they agreed with more. The statements were:

Barack Obama's overseas trip to Iraq, Afghanistan and other countries is a sincere effort on his part to get a first-hand look at conditions in those areas so that he can make informed foreign policy decisions.

Barack Obama's overseas trip to Iraq, Afghanistan and other countries is just a political stunt so that he can have campaign-style photo opportunities with foreign leaders in an effort to look presidential.

Half (49%) thought that his trip was a "sincere effort," approximately one-third (36%) felt that it was a "political stunt," and the rest thought it was either a mix of both or weren't sure. Interestingly, those undecided voters who had an opinion either way were more cynical: slightly more than one-third (36%) felt that it was a "sincere effort," another third (35%) felt it was a political stunt and ten percent felt that it was both (the rest were unsure). So with no apparent bounce in the polls--and most voters claiming to be unmoved--in the end the trip may have been just what it appeared to be: a chance for some photo opportunities.

As we said previously, the trip was a start in the "build up Obama" process. This data suggests that this will need follow-up and reinforcement before it becomes a bankable attribute. At this point, this election is still about Obama and many voters are still unsure about him.

Thanks again to John "Zippy" Zirinsky and Pete Ventimiglia for their efforts on this week's Election Monitor.

Nick Panagakis' response to my column on a different approach to measuring vote choice reflects, I believe, the current conventional wisdom: that a forced-choice vote question is the best predictor of how voters will cast their ballots. This approach, Nick argues, "historically comes close to the actual outcome." Not only that, he "cringes" when he sees pollsters hedge their bets on a poll by saying "candidate A is up by 9 points - but 30% could change their minds." He says that reporting such numbers "devalues polls."

But what is the "truth" of the matter? Are we not interested in accurately portraying what the electorate is thinking "today"? If so, how can we say, as CNN does, that 100 percent of voters have made up their minds more than three months before the election? Or, as Gallup has been telling us for the past two months, that an average of 95 percent of voters have already made up their minds? Or even, as most other pollsters say, that over 90 percent have made a choice?

Pollsters get away with producing such dubious numbers, I think, because most pundits take a schizophrenic approach to the polls. At one moment, they treat the results as though they are the Holy Grail. At the next, they dismiss the numbers as irrelevant at this point in the campaign season, saying that we need to wait until after the conventions before people begin paying attention to the election. Dan Rather's recent column, headlined "Summer polls in the presidential campaign are pure folly," encapsulates this sentiment.

If we are concerned about devaluing polls, we might want to think about giving an accurate portrayal of what the public is actually thinking (or not thinking) weeks and months before an election. The current vote choice question clearly does not reveal the extent of public indecision, and thus, I think, undermines the credibility of polls more generally.

I am not arguing that shortly before election day, in their last pre-election polls, pollsters should not press voters for their choices. I agree that in most elections, even the "undecided" voters have an inkling of whom they will support. Barring last-minute media coverage that favors one candidate or the other, the faint-hearted leanings of these undecided voters usually turn out to be decent predictors of how they will act when they get in the voting booth. (Notable exceptions at the national level occurred in the 1948 and 1980 presidential elections, of course, not to mention the 2008 New Hampshire, South Carolina, and California primaries, among others.)

Still, during the campaign leading up to the election, why should pollsters "cringe" at reporting that a large segment of the population remains undecided? In fact, that's just what CBS News has done, commendably in my view, when it headlined its latest poll results as "Poll: Obama Leads, But Race Fluid." Nick, it seems, would not favor such a headline, nor apparently would most other media pollsters - at least as indicated by their own reports.

There may be better ways to get at voter indecision than asking first whether people have made up their minds. Andy Smith of the UNH Survey Center said he will be experimenting this election season with other approaches, which could include naming the candidates but asking voters whom they expect to vote for in November (not "today"), with the tag line "or haven't you made up your mind yet?" A follow-up question could probe their leanings, but at least up front, the question would explicitly allow undecided voters to say so.

It seems pretty clear that the standard vote choice question sacrifices "truth" about the electorate during the campaign, whatever the question's utility in predicting results right before the election. The research task, I believe, is to find an approach that does not produce misleading results about the state of the electorate during the campaign, while still allowing pollsters to make as accurate predictions as possible right before election day.

The most common description of polls is that they are snapshots, not predictions. A good way to look at that in the 2008 election is to compare the '08 campaign with the two that came before.

The chart above shows the trend estimates for each of the last three presidential campaigns. I'm plotting the estimated margin between the two candidates, Dem minus Rep, for each year.

With 93 days to go until the 2008 election, Obama holds a 3.3 point advantage over McCain, though that has been eroding over the past six weeks. If we put a confidence interval around today's estimate, we get a race that is just barely leaning Democratic.
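To make "a confidence interval around today's estimate" concrete, here is a minimal sketch of the standard calculation for a single poll. The sample size and percentages below are hypothetical, not taken from any poll cited here; the point is that the margin of error on the *gap* between two candidates is roughly double the familiar plus-or-minus-3-point figure quoted for a single percentage.

```python
import math

def margin_moe(p_dem, p_rep, n, z=1.96):
    """95% margin of error for the Dem-minus-Rep gap in one poll.

    Both shares come from the same sample, so the variance of their
    difference is (p1 + p2 - (p1 - p2)**2) / n, not the sum of two
    independent binomial variances.
    """
    diff = p_dem - p_rep
    var = (p_dem + p_rep - diff ** 2) / n
    return z * math.sqrt(var)

# Hypothetical poll: n = 1,000, Dem 47%, Rep 44% -- a 3-point lead.
moe = margin_moe(0.47, 0.44, 1000)
# moe is about 0.059, i.e. roughly +/-6 points on the gap, so a
# 3-point lead in a single poll sits well inside sampling noise.
```

That is why a 3.3-point national advantage reads as "just barely leaning Democratic": only by pooling many polls does the effective interval tighten enough to distinguish a small lead from a tie.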

But what about the future? The dynamics of the next 92 days are all-important for where we stand on November 4. Since we can't foresee those 92 days, let's look at what happened over the same stretch in 2000 and 2004. That gives us a better idea of how much change we might anticipate in the next three months.

In 2004, Kerry slowly built a 2-point lead by this time, and held a small lead through much of the summer. But then the race took a sharp turn, with Bush making a 6-point run and taking a 4-point lead with 50 days to go. Kerry gained back 3 points of that in the polling, but less than 2 points of it in the actual vote, losing by a 2.4-point margin.

In 2000, Bush led in most of the early polls, holding a 6-point lead with 107 days to go. Then Gore moved sharply up, erasing Bush's lead and building a 3-point lead of his own with about 56 days left. Bush promptly reversed Gore's gains with a 6-point move in the GOP's direction, and led by about 3 points over the last three weeks of the campaign. Of course, the 2000 polls were misleading in predicting a Bush win: Gore won the popular vote by 0.6 points.

So far in 2008, Obama has enjoyed a run-up of 5.5 points since his low point in late March. That run is on a par with Bush's in 2004 and with Bush's 6-point rebound in 2000, but still a bit less than Gore's 9-point run that year.

Judging from the dynamics we've seen in the past, it is quite reasonable to expect the current trend to shift by half a dozen points. August and the conventions were periods of substantial change in both previous elections, so if history repeats itself, the next four or five weeks should be pretty interesting.

The bottom line is that neither campaign should be complacent or despondent. There is a lot of time left, and recent history shows that both up and down swings of 6-9 points are entirely plausible.

As a P.S., here are the three campaigns with illustrative confidence intervals around them.

The current 2008 estimate is just barely inside the "lean Dem" range, and will move to toss-up if the current trend continues for another two or three polls.

The 2004 estimate was pretty close to the outcome, which fell well within the 68% confidence interval around the trend.

The polls in 2000 were troubling for pointing to the wrong popular-vote winner, but even there the outcome fell inside the 95% confidence interval. With races as close as the last two, it is worth appreciating just how wide those confidence intervals are.

Our efforts to characterize races rely on the best estimates of those confidence intervals, but it is all too easy to focus on who's ahead and not remember how much uncertainty there is. That uncertainty is both about where the current estimate says the race stands today and about how the race may change in coming weeks. The data here show that unless one candidate builds a bigger lead than either has held so far, the uncertainty remains pretty big.

Note: My trend here is slightly different from the Pollster national trend because I'm working off the difference between candidates rather than each candidate's trend separately, and because I've made 2008 comparable to 2000 and 2004 by using a slightly different amount of smoothing than Pollster's standard estimator this year. None of those differences changes the qualitative picture or shifts the magnitude of the changes I cite above.
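For readers curious what "working off the difference between candidates" means in practice, here is a toy sketch. The poll numbers are invented, and the centered moving average below is only a crude stand-in for a proper local-regression trend estimator; the point is simply that the series being smoothed is the Dem-minus-Rep margin itself.

```python
def margin_trend(polls, window=5):
    """Smooth a series of Dem-minus-Rep margins with a centered
    moving average (a crude stand-in for a local-regression trend).

    polls: list of (days_to_election, dem_pct, rep_pct) tuples,
    assumed sorted from earliest to latest poll.
    """
    margins = [dem - rep for _, dem, rep in polls]
    half = window // 2
    trend = []
    for i in range(len(margins)):
        # Truncate the window at the edges of the series.
        lo, hi = max(0, i - half), min(len(margins), i + half + 1)
        chunk = margins[lo:hi]
        trend.append(sum(chunk) / len(chunk))
    return trend

# Invented polls: a lead drifting from about +5 down to about +2.
polls = [
    (120, 48, 43), (113, 49, 43), (106, 47, 44), (99, 48, 45),
    (92, 46, 44), (85, 47, 45), (78, 45, 44), (71, 46, 44),
]
trend = margin_trend(polls)
```

Because a moving average is linear, smoothing the margin directly coincides with subtracting two share trends fitted with the identical window; with separately tuned local-regression fits per candidate, the two approaches can diverge slightly, which is one source of the small differences noted above.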

How is Barack Obama doing among low income white voters? Quite well, says the headline and lead of a front page story in today's Washington Post. But before leaping to conclusions, we might want to take a closer look at the complete survey questionnaire.

Under the headline "Obama leads, Pessimism Reigns Among Key Group," The Washington Post tells us that Barack Obama "holds a 2 to 1 edge" over John McCain "among the nation's low-wage workers." The Post, in partnership with the Henry J. Kaiser Family Foundation and Harvard University, interviewed 1,350 randomly selected adults under 65 earning $27,000 a year or less and working at least 30 hours a week. Obama's margin was 56% to 27% among all adult respondents, and slightly larger (58% to 28%) among registered voters. But the result getting much attention today came in the second paragraph (emphasis added):

Obama's advantage is attributable largely to overwhelming support from two traditional Democratic constituencies: African Americans and Hispanics. But even among white workers -- a group of voters that has been targeted by both parties as a key to victory in November -- Obama leads McCain by 10 percentage points, 47 percent to 37 percent, and has the advantage as the more empathetic candidate.

Taken at face value, Obama's margins do look strong, even stronger than what John Kerry received four years ago among similar voters, according to exit polling. While I cannot precisely replicate the universe sampled by the Post/Kaiser/Harvard study, the respondent-level exit poll data from 2004 available from the Roper archive get us pretty close. I tabulated results for voters under 65 with incomes under $30,000 a year who said they were employed full time. Those voters supported Kerry by a nearly twenty-point margin (59% to 40%), while the white voters in the subgroup divided almost evenly, 50% for Kerry to 49% for Bush.
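Replicating a universe like this from respondent-level data is just a matter of stacked filters. The sketch below uses invented records and hypothetical field names, not the actual Roper archive layout, but the logic mirrors the tabulation described above: restrict to under-65, under-$30,000, full-time voters, then split the subgroup by race.

```python
# Invented respondent records with hypothetical field names; the
# real Roper exit-poll files use different variable codes.
respondents = [
    {"age": 34, "income": 22000, "fulltime": True,  "race": "white",    "vote": "Kerry"},
    {"age": 51, "income": 28000, "fulltime": True,  "race": "white",    "vote": "Bush"},
    {"age": 29, "income": 18000, "fulltime": True,  "race": "black",    "vote": "Kerry"},
    {"age": 45, "income": 26000, "fulltime": True,  "race": "hispanic", "vote": "Kerry"},
    {"age": 67, "income": 21000, "fulltime": True,  "race": "white",    "vote": "Bush"},   # excluded: 65 or older
    {"age": 40, "income": 41000, "fulltime": True,  "race": "white",    "vote": "Kerry"},  # excluded: income too high
    {"age": 38, "income": 24000, "fulltime": False, "race": "white",    "vote": "Bush"},   # excluded: not full time
]

def in_universe(r):
    """Under 65, income under $30,000, employed full time."""
    return r["age"] < 65 and r["income"] < 30000 and r["fulltime"]

universe = [r for r in respondents if in_universe(r)]

def share(rows, candidate):
    """Fraction of rows whose vote matches the candidate."""
    return sum(r["vote"] == candidate for r in rows) / len(rows)

kerry_all = share(universe, "Kerry")
whites = [r for r in universe if r["race"] == "white"]
kerry_white = share(whites, "Kerry")
```

In real data the subgroup cells shrink fast at each filter, which is why the white low-income slice of an exit poll carries a much wider margin of error than the full sample.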

So a survey showing Obama leading by 10 points among low-income white voters would certainly represent an improvement.

But take a closer look at the complete questionnaire that -- to their credit -- the Post published online. The presidential vote preference question (#36) comes (by my count) 59 items and roughly 15 minutes into the interview. Before asking about presidential vote preference, the survey probed respondents about their personal financial situation, the state of the American economy, and their priorities for the things "the government might do to try to improve people's financial situation." It also listed seven different things respondents or someone in their family might have done "in the past year to make ends meet," and asked if any applied.

They also asked respondents whether their "personal financial situation" had improved or declined since "George W. Bush took office in 2001" (48% said it had declined, 11% said it had improved). And finally, immediately before the presidential vote preference question, they asked:

During the past year, have you or has someone in your family had your overtime or regular hours cut back at work, or not?

During the past year, have you or has someone in your family been laid off or lost your job, or not?

And then they asked which candidate they would be most likely to support. Do we think that priming respondents for 15 minutes about the state of the economy and their own personal financial insecurities would have no impact on their vote preference?

Don't get me wrong. The researchers who designed this study are among the best in the field. The survey itself represents an extraordinary and unparalleled effort to "take a close look" at the lives of low-wage workers "and try to understand how they are faring amidst all the economic changes around them," as Washington Post economics correspondent Michael Fletcher puts it in a companion video analysis. Among other aspects of their rigorous methodology, the pollsters used cell phone interviewing to get an adequate representation of low-wage workers living without landline phones.

However, if the pollsters wanted to measure where presidential vote preferences stand right now among low-income workers, they should have asked the vote preference question up front. The results they obtained tell us something about how low-wage workers might react to a campaign framed entirely around economic and pocketbook issues (a finding that suggests an obvious strategy for Obama). As such, the study's measurement of vote preferences is hypothetical and possibly misleading.