Come next Tuesday night, we’ll get a resolution (let’s hope) to a great ongoing battle of 2012: not just the Presidential election between Barack Obama and Mitt Romney, but the one between the pundits trying to analyze that race with their guts and a new breed of statistics gurus trying to forecast it with data.

In Election 2012 as seen by the pundits–political journalists on the trail, commentators in cable-news studios–the campaign is a jump ball. There’s a slight lead for Mitt Romney in national polls and slight leads for Barack Obama in swing-state polls, and no good way of predicting next Tuesday’s outcome beyond flipping a coin.

In Election 2012 as seen by the stats guys–there are many, though Nate Silver of the New York Times’s FiveThirtyEight blog has been getting most of the attention–the campaign is not a lock, but Obama has a clear advantage. By averaging out the results of state and national polls, considering the accuracy and trends in those polls over past years and figuring in economic and historical factors (or not), the data guys have generally (though not universally) given the President a clear edge: currently 77% in Silver’s model, north of 90% at the Princeton Election Consortium. (Their reasoning boils down to the President’s small but consistent lead in enough battleground state polls to win the Electoral College, plus the historical record of state vs. national polls. I’m oversimplifying here, though.)
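The real models are far more elaborate, but the basic mechanism is simple enough to sketch in a few lines. Everything below is illustrative, not Silver’s actual method: the states, poll margins, and “safe” electoral-vote count are hypothetical, and the margin-to-probability step is a crude stand-in for how a real model handles polling error.

```python
import random

# Toy illustration, NOT Silver's actual model: average each state's polls,
# turn the average margin into a win probability, then run the Electoral
# College many times. All numbers here are hypothetical.

# state -> (list of Obama-minus-Romney poll margins in points, electoral votes)
polls = {
    "Ohio":     ([+2, +3, +1], 18),
    "Florida":  ([-1, 0, +1], 29),
    "Virginia": ([+1, +2, 0], 13),
}
SAFE_OBAMA_EV = 247  # hypothetical electoral votes from non-battleground states

def win_prob(avg_margin, poll_error=4.0, trials=10_000):
    """Crude mapping from an average poll margin to a win probability,
    treating polling error as Gaussian noise on the margin."""
    wins = sum(avg_margin + random.gauss(0, poll_error) > 0
               for _ in range(trials))
    return wins / trials

def simulate(n=10_000):
    """Fraction of simulated elections in which Obama reaches 270."""
    probs = {s: win_prob(sum(m) / len(m)) for s, (m, _) in polls.items()}
    obama = 0
    for _ in range(n):
        ev = SAFE_OBAMA_EV
        for state, (_, votes) in polls.items():
            if random.random() < probs[state]:
                ev += votes
        if ev >= 270:
            obama += 1
    return obama / n

print(f"Obama wins in {simulate():.0%} of simulations")
```

The point of the exercise: small, consistent leads in the right battlegrounds compound into a win probability well above 50%, even when every individual state remains genuinely in doubt.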

Can you call an election by applying math rather than going to rallies and talking to cab drivers and diner customers? As a political-news junkie, I’ve been noticing some passive-aggressive sniping against the data guys by more traditional reporters for a while now. Howard Fineman, for instance, recently tweeted that the Des Moines Register endorsement of Romney made clear for the first time that Obama might lose—though “it’s not scientific or quantifiable by Nate Silver.” Meow!

And in a Politico article by Dylan Byers Monday, that low-simmering catfight became an all-out cat war. Byers suggested that Silver could become “a one-term celebrity” if Obama loses, and quoted media stars like Joe Scarborough scoffing at Professor Science’s efforts to quantify something that, to them, can’t be measured in a spreadsheet. “Nate Silver says this is a 73.6 percent chance that the president is going to win?” Scarborough said. “Nobody in that campaign thinks they have a 73 percent chance — they think they have a 50.1 percent chance of winning.” Silver’s statistics? A “joke,” said Scarborough.

You can read politics into this dispute. It’s undeniable that Silver has become a kind of polestar—or poll star—for nervous liberals, as his rating of Obama’s chances has never dipped below 60% in the general election. Some conservatives, meanwhile—many of whom have criticized the polls as “skewed” toward Democrats this whole election—have insinuated that Silver, once a commentator at Daily Kos, has his thumb on the scales for liberals. (The counter to this being that, politics aside, results are results, and Silver’s have been very strong—if only, at this point, for a few years.)

But there’s also a kind of territorial, John Henry vs. the Steam Drill, Kasparov vs. Deep Blue, man-vs.-machine tone of defensiveness here—a sense that the Hari Seldon–like attempt to quantify mass human reaction through numbers rather than intuition is fruitless, if not just flat-out wrong.

And it’s a familiar one, especially if you’re a sports fan, or at least a movie fan like me. When I tweeted about this phenomenon the other day, fellow TV critic—and much bigger baseball fan than I—Alan Sepinwall noted that this is the exact same kind of reaction that sabermetric baseball analysts got from old-school coaches and scouts, as chronicled in the Michael Lewis book and movie Moneyball.

This is no coincidence. Before Silver was calling elections, he was a leading sabermetrics analyst, writing for Baseball Prospectus. As Moneyball showed us, the idea of choosing players on the basis of number crunching generated a pushback that was truly visceral—in the sense of defending your gut over someone else’s brain. Baseball isn’t calculus! Your computer can’t tell you whether a player has a spark! Likewise, the reaction against Nate Silver and company rings of defending experience, shoe leather and mother wit against algorithms. Your computer can’t feel the public mood shifting in Ames, Iowa!

The baseball scouts threatened by Bill James and his disciples were at least defending a way of life. Their job was simply to identify and place a value on baseball players. The skepticism toward poll analysts comes from political reporters whose jobs have become increasingly all about the horse race—telling us who’s “winning” and “losing” and why—though they don’t necessarily need to be.

Really, there’s no reason that analysts like Nate Silver—should their forecasts prove more reliable over time—need to “replace” political journalists. Instead, they could free up political reporters to do things that are more useful for all of us: reporting on the differences between candidates’ policies, about what government does and doesn’t do, and about how elections can address our problems. If pundits could offload the quantification to the quants, it would be much better all around for the news audience.

Instead, though, it looks like some pundits are looking for an opportunity to delegitimize the poll aggregators and analysts en masse—especially if they give Obama better odds and he loses.

Unfortunately, that would rest on a misunderstanding of odds that their audience may share. If you say Obama has a three-in-four chance of winning, it means that Romney has a one-in-four chance: still quite a decent shot. But herein lies the difference between using these metrics in presidential elections vs. baseball. In a baseball season, you get 162 chances to be right or wrong—the accuracy or inaccuracy of your model can be tested over a number of results. When you have one Presidential election every four years, in the public mind–unfairly, but still–your reputation for being “right” or “wrong” will be grossly overinflated on the basis of relatively few results. (Also, Silver et al. are only as good as their data—they’re not pollsters themselves and so are as vulnerable as any prognosticator if polls are wrong en masse.)
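The sample-size point is easy to make concrete. A quick sketch, using a generic 75% forecast (roughly the range of the figures quoted above) and assuming, for the sake of argument, that the forecast is perfectly calibrated:

```python
import random

# If a forecast of roughly 75% is well calibrated, the "wrong" outcome
# still happens about one time in four. Over a 162-game season you can
# check that against the record; over one election every four years,
# you cannot.

def misses(n_events, p=0.75):
    """How often the favored outcome fails across n independent events,
    each of which the favorite wins with probability p."""
    return sum(random.random() > p for _ in range(n_events))

season = misses(162)   # ~40 misses expected: enough data to judge a model
one_shot = misses(1)   # 0 or 1: a single election settles almost nothing
print(season, one_shot)
```

Run the single-event line a few times and the favorite loses in roughly one run out of four—yet in an election, that one “miss” is all the public ever sees.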

Here is where the traditional pundits have some advantages, ones that have protected them and their reputations since time immemorial. As we’ve seen with one political development after another, it’s a rare pundit whose job security has been threatened by being wrong about anything—especially if they manage to be wrong in the same way that most of their colleagues are.

And maybe more important for the pundits: the beauty of calling an election a 50/50 tossup is that 100% of the time, you will be right.