Win probabilities are one of the most interesting and informative byproducts of baseball data's new wave. But, being the kind of person I am, the first thing I did when college basketball probabilities were released (they're explained by Ken Pomeroy here) was to look at The Chandler Parsons Game.

And it was so worth it. That outrageous graph does a better job than I could ever do of explaining just how over that game really was. NC State was at the line, with the lead, with 3.3 seconds on the clock. And then Florida won.

So, obviously, I spent some time having fun with the graphs. And finding games like Morehead State at Eastern Illinois (where MSU took the lead when guard Maze Stallworth buried a go-ahead three with 14 seconds left, only for the Eagles to lose on a too-many-timeouts technical with a second left). But, eventually, I had to ask myself: So what? I already knew that the Chandler Parsons shot was unbelievable, and anyone who knew about that Morehead-EIU game already knew it was crazy. What can we actually learn from these things?

Well, here’s my idea. I’ve spent relatively equal amounts of time with the sense that my idea works and the sense that it’s ridiculous. What if we ran all the win expectancies over again, with the teams assumed to be evenly matched (with home-court advantage built in)? Then we’d allocate credit to the teams by taking, essentially, a time-weighted average of their win probabilities over the course of the game. So, if a team plays a possession from 10:46 to 10:22 in the second half with a win probability of .546, we’d weight those 24 seconds of game time with a win probability of .546. Then we’d add up all the pieces and see what it looked like for the whole game.
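As a sketch of the bookkeeping (the function name and every number besides the .546 possession are mine, invented for illustration): split the game into segments, weight each segment's win probability by how many seconds it lasted, and average.

```python
# Toy sketch of time-weighted win-probability credit for one team.
# Each segment is (seconds_of_game_time, win_probability_during_segment).
def win_credit(segments):
    total_time = sum(sec for sec, _ in segments)
    weighted = sum(sec * wp for sec, wp in segments)
    return weighted / total_time

# Hypothetical game (my numbers), including the possession from the text:
# 10:46 to 10:22 in the second half, 24 seconds at a win probability of .546.
segments = [
    (1200, 0.500),  # an even first half
    (876, 0.520),   # early second half, slight edge
    (24, 0.546),    # the 10:46-10:22 possession
    (300, 0.700),   # pulling away late
]
print(round(win_credit(segments), 3))  # 0.533
```

A full 40-minute game is 2,400 seconds, so the segments above cover the whole game; the team's credit for the game is just the area under its win-probability graph.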

Why I love this
1. It eliminates any and all fear of a team benefiting statistically from running up the score. No longer could it be argued that a team looks better on paper simply because running up the score is rewarded. The whole reason that running up the score is unsportsmanlike is that it puts bigger numbers on the board without changing the probability of who will win the game, right? But in win-probability terms, more points only help if the game is still in doubt. Maybe that unnecessary dunk will change the win probability for that stretch from 99.2 percent to 99.3 percent, but it is, all in all, pointless.

2. Winning games by 50 points no longer puts massive skews into the ratings. One team that the Pomeroy Ratings (and, thus, I) loved going into last year’s tournament was BYU. But I started looking closer at why the Cougars were so highly rated, and it seemed to be due in part to a series of enormous blowouts of unimpressive opponents. With the win-probability system, though, a 30-point win will look more or less the same as a 40-point win. Right now, conversely, that difference counts the same as the difference between an 11-point win and a one-point win. And that just feels wrong.

3. Free throws at the end of a game hold much less sway in win-probability terms. There are lots of games where the teams go back and forth for the entire 40 minutes, only for one team to get the ball and the lead late and end up winning by eight. The next day, that game looks the same as a 17-point game where the losing team goes on a meaningless and lightly defended nine-point run in the last two minutes. And that’s silly: in terms of which winning team played better, the answer is clear.

4. You play to win the game. The best teams should be the ones that put themselves in a position to win and keep themselves there. The final score shouldn't matter as much as which team dominated the game.
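Points 1 and 2 rest on the same property: win probability saturates as the lead grows. A toy model makes that concrete (this is my own illustration with made-up parameters, not Pomeroy's actual formula): treat the final margin as roughly normal around the current lead, with a spread that shrinks as time runs out.

```python
import math

def toy_win_prob(lead, minutes_left, sigma_per_min=2.5):
    """Toy win probability: final margin ~ Normal(lead, sigma), where
    sigma grows with the square root of the minutes remaining.
    (Illustrative parameters only -- not a fitted model.)"""
    if minutes_left <= 0:
        return 1.0 if lead > 0 else 0.0
    sigma = sigma_per_min * math.sqrt(minutes_left)
    # P(final margin > 0) under the normal approximation
    return 0.5 * (1 + math.erf(lead / (sigma * math.sqrt(2))))

# With 5 minutes left, going from a 1-point lead to an 11-point lead
# changes the win probability enormously; going from 30 to 40 changes
# almost nothing, so the extra 10 points of blowout earn no extra credit.
for lead in (1, 11, 30, 40):
    print(lead, round(toy_win_prob(lead, 5), 4))
```

Under any model with this shape, the gap between a 30- and a 40-point win is a rounding error, while the gap between a one- and an 11-point win is huge, which is exactly the behavior points 1 and 2 ask for.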

Sounds pretty good, right? Well, hold that thought...

Why I hate this
1. If I were a different kind of person, it would really disturb me that people were judging teams like this. So not only does the final score no longer matter, but teams that win the game might now be judged as "worse" than teams that lose? That’s weird. On the one hand, I want to say that the kind of team that gets down by 15 points and wins is playing such an unsustainable game that it deserves to be judged negatively. The kind of team that takes big leads should be rewarded for putting itself in position to win, even if it ends up falling apart. But, on the other hand, can I really live with giving more credit to a team that falls to pieces and less to the team that comes back?

2. Similarly, think about this: In the first half the home team plays consistent basketball and goes up by 22. But in the second half, the visiting team plays equally consistent basketball and ends up winning by a point. Since the home team led for nearly the whole game, though, they would get credit for being in the lead throughout the visiting team’s comeback. In other words, even though both teams played identical halves, the team that played its good half first looks "better" in win-probability terms. The home team’s win-probability line would sit well above 50 percent for most of the game before collapsing at the very end.

So the home team would get credit for .75 wins, while the visiting team would get credit for .25. But, then again, maybe that’s how it should be. Maybe a team should be penalized for falling far enough behind that their chipping away at the lead is hardly noticed. After all, should a team really get credit for living on the edge like that? Nobody would follow that game plan and end up with a graph like that--clearly the home team dominated to start.
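With hypothetical segment values (mine, chosen so the arithmetic lands on the .75/.25 split described above), the time-weighted bookkeeping for that blowout-then-collapse game works out like this:

```python
# Hypothetical win-probability segments for the home team in the
# blowout-then-collapse game (my numbers, picked to illustrate the split).
segments = [
    (600, 0.60),   # first 10 minutes: building the lead
    (1200, 0.95),  # middle 20 minutes: up big, cruising
    (600, 0.50),   # last 10 minutes: the comeback makes it a coin flip
]
total = sum(sec for sec, _ in segments)  # 2400 seconds = 40 minutes
home_credit = sum(sec * wp for sec, wp in segments) / total
print(home_credit, 1 - home_credit)  # home .75, visitors .25
```

The losing home team banks .75 wins of credit because it spent 20 minutes at a 95 percent win probability, and nothing the visitors did late can claw that accumulated area back.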

My current feeling: I think win probability should play a role in the evaluation of teams, but not necessarily in the way I've outlined it here--at least not unless someone can talk me through these two issues. Maybe win probability should be weighted more as the game progresses. Right now I'd probably be most comfortable using some combination of the Pomeroy Ratings as they stand now and a similar system based on win probabilities.

Anyway, if you've made it this far feel free to tell me why I'm wrong. If one of my advantages isn't an advantage, or one of my disadvantages isn't a disadvantage, tell me why. I'll revisit this if and when things become more clearly defined.