Nerd Alert: Nerd Fight! (Sort of)

One nice thing about the sporting nerdosphere on the internet is the free exchange of ideas it facilitates. This is certainly true of baseball, but one could argue that -- owing to the relative difficulty and youth of advanced basketball research -- such conversations are actually more important for the latter sport.

The last 10 days have seen just such an exchange between two individuals whose opinions on the topic of advanced research in basketball are deserving of our attention -- Phil Birnbaum, who writes at the blog Sabermetric Research, and Kevin Pelton of Basketball Prospectus.

In this post I hope both to (a) provide a decent summary of the main points covered in the conversation and (b) relate the significance of the findings to the fantasy basketball owner.

Birnbaum begins:

You know all those player evaluation statistics in basketball, like "Wins Produced," "Player Efficiency Rating," and so forth? I don't think they work. I've been thinking about it, and I don't think I trust any of them enough to put much faith in their results.

That's the opposite of how I feel about baseball. For baseball, if the sportswriter consensus is that player A is an excellent offensive player, but it turns out his OPS is a mediocre .700, I'm going to trust OPS. But, for basketball, if the sportswriters say a guy's good, but his "Wins Produced" is just average, I might be inclined to trust the sportswriters.

I don't think the stats work well enough to be useful.

I'm willing to be proven wrong. A lot of basketball analysts, all of whom know a lot more about basketball than I do (and many of whom are a lot smarter than I am), will disagree. I know they'll disagree because they do, in fact, use the stats. So, there are probably arguments I haven't considered. Let me know what those are, and let me know if you think my own logic is flawed.

As noted, Kevin Pelton took up Birnbaum's offer, writing a response to Birnbaum's initial volley. With Birnbaum's response back to Pelton, we have three useful documents representing some of the best and most recent thoughts on these matters.

The pair covers a number of topics (with a number of words), but three issues stand out: the relationship between player and team rebounding, the relationship between individual and team scoring efficiency, and the predictive abilities of advanced metrics.

Here are those three issues digested and lightly commented upon for the reader's benefit:

Point #1: A large proportion of rebounds are "taken" from teammates.

Birnbaum states the problem:

The problem is that a large proportion of rebounds are "taken" from teammates, in the sense that if the player credited with the rebound hadn't got it, another teammate would have.

We don't know the exact numbers, but maybe 70% of defensive and 50% of offensive rebounds are taken from a teammate's total.

More importantly, it's not random, and it's not the same for all players. Some rebounders will cover much more of other players' territory than others. So when player X had a huge rebounding total, we don't know whether he's just good at rebounds, whether he's just taking them from teammates, or whether it's some combination of the two.

So, even if we decide to take 70% of every defensive rebound, and assign it to teammates, we don't know that's the right number for the particular team and rebounder. This would lead to potentially large errors in player evaluations.

The bottom line: we know exactly what a rebound is worth for a team, but we don't know which players are responsible, in what proportion, for the team's overall performance.
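Birnbaum's proposed discount is easy to make concrete. Here's a minimal sketch using his illustrative 70%/50% figures -- the player's rebound totals below are invented for the example:

```python
# Sketch of Birnbaum's "taken from teammates" discount. The 70%/50%
# figures are his illustrative guesses, not measured values.

def marginal_rebounds(def_reb, off_reb, def_taken=0.70, off_taken=0.50):
    """Rebounds a player grabbed that a teammate would NOT otherwise
    have gotten -- i.e., his marginal contribution to the team total."""
    return def_reb * (1 - def_taken) + off_reb * (1 - off_taken)

# A hypothetical big man credited with 9 defensive and 3 offensive
# boards per game:
credited = 9 + 3                    # what the box score says: 12
marginal = marginal_rebounds(9, 3)  # roughly 4.2 "new" rebounds
print(credited, marginal)
```

The catch, as Birnbaum says, is that `def_taken` and `off_taken` almost certainly vary by player and team, so applying one league-wide discount can produce large errors in individual evaluations.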

Pelton, for his part, responded to this by running some year-to-year correlations -- not for every player, but only for players who had just switched teams, in order to strip out the effects of team context and isolate something like true talent.

Shooting statistics, already unreliable from year to year, become even less predictable when a new team is added to the mix. Same with turnover percentage. By contrast, defensive statistics -- including rebounding -- seem to stay relatively steady. If players really had a significant effect on their teammates' rebounding percentages, I think we would expect to see more inconsistency from year to year -- especially among players who changed teams.

In his response to the response, Birnbaum appears unmoved:

Kevin shows that when players change teams, their rebounding numbers stay fairly constant, compared to other statistics. Doesn't that suggest, Kevin asks, that rebounds are relatively independent of the player's team, coach, and environment?

To which I answer: no, not really. I think that players have a certain style of how they approach rebounds, and that doesn't necessarily change from team to team. I might be wrong about this, but if a player is known for his rebounding, it doesn't seem like the new coach will say, "yes, we thought player X was excellent at rebounding, which is why we acquired him, but we're now asking him to cover less territory and pick up fewer rebounds."

Conclusion: So far as the fantasy owner is concerned, the distinction between a player's "true" rebounding talent and his "perceived" or "measured" talent matters little. Whether Player A is an actually great rebounder or just a rebounder of the "stealing" variety, he still gets his numbers. That said, it's possible that players around a stealing-type rebounder could suffer.

Point #2: High-volume scorers make their teammates more efficient.

To read one line off the chart: for every one percentage point increase in shooting percentage by [one player] (say, from 47% to 48%), you [see] an increase of 0.26% in each of his teammates' [shooting percentages] (say, from 47% to 47.26%).

This actually isn't a point of contention between Birnbaum and Pelton. In mid-January, Nate Silver (of FiveThirtyEight and, before that, Baseball Prospectus) demonstrated how Carmelo Anthony, despite not being particularly efficient himself, is actually responsible for making his teammates more efficient.
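The chart reading quoted above is just a linear coefficient. Here's the arithmetic -- the 0.26 figure comes from the quote; the roster numbers are invented:

```python
# Arithmetic behind the quoted chart reading. The 0.26 spillover
# coefficient comes from the quote; everything else is illustrative.

SPILLOVER = 0.26  # teammate FG% points gained per 1-point gain by the star

def teammate_fg_after(teammate_fg, star_gain_pts):
    """Teammate's new FG% after the star improves by star_gain_pts points."""
    return teammate_fg + SPILLOVER * star_gain_pts

# The quote's example: the star goes from 47% to 48% (a 1-point gain),
# so each teammate's 47% becomes about 47.26%.
one_teammate = teammate_fg_after(47.0, 1.0)

# Since the effect applies to EACH teammate, across the four other
# players on the floor it sums to about 4 * 0.26 = 1.04 FG% points.
team_spillover = 4 * SPILLOVER * 1.0
print(one_teammate, team_spillover)
```

The per-teammate effect looks tiny, but summed across a whole lineup it's a meaningful chunk of team shooting efficiency.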

Numbers maven Dave Berri submitted a criticism of Silver's methodology, but it was Pelton himself who responded to Berri's criticism, using a different method to reach a conclusion similar to Silver's.

Fantasy owners understand all this to some degree -- i.e., that a high-volume scorer can improve the teammates around him. It'll probably be a while before we have a hard-and-fast means of predicting that improvement, but the effect appears not to be insignificant.

Conclusion: Context matters. A relatively efficient, high-volume scorer makes his teammates more efficient scorers. For more on this, feel free to consult the Advanced area on Maurice Williams' Basketball Reference page.

Point #3: Advanced stats aren't especially predictive.

Birnbaum references a study by David Lewin and Dan Rosenbaum of the Cleveland Cavaliers.

What Lewin and Rosenbaum did was try to predict how teams would perform [in one] year, based on their previous year's statistics. If the new sabermetric statistics were better evaluators of talent than, say, just points per game, they should predict better.

As you can see, "minutes per game" -- which is probably the closest representation you can get to what the coach thinks of a player's skill -- was the second highest of all the measures. And the new stats were nothing special, although "Alternate Win Score" did come out on top. Notably, even "points per game," widely derided by most analysts, finished better than PER and Berri's "Wins Produced."

While Pelton responds -- and Birnbaum ultimately concedes -- that box-score statistics actually are somewhat predictive of future plus-minus numbers, the most compelling passage in this sub-conversation is a tour de force from Birnbaum (from a post called, appropriately, "Box-score statistics are the RBIs of basketball"):

I bet if you rank every NBA player by even the best of the box-score statistics, and then got a bunch of NBA scouts to rank them based on their own expertise, the scouts would beat the crap out of the stats. That wouldn't happen in baseball, if you used the good sabermetric stats -- I bet the stats would beat the scouts, or at least come close -- but it WOULD happen in baseball if you just used RBIs.

The analogy between sabermetric basketball box-score statistics and RBIs is actually pretty strong. In both cases:

1. When you add up the individual totals, the correlation to team totals is almost perfect.

2. If you're a better player, your individual numbers are better.

3. Year-to-year individual player correlations are fairly high.

4. Individual player correlations to known-good stats (plus-minus in basketball, OPS in baseball) are also fairly high.

5. However, individual numbers depend not just on skill, but on teammates and role within the team.

6. If you move teams, you generally keep your same role, which means the correlation stays high.

7. This means that the statistic is biased for certain types of players, and the bias does not disappear with sample size.

8. Still, if you look casually, players at the top are much better than players at the bottom, which means the statistic looks like it works.

9. But there will be many cases where players with significantly higher totals will actually be worse players than others with significantly lower totals.

In fact, I think this is my new argument in one sentence: "Box score statistics are the RBIs of basketball." They just don't work well enough to properly evaluate players.

Conclusion: Because, as Birnbaum notes, players frequently play very similar roles even when they change teams, it's unlikely that their box-score stats will change all that much. As a result, certain deficiencies in talent can be obscured or masked. Fantasy owners care about true talent, but role is also a very important consideration. Ultimately, the fantasy owner needs to understand what a player will produce, not how good he is in the Platonic sense. That's good, because it's a much easier thing to know.