With or Without You: OKC, Perk, and Nazr

In continuing my series of With-or-Without-You (WOWY) analyses, I will next look at my hometown Oklahoma City Thunder, one of the hottest teams going into the playoffs. In the first two posts on WOWY, I looked at Oklahoma City in January (before the trades), and yesterday I looked at the Bulls and their strength going into the playoffs. In the Bulls post, I also revised and expanded the method with regression to the mean and Bayesian weighting to improve prediction of future performance.

Since January, the Thunder traded away Jeff Green and Nenad Krstic, both of whom looked very bad from a WOWY perspective, and added Kendrick Perkins and Nazr Mohammed to the regular rotation. After the trades, the Thunder have gone on a tear: 19-4 since the trades came through and 13-3 since Kendrick Perkins entered the lineup. Of course, the opposition has been rather weak over those 23 games, with the average game about 1 point easier than a league-average matchup. There have been 3 huge wins since the trade, however: at Miami, at Denver, and at the Lakers. Those were the 3 hardest games since the trades, and the Thunder won them all.

Let’s start out this analysis with the normal charts:

Oklahoma City Thunder Performance Chart

Oklahoma City Thunder Efficiencies Table

From the team performance chart, we can see the strong upward trend the Thunder have been showing. In the efficiencies table, we can see who was missing for each game, how difficult the game was, and how the team did on offense, defense, and overall. Like I mentioned in the last OKC WOWY post: Nick Collison is important to this team!

Okay, let’s get down to the nuts and bolts of WOWY. First, the raw regression with no weighting, assigning a value to each player who has missed at least 3 games but is otherwise a regular.

What do we see? First of all, the lack of stabilization in the regression shows: KD can’t hurt the team by 6 points per 100 possessions on defense! That’s the small sample size coming through. For the players with larger sample sizes, the results look pretty reasonable, though Nick Collison again looks like Superman.
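For concreteness, here is a minimal sketch of the kind of raw WOWY regression described above. The players and game margins are entirely made up for illustration, not the Thunder's actual game log, and the post's exact implementation may differ:

```python
import numpy as np

# Toy sketch of a raw WOWY regression with made-up data (NOT the actual
# Thunder game log). Each row is a game; each column flags whether one of
# three hypothetical rotation players sat out that game. A player's
# coefficient estimates how the efficiency margin changes when he is out.
players = ["A", "B", "C"]
missed = np.array([
    [0, 0, 0],
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
    [1, 1, 0],
    [0, 0, 0],
])
margin = np.array([8.0, 2.0, 11.0, 5.0, 4.0, 9.0])  # eff. diff per game

# Intercept = baseline with everyone available; no shrinkage yet, so a
# player with only a few missed games can get a wild coefficient.
X = np.column_stack([np.ones(len(margin)), missed])
coef, *_ = np.linalg.lstsq(X, margin, rcond=None)
for name, c in zip(players, coef[1:]):
    print(f"without {name}: {c:+.1f} points per 100 possessions")
```

Each printed number is a with-or-without estimate: negative means the team looks worse with that player sidelined.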

Let’s look at the stabilized results, regressing towards average within the regression:

The stabilization really shows here. Nick Collison is still really good per minute (that +2.9 translates to roughly a +2.9 +/- as well), while both Jeff Green and Nenad Krstic at least make it above replacement level! Both of them still have a translated +/- below -2.5, though. KD looks rather average, probably because the small sample size has trouble overcoming the regression to the mean, but one thing is clear: he’s better on O than on D.
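To make “regressing towards average within the regression” concrete: one standard way to do this is a ridge penalty, which shrinks each player’s with/without coefficient toward zero. This is my sketch of the mechanism with toy data; the exact math behind these tables may differ:

```python
import numpy as np

# Sketch of stabilization via a ridge penalty (an assumption about the
# mechanism, illustrated with made-up data). The penalty pulls each
# player's coefficient toward 0, which matters most for small samples
# like a star's handful of missed games.
def wowy_ridge(X, y, lam):
    penalty = lam * np.eye(X.shape[1])
    penalty[0, 0] = 0.0  # don't shrink the intercept (team baseline)
    return np.linalg.solve(X.T @ X + penalty, X.T @ y)

# Toy layout: intercept column + one "missing" flag per player.
X = np.array([
    [1, 0, 0],   # everyone plays
    [1, 1, 0],   # player A out
    [1, 0, 1],   # player B out
    [1, 1, 0],
    [1, 0, 0],
], dtype=float)
y = np.array([7.0, 1.0, 4.0, 3.0, 9.0])

print(wowy_ridge(X, y, lam=0.0))   # raw least squares
print(wowy_ridge(X, y, lam=5.0))   # shrunken estimates, pulled toward 0
```

With lam=0 this reproduces the raw regression; raising lam drags the player coefficients toward average, exactly the small-sample taming described above.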

Finally, let’s look at the Bayesian results, with more recent games weighted more heavily than older ones. This is more useful for predicting the team’s future performance than for quantifying the impact of individual players. Here’s that table:

So here we go. The prediction for the team, if everybody stays healthy and Jeff Green and Nenad Krstic stay gone, is a +8.2 efficiency differential. That is scary good. Not as scary good as the Bulls’ +10.0 going forward, but that +8.2 is better than any team has maintained over the full season this year (Chicago leads with a +7.38 average).

In other words, the Oklahoma City Thunder must be considered a contender since the trades. They ditched their weakest links and added two good contributors. (Wrapped into the numbers is the value of starting Serge Ibaka: his quality is what makes Jeff Green’s numbers look so bad!)

I’ll try to take a look at some of the other contenders with this approach as well. One cautionary note: OKC, like Chicago, will have less of a beneficial effect from shortened playoff rotations than some of the older and thinner teams–that’s a hard effect to quantify, but it favors the Lakers, Celtics, and Heat.

4 Comments

“This is more useful for predicting the team’s future performance than for quantifying the impact of individual players.”
Did you actually test this or is it just an assumption? How does the weighting work, exactly?

The weighting is the same one I used to maximize out-of-sample prediction when I constructed my league-wide Bayesian rating system, here: On Bayesian Predictive Efficiency Ratings. For example: the most recent game is weighted 1.0; a game 25 games ago, 0.50; 50 games ago, 0.30; 75 games ago, 0.20; and so on.
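(Purely as an illustration, here is one way to turn those quoted anchor weights into code. This interpolation is my own stand-in; the actual curve in the linked rating-system post may be a different functional form.)

```python
import numpy as np

# Recency weighting built from the anchor weights quoted above
# (1.0 now, 0.50 at 25 games back, 0.30 at 50, 0.20 at 75).
# Hypothetical sketch: the real system may use a different decay curve.
ANCHOR_AGES = np.array([0, 25, 50, 75])
ANCHOR_WTS = np.array([1.0, 0.50, 0.30, 0.20])

def recency_weight(games_ago):
    # Linear interpolation between the quoted anchors; np.interp holds
    # the last value (0.20) flat for games more than 75 back.
    return np.interp(games_ago, ANCHOR_AGES, ANCHOR_WTS)

# Weighted average margin: recent games dominate the estimate.
margins = np.array([12.0, 9.0, -3.0, 6.0])  # made-up game margins
ages = np.array([0, 10, 40, 80])            # games ago
w = recency_weight(ages)
print(np.average(margins, weights=w))
```

The same weights would multiply each game's row in the regression, so recent games pull harder on every coefficient.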

Because more recent performance is weighted more heavily, while the regression-to-the-mean portion of the equation stays the same, players who missed most of their games recently vs. long ago would probably not be handled correctly. Do you have any more insight on this?