Trade Thoughts

Normally when there is a trade I offer a post talking about how the transaction impacts each team. But this past week there were too many trades for a post on each one. So here are a few comments on the winners and losers from the flurry of the past week.

The Big Winners

The big winner didn’t do anything this past week. Of all the trades this season, the Lakers’ acquisition of Pau Gasol still ranks as the best. The Lakers acquired an above average player for basically nothing. What Memphis received – Kwame Brown, Javaris Crittenton, Marc Gasol, and two low first round draft picks – is unlikely to offer significant productivity either now or in the future. Certainly these assets were not going to help the Lakers this year. Consequently the Lakers defied the adage “you must give up something to get something.”

The Spurs are another team that might be defying the common trade wisdom. The Spurs traded Brent Barry and a first round draft pick to the Sonics for Kurt Thomas. The Sonics then cut Barry, who may re-sign with the Spurs after 30 days. Given that a low first round draft choice from the Spurs also has little value, San Antonio has acquired Thomas – a player with a 0.274 WP48 this year (and a career mark of 0.120) – for basically nothing. Of course, the “nothing” argument assumes Barry returns. If not, the Spurs may not be much better.

After these two, though, every other team had to give up something to get something. Cleveland comes the closest to emulating the Lakers and Spurs. The main commodity Cleveland gave up was Drew Gooden, who has been above average in the past. This year, though, Gooden has not played well. So the Cavs really have given up nothing to get Ben Wallace – who is still above average (although not what he once was) – and Wally Szczerbiak. The latter is below average, but still better than Larry Hughes.

Maybe Winners

After the Lakers, Cavaliers, and Spurs, we see a few teams that are probably better. As noted, the Mavericks added Jason Kidd (which helps) but had to give up Devin Harris and DeSagana Diop (which hurts). The net effect, I think, is positive.

The Hornets gave up Bobby Jackson (which hurts a bit) and added Mike James (which hurts if he doesn’t start playing better). But they added Bonzi Wells, which helps.

The Rockets did the reverse, so they might be helped if Luther Head – who probably gets more minutes now that Wells is gone – plays as well as he did last year. Minutes do impact performance, so perhaps Head will start playing better.

The Rebuilders

The Bulls and Nets both entered the season hoping to make the playoffs. But it didn’t work out. Now each team is rebuilding.

With the trade Chicago gets to find out if Joakim Noah and Tyrus Thomas can play. I think both players, if given consistent minutes, can produce. The team is still weak in the backcourt, where the play of Kirk Hinrich and Ben Gordon conspired to destroy the team’s season.

As for the Nets, Diop definitely helps. Sean Williams and Josh Boone are also above average players, giving the Nets a solid frontcourt. This is something they have not had for many years, so that problem might be solved. Unfortunately, the team still has problems elsewhere. Richard Jefferson, for example, is not quite the player he once was (he especially has problems rebounding). Nevertheless (and this might be surprising), it is possible that the Nets are better after the trade of Kidd. It does depend on who plays, and whether some players can regain what they were in the past. Still, both teams involved in the Kidd trade might be better after this deal.

And then there are the Sonics. This team has exactly two above average players, Nick Collison and Chris Wilcox. After that, the team appears to have an aversion to employing productive talent. Right now the Sonics are heading back to the lottery. My sense is the future team of Oklahoma City is hoping to add a productive rookie, and then hoping that Kevin Durant and Jeff Green develop into productive players. If Durant and Green do not develop, then Oklahoma City will be hosting lottery parties for many years to come.

The Big Picture

Okay, how does this all impact the playoff picture? Here are my top teams in each conference:

Eastern Conference: Boston and Detroit are on top. These two are followed by Cleveland, Orlando, and Toronto. After that we have a collection of teams who could make the playoffs but will probably lose in the first round.

Western Conference: If Bynum comes back and produces, the Lakers will be the top team. They may not finish with the best record, but if this team is healthy they will be the favorites entering the playoffs. After the Lakers are a collection of very good teams. This collection is led by San Antonio, and includes, in no particular order, New Orleans, Utah, Dallas, and maybe Phoenix (although I do not like the Shaq trade). And then close on the heels of these teams we have Houston, Denver, and maybe Golden State. All of these teams can’t make the playoffs, so at least one very good team will miss the post-season in the West.

Although I see the Lakers as the favorites in the West, it is still going to be very hard to predict the playoffs in the West. The 1 through 8 seeds are all very good teams. And any one of these teams has a chance to be in the NBA Finals.

A Note on Team and Player Evaluation

You will observe that I have not forecasted wins for each team involved in these trades. That’s because such a forecast requires that I guess how minutes are going to be allocated on each team. And that can be hard to do when so many pieces have been changed. Because it is hard to predict minutes, it is hard to pin down exactly how the teams will be impacted (see the discussion of the Nets above).

In addition to the minutes problem, there are some other issues to consider in looking at these trades. NBA players are not robots. When we look at the link between present and future performance, here is what we find:

1. For the most part, what you see is what you get. NBA players, relative to baseball and football players, are quite consistent across time.

2. That being said, player performance is adversely impacted by switching coaches and teammates. Plus, changing minutes will also impact productivity.

3. And then there is the issue of diminishing returns. If you move a player from a bad team to a good team, performance will be reduced. In other words, diminishing returns – as explained clearly in The Wages of Wins – does apply to the NBA. For the most part, this is not a huge effect. Again, generally what you see is what you get in the NBA. But any reader of The Wages of Wins would expect that Jason Kidd, going from the Nets (a bad team) to the Mavericks (a good team), will now offer less production (and/or someone else on the Mavericks will offer less). And this reduced production will take the form of fewer rebounds, fewer shot attempts, etc… In other words, it’s not just one stat that declines.

One last note on diminishing returns…. this is how this effect is measured in research I have published. First we measure how productive a player has been. This can be done via Win Score or Wins Produced. We then look at how the productivity of teammates impacts performance. And this is done via a model that controls for many other factors that impact performance.
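The two-step approach described here can be sketched in code. This is only a minimal illustration on simulated data, not the published model: the Win Score formula is the one given in The Wages of Wins, but the regression below is a bare-bones least-squares fit standing in for the full model with its additional controls (coaching, minutes, age, and so on), and all the numbers are invented.

```python
import random

# Step 1: measure productivity. Win Score, per The Wages of Wins:
# PTS + REB + STL + 0.5*AST + 0.5*BLK - FGA - 0.5*FTA - TO - 0.5*PF
def win_score(pts, reb, stl, ast, blk, fga, fta, to, pf):
    return pts + reb + stl + 0.5 * ast + 0.5 * blk \
           - fga - 0.5 * fta - to - 0.5 * pf

# Step 2: regress player productivity on teammate productivity.
# Simulated data in which a player's productivity declines as his
# teammates' productivity rises; the published model controls for
# many other factors, which this sketch omits.
rng = random.Random(0)
teammate_prod = [rng.uniform(0.0, 10.0) for _ in range(200)]
player_prod = [8.0 - 0.3 * t + rng.gauss(0, 1) for t in teammate_prod]

n = len(teammate_prod)
mx = sum(teammate_prod) / n
my = sum(player_prod) / n
slope = sum((x - mx) * (y - my)
            for x, y in zip(teammate_prod, player_prod)) \
        / sum((x - mx) ** 2 for x in teammate_prod)
# a negative slope is the diminishing-returns effect
```

The estimated slope comes back negative, which is the signature of diminishing returns in this setup.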

In sum, you can’t look at how one factor (x) impacts (y) if you don’t control for other factors that impact (y). Well, you can. But your analysis won’t tell us much.

The Rockets trade also frees up more time for Carl Landry, who might have per-minute numbers that rival most NBA All-Stars. In 15 minutes a game, he’s averaging 7.5 points and 5 rebounds on 61% shooting. The trade also gets rid of a bad contract (Mike James had 2 years left) and gives them money to possibly sign a decent veteran this year (maybe Brent Barry?).

I’ll respond here to some of the comments on diminishing returns in both this post and the previous one.

“And the big point I am making is that offensive and defensive rebounds have the same correlation. The argument offered says these correlations have to be different.”

This is incorrect. If there are greater diminishing returns for defensive rebounding than offensive rebounding, this does not necessarily imply that there will be a greater year-to-year correlation for offensive rebounding. You appear to be using year-to-year correlations as a proxy for the context-dependency of a stat. For a variety of reasons, that is not a great way to get at context-dependency.

YTY r’s are at their core a measurement of the true player-to-player variation in a stat. As Tom Tango and the other authors of The Book have shown for baseball ( http://www.tangotiger.net/archives/stud0084.shtml ), if a stat varies a lot between players, then it will have a higher YTY r than a stat that varies only a little. And it has been shown by Ed Kupfer that there is more variation between players in defensive rebounding than in offensive rebounding, even when controlling for position ( http://www.sonicscentral.com/apbrmetrics/viewtopic.php?p=6994#6994 ). Based on that, one would expect that defensive rebounding would have a higher YTY r than offensive rebounding. But that doesn’t say anything about which is more context-dependent or more subject to diminishing returns. So the apparent fact that offensive and defensive rebounding have similar YTY r’s doesn’t really speak to the issue of diminishing returns one way or the other.
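The point about year-to-year correlations can be illustrated with a small simulation (all numbers invented, not real rebounding data): two stats measured with identical season-to-season noise, where one stat varies more between players. The wider-spread stat comes back with the higher year-to-year r, even though nothing about context-dependency differs between them.

```python
import random

def yty_r(true_sd, noise_sd, n_players=2000, seed=1):
    """Year-to-year Pearson r for a stat with a given between-player
    spread (true_sd) and season-to-season noise (noise_sd)."""
    rng = random.Random(seed)
    talent = [rng.gauss(0, true_sd) for _ in range(n_players)]
    year1 = [t + rng.gauss(0, noise_sd) for t in talent]
    year2 = [t + rng.gauss(0, noise_sd) for t in talent]
    m1 = sum(year1) / n_players
    m2 = sum(year2) / n_players
    cov = sum((a - m1) * (b - m2) for a, b in zip(year1, year2)) / n_players
    sd1 = (sum((a - m1) ** 2 for a in year1) / n_players) ** 0.5
    sd2 = (sum((b - m2) ** 2 for b in year2) / n_players) ** 0.5
    return cov / (sd1 * sd2)

# Same measurement noise, different between-player spread:
r_wide = yty_r(true_sd=3.0, noise_sd=1.0)    # lots of player variation
r_narrow = yty_r(true_sd=1.0, noise_sd=1.0)  # little player variation
# r_wide comes out well above r_narrow
```

In expectation r equals true variance over (true variance + noise variance), so the stat with more player-to-player variation correlates more strongly across seasons regardless of how context-dependent it is.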

What is interesting is that given the greater variation in DRB than ORB on the player level, there is actually greater variation in ORB than DRB on the team level ( http://www.countthebasket.com/blog/2007/12/17/does-good-pitching-beat-good-hitting-in-basketball/ ). This apparent contradiction can be explained through diminishing returns. Because of the large diminishing returns effect on defensive rebounding, player differences in DRB have a lesser impact (positive or negative) on their team’s DRB than is the case for ORB.

“I would add that I do not think it is a good idea to adjust the coefficients on rebounds in Wins Produced in light of the diminishing returns effect. The impact of teammate performance, which is what we are talking about, depends on who the teammates are and what they are doing.”

It is important to adjust the coefficient because otherwise a player is credited with acquiring a full possession for his team with each rebound he gets, even if 70% of the time his team would have gained possession anyway. You can make this adjustment at the end if you want, but Wins Produced makes no adjustments for diminishing returns in rebounding at any point.
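The 70% figure above can be turned into a quick worked example (the 70% is the comment's illustrative number, not a measured league rate):

```python
# If the team would have secured the defensive board 70% of the time
# without this player, his rebound only adds 0.3 of a possession at
# the margin, not the full possession an unadjusted metric credits.
team_baseline = 0.70   # illustrative figure from the comment above
full_credit = 1.0      # unadjusted credit: one full possession
marginal_credit = full_credit - team_baseline  # 0.3 of a possession
```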

“I think a better approach is to estimate how productive the player is, and then look at the impact of diminishing returns (while controlling for other factors that impact productivity). Trying to answer the how and why questions at the same time is not a very productive approach.”

You can adjust at the end if you want, but the key is that you need a method to determine whether diminishing returns affects different parts of the game more than others (such as defensive rebounding more than offensive rebounding, or turnovers more than steals, etc.). There are many theoretical and empirical reasons to think that the effect varies, and understanding how this works is vital to creating coefficients for a player rating that correctly apportion credit. Wins Produced makes no stat-specific adjustments for diminishing returns.

“Wins Produced makes no adjustment for diminishing returns at any point.”

This quote bothers me. And it is contradicted by what is said immediately after this statement. The Wages of Wins approach is to first measure performance, by evaluating the statistic in terms of its impact on wins, and then look at issues like minutes played, coaching, diminishing returns, etc…

Given this approach, to say “Wins Produced doesn’t adjust for diminishing returns at any point” is then misleading. Wins Produced is simply the value of the statistics in terms of wins. The adjustment for other factors comes after this calculation is made.

Again, this is spelled out in the book. If we are going to discuss an issue, it is important that you actually state my position correctly. We are not disagreeing on the existence of diminishing returns. We are disagreeing on how to go about measuring the impact and how to incorporate this in evaluating a player.

You say in the book that the law of diminishing returns does apply to the NBA. But the Wins Produced calculation does not account for that in any way that I can see. I certainly do not see how Wins Produced makes an adjustment for diminishing returns in defensive rebounding.

I should amend what I said somewhat. Wins Produced does make a kind of diminishing returns adjustment for blocks and assists in the MATE48 calculation. But I do not see any adjustment at any point for diminishing returns in rebounding.

Eli, can you explain how offensive rebounding can vary more than defensive rebounding? Intuitively, I don’t see how this can be since a gained offensive rebound is a defensive rebound not gained by the other team. One goes up, it seems the other must go down, but you seem to be suggesting one varies more than the other.

I think the easiest context to understand the idea is free throw shooting on the team level. Every team has a FT% and a FT% allowed on the season. Averaging across all teams, every season the league FT% will be equal to the league FT% allowed. Different teams have better or worse free throw shooters, and thus there is a fair amount of variance in team FT%. But teams don’t vary much in their free throw “defense,” and thus there is very little variance in FT% allowed. Other stats like rebounding don’t have this extreme difference, but the same idea of identical means for offense and defense but differing variances applies.
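A toy version of this free-throw example (with made-up skill numbers): thirty teams each shoot at their own skill level, while the FT% a team “allows” is just the average of its opponents’ skills, which washes out toward the league mean. The two columns have identical means, but the allowed column barely varies.

```python
import random

rng = random.Random(2)
skill = [rng.uniform(0.70, 0.85) for _ in range(30)]  # invented FT skill

ft_pct = skill[:]  # each team shoots at its own skill level
ft_pct_allowed = []
for i in range(30):
    # a team's FT% "allowed" is the average of its opponents' skills
    opponents = [skill[j] for j in range(30) if j != i]
    ft_pct_allowed.append(sum(opponents) / len(opponents))

def sd(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

# sd(ft_pct) is large; sd(ft_pct_allowed) is tiny; the means match
```

Algebraically the allowed column's standard deviation is the skill column's divided by 29, which is why free-throw “defense” shows almost no team-to-team variance despite the means being identical.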

Eli, the two situations, FT% and offensive and defensive rebounding, are not comparable. Your team’s FTs do not impact the other team’s free throws. Both vary independently, as a made FT neither detracts from nor adds to the statistics of the other team.

This is not true of rebounding %. A gain in offensive rebounding percentage must mean a compensatory decline in the opposition’s defensive rebounding percentage unless you’re using a different denominator (in which case, you are not actually measuring the same things).

You’re right that there’s a difference between FT% and rebounding, but it’s not relevant to the issue of differing variances. On the game level, one team’s FT% is not independent of the other team’s FT% allowed, just as one team’s ORB% is not independent of the other team’s DRB%.

But forget the theory, I’d suggest just grabbing some data from Basketball-Reference or Dougstats.com and calculating some standard deviations on your own.

Eli, I’ve crunched more than my share of numbers. The issue isn’t crunching numbers. The issue is one where you are comparing variation between two variables that can vary entirely on their own and variables that are not entirely independent.

When one decides to “forget the theory” one runs into problems. That’s why I asked the question about variation in rebound rates and asked how it can be that the rates vary seemingly independently of each other to produce differing patterns when they are actually tied to each other.

It suggests to me that there may be a problem with the underlying assumptions behind your crunching, or else a problem with the underlying data source(s).

(and there *are* issues with those data sources if they are your only data sources. I have noticed that for season data, Dougstats team minutes do not equal opposition minutes. This suggests that this is not a particularly reliable source.)

Just try crunching numbers – if you do so you’ll see that on both the team and player level there are differing variances for ORB% and DRB%. I’ve explained the theory but you don’t seem to be following my explanation. That’s fine, maybe I’m not explaining it well. But just try it out for yourself – that’s the simplest way to become convinced.

Dougstats team minutes are summed from player minutes, so there are rounding errors. If you want to avoid those use Basketball-Reference, or DatabaseBasketball, or ESPN, or whatever. Whatever data you use you’ll find differing variances for ORB% and DRB%.

Eli, you did *NOT* explain the theory. You explained what numbers you crunched and gave an example of something that is not theoretically the same. There is a difference between crunching numbers and explaining theory. The issue I have is not that the data are showing different standard deviations. It’s that this seems like a peculiar result given that offensive rebounds and defensive rebounds are not independent of each other. An example of FT% does not address this issue sufficiently.

I *have* crunched the numbers, Eli. Seriously. I’ve crunched numbers upon numbers upon numbers. I can compute the same thing you are. It’s not for a lack of playing with the numbers. Telling me to crunch the numbers is not explaining the theory or dealing with the issue of linked variables. It’s that the numbers seem counter to some other observation: a gain in offensive rebounds is offset by a drop in opponent defensive rebounds. Crunching numbers doesn’t solve this problem. What is required is a real explanation of how this can be.

The SDs for team offensive and defensive rebounding percentages are close, but not the same. (If you instead compute *opponents’* percentages you get exactly the opposite pattern, which is exactly what has to happen when the numbers use the opponent’s defensive rebounds as part of the denominator for offensive-rebound opportunities, etc. But why is one computation more appropriate than the other, when each indicates that the opposite stat is more highly variable?) Now, are they close but not the same because they really are not the same, or is there some underlying problem with the data set, or with the theory of computing rates in that manner? It’s not a problem if you say you don’t know; I don’t know either. But the empirical observation that the standard deviations differ, across a series of rate averages each computed from a different number of opportunities, doesn’t convince me that manipulating the numbers in that way actually means anything when you then interpret it to mean that there’s any actual difference between the numbers.

I think WP already attempted an adjustment for diminishing returns when it made its position adjustment. An entertaining but unnecessary adjustment. A well-constructed metric needs neither position adjustments nor any invented adjustment for diminishing returns. Other metrics, like PER, that rate rebounding against the league-average rebound percentage do not completely solve diminishing returns either, because that rating method allows for player variation in grabbing rebounds but makes everybody the same (average) at allowing them.

Imagine a league where coaches had vastly differing strategies on crashing the offensive glass. Some sent all five players to the boards, while some sent zero players and had everyone get back on defense instead. This variation in strategies would lead to a large team-to-team variation in ORB%, with teams that sent five players to the offensive glass having much higher offensive rebounding percentages than those that sent zero. On the other hand, suppose that at the same time, all coaches used the same strategy when it came to defensive rebounding – send three players to the glass and have the other two leak out down the court for potential fast break opportunities. Because all teams used the same strategy, there would be little variation in DRB%. The end result would be greater variation between teams in ORB% than DRB% – there’s nothing contradictory about that.
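The hypothetical league above can be simulated directly (all rates invented): each team crashes the offensive glass with its own intensity, every team plays the identical defensive-rebounding strategy, and the team-level spread in ORB% ends up several times the spread in DRB%.

```python
import random

rng = random.Random(3)
n_teams = 30
# each team's strategy sets its chance of grabbing its own misses
crash = [rng.uniform(0.15, 0.40) for _ in range(n_teams)]

orb_grab = [0] * n_teams
orb_chances = [0] * n_teams
drb_grab = [0] * n_teams
drb_chances = [0] * n_teams

# round-robin schedule; every team misses 40 shots per game, and the
# defense (identical for all teams) collects whatever isn't grabbed
for i in range(n_teams):
    for j in range(n_teams):
        if i == j:
            continue
        misses = 40
        oreb = sum(rng.random() < crash[i] for _ in range(misses))
        orb_grab[i] += oreb
        orb_chances[i] += misses
        drb_grab[j] += misses - oreb
        drb_chances[j] += misses

orb_pct = [g / c for g, c in zip(orb_grab, orb_chances)]
drb_pct = [g / c for g, c in zip(drb_grab, drb_chances)]

def sd(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

# sd(orb_pct) is several times sd(drb_pct), even though every single
# offensive rebound in the league is a defensive rebound denied
```

In every game one team's ORB% is one minus the other's DRB%, yet the season averages still show much more spread on the offensive side, because each team's DRB% averages over many different opponents' crash rates.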

The key is that for each team, their ORB% is independent of their DRB% – they happen at different ends of the court. It’s true that a team’s ORB% is identical to (one minus) their opponent’s DRB%, but that’s just like how a team’s FT% is identical to their opponent’s FT% allowed.

Or going back to the FT% example, imagine a league where made baskets didn’t count for any points, and you could only get points by making FTs. In that league teams would vary fairly widely in their points scored (because some teams have better free throw shooters than others), but vary little in their points allowed (because all teams are about equal in free throw “defense”). This would be the case in spite of the fact that for any game, one team’s points scored would equal the other team’s points allowed, just as for any game, one team’s ORB% equals (one minus) the other team’s DRB%.

There was a post on Berri’s site last night by Guy giving some actual standard deviations from last season, but unfortunately it looks like it was deleted. So I went ahead and put together a quick spreadsheet with the data just from last year. The SD’s are close but not the same because they are really not the same. You can try it yourself for any other season – there’s no hiccup in the raw data that’s causing it.

Anyways, I don’t want to stray too far off topic. This point doesn’t really bear on the issue of diminishing returns in rebounding, where the theory and data both show differing effects on the offensive and defensive glass.

I hope, at the end of the season, you will evaluate each of these trades again, not just from the perspective of the performance of the players traded, which everyone does, but more usefully, how did each team do after the trade and who contributed?