Ordinarily I don’t presume to know your business, but something you probably didn’t miss was Robinson Cano leaving the New York Yankees to sign a 10-year contract with the Seattle Mariners. Something you more probably did miss was the following comment, left below a FanGraphs post on the subject:

I would love to see a full article on how teams do the year after losing a superstar. History could teach us a few lessons, I’m sure.

Within the post, I included a few examples off the top of my head, but it was hardly rigorous. The idea seemed worthy of a closer look, so, here it is.

For these purposes, I always love recalling the Mariners. The team won three more games the season after trading Randy Johnson. Then they traded Ken Griffey Jr. and improved by another 12 wins. Then they lost Alex Rodriguez to free agency and got another 25 wins better. In 1998, the team was below .500. Three missing superstars later, they were arguably the best regular-season team ever. As much as anecdotal evidence can prove a point, those Mariners teams did it.

If you prefer the Oakland A’s, they got a win better after losing Jason Giambi. They actually scored more runs after losing Miguel Tejada. They allowed far fewer runs after losing both Tim Hudson and Mark Mulder. That was the story with the A’s: They wouldn’t be able to hang onto their stars, but somehow they’d be able to remain competitive. I guess this is still more anecdotal evidence, and I did promise something more rigorous. To get right into the details, I based my study around numbers available on the FanGraphs leaderboards. The first step was figuring out how to define a star.

No matter what, this was going to have to be arbitrary, so I settled on a 6 WAR season. Six WAR is a lot of WAR, and those are seasons put up by good players. As a happy coincidence, this was inspired by Cano’s free agency, and last year Cano was worth exactly 6 WAR. Between 1988 and 2012, there were 399 individual player seasons worth at least 6 WAR, combining position players and pitchers. The next thing to do was figure out which of those players remained with the same team, and which of those players left, either by trades or as free agents.

Of those 399 individual player seasons, 365 stuck around. That means, in 34 instances over 25 years, the offseason saw the movement of a six-win player. The only difficult step remaining was linking team performance in the star's final season with team performance in the subsequent year. I decided winning percentage would do, because the whole point is to accumulate wins. Over bigger samples, the noise should more or less go away.
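The filtering step described above can be sketched in a few lines. The records and field names below are hypothetical stand-ins for the FanGraphs leaderboard data, not its actual schema:

```python
# Sketch of the study's filtering step, assuming a list of player-season
# records (all field names here are hypothetical, not FanGraphs' schema).
seasons = [
    {"player": "A", "year": 2003, "war": 7.1, "left_team": False},
    {"player": "B", "year": 2003, "war": 6.4, "left_team": True},
    {"player": "C", "year": 2003, "war": 5.8, "left_team": False},
]

# Step 1: keep only star seasons, defined (arbitrarily) as 6+ WAR.
stars = [s for s in seasons if s["war"] >= 6.0]

# Step 2: split by whether the player changed teams that offseason.
stayed = [s for s in stars if not s["left_team"]]
left = [s for s in stars if s["left_team"]]

print(len(stars), len(stayed), len(left))  # in the article: 399, 365, 34
```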

Let’s now examine those 34 instances. The most recent is Michael Bourn, who left the Atlanta Braves. People don’t really think of Bourn as a star-level player, but he certainly played like one, and he’s just one example of many. In their final years, players who left averaged 7.4 WAR. Their teams averaged a winning percentage of 52.9%. The next year, without the stars, the same teams averaged a winning percentage of 50.7%, which represented a drop of about 3.5 wins over a full season. In 38% of the instances, the team that lost the player went on to post a higher winning percentage.

That seems like a pretty sizable drop, even if we’re barely talking about an extra loss every two months. But what we also have is something of a control group — 365 instances in which a six-win player didn’t go away. It’s worth looking at those numbers, as well.

The players who didn’t leave averaged 7.2 WAR. Their teams averaged a winning percentage of 53.9%. The next year, those same teams averaged a winning percentage of 52.5%, which represented a drop of about 2.3 wins over a full season. In 40% of the instances, the team that kept the player went on to post a higher winning percentage.

In short, based on these samples:

– Teams losing players coming off 6+ WAR seasons lost about 3.5 wins, on average

– Teams keeping players coming off 6+ WAR seasons lost about 2.3 wins, on average
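The conversion behind those win figures is straightforward. Here is a sketch using the article's own average winning percentages over a 162-game season:

```python
# Converting the article's winning-percentage drops into wins over a
# 162-game season (the percentages below are the article's own averages).
GAMES = 162

def pct_drop_to_wins(before, after, games=GAMES):
    """Wins lost when a team's winning percentage falls from before to after."""
    return (before - after) * games

lost_star = pct_drop_to_wins(0.529, 0.507)  # teams that lost a 6+ WAR player
kept_star = pct_drop_to_wins(0.539, 0.525)  # teams that kept one

print(round(lost_star, 1), round(kept_star, 1))  # ~3.6 and ~2.3 wins
```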

Average, of course, ignores that every circumstance is different. Sometimes teams lose star players because they’re about to begin rebuilding. Teams will behave differently if they’re still looking to win right away, or if they’re setting their sights down the road. But on average, you’re looking at a difference of about a win, as regression happens to everybody. Teams that lose star players have historically regressed from being so far above .500, but the same goes for teams that have kept their star players, because it’s an unavoidable principle. The mean is a powerful magnet.

The only team in the window to lose two star-level players in the same offseason was the Atlanta Braves, between 2003 and 2004. Combined, in 2003, Javy Lopez and Gary Sheffield were worth 14.3 WAR. The next year, the Braves dropped from 101 wins to 96 wins, and their Pythagorean record hardly budged. Lopez and Sheffield turned into Johnny Estrada and J.D. Drew, and Drew was worth 8.6 WAR alone. Then he left and the Braves got worse, but they still won 90 games. And their Pythagorean record was comparable.

A fun fact of certain interest, even though it doesn’t have to do so much with this particular topic: Those 365 players who stuck around averaged 5.1 WAR the next season. The 34 players who departed averaged 4.1 WAR the next season. This even though, before, both groups averaged a little over 7 WAR. It could be nothing, or it could be teams making the right decisions on whom to keep and whom to lose. Or maybe players end up missing where they used to play. Or maybe more players in the “departed” group are flukes. It’s hard not to notice 2009 Chone Figgins, although the Angels definitely missed him the next year.

Teams with star-level players in one year tend to perform a little worse the next year. If they lose a star-level player, they tend to perform worse than if they had kept the star-level player, but the difference is small, and might be entirely attributable to different circumstances. Certainly, losing a player doesn’t cripple a team, and losing a player in the offseason means the loss can be prepared for. Teams are always in some way prepared to lose their free agents, and of course, trades of star players don’t just happen at random. Teams that lose stars can respond proactively or reactively, and we can see this with the Yankees, as they made up for losing Cano by signing both Jacoby Ellsbury and Brian McCann. Money can be spread around, or it can be put toward new stars. A void in one place can often mean upgrades in other places.

What this captures, naturally, is that teams that lose stars work to make up for the loss. If you lose a star and do nothing about it, you’ll project to be worse by quite a bit. But that generally isn’t how teams behave, unless they’re in the process of tearing everything down. And maybe the ultimate point here is to just issue a reminder that baseball teams are made up of a whole bunch of parts, and one player can mean only so much. Baseball isn’t basketball or football, and it’s a lot more like hockey. A star player on a 90-win team contributes plenty of wins. His contribution is drowned by the combined contributions from everyone else, though. At the end of the day, losing a star is entirely survivable, because stars mean more in hearts than they do on the field.


There are just way too many variables to take all 34 examples and draw large, sweeping conclusions. Compare the Mariners or A’s to what the Marlins have done with their fire sales; they’re just completely different scenarios.

It really comes down to the GM, and how the organization is run, to judge whether a team can continue on without a star player and harvest more from its farm system, or get back enough in a trade.

Also… The point about teams being able to prepare in the offseason: what about teams that had a 6+ WAR player in one year, retained him, but the player got hurt and only played a small percentage of games the following year? That would require another arbitrary number in determining the cutoff for games played. Also, as part of the analysis, maybe extrapolate what the player’s WAR would have been if his performance in the games he did play were averaged out over the course of a full season.

I would also like to see your suggestions, and add the players’ dietary habits (daily intake broken down into percentages of the following: protein, carbohydrate [subcategories of complex and simple], fiber [subcategories of water-soluble and non-water-soluble], and salt), sleep habits (number of hours per day, whether they have sleep apnea, etc.), and their daily exercise regimens.

“Those 365 players who stuck around averaged 5.1 WAR the next season. The 34 players who departed averaged 4.1 WAR the next season. This even though, before, both groups averaged a little over 7 WAR.”

Would the departing players leaving be older and into their decline? Many departures are free agency related (whether actual free agents or trades motivated by potential free agents). Free agents, having done their service time, may be older than other players.

Matt Swartz has established beyond all argument that players kept outproduce equivalent players waved bye-bye to, by a significant amount. In other words, teams know what they’re doing in this regard. This is just a subset of that broader truth.

If you could actually find a fair return for his value, it’s not a crazy idea. After all, the A’s did far better in the AL West with a schmear of good players across the diamond than the Angels did with a hyperstar at one position. But a trade of that magnitude is basically unheard of.

In baseball today, I am not sure that any one entire team has a collective “excess” value in player contracts that Trout represents. Certainly not one that would represent upgrades to the Angels across a 25-man roster.

This is a trade that literally cannot happen with Trout at 9.2 WAR and half-a-million bucks…

You’re right: trading a pile of players for Trout couldn’t work. But if the Angels included some liabilities like Pujols and Hamilton, I think there are teams that could come up with enough assets to make a deal.

I don’t know; I’m an Angels fan and would never trade Trout for just about any reason, out of homerism, but trying to be objective, I can think of a few teams that have pairs of players that would be tempting:

Those are a few that come to mind. I still wouldn’t do any of them, but it would be tempting.

But it goes beyond subjective homerism. As the saying goes, “a bird in the hand is worth two in the bush.” If we say that Trout will average 8 WAR over the next decade, or 80 total, then while Sano and Buxton could surpass that, odds are that they won’t.

It seems like there are too many variables here to really capture everything in a sample size of only 34 teams (although the results do make intuitive sense). How many of those “stars” were coming off a career year and due for regression? How many were older, and in decline? What were the teams like around them, both before and after the move? What did the teams that they left do to compensate?

You mentioned some of these issues, and these things definitely average out over a large enough sample size, but I suspect the error bars on a 34-team sample are large enough that those results aren’t statistically significant.

I think it’s tempting to ignore small sample size issues when the results make sense, like they do here. We expect that teams losing a star player will get moderately worse, so it’s easy to accept data that confirm that, even if the error bars on that data are huge.

Maybe dropping the WAR threshold to 5 and adding a second year would remove some of the “career year” flukes and get rid of some noise. But still, 34 is not a terrible sample size. It would be interesting to just do a t-test to see whether the two groups (34 teams vs. 365 teams) were significantly different (or even just a report of the variance of each group would be cool). My gut says a little over one win would be significant, since our projections have 5-10 wins of noise in a single team season and we’re talking about 34 vs. 365 team seasons. So the 365-team estimate should have very small error, and while the 34 team seasons should still have some, my guess would be less than one win.
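A Welch-style t-test along these lines is easy to sketch. The data below are simulated to match the article's reported mean win changes; the spread of roughly nine wins per team season is an assumption, not a FanGraphs number:

```python
# Welch's t-statistic on two simulated groups of season-to-season win
# changes. Means mirror the article (-3.5 and -2.3 wins); the 9-win
# standard deviation is an assumed, illustrative amount of noise.
import math
import random

random.seed(0)
lost = [random.gauss(-3.5, 9.0) for _ in range(34)]   # teams that lost a star
kept = [random.gauss(-2.3, 9.0) for _ in range(365)]  # teams that kept one

def welch_t(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

t = welch_t(lost, kept)
print(round(t, 2))  # sign and magnitude depend on the simulated noise
```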

While I agree with the question (is the difference between keepers and droppers greater than what would be observed by random chance?), the sample sizes of the two distributions are grossly unequal, making the t-statistic a poor measure for comparison. Perhaps it would be better to use a bootstrap to evaluate the mean and variance of the difference between the 34 and 365 teams.
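The bootstrap proposed here might look like the following sketch. Again, the two groups are simulated stand-ins: the means mirror the article's figures, and the spreads are assumptions:

```python
# Bootstrap interval for the difference in mean win change between the
# two groups. Data are simulated (means match the article; 9-win spread
# is an assumption).
import random

random.seed(1)
lost = [random.gauss(-3.5, 9.0) for _ in range(34)]
kept = [random.gauss(-2.3, 9.0) for _ in range(365)]

def mean(xs):
    return sum(xs) / len(xs)

diffs = []
for _ in range(2000):
    # Resample each group with replacement, at its own size.
    a = [random.choice(lost) for _ in lost]
    b = [random.choice(kept) for _ in kept]
    diffs.append(mean(a) - mean(b))

diffs.sort()
lo, hi = diffs[int(0.025 * len(diffs))], diffs[int(0.975 * len(diffs))]
print(round(lo, 1), round(hi, 1))  # a rough 95% interval for the difference
```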

I’ve never really run into someone saying a t-test for unequal sample sizes is no good when a t-test is actually the right test by the experimental design, but then, I’m familiar with statistics without being a statistician.

Any sample of 6 WAR players is going to suffer from a huge selection bias for players that have had very lucky years, and therefore we would expect the next year sample of the sample players to experience a sharp decline in WAR, and indeed that is what we see. The question is whether the magnitude of this regression might correlate with whether that player changed teams.
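This selection effect is easy to demonstrate by simulation. The true-talent and luck spreads below are illustrative assumptions, not estimates from real data:

```python
# Simulating the selection effect: picking seasons at 6+ WAR selects
# for lucky years, so the same players regress the next season even
# though their underlying talent is unchanged. Spreads are assumptions.
import random

random.seed(2)
talents = [random.gauss(3.5, 1.5) for _ in range(5000)]  # true-talent WAR

year1 = [(t, t + random.gauss(0, 1.5)) for t in talents]  # talent + luck
stars = [(t, w) for t, w in year1 if w >= 6.0]            # selected on results

# Next year: same talent, fresh luck.
year2 = [t + random.gauss(0, 1.5) for t, _ in stars]

avg1 = sum(w for _, w in stars) / len(stars)
avg2 = sum(year2) / len(year2)
print(round(avg1, 1), round(avg2, 1))  # year-two average falls back toward talent
```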

If a team is smart, they will probably know the difference between sustainable performance and a player who has had a very lucky season. They will be more likely to sell high on a lucky player, and that player would be expected to regress more than a skilled player.

I would have thought that teams losing a star would fare better than teams keeping a star. Often times, the reason for losing a star is because there is a viable in-house alternative that can at least replicate enough of that production to justify re-allocating resources to fill other holes. When you keep a player, you’re losing that flexibility. On the other hand, you’re keeping your star player, so I guess that is the more important factor.

Sample size issues are exemplified by the difference in winning percentage of the sample with 34 and the sample with 365. We would expect both populations to have the same mean winning percentage pre-free agency/trade, but the smaller sample skews toward a smaller mean, reducing our certainty in the actual magnitude of the difference.

Another cool thing we could get from this data: what’s the payroll of teams keeping/losing their stars?

This is a nice data set, and some more metadata associated with it could lead to neat results. It might be hard with “just” 34 teams, as subdivisions of that group would quickly drop the numbers down to where noise would certainly obscure things, but playing with the cutoffs might get a bigger sample.

Interesting stuff; just a quick complaint about the structure. The bit on Bourn being the most recent example of one of these players seems out of place, plopped right in the middle of the results paragraph. It’s only a small non sequitur, one which didn’t detract that much from the piece as a whole, but it’s still irksome.

– The players who stayed averaged a 2.1 WAR drop in performance, and their teams won on average 2.3 fewer games.

– The players who left averaged a 3.3 WAR drop in performance, and their former teams averaged a 3.5 win decline.

In other words, it seems like no matter what a GM does, the fates of team and star are intertwined. :)

The first one, of course, could be a direct causal relationship. The second one is more coincidental. I guess you could say that it suggests that GMs are, on average, doing an adequate job of replacing their departing stars, not with players who are as good as the stars were, but with players who are as good as the stars project to be the following season.

I’ve seen so much written this offseason about losing X-WAR players, but doesn’t spreading that WAR over multiple players account for something? Yes, the Yanks lost 6-WAR Robbie Cano, and the total of the players they got to replace him might fall short of that WAR value, but last time I checked, Robbie only knocked himself in 27 times last year. If the WAR is spread out a bit, isn’t the sum value added bound to be greater?

1. Free agents are older, thus more likely to decline the next year, even if they were 6 win players.

2. The players that move are a much smaller sample size, meaning Figgins’s .3 WAR or whatever it was really impacts the average.

3. Maybe signing big free agents to massive contracts is dumb, so looking at the teams that don’t re-sign these players selects for smarter teams which are more likely to win games the next year than dumber teams.

Aside from the many factors listed in the comments above, did you find the change in record to be statistically significant? I look at a two-ish win difference with an N of 34 and assume it’s not significant. Similarly, I would bet that there isn’t any difference between the WAR of players who stay and go once we include dummy and interaction variables. My guess is the players that left are more likely to have had a fluky .400 BABIP season, a crazy HR/FB, or an 88% LOB% (among other “lucky” outcomes).