I don’t know if it’s ever been addressed, but when Dewan released his Fielding Bible Volume 2, his outfield arm ratings were messed up.

By construction, if you add up all the defensive stats for a position across a league, you get zero (or extremely close to it).

This is not the case with the arm ratings. You get something like +150 at RF, +100 at CF, +50 at LF (I don’t remember the exact numbers, I had added them up when the book came out a year ago).

Meaning that total plus/minus for OFers was incorrect, because the arm runs weren't actually runs above average. Comparing two players was still fine, i.e. if OFer A is five runs better than OFer B, it's going to show that. But the +10 OFer wasn't 10 runs above average; he was less than that.

RZR is simply the rate of balls in a player’s zone converted into outs. You can change this into plays and runs above/below average by comparing the player’s RZR to the league average. UZR is a rate, but it is presented as runs saved or cost compared to the average player at the position.

Let’s say, for example, we have a league average RZR at SS of .800. The player we’re looking at has 440 balls hit into his zone, and he converts .810 of them into outs. This means he’s made about 356 plays. A league average SS, however, is expected to make .800*440 = 352 plays, meaning that our player has made 356 – 352 = 4 plays above an average shortstop. Converted to runs, that’s about +3 runs saved.

This isn’t taking into account a player’s Out Of Zone (OOZ) plays, which can be done by looking at the player’s OOZ rate compared to balls hit in his zone.
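The conversion described above can be sketched in a few lines. This is just an illustration of the arithmetic, not anyone's official implementation; the runs-per-play weight of 0.75 is an assumed, ballpark linear-weight value, and real run values vary by position and batted-ball type.

```python
def rzr_runs_above_average(biz, player_rzr, league_rzr, runs_per_play=0.75):
    """Plays and runs above average from Revised Zone Rating.

    biz: balls hit into the player's zone
    player_rzr / league_rzr: rate of in-zone balls converted into outs
    runs_per_play: assumed run value of one marginal play (illustrative)
    """
    plays_made = player_rzr * biz          # plays the player actually made
    expected_plays = league_rzr * biz      # plays an average fielder makes
    plays_above_avg = plays_made - expected_plays
    return plays_above_avg, plays_above_avg * runs_per_play

# The shortstop example from above: 440 balls in zone, .810 vs .800 RZR.
plays, runs = rzr_runs_above_average(440, 0.810, 0.800)
print(round(plays, 1), round(runs, 1))  # 4.4 plays, ~3.3 runs
```

The same function could be pointed at a player's out-of-zone plays by swapping in an OOZ rate and an expected-OOZ baseline, which is the extension described below.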

I do want to point out though that there seems to be a problem on the leaderboards. If I look at 2009 outfielders I see that Nyjer Morgan appears near the top at +22. If I look at 2009 for all players then Morgan is nowhere to be found when he should be near the top. Might this glitch be related to the fact that he played for two teams last year?

That’s correct. Morgan isn’t “qualified” at either CF or LF in 2009, but for OF overall, he is. The “overall” leaderboards break out each player by individual position and don’t include “OF” overall as a position.

Do park factors come into play at all with rARM? ( I guess you could look at home/road splits but I’m guessing the sample sizes are probably already pretty small?)

For example, LF in Boston seems like a huge advantage for preventing the extra base (2nd to home, 1st to 3rd, or even 1st to home on a double). And on the opposite end, a large park may hurt (where it may be easier to score from 1st on a double). Or take Coors, where you may be playing deep and it might be easier to take an extra base on a single.

It’s unfortunate that it’s still like that. I had tried to contact him about it when I first got the book, but to no avail. As you said, it’s something that likely could be normalized on Fangraphs’ end, and if you guys were to actually do that, I think that would be amazing. While UZR is likely the superior stat, it’s still better to look at as much information as possible.
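The normalization being asked for here is just re-centering: subtract a playing-time-weighted baseline from each player's arm runs so that the position sums to zero. A minimal sketch, with invented player names, innings, and run values:

```python
def recenter_arm_runs(players):
    """players: list of (name, innings, arm_runs) tuples at one position.

    Subtracts an innings-weighted baseline so the position's total
    arm runs come out to (approximately) zero, making each value a
    true runs-above-average figure.
    """
    total_innings = sum(inn for _, inn, _ in players)
    bias_per_inning = sum(runs for _, _, runs in players) / total_innings
    return [(name, runs - inn * bias_per_inning) for name, inn, runs in players]

# Invented RF arm-runs values that sum to +20 instead of zero.
rf = [("Player A", 1400, 12.0), ("Player B", 1200, 5.0), ("Player C", 1000, 3.0)]
adjusted = recenter_arm_runs(rf)
print(round(sum(r for _, r in adjusted), 6))  # 0.0
```

Weighting the baseline by innings (rather than subtracting a flat per-player amount) keeps part-time players from absorbing the same correction as full-time ones.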

As with all the stats, there is not a single point where they switch from “unreliable” to “reliable.” Larger sample sizes mean they will be more reliable, and the larger the better.

There is, however, a point where there is more signal than noise in the data. In other words, once we have n datapoints we think a player’s true value is closer to his observed value than to the mean value. Is that what you are after? If so I would guess it’s pretty much the same as UZR…
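That crossover point can be framed with the standard shrinkage estimate, where k is the number of chances at which signal equals noise. This is a generic sabermetric device, not something the source specifies, and the k = 400 below is purely an assumed, illustrative value.

```python
def regressed_estimate(observed, league_mean, n, k=400):
    """Shrink an observed rate toward the league mean.

    reliability r = n / (n + k); once n > k there is more signal
    than noise, so the estimate sits closer to the observed value
    than to the mean. k here is an assumed, illustrative constant.
    """
    r = n / (n + k)
    return r * observed + (1 - r) * league_mean

# With n == k, the estimate lands exactly halfway between.
print(round(regressed_estimate(0.850, 0.800, 400), 3))  # 0.825
```

With a much larger sample (say n = 4000), the same call returns a value much closer to the observed .850, which is the "more signal than noise" behavior described above.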

Thinking about using this stuff for a THT article Tuesday…how is this data gathered? By BIS? And are certain ballparks not covered, or covered differently? Only 20 players have Scoops to their name from last season.

So WAR is still based on UZR defensive runs saved? There is quite a difference between +/- DRS and UZR for some players (for example, +4 for JD in +/- and +10.5 by UZR). Any chance of averaging the two to smooth things out and improve overall accuracy? (In meteorology, several models are averaged to improve forecast accuracy.)

For people with questions on the different components, here’s an explanation from the website:

Defensive Runs Saved (Runs Saved, for short) is the innovative metric introduced by John Dewan in The Fielding Bible—Volume II. The Runs Saved value indicates how many runs a player saved or cost his team in the field compared to the average player at his position. A player near zero Runs Saved is about average; a positive number indicates above-average defense, while below-average fielders post negative Runs Saved totals. There are eight components of Runs Saved:

• Plus Minus Runs Saved evaluates the fielder’s range and ability to convert a batted ball to an out.
• Earned Runs Saved measures a catcher’s influence on his pitching staff.
• Stolen Base Runs Saved gives the catcher credit for throwing out runners and preventing them from attempting steals in the first place.
• Stolen Base Runs Saved measures the pitcher’s contributions to controlling the running game.
• Bunt Runs Saved evaluates a fielder’s handling of bunted balls in play.
• Double Play Runs Saved credits infielders for turning double plays as opposed to getting one out on the play.
• Outfield Arm Runs Saved evaluates an outfielder’s throwing arm based on how often runners advance on base hits and are thrown out trying to take extra bases.
• Home Run Saving Catch Runs credits the outfielder 1.6 runs per robbed home run.
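Per the quoted definition, a player's total Runs Saved is simply the sum of the components that apply to him. A hypothetical illustration, with invented values for a made-up outfielder (outfielders get no catcher, pitcher, or double-play components):

```python
# Component names follow the list above; values are invented.
components = {
    "plus_minus_runs_saved": 8.0,
    "outfield_arm_runs_saved": 4.0,
    "home_run_saving_catch_runs": 1.6,  # one robbed homer at 1.6 runs
}
total_runs_saved = sum(components.values())
print(round(total_runs_saved, 1))  # 13.6
```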

This is awesome and I’m one who actually likes the RZR/OOZ component that Hardball Times used to show. Plus/Minus and UZR may be the industry standard now, but I still like to see if RZR/OOZ provide support.

RZR has a defined, tangible output. It tells you the rate of plays made to plays in the zone. You combine that with plays made out of zone (OOZ) and fielding %, and you’ve got a hell of a lot more information than UZR could convey.

I agree with the sentiment above…this is excellent. I also like to look at OOZ, because I think it tells you a different piece of information about players. I think Hardball Times originally separated RZR and OOZ because out-of-zone plays should have more value, and THT decided to let the user assign their own weighting to OOZ. It’s interesting to see the different profiles for players who have a high RZR ranking but low OOZ, and vice versa. I also like comparing Fielding Bible and UZR results to get a higher comfort level in the defensive ratings.

I subscribe to Bill James on-line in part because I like the defensive data, and I will continue subscribing in the future. I don’t want to punish that web site when they do a good thing like sharing defensive data with Fangraphs. (Besides, Bill James on-line is cheap.)

Glad to see the RZR numbers posted on fangraphs, Thank You!
1. Is the “runs saved” statistic an actual number, or a projection based on other stats?
2. As a classic example, Placido Polanco led the majors in UZR among second basemen in 2009, yet he was way down the list even among AL second basemen in RZR. Which is a better measure of Polly’s fielding performance vs his peers?
And here’s the modest position from whence I come: I like RZR. Number of balls hit into a player’s “zone” that he turns into outs. Okay, I can see that it has to be taken together with OOZ plays. But UZR confuses me. The explanation of those numbers makes my head spin, and that’s extremely rare for me. Any clarification using PP as an example would be appreciated.
Tigerdog

Because if you compare DPS to +/- on THT website, they’re off. In 2008, Robinson Cano posted a DPS of -13 but a +/- of -16. Every player I look at is the same. Chase Utley has a DPS of 33, but a +/- of 47.

Looking at DRS at the team level just now, I notice that teams collectively are +232 runs above average. How is this possible? Shouldn’t it come out to 0? If it should, then this inflation works out to about 7 runs per team, and must also be inflated at the player level.