Fremeau Efficiency Ratings

College football power ratings and analysis

FEI Preseason Primer Part I

by Brian Fremeau

We’re not going to get the projections for every team right.

I have to remind myself of that every offseason. But I don’t fret over teams that surprise us with unexpected runs to BCS bowl games, nor teams that crash and burn despite what the model suggested. That’s college football, and it’s awesome.

I get caught up tweaking and testing the FEI projection model not in pursuit of perfection, but in pursuit of more meaningful data. How important are data points like returning starters, quarterback reliance, and recruiting rankings? How important are data points from last season as compared with data collected over several seasons? Is the goal to project wins and losses or overall team strength? Do those two goals always go hand in hand?

I value the end-of-year FEI ratings, a measure of opponent-adjusted possession efficiency, so I build my projections with end-of-year FEI ratings as the target. Strength of schedule and mean wins are a function of the projections. That approach has not changed this year. What has changed is a bit of the input data.

For as long as I have been producing preseason projections, I have been using Program FEI as the baseline measure from which all other adjustments are made. The Program FEI ratings are calculated the same way I calculate single-season FEI, but I include five years of drive efficiency data instead of just one. Program FEI helps illustrate the trajectory of a given program (see the 25-year Program FEI charts for Alabama and its opponents below), but it also correlates very well with next-year FEI (.746), better than any other single measure I’ve tested.

To calculate a new set of ratings I'm calling "Alternate FEI," I turned to the Game Splits data I have been processing over the last two years. Game Splits represent the scoring margin components of a game: the value each unit contributed to the non-garbage-time margin of victory or defeat.

Let's take last year's BCS national championship game as an example. Alabama defeated Notre Dame 42-14, but according to my methodology, garbage time began after the Crimson Tide took a 35-0 lead in the third quarter. Which units contributed to that 35-point margin for Alabama? By breaking down the value of field position at the start of each drive, the units responsible for generating that field position, and the value generated by the offenses at the conclusion of each drive, my method attributes 26.6 points of scoring margin value to Alabama's offense, 4.7 points to its defense, and 1.7 points to its special teams. An additional 1.9 points was generated by none of these units; it reflects the fact that the Crimson Tide had one more offensive possession than the Irish at the time garbage time began. Add them all up (26.6 + 4.7 + 1.7 + 1.9 = 34.9; rounding accounts for the 0.1 discrepancy) and you get the 35-point scoring margin of the game.
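The bookkeeping above can be sketched in a few lines of code. This is purely illustrative; the dictionary keys are my own labels, not Football Outsiders' actual data format.

```python
# Illustrative sketch of the Game Splits components for the
# Alabama-Notre Dame title game; the field names are hypothetical.
alabama_splits = {
    "offense": 26.6,
    "defense": 4.7,
    "special_teams": 1.7,
    "extra_possession": 1.9,  # one additional non-garbage offensive possession
}

# The components carry rounding, so the raw sum is 34.9;
# rounding recovers the actual 35-point non-garbage margin.
margin = round(sum(alabama_splits.values()))
```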

The Game Splits for all 2012 games can be found here, along with links to 2007-2011 Game Splits. Two other columns in these tables indicate the turnover value and the field position value of each game. In the BCS championship game, Alabama had a 2.3 point advantage on turnovers, and a 2.7 point advantage on field position.

Readers familiar with FEI columns from last season know that I used the turnover, special teams, and field position data to recalculate "Revisionist Box Scores," the alternative outcome of games if those factors had been neutralized. Last season, turnovers were a deciding factor in 16.4 percent of FBS games, special teams were a deciding factor in 7.9 percent of games, and field position was a deciding factor in 10.1 percent of games. That is, the value generated by those factors exceeded the non-garbage scoring margin of the game.

This year, I'm taking the Game Splits analysis a step further. Instead of simply publishing a revisionist box score, I recalculated FEI for each neutralized environment. In the turnover-neutral environment, Alabama defeated Notre Dame by 32.7 points instead of 35 points (35 - 2.3 = 32.7). In the special teams-neutral environment, Alabama defeated Notre Dame by 33.3 points. In the field position-neutral environment, Alabama defeated Notre Dame by 32.3 points.
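The neutralized margins are straightforward subtractions, but the sign convention matters: a large enough factor advantage flips the result. A hypothetical helper (mine, not Football Outsiders code) makes that explicit:

```python
def neutral_margin(margin, factor_advantage):
    """Recompute a game's non-garbage scoring margin with one factor
    (turnovers, special teams, or field position) neutralized.
    Both arguments are from the winner's perspective; a negative
    result means the outcome flips in the neutral environment.
    (Hypothetical helper, not the author's actual code.)"""
    return round(margin - factor_advantage, 1)

# Alabama-Notre Dame: 35-point margin, 2.3-point turnover advantage
neutral_margin(35, 2.3)    # 32.7
# Texas A&M beat Alabama by 5 with a 12.2-point turnover advantage,
# so in the turnover-neutral environment Alabama "wins" by 7.2
neutral_margin(5, 12.2)    # -7.2
```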

In blowout games such as the title game, the impact is pretty insignificant. But in other games, the neutralized environment flips the outcome. Against Texas A&M, the Crimson Tide lost 27-22, a five-point margin of defeat. But Texas A&M had a turnover value advantage of 12.2 points in that game. In the turnover-neutral environment, Alabama "won" by 7.2 points.

To produce Alternate FEI ratings, I simply recalculated game efficiency according to the neutral scoring margins and made the same opponent adjustments as with FEI. The complete set of Alternate FEI ratings for 2012 can be found here. For many teams, the neutral rankings aren't dramatically different, but there are some notable exceptions. Florida State ranked No. 13 overall in FEI, but jumped to No. 6 overall in turnover-neutral FEI. Stanford ranked No. 7 overall in FEI, but fell to No. 13 overall in special teams-neutral FEI.

This wasn't just an interesting exercise: I found that the Alternate FEI ratings were another factor that would help with the annual projection adjustments.

I also investigated the way I calculate the start of non-garbage time, and which data is discarded in the FEI and Program FEI ratings. I didn’t find anything substantive enough to alter the way in which I determine garbage time versus non-garbage time, but I did unexpectedly find that previous year garbage time data has some value as it relates to next-year non-garbage time value. I can only speculate at this point as to why it has value; my colleague Bill Connelly offered a plausible suggestion that garbage time value relates in some way to team depth and the strength of second stringers who won’t make their impact until the following season.

In the end, this year’s FEI projections are still rooted in Program FEI, but are now adjusted a bit by the garbage time factor, turnover-neutral FEI, special teams-neutral FEI, and field position-neutral FEI. I have also retained factors in the projection model such as number of returning starters, five-year weighted recruiting rankings, and quarterback reliance (the percentage of total offense run through the team’s quarterback). Since 2004, the correlation of projected FEI to actual FEI is .788.

The final FEI 2013 preseason projections are published below. Readers who have been following my preseason content at ESPN Insider and ESPN the Magazine will note some changes in this latest round of projections. FEI is a little higher on Louisville and Michigan now than it was a month ago, mostly due to the inclusion of the new Alternate FEI data and garbage time factors in the projection model. In addition, these projections are based on FEI only, not on both FEI and Bill Connelly's S&P+, and therefore will differ from the projections in FOA 2013.

In addition to everything discussed so far, I have new Field Position data tables published here at Football Outsiders which can be found using the drop-down statistics menus above. Here is the 2012 Field Position table, breaking down FPA into elements such as each team's actual average starting field position and the percentage of drives from short and long field position distances. The field position tables are available for all seasons since 2007. On my site, I recently published points per drive data dating back to 2007 as well.

This is only Part I of the FEI preseason primer. Next week, we'll dig into individual game projections with all this new data at our disposal.

FEI 2013 Preseason Projections

The Fremeau Efficiency Index (FEI) rewards playing well against good teams, win or lose, and punishes losing to poor teams more harshly than it rewards defeating poor teams. FEI is drive-based, not play-by-play based, and it is specifically engineered to measure the college game.

FEI is the opponent-adjusted value of Game Efficiency (GE), a measurement of the success rate of a team scoring and preventing opponent scoring throughout the non-garbage-time possessions of a game. FEI represents a team's efficiency value over average. Strength of Schedule (SOS) is calculated as the likelihood that an "elite team" (two standard deviations above average) would win every game on the given team's schedule. SOS listed here includes future games scheduled.
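Under that definition, SOS is a product of per-game win probabilities, which is why a low SOS rating means a tough schedule. A hedged sketch, assuming the elite team's per-game win probabilities are already in hand (how each probability is derived from FEI is not shown here):

```python
def schedule_strength(elite_win_probs):
    """SOS as defined above: the likelihood that an elite team
    (two standard deviations above average) wins every game on
    the schedule, i.e. the product of its per-game win
    probabilities. The probabilities are assumed inputs."""
    likelihood = 1.0
    for p in elite_win_probs:
        likelihood *= p
    return likelihood

# A tougher slate drives the product down, so a LOW SOS rating
# corresponds to a HARD schedule:
easy = schedule_strength([0.95] * 12)   # about 0.54
hard = schedule_strength([0.80] * 12)   # about 0.07
```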

Mean Wins (FBS MW) represent the average total games a team with the given FEI rating should expect to win against its complete schedule of FBS opponents.
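Mean Wins reads as an expected value: the sum of the team's per-game win probabilities against its FBS slate. A minimal sketch, again treating the per-game probabilities (which come out of the FEI projections) as assumed inputs:

```python
def mean_wins(win_probs):
    """Mean Wins as an expectation: the sum of a team's per-game
    win probabilities against its FBS schedule. (Hypothetical
    helper; the probabilities themselves are assumed inputs.)"""
    return sum(win_probs)

# Five FBS games at these win probabilities yield 3.5 expected wins
mean_wins([0.9, 0.8, 0.7, 0.6, 0.5])
```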

Preseason projections are a function of Program FEI ratings, previous-year FEI and garbage time data, previous-year turnover-neutral, special teams-neutral, and field position-neutral FEI, returning starters, recruiting success, and quarterback reliance. As the season progresses and actual 2013 data is collected, the weight given to projected data will be reduced each week until Week 7, at which point it will be eliminated from the rankings entirely. Offensive and defensive FEI ratings will also debut in Week 7.

Comments

As the season progresses and actual 2013 data is collected, the weight given to projected data will be reduced each week until Week 7, at which point it will be eliminated from the rankings entirely.

My reading of that statement is that "Week 7" is a hard date (e.g. the seventh Saturday of the NCAA football season) that applies to every team at the same time.

If my reading is accurate, have you considered a per-team alternative to the effect of "after the team has played seven games against FBS competitors"? My suspicion is that it would keep some jitter out of mid-season rankings for teams with early-season FCS opponents or byes.

Good suggestion of something to look into. I've tried several variations on how to reduce the weight of preseason data in the formula over the early weeks of the season, but there are always others to consider.

Georgia State opponent graph is only showing Alabama's FEI.
Also, the SOS seems intuitively backwards. A higher FEI rating should make a schedule harder, which it seems should correspond to a 'higher' ranking (lower number, 1 = highest). As it is, when I read that a team has an SOS rank in the 100s, I think "oh, that's an easy schedule."

When a team has an SOS rank in the 100s, it is an easy schedule. I think the confusion might be in misinterpreting what the SOS rating means. It represents the likelihood that an elite team would go undefeated against the entire schedule, so a low SOS rating is a tough schedule.