Conference Comparison

I was doing some thinking today, and I realized that my post about the ACC earlier this week was a bit unfair. I threw out a bunch of stats that seemed to put the conference in a negative light without giving comparative stats for the other BCS leagues. I mean, it makes sense that a conference would have a terrible record against out-of-conference top-10 teams; after all, meetings between two top-10 teams outside of conference play are rare before bowl season, and that skews the stat heavily in favor of the top-10 opponents. That just didn’t feel right today, although it certainly did feel good to put that post together at the time.

I decided to sit down and come up with a fair way to judge how good the conferences are right now, one that accounts for differences in game scheduling. After all, if the worst ACC teams play the majority of the league’s out-of-conference games against other BCS teams* or top-10 teams, then there would be an awfully good reason for the stats to be as they were. I pondered whether to use different stats than in the original ACC post, but the more I thought about it, the more those three categories made sense.

Category 1: Performance vs. BCS Teams in Prior Year: This category gives a measure of the immediate past and includes performance against any BCS team, from Duke up to USC. Everyone seems to agree that when ranking conferences right now (at whatever point in time “right now” is for the discussion), you must include the results of the previous year. Well, done and done. This stat moves the fastest, because it changes year by year.

Category 2: Performance vs. Top-10 Opponents since 2002: This category gives an intermediate time frame and narrows the scope to performance against the best teams of each year. When ranking conferences, consistently knocking off top-10 teams out of conference should definitely give a league a boost, and while there can be flukes, this category of win should still be considered. This stat moves a little slower year to year, since it includes several years’ worth of games.

Category 3: Performance in BCS Bowls: This category gives a longer-term scope and focuses solely on a conference’s very best against the very best of that year. Conference champions are usually (no, not always) the best team in a conference, and if there’s another significantly good team in a conference it usually gets an at-large bid. This is plain and simple: when it comes to elite vs. elite, whose is better? This stat moves the slowest since no conference can play more than two games a year in this category, per BCS rules.

In order to keep score, I devised a system that gives you three things:

It accounts for the quality of the conference’s teams that are playing the games in question compared to the opponents they are facing in those games. I do that by calculating what I call the Strength Ratio, which is the aggregate win percentage of opponents divided by the aggregate win percentage of the conference’s teams. A score of 1 means all games were evenly matched; a score above 1 means the opponents overall were better than the conference’s teams; a score below 1 means the opponents overall were lesser than the conference’s teams. To calculate the aggregate win percentage, a team’s record is counted once for every game it plays, because I’m counting by games, not by team. For example, in counting the ACC’s aggregate win percentage in Category 1, Duke’s two games mean its 0-12 record is factored in twice, while Wake Forest’s four games mean its 11-3 record counts four times. This ratio does not account for the number of games played.
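The strength ratio might be sketched like this. This is a hypothetical Python illustration, not the author's actual calculation; the function and variable names are mine, and I'm assuming "aggregate win percentage" means the per-game average of the records involved:

```python
# Sketch of the Strength Ratio. Each entry is one game, pairing the
# conference team's final record with its opponent's final record, so a
# team that plays N games in the category is counted N times.
def win_pct(wins, losses):
    return wins / (wins + losses)

def strength_ratio(games):
    """games: list of ((team_wins, team_losses), (opp_wins, opp_losses))."""
    team_pct = sum(win_pct(*team) for team, _ in games) / len(games)
    opp_pct = sum(win_pct(*opp) for _, opp in games) / len(games)
    return opp_pct / team_pct  # >1: opponents stronger overall; <1: weaker

# Toy data echoing the Duke/Wake Forest example: an 0-12 record counted
# twice, an 11-3 record counted four times, all against 6-6 opponents.
games = [((0, 12), (6, 6))] * 2 + [((11, 3), (6, 6))] * 4
print(strength_ratio(games))  # a bit below 1: opponents slightly weaker
```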

Next, I award Performance Points. These are based on whether the outcome of a game was what it should have been. I use end-of-year record to determine whether a team is better, lesser, or a push** relative to a given opponent. This may be overly simplistic, as it doesn’t differentiate between a 13-0 record in the SEC (like 2004 Auburn) and a 13-0 record in the WAC (like 2006 Boise State). However, it does even things out, and it is a useful and easy-to-understand rubric. If a better team wins or a lesser team loses, zero points are awarded. That is the expected outcome of the game, so there’s no reason to penalize the lesser team for scheduling above its head or to reward the better team for winning a game it should have won. For a lesser team winning, two points are awarded; for a better team losing, two points are deducted (both were originally three; see the edit note below). In a game between two pushes, the winner gains one point and the loser loses one. If a team finishes with zero performance points for a category, that’s not bad; it just means that overall, its games played out the way they should have. Having negative points would be bad. This, by the way, is the metric that accounts for the number of games played but not overall strength (as the strength ratio does).
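A minimal sketch of the performance-point rule for a single game, using the two-point scale. The names and the tie-breaking detail are my assumptions, not the author's code:

```python
# Performance Points for one game. Records are final-season (wins, losses).
# Identical records are a push; otherwise win percentage decides who is
# "better" (my reading of the footnote; a percentage tie with different
# records is treated here as the team being lesser, which is an assumption).
def game_points(team_rec, opp_rec, team_won):
    if team_rec == opp_rec:                 # push vs. push
        return 1 if team_won else -1
    pct = lambda r: r[0] / (r[0] + r[1])
    team_is_better = pct(team_rec) > pct(opp_rec)
    if team_won:
        return 0 if team_is_better else 2   # expected win / upset bonus
    return -2 if team_is_better else 0      # upset penalty / expected loss

print(game_points((0, 12), (11, 3), True))   # a big upset earns 2
print(game_points((11, 3), (0, 12), False))  # losing it costs -2
```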

Finally, I use the strength ratio and performance points to create a Score for the category. If the conference has positive performance points, then the points are multiplied by the strength ratio. This rewards conferences for having positive points while facing tougher competition (as signaled by a strength ratio over 1) and adjusts downward the scores of conferences that have positive points against weaker competition (as signaled by a strength ratio less than 1). On the other side, if the conference has negative performance points, then the points are divided by the strength ratio. This lessens the blow for a conference that has negative points while facing tougher competition (because dividing a negative number by a positive number that’s greater than one moves it upward closer to zero) and heavily penalizes conferences that rack up negative points against weaker competition (because dividing a negative number by a positive number that’s less than one moves it downward away from zero).
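That asymmetric use of the ratio can be written out directly. This is a sketch under the definitions above; the function name is mine:

```python
# Category Score: positive points are multiplied by the strength ratio,
# negative points are divided by it.
def category_score(points, ratio):
    return points * ratio if points >= 0 else points / ratio

# Positive points against a tough schedule are worth more...
print(category_score(4, 1.25))    # 5.0
# ...negative points against a tough schedule hurt less...
print(category_score(-4, 1.25))   # -3.2
# ...and negative points against a weak schedule hurt more.
print(category_score(-4, 0.8))    # -5.0
```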

Once the category scores have been calculated, a Final Score for the conference is determined. It is a simple average of the three category scores. Ideally, comparing the six final scores should give an indication as to which conference is best right now. Of course, I do reserve the right to adjust the scoring if I get any results that don’t pass the smell test. The most obvious example is that I might adjust the +/- 3 point situations down to 2. We’ll see, but I’m satisfied with this scale for now. EDIT: The scale has been adjusted down to +/- two points to keep the scores closer to each other. I ended up with essentially two outliers, and I don’t want any. Now, the results seem much more realistic.
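For completeness, the final score under this scheme is just the arithmetic mean of the three category scores. The numbers below are toy values, not real conference results:

```python
# Final Score: simple average of the three category scores.
cat_scores = [4.5, -3.0, 1.5]          # hypothetical Category 1-3 scores
final = sum(cat_scores) / len(cat_scores)
print(final)  # 1.0
```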

Throughout all of this explanation, I have accounted for performance against good competition over time while adjusting for differences in schedules, level of competition, number of games played against good competition, and quality differences in individual matchups. I think that’s not too bad for a first try at setting up a mathematical model to rank the six conferences. At the time of this writing, I have completed only the ACC and Big 12, so I don’t know how this will end up or where these numbers will lead.

My expectation is that three tiers will emerge: the top shelf consisting of the SEC and Big Ten, the middle area with the Big 12 and Pac 10, and the bottom rung with the ACC and Big East. I could be absolutely wrong though, and that’s part of what makes all of this fun.

*A BCS Team is defined as any team in a BCS conference plus Notre Dame. This applies to specific years, so for example 2002-2003 Miami is a Big East team, and 2004-2006 Miami is an ACC team; also, pre-2005 Cincinnati is not a BCS team.

**A push is when two teams in a game have identical end-of-year records. The better and lesser titles are determined by win percentage in order to differentiate between teams that played different numbers of games in the same year.