Varsity Numbers: Revisiting 2005-06

by Bill Connelly

At the end of the Top 100 series, I mentioned that it is time to look ahead to 2010. That was only a half-truth, apparently. While we have been focusing on 2010 in ESPN columns and recent podcasts, I am once again using this space to look to the past. This offseason, we added the 2005 and 2006 seasons to the play-by-play database. You can now find S&P+ rankings for each of the last five seasons in the Statistics section of FO, so let's briefly revisit those two seasons.

2005

In terms of dramatic poll shifts and title game uncertainty, 2005 was one of the least interesting seasons in college football history. USC and Texas began the season ranked first and second in the AP Poll, and they finished the regular season in the same positions. The only change at the top came when Texas beat USC for the national title, knocking the Trojans to No. 2. This season was one of the easier ones to figure out. There was a clear upper tier (Texas, USC), a relatively clear second tier of about six to eight teams, and a big mush of teams after that. Advocates of a "flexible" playoff system -- one that can choose a different number of participants each year -- can very clearly use this season as justification. At the end of the season, two teams had very clearly justified their claims for a national title bid. They were a step ahead of everybody else. As we will see, 2006 was a completely different story.

F/+

We can always find areas where the S&P+ and FEI ratings disagree, but when it comes to the nation's top teams in 2005, the variance was minimal. Of the teams that ended up in the F/+ (a merging of the FEI and S&P+ ratings) Top 15, none finished lower than 18th in either system, and the systems never disagreed by more than six spots on any one team (Miami, who finished sixth in S&P+ and 12th in FEI). Only one finished lower than 17th in the final AP poll.
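Measuring "spots" of disagreement between the two systems is mechanical enough to sketch. Below is an illustrative snippet using a handful of real 2005 values from the table that follows; the ranking step is generic, and nothing here is the actual F/+ combination formula.

```python
# Illustrative sketch: how far two rating systems disagree, in "spots,"
# for the teams both systems rate. The four ratings below are real 2005
# values from the table; everything else is generic scaffolding.
sp_plus = {"Texas": 278.8, "USC": 278.2, "Miami": 251.5, "LSU": 245.4}
fei = {"Texas": .342, "USC": .326, "Miami": .196, "LSU": .241}

def ranks(ratings):
    """Map each team to its 1-based rank, highest rating first."""
    ordered = sorted(ratings, key=ratings.get, reverse=True)
    return {team: i + 1 for i, team in enumerate(ordered)}

sp_rk, fei_rk = ranks(sp_plus), ranks(fei)
# Largest disagreement, in spots, between the two systems
worst = max(sp_rk, key=lambda t: abs(sp_rk[t] - fei_rk[t]))
print(worst, abs(sp_rk[worst] - fei_rk[worst]))
```

Within this four-team sample, S&P+ puts Miami third and FEI puts it fourth; over the full Top 15, the same calculation is what produces the six-spot Miami gap mentioned above.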

2005 F/+ Top 25 (Final)

| Rk | Team | F/+ | S&P+ | S&P+ Rk | FEI | FEI Rk | Final AP Rk |
|----|------|-----|------|---------|-----|--------|-------------|
| 1 | Texas (13-0) | +36.8% | 278.8 | 1 | .342 | 1 | 1 |
| 2 | USC (12-1) | +35.9% | 278.2 | 2 | .326 | 2 | 2 |
| 3 | Ohio State (10-2) | +31.3% | 268.5 | 3 | .283 | 3 | 4 |
| 4 | Penn State (11-1) | +29.7% | 264.4 | 4 | .272 | 4 | 3 |
| 5 | Virginia Tech (11-2) | +27.8% | 258.8 | 5 | .262 | 5 | 7 |
| 6 | LSU (11-2) | +23.4% | 245.4 | 8 | .241 | 6 | 6 |
| 7 | Miami (9-3) | +22.7% | 251.5 | 6 | .196 | 12 | 17 |
| 8 | Alabama (10-2) | +21.6% | 246.6 | 7 | .199 | 11 | 8 |
| 9 | Georgia (10-3) | +21.1% | 239.2 | 9 | .226 | 7 | 10 |
| 10 | Notre Dame (9-3) | +20.6% | 237.4 | 12 | .224 | 8 | 9 |
| 11 | Michigan (7-5) | +20.1% | 237.6 | 11 | .215 | 9 | 28 |
| 12 | Auburn (9-3) | +20.0% | 237.3 | 13 | .214 | 10 | 14 |
| 13 | Florida (9-3) | +17.1% | 231.7 | 16 | .184 | 14 | 12(t) |
| 14 | West Virginia (11-1) | +17.1% | 229.7 | 18 | .193 | 13 | 5 |
| 15 | Oregon (10-2) | +16.3% | 230.0 | 17 | .177 | 15 | 12(t) |
| 16 | Oklahoma (8-4) | +16.3% | 238.4 | 10 | .134 | 23 | 22 |
| 17 | Boston College (9-3) | +16.3% | 233.0 | 15 | .161 | 19 | 18 |
| 18 | Clemson (8-4) | +15.1% | 226.3 | 20 | .170 | 18 | 21 |
| 19 | Louisville (9-3) | +14.5% | 233.4 | 14 | .123 | 26 | 19 |
| 20 | Iowa (7-5) | +14.3% | 223.1 | 23 | .170 | 17 | 31 |
| 21 | Florida State (8-5) | +13.8% | 223.5 | 22 | .158 | 20 | 23 |
| 22 | Texas Tech (9-3) | +13.2% | 222.8 | 24 | .149 | 22 | 20 |
| 23 | TCU (11-1) | +12.9% | 217.7 | 30 | .170 | 16 | 11 |
| 24 | Minnesota (7-5) | +12.2% | 225.4 | 21 | .116 | 28 | NRV |
| 25 | Wisconsin (10-3) | +12.0% | 216.8 | 34 | .157 | 21 | 15 |

Three teams finished ranked in the AP poll but outside the F/+ Top 25: UCLA (16th in AP, 32nd in F/+), Nebraska (24th in AP, 33rd in F/+), and California (25th in AP, 36th in F/+). Clearly the biggest disagreement between AP voters and Football Outsiders' ratings came with the Big Ten. Six Big Ten schools finished among the F/+ Top 25, three of which were left out of the AP's Top 25. Michigan was the biggest outlier -- they finished 11th in F/+ (9th in FEI, 11th in S&P+) but just 28th via the voters. Meanwhile, Minnesota snuck into 24th place according to F/+ but did not receive a single vote ("NRV") from the AP.

Box Score of the Year: Texas 41, USC 38

Today we'll also look at the Varsity Numbers Box Score for the 2005 and 2006 national title games.

Close %: 100%

| | Texas | USC |
|---|---|---|
| Field Position % | 44.7% | 46.3% |
| Leverage % | 77.6% | 78.1% |
| TOTAL | | |
| EqPts | 36.5 | 33.1 |
| Close Success Rate | 60.5% | 53.7% |
| Close PPP | 0.48 | 0.40 |
| Close S&P | 1.085 | 0.941 |
| RUSHING | | |
| EqPts | 23.9 | 19.3 |
| Close Success Rate | 61.1% | 52.6% |
| Close PPP | 0.66 | 0.51 |
| Close S&P | 1.274 | 1.035 |
| Line Yards/carry | 3.70 | 3.90 |
| PASSING | | |
| EqPts | 12.6 | 13.8 |
| Close Success Rate | 60.0% | 54.6% |
| Close PPP | 0.32 | 0.31 |
| Close S&P | 0.916 | 0.860 |
| SD/PD Sack Rate | 0.0% / 0.0% | 6.9% / 6.7% |
| STANDARD DOWNS | | |
| Success Rate | 66.1% | 59.4% |
| PPP | 0.51 | 0.59 |
| S&P | 1.166 | 1.089 |
| PASSING DOWNS | | |
| Success Rate | 41.2% | 33.3% |
| PPP | 0.39 | 0.10 |
| S&P | 0.806 | 0.429 |
| TURNOVERS | | |
| Number | 1 | 2 |
| Turnover Pts | 4.2 | 8.0 |
| Turnover Pts Margin | +3.8 | -3.8 |
| Q1 S&P | 0.647 | 0.711 |
| Q2 S&P | 1.338 | 0.663 |
| Q3 S&P | 1.108 | 1.381 |
| Q4 S&P | 1.078 | 1.163 |
| 1st Down S&P | 1.119 | 1.030 |
| 2nd Down S&P | 1.265 | 0.789 |
| 3rd Down S&P | 0.438 | 0.915 |
| Projected Pt. Margin | +7.2 | -7.2 |
| Actual Pt. Margin | +3 | -3 |

(As a refresher, "Field Position %" signifies the percentage of a team's plays that took place in their opponent's field position, while "Leverage %" refers to the percentage of a team's plays that took place on Standard Downs instead of Passing Downs. On average, anything over 40 percent or so is good when it comes to Field Position %, while 70 percent seems to be the general average for Leverage %.)
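The two denominators defined in that refresher are simple shares of a team's play count, which a short sketch can make concrete. The play list below is invented for illustration, and "standard down" is a simplified stand-in for Football Outsiders' actual down-and-distance definitions.

```python
# Sketch of the two box-score context stats described above, computed from
# a toy play list. Each play records distance from the opponent's goal and
# whether it came on a standard down (a simplified stand-in for FO's real
# down-and-distance buckets).
plays = [
    {"yards_to_goal": 45, "standard_down": True},
    {"yards_to_goal": 62, "standard_down": True},
    {"yards_to_goal": 38, "standard_down": False},
    {"yards_to_goal": 20, "standard_down": True},
]

# Field Position %: share of plays run in the opponent's half of the field
field_pos_pct = sum(p["yards_to_goal"] < 50 for p in plays) / len(plays)
# Leverage %: share of plays run on standard downs instead of passing downs
leverage_pct = sum(p["standard_down"] for p in plays) / len(plays)

print(f"Field Position %: {field_pos_pct:.1%}")  # 75.0%
print(f"Leverage %: {leverage_pct:.1%}")         # 75.0%
```

Against the benchmarks in the refresher (roughly 40 percent for Field Position %, roughly 70 percent for Leverage %), this toy team would look comfortably above average on both counts.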

This game could not have been closer. Heading into Texas' final drive, USC had done enough to win, both on the scoreboard and in the box score. Vince Young changed that.

Outside of the final two drives, the most damage to USC came on passing downs. Despite the presence of Matt Leinart, Dwayne Jarrett, and the other talented receivers, the Trojans managed only a very pedestrian 0.429 S&P on passing downs, while Young was consistently able to get Texas out of similar jams. Texas also recovered four of the game's five fumbles. That helps.

The game had more plot twists than any single game deserves to have. The first quarter was mostly a feeling-out process, with USC scoring after Texas' Aaron Ross muffed a punt. Both defenses stopped fourth-down attempts, and Texas recovered Reggie Bush's ill-advised lateral attempt after a long gain into the Longhorns' red zone. Texas got rolling on offense in the second quarter and took a 16-10 lead into halftime.

From there, it was a fireworks show. Texas punted on the first drive of the second half, and here are all the drives that followed: USC touchdown, Texas touchdown, USC touchdown, missed Texas field goal, USC touchdown, Texas field goal, USC touchdown, Texas touchdown, USC turnover on downs, Texas touchdown. It really was one of the most exciting games college football has seen, and in the end, the best team truly did win. By a nose.

2006

The 2006 season appeared to be shaping up similarly to 2005, with two teams virtually running the table. Like USC, Ohio State began the season ranked No. 1 and stayed that way until the national title game. Meanwhile, Michigan began the season ranked just 14th but assumed the No. 2 spot by mid-October. Other teams thought to be contenders took turns tumbling -- Notre Dame, Auburn, USC, and Florida all lost unexpectedly, leaving the Buckeyes and Wolverines to coast to a No. 1 vs. No. 2 showdown on November 18. In the biggest game the rivalry had seen, Ohio State held off visiting Michigan by three. What followed was one of the most annoying poll developments I can remember.

One thing computer ranking systems can do that human polls typically can't is avoid bias. When No. 2 Michigan lost to No. 1 Ohio State by just three points in Columbus, the Wolverines all but verified that they were indeed deserving of the No. 2 ranking. They had knocked off all comers, and they had lost to the only team ranked above them by a margin that suggests the result might have flipped in Ann Arbor. But thanks both to some politicking on the part of Urban Meyer and to the simple fact that pollsters can't help punishing teams after losses, even ones as understandable and justifiable as this one, Florida moved into the No. 2 spot in the rankings. If Ohio State and Michigan had truly been the top two teams in the country before they played each other, they had done nothing but back that up. Yet Florida made the championship game anyway.

And, of course, Florida then wiped the floor with Ohio State. Notice I didn't say Ohio State and Michigan were the top two teams -- it's just that if you thought they were before they played, then that epic game in Columbus did nothing but verify what you were already suspecting. Consistency is all I am requesting.

F/+

Ohio State clearly ranked first in the F/+ rankings heading into the title game, ahead of a cluster of teams (Florida, LSU, USC, and maybe Michigan or Louisville) fighting for second. And after the massacre that was the BCS Championship Game in Arizona, Florida quite obviously blazed ahead.

2006 F/+ Top 25 (Final)

| Rk | Team | F/+ | S&P+ | S&P+ Rk | FEI | FEI Rk | Final AP Rk |
|----|------|-----|------|---------|-----|--------|-------------|
| 1 | Florida (13-1) | +33.3% | 262.5 | 2 | .320 | 1 | 1 |
| 2 | Ohio State (12-1) | +31.1% | 269.5 | 1 | .241 | 5 | 2 |
| 3 | LSU (11-2) | +30.9% | 262.4 | 3 | .273 | 3 | 3 |
| 4 | USC (11-2) | +30.3% | 252.7 | 6 | .311 | 2 | 4 |
| 5 | Louisville (12-1) | +28.3% | 256.6 | 4 | .250 | 4 | 6 |
| 6 | Michigan (11-2) | +25.7% | 255.9 | 5 | .203 | 12 | 8 |
| 7 | West Virginia (11-2) | +24.6% | 247.1 | 8 | .226 | 6 | 10 |
| 8 | BYU (11-2) | +24.4% | 248.7 | 7 | .213 | 9 | 16 |
| 9 | Rutgers (11-2) | +22.2% | 246.1 | 10 | .182 | 18 | 12 |
| 10 | Virginia Tech (10-3) | +22.1% | 247.1 | 9 | .175 | 19 | 19 |
| 11 | Arkansas (10-4) | +21.7% | 239.7 | 12 | .206 | 10 | 15 |
| 12 | Boise State (13-0) | +21.1% | 238.2 | 13 | .201 | 13 | 5 |
| 13 | California (10-3) | +20.7% | 232.1 | 16 | .225 | 7 | 14 |
| 14 | Oklahoma (11-3) | +20.6% | 236.8 | 14 | .198 | 14 | 11 |
| 15 | Tennessee (9-4) | +19.6% | 229.6 | 19 | .216 | 8 | 25 |
| 16 | Boston College (10-3) | +19.3% | 230.7 | 17 | .203 | 11 | 20 |
| 17 | Texas (10-3) | +17.6% | 227.9 | 21 | .185 | 17 | 13 |
| 18 | TCU (11-2) | +17.5% | 243.2 | 11 | .104 | 33 | 22 |
| 19 | Auburn (11-2) | +17.4% | 224.9 | 25 | .196 | 15 | 9 |
| 20 | Georgia Tech (9-5) | +16.1% | 229.7 | 18 | .145 | 25 | 31 |
| 21 | Hawaii (11-3) | +15.9% | 233.7 | 15 | .119 | 30 | 26 |
| 22 | Clemson (8-5) | +15.6% | 223.3 | 28 | .168 | 20 | NRV |
| 23 | Notre Dame (10-3) | +15.5% | 224.8 | 26 | .159 | 23 | 17 |
| 24 | South Carolina (8-5) | +15.4% | 226.1 | 24 | .149 | 24 | NRV |
| 25 | Georgia (9-4) | +15.1% | 215.7 | 38 | .196 | 16 | 23 |

Whereas 2005 saw quite a bit of agreement between S&P+ and FEI, 2006 had a few examples of larger differences. Michigan ranked fifth in S&P+, but only 12th in FEI. Meanwhile, S&P+ favored Rutgers and TCU, ranking them tenth and 11th, respectively, while FEI held them back at 18th and 33rd. FEI favored California (7th to just 16th in S&P+) and Tennessee (8th, compared to 19th in S&P+).

Notice somebody missing from the above F/+ Top 25? That would probably be Wisconsin, who went an outstanding 12-1 in 2006 and finished seventh in the AP, but ranked just 26th in F/+. How does a one-loss team from a major conference (one that had been underrated just one season earlier) end up outside of a computer ranking system's Top 25? Easy: by not playing anybody. Somehow Wisconsin managed to play just one ranked opponent (Michigan), losing to them by two touchdowns. They beat a very good Arkansas team in the Capital One Bowl, but otherwise they took full advantage of the fact that the Big Ten was quite top-heavy, and they didn't have to play the toppermost team (Ohio State). Here is a list of their victims:

No. 30 Penn State (13-3)

No. 43 Iowa (24-21)

No. 54 Minnesota (48-12)

No. 55 Purdue (24-3)

No. 68 Northwestern (41-9)

No. 72 Illinois (30-24)

No. 88 Indiana (52-17)

No. 101 San Diego State (14-0)

No. 109 Bowling Green (35-14)

No. 119 Buffalo (35-3)

FCS Western Illinois (34-10)

They went 1-1 against F/+ Top 25 teams and feasted on a slate of teams mostly from the 60th percentile or lower. That is basically less than what Boise State accomplishes in a given year, no?

Regardless, Madison is a fun place to be, especially when the football team is winning, so I am thinking they were too busy enjoying their Capital (their Fest is perhaps the best beverage I have ever tasted) and jumping around to worry about what a computer had to say about them.

Other interesting differences between the AP voting and the F/+ ratings: Wake Forest finished 18th in the AP, but just 33rd in F/+. Oregon State (21st AP, 30th F/+) and Penn State (24th AP, 34th F/+) suffered similar fates.

One other thing to note about 2006: This is when the SEC began its recent surge. The Big Ten could actually have made a claim for having the strongest overall conference in 2005 (six teams in the F/+ Top 25), but the SEC had two of the top three and seven of the top 25 in 2006.

Box Score of the Year: Florida 41, Ohio State 14

Ohio State had certainly proven themselves the class of college football throughout the 2006 regular season, but unfortunately for them, this wasn't 1957, when final polls were registered before the bowls. Instead, they had one more game to play. Between Troy Smith losing his form on the post-Heisman banquet circuit and Ted Ginn Jr. injuring himself one spectacular play in (he was hurt in the touchdown celebration), the game, to say the least, did not go their way.

Close %: 57.3%

| | Florida | Ohio St. |
|---|---|---|
| Field Position % | 61.3% | 21.6% |
| Leverage % | 71.3% | 59.5% |
| TOTAL | | |
| EqPts | 28.6 | 3.7 |
| Close Success Rate | 45.5% | 34.8% |
| Close PPP | 0.44 | 0.13 |
| Close S&P | 0.893 | 0.479 |
| RUSHING | | |
| EqPts | 14.7 | 5.9 |
| Close Success Rate | 31.6% | 58.3% |
| Close PPP | 0.43 | 0.43 |
| Close S&P | 0.746 | 1.016 |
| Line Yards/carry | 3.22 | 4.21 |
| PASSING | | |
| EqPts | 14.0 | -2.2 |
| Close Success Rate | 56.0% | 9.1% |
| Close PPP | 0.45 | -0.20 |
| Close S&P | 1.005 | -0.108 |
| SD/PD Sack Rate | 0.0% / 7.1% | 22.2% / 30.0% |
| STANDARD DOWNS | | |
| Success Rate | 43.9% | 36.4% |
| PPP | 0.42 | 0.19 |
| S&P | 0.854 | 0.558 |
| PASSING DOWNS | | |
| Success Rate | 30.4% | 6.7% |
| PPP | 0.22 | -0.04 |
| S&P | 0.520 | 0.028 |
| TURNOVERS | | |
| Number | 0 | 2 |
| Turnover Pts | 0.0 | 9.4 |
| Turnover Pts Margin | +9.4 | -9.4 |
| Q1 S&P | 1.224 | 0.165 |
| Q2 S&P | 0.592 | 0.680 |
| Q3 S&P | 0.363 | 0.231 |
| Q4 S&P | 0.777 | -0.080 |
| 1st Down S&P | 0.538 | 0.275 |
| 2nd Down S&P | 0.890 | 0.749 |
| 3rd Down S&P | 0.861 | 0.005 |
| Projected Pt. Margin | +34.3 | -34.3 |
| Actual Pt. Margin | +27 | -27 |

Ginn's kickoff return gave Ohio State an immediate 7-0 lead, but Florida would score 41 of the game's final 48 points. Ohio State actually ran the ball fairly well -- Antonio Pittman averaged 6.2 yards per carry and scored a touchdown -- but the game got out of hand so quickly that the running game was useless. And including sacks, Ohio State's passing game (which helped Troy Smith win the Heisman that season) actually produced negative EqPts.
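One way to read these box scores: the "Projected Pt. Margin" line appears to be simply the EqPts differential plus the turnover points margin, and the numbers in both title-game tables are consistent with that reading. A quick arithmetic check:

```python
# Checking the apparent relationship between box-score lines:
# projected margin = (own EqPts - opponent EqPts) + turnover points margin.
# All input values come straight from the two box scores in this column.
def projected_margin(eqpts_for, eqpts_against, turnover_pts_margin):
    return round(eqpts_for - eqpts_against + turnover_pts_margin, 1)

# Texas 41, USC 38: 36.5 vs. 33.1 EqPts, +3.8 turnover points for Texas
print(projected_margin(36.5, 33.1, 3.8))   # 7.2, matching the table
# Florida 41, Ohio State 14: 28.6 vs. 3.7 EqPts, +9.4 for Florida
print(projected_margin(28.6, 3.7, 9.4))    # 34.3, matching the table
```

In both games the actual margin fell short of the projection (3 vs. 7.2, 27 vs. 34.3), which is the usual pattern when a team dominates between the 20s more than on the scoreboard.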

After this game, a lot was made of Florida's (and, subsequently, the entire SEC's) extreme athleticism advantage over Ohio State and the Big Ten. While Florida certainly had an edge there, the advantage was truly pronounced in only one matchup: Florida's defensive ends simply massacred both Smith and Ohio State's tackles. The Buckeyes' biggest issue on offense that season had been pass protection. They ranked just 42nd in Passing Downs Adj. Sack Rate and 35th in Standard Downs Adj. Sack Rate. Florida took advantage of that relative weakness in a major way, and without Ginn to counteract Florida's speed, Ohio State had no shot.

In the end, though the goings-on in the late-season polls were a little shady, the right team likely won the national title. However, unlike 2005, a lot of teams staked a claim for being in the nation's top tier, including Louisville. In a "flexible" playoff system, anywhere between four (Ohio State, Boise State, Florida, Louisville) and eight teams could have staked a claim for a spot in the playoffs. Although, if Florida had played like they did against Ohio State, they'd have taken out any and every opponent that stood in their way.

Random Golf Clap

First, I applaud the good folks at Sports-Reference.com for putting together a wonderful and extremely user-friendly College Football page.

Next, I also tip the proverbial cap to the good folks at the SB Nation blog, Dawg Sports, who put together a very thoughtful two-part debate (Part I, Part II) on the topic of over-signing and grayshirting, a growing issue in today's college football arms race. There are no easy conclusions, but the conversation is worth having.

Random Mini-Rant

As a numbers curmudgeon, nothing drives me crazier than offseason momentum. Some teams manage to get "hot" without an actual game being played. Each summer the hot teams are easy to pick out. In 2008, Clemson, Texas Tech, and Georgia gained steam throughout the offseason, with one pundit ranking them high and others following suit. In 2009, it was Oklahoma State and Ole Miss. In 2010, we reached a new level of offseason momentum. ESPN.com's David Ubben discussed Texas A&M recently, calling them the Big 12's "hot team," with nary a mention of Nebraska, who had for most of this offseason been the country's hot team. What this suggests is that Nebraska has either a) actually begun to lose momentum, again without a single game being played, or b) been hot so long that they're not even seen as a trendy team anymore. Either way, with each passing season, the time it takes for conventional wisdom to develop and change shrinks (as much as I love Twitter, clearly its proliferation has not helped matters in this regard). Like Doomsday, it adapts and grows stronger and harder to defeat. Be afraid.

At War With the Mystics, Flaming Lips
Boys and Girls in America, The Hold Steady
Drowaton, The Starlight Mints
The Dusty Foot Philosopher, K'naan
I Am Not Afraid of You and I Will Beat Your Ass, Yo La Tengo
The Loon, Tapes 'n Tapes
Modern Times, Bob Dylan
Pearl Jam, Pearl Jam
Post-War, M. Ward
Sam's Town, The Killers

Yes, Sam's Town. Deal with it.

While 2005 may have produced one of the best national title games ever, 2006 wins the music battle, hands down. The next ten albums from 2006 (including Springsteen's We Shall Overcome: The Seeger Sessions, Jenny Lewis' Rabbit Fur Coat, Lupe Fiasco's Lupe Fiasco's Food & Liquor, Nas' Hip Hop Is Dead, J Dilla's Donuts, the Dixie Chicks' Taking the Long Way, and even Clipse's Hell Hath No Fury and Mos Def's half-assed True Magic) might have been better than the best-of list from 2005.

Closing Thoughts

The first FBS game of the 2010 season is in just 20 days, people. Twenty! After an offseason of realignment, arrests, agents, injuries, and general annoyance (coming on the heels of what seemed like a disappointing 2009 season, no less), have we ever been more ready for kickoff than this?

Posted by: Bill Connelly on 13 Aug 2010

20 comments, Last at 15 Aug 2010, 4:41pm by Bill Connelly

Comments

Glad to have college football heating up again! I love your emphasis on matchups determining the outcome of the FL-OSU game. Too often people who come here to examine the college rankings or the DVOA ignore specific matchups and just look at the overall rankings. I love breaking things down to battle vs. battle and try to forecast if the coaches will be able to exploit their advantages or gameplan their disadvantages. Great stuff!

I hate remembering how Ohio State was trounced by Florida for three reasons:

1) I love the Buckeyes and this was the most embarrassing loss I can remember.
2) It gave rise and national prominence to the stupid "S-E-C" chant.
3) It gets lumped together with the next season's LSU game, which was a much more competitive game largely turned by youthful mistakes by the Buckeyes.

does seem too harsh, though I'd agree that the final AP rank was over-rating them.

"They went 1-1 against F/+ Top 25 teams and feasted on a slate of teams mostly from the 60th percentile or lower. That is basically less than what Boise State accomplishes in a given year, no?"

Actually, I'd say it's reasonably comparable to what Boise achieved in 2008, to take a good example (Wisc was less dominant and didn't have a nail-biter loss, but schedules seem reasonably comparable at first glance). Was 2008 Boise not a top 20 team? And to be honest, wasn't it a legit top 15 team? I know your model consistently puts Boise way below almost everyone else, but that seems a definite reach to argue that they weren't a legit top 20 team, and still a reach to have them outside the top 15. And if 2008 Boise was a top 15 team, then 2006 Wisconsin at 26 just seems too low.

Or to look at 2006 Wisconsin another way: if a team goes 1-1 against the top 25, that would seem to support the idea that they were also a top 25 team (especially if those two were both top 15). If they beat everyone outside the top 25, including a Penn St team that was close, that also supports the idea that they're top 25 (or at least doesn't seem to throw a meaningful red flag on the argument; a team that goes, say, 1-3 against top 25 and sweeps the rest probably isn't a top 25 team, but the non-top 25 record isn't the reason).

Ultimately, I'm not really seeing the case for them not being top 25. I'm thinking that it's really a story about margin/in-game statistics, mainly that they let a very mediocre Illinois team come close, and didn't exactly blow out bad SD St and BG teams. I'm sure the fact that the Ark and Iowa wins were close, and the Michigan loss was 14 points, didn't help either. Is this correct, or am I missing something?

I can't totally explain it -- it's certainly a lower ranking than I would have expected.

But yeah, you're certainly going in the right direction with the logic. In two games against the top 25 (albeit both away from home), they were outscored 41-30. Meanwhile, they outscored two opponents ranked 26th-50th by a combined 37-24. By this logic, they should have ranked somewhere between about 16th and 30th, I guess. Obviously a lot more goes into the rankings than simply the scores of the games, but you can see how most teams ahead of them in the rankings could have produced an extremely similar record/level of performance against the opponents they played.

I'm still having trouble with this one. Take 10-4 Arkansas as an example. They went 3-3 vs top 25 (losses to USC, LSU, Florida and wins vs Auburn, Tennessee and South Carolina) instead of Wisconsin's 1-1, and had a loss to a non-top 25 team (Wisconsin). They also had a game against a AA team, just like Wisconsin did.

Obviously it's not a true apples to apples comparison, but I'm having a bit of a tough time seeing the real difference here. Both were .500 against the top 25. Arkansas's most convincing loss (USC) was at first glance worse than Wisconsin's only loss (Michigan). Arkansas lost a non-top 25 game (Wisconsin), and very nearly lost to Vandy; Wisconsin didn't lose any non-top 25 games, and presumably their close call against Iowa was rated as a tougher test than Arkansas's.

So at this point, I have to ask, what's really the difference here? If anything they seem like fairly close resumes in aggregate.

As far as I can tell, here is the main difference between Wisconsin's ranking and Arkansas': Arkansas did indeed play six Top 25 teams instead of two. In terms of output-versus-expected, that means Arkansas' expected output was much more manageable than the Badgers'. Their solid performances against Florida, LSU, South Carolina, and especially Auburn (27-10 win) and Tennessee (31-14) bought them leeway for when they were only average against Utah State and Vanderbilt (and, of course, USC). With Wiscy's weak schedule, they had to completely dominate to keep up with other teams in the rankings, and they rarely did that. They were great in early October against mediocre teams (41-9 over NWern, 48-12 over Minnesota, 24-3 over Purdue) but otherwise they didn't dominate their schedule as much as they should have. That's the best explanation I can give. I am really comfortable with where we are with the F/+ system, but I don't agree with every team's placement. I'd have figured the good record would have at least gotten the Badgers in the #15-19 range.
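The "output-versus-expected" idea in that reply can be sketched as a toy adjustment: rate each game by actual margin minus the margin an average top team would be expected to post against that opponent. All of the game margins and expectations below are invented for illustration; the real F/+ adjustment is far more involved.

```python
# Toy sketch of "output versus expected": a team's adjusted rating is the
# average of (actual margin - expected margin vs. that opponent). All
# numbers are invented for illustration, not real F/+ inputs.
def adjusted_rating(games):
    """games: list of (actual_margin, expected_margin_vs_opponent)."""
    return sum(actual - expected for actual, expected in games) / len(games)

# A team that wins big against weak opponents, but only about as big as expected:
weak_slate = [(32, 30), (28, 31), (21, 24)]
# A team with closer games against strong opponents, beating expectations:
tough_slate = [(7, 3), (-3, -7), (14, 6)]
print(adjusted_rating(weak_slate))   # about -1.33: gaudy record, unimpressive rating
print(adjusted_rating(tough_slate))  # about +5.33: tougher slate, better rating
```

This is the Wisconsin/Arkansas dynamic in miniature: a weak schedule sets a high bar for each game, so merely winning comfortably can still grade out as below expectation.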

"And, of course, Florida then wiped the floor with Ohio State. Notice I didn't say Ohio State and Michigan were the top two teams -- it's just that if you thought they were before they played, then that epic game in Columbus did nothing but verify what you were already suspecting. Consistency is all I am requesting."

What if you thought that, given the information you had, Ohio State and Michigan were possibly the top 2 teams... but you really weren't the slightest bit sure? In that case, doesn't it make sense to hedge your bets and split them up instead of setting them in another rematch? The computers very well might have put Ohio State and Michigan back on the field, and the computers as a result never would have discovered that the only reason that Ohio State and Michigan looked so good is because the Big 10 was awful and they never got tested.

Using the "Michigan was #2 and they almost beat Ohio State, who was #1, so the rankings shouldn't change" line is circular reasoning. Michigan is #2 because they almost beat Ohio State, who is #1. Ohio State is #1 because they barely beat Michigan, who is #2. Michigan is #2 because they almost beat Ohio State, who is #1. Ohio State is #1 because they...

In the most extreme example, imagine Ohio State and Michigan began the season ranked #1 and #2, and they played each other 10 times and split those meetings with 5 wins each and a 0 point net scoring differential. Would you then pit them against each other an 11th time in the National Championship game for all the marbles, or would you split them up to test your initial assumption that Ohio State and Michigan were the top 2 teams in the nation?

I believe the point was made about the human pollsters. If they collectively believed Ohio State was #1 and Michigan was #2, you would expect them to collectively believe that Michigan would lose a close game to Ohio State in Columbus. When that happened, all of a sudden the pollsters believed Michigan was somehow now the third best team in the country.

It's the reactionary nature of the human polls that is being pointed out, and it clearly demonstrates why teams don't want to schedule tough games. Had Michigan beaten Troy in that last game instead, they would have stayed the #2 team in the country. It doesn't make any logical sense, but hey, that's college football.

For whatever reason, that doesn't happen in a sport like boxing, where the pound-for-pound rankings are also entirely subjective. If Manny Pacquiao and Floyd Mayweather fight and the fight is competitive and close, after the fight they will still be the top two fighters in the pound-for-pound rankings. In football, the loser will suddenly become the fourth- or fifth-best team in the rankings.

My point is this: let's say I believe that Ohio State is the #1 team and Michigan is the #2 team. Let's say that the game plays out exactly like I thought it would given my previous assumptions. After the game, I can continue to believe that Michigan is the second best team in the nation and yet still believe that they should not participate in the national championship game for a large variety of reasons. One such reason is that it's unfair for Ohio State- they have to beat Michigan twice for the championship, while Michigan would only have to beat them once, which seems an awfully unfair burden to place on the team that we objectively believe was better and more deserving in the first place. A second such reason is one of connectivity- by passing on an opportunity to pit them against another worthy opponent, we're passing on an opportunity to put our assumption that they're the best teams to the test. There are lots of other reasons why I might move Michigan out of second, some of them logical, some of them emotional, some of them mildly irrational, but all of them compelling.

That's a big key that the "#2 shouldn't move down when it loses to #1" crowd misses- the movement in the final poll was NOT a referendum on how the pollsters viewed the quality of the teams, it was a referendum on who the pollsters wanted to see in the championship. If the final ballot was just for shits and giggles, Michigan very likely would have been the #2 team on it (after all, they were ranked above Florida the week before the final ballot)... but it wasn't for shits and giggles. The final ballot was to determine the championship game participants, and under those stakes, the pollsters decided they'd rather see Florida get a shot than see Michigan get a rematch. That's an advantage pollsters have over the cold and objective computers. Computers only take into account the variables you tell them to take into account, while pollsters are capable of adapting and reacting dynamically to unexpected situations- such as a situation where the best thing for college football is to put a supposedly inferior team into the championship game so you can put those suppositions to the test.

But is that a good thing? The whole purpose of the BCS formula is to combine the rankings of both the humans and the computers, not the rankings of the computers and the last-second preferences of the humans. Pollsters aren't voting for the top two, they're voting for the top 25. Something like the BCS formula can only work as intended when the components are used as intended.

Again, this isn't a great example -- it turns out that Michigan wasn't the second-best team in the country. But the point is simply consistency: if you thought they were second-best before tOSU-UM, then they did absolutely nothing to prove you wrong.

Other, potentially better examples:

1) 2001. Pollsters voted Nebraska in a certain spot, decided they disagreed with the BCS rankings, and bumped them lower in favor of a two-loss Colorado team. And it almost worked.

2) 2008. Mack Brown went politicking and glad-handing around the country, and Texas almost overtook Oklahoma for reasons other than what happened on the field. Like 2001, it didn't work, but it almost did. OU took care of a very good OSU team by 20 in Stillwater (Texas had beaten them by 4 in Austin) and lost ground in the human polls.

Who gets a shot at the national title is a very emotional topic, and in a way, that's why you need a computer component -- it's how you avoid over-emotional decisions. Humans get to manipulate their votes in reaction to pre-existing BCS rankings, and that makes the process less accurate. I'm almost thinking we shouldn't get to see any BCS rankings until at least conference championship weekend. That would cause a brand new set of issues, I'm sure, but it would at least avoid that last step of manipulation.

Neither of those two examples involved a rematch, though. Like I said, there are lots of valid reasons to think a team is the second best and still try to keep them out of the championship game (connectivity, fairness to the #1, etc). Those two examples you provided aren't two such examples. The 2006 season is.

In '01 and '08, the humans almost screwed up the correct championship pairing... but they didn't. In '06, the humans saved the computers from screwing up the correct championship pairing. In '03, the humans tried to save the computers from screwing up the correct championship pairing, but they failed. I don't understand railing against the human element in the polls, since so far it seems like it's been a net positive.

There are sane people who believe Miami-Nebraska was the correct championship pairing in 2001? I mean, granted, 2001 Miami would likely have beaten anyone in college football easily, but I thought everyone really and truly believed, in retrospect, that the title game should have been Miami-Oregon.

In retrospect, yes, the sentiment built for Oregon (who indeed would have been crushed by Miami just like anybody else). But at the time people were changing their votes for Colorado. They got really hot at the end of the season, beating Nebraska and Texas, which apparently negated the fact that they lost to Fresno State and got whipped by Texas earlier in the season. Oregon had a case, Colorado forfeited theirs in mid-October.

Oregon was #2 at the end of the regular season in both the AP and Coaches' Poll. At the time, the BCS standings did not reflect percentages of votes from the polls, just absolute rankings. It wasn't sentiment that pushed the Cornhuskers and Buffs ahead of the Ducks in the final BCS rankings; it was the very flawed computer rankings and quirks of the old BCS formula.

Also, while it's extremely likely anyone in college football would have been crushed by 2001 Miami, 2002 Miami was not that different, and 2002 Ohio State beat them. Which is why if you're going to have a one game micro-playoff, you want the two best teams in the game even when it seems obvious what's going to happen.

I concede that Oregon was higher-ranked than I remember, but every piece of analysis I remember seeing or reading expressed outrage over the fact that Colorado was snubbed; there was more anger about that than Oregon, at least that I saw.

As for the "flawed" computers ... sorry. In the end, I know that Oregon acquitted themselves well in the Fiesta Bowl, but heading into the bowls, Nebraska's record was at least as impressive as Oregon's. Oregon played two good teams (Stanford and Washington State) and went 1-1. Nebraska played two good to very good teams (Colorado and Oklahoma) and went 1-1. Yes, Nebraska's loss was by a larger margin, but they also played a slightly harder schedule and performed better overall (they were +260 in point differential heading into the postseason, while Oregon was +136). I don't care that Nebraska's loss came later than Oregon's. I know exactly why the computers selected Nebraska, and I don't have much of a problem with it. For the whole of 2001, they were a great team. Just not at the end. :-) I know people were outraged that they didn't win their conference but still made the championship game; all I can say is, as with the rematches, if you don't want non-champs in there, create a rule that says that. With the rules that were established, Nebraska had a decent claim to the title game.

(And I say all of this as a Mizzou fan who wants to do Nebraska no favors.)

Given that the F/+ numbers are based on in-season data and not (I think) on preseason assumptions, your example is irrelevant.

FWIW, Ohio St had been tested on the road against a pretty good Texas team (17th in F/+) and won 24-7. Michigan had been tested on the road against a pretty good Notre Dame team (23rd in F/+) and won 47-21. So it's hardly like they (or the B10 in general) never had ANY tests that they did well on.

Back to the original point: Ohio St was #1 because they were already #1, and a close win against #2 wasn't enough to drop them given teams 3+. Michigan was #2 because they were already #2, and a close loss against #1 wasn't enough to drop them given teams 3+. The fundamental reason is that the entire resume for Ohio St and Michigan looked better than any of their other competitors.

To take another totally arbitrary and extreme example, let's say that Florida and Louisville both had two losses rather than just one, keeping Ohio St and Michigan undefeated other than their H2H game. Do you still say that it "makes sense to hedge your bets and split them up instead of setting them in another rematch"? At what point does that become completely absurd? I'd lean towards 2 losses for the rest, but maybe you think it should be three losses?

T.D. up at #4 made the salient point - connectivity in college football is piss-poor, and it's usually difficult to gauge relative conference strength until bowl season... at which point it's too late to rethink assumptions about the national title contenders. It's reasonable, therefore, to avoid rematches where possible (especially conference rematches). This principle seems pretty widely accepted now; you'll recall that there was very little talk about a potential OU-Texas rematch in '08 (outside of Texas), perhaps because the example of '06 was still fresh in everyone's mind.

I would add that it's quite possible for a close game between highly ranked teams to change perceptions for the worse; the Big East heavyweights in '07 did themselves no favours with their shootouts, and there was concern in some quarters over the elevated point total in the '06 OSU-UM game. That concern proved prescient when OSU and Michigan both got shelled for the second game running by UF and USC. (I believe it was Saurian Sagacity that pointed out that national champs rarely if ever allow more than 30 or 31 points in a single game, and generally rather less.)

Connectivity really is a solid point, but I don't think it should matter when it comes to the top two in the BCS rankings. If we don't want rematches in the title game, then we should add a "no rematches" rule to the BCS system, or a "no rematches if the #3 team is within __ points of #2 in the BCS rankings" rule. (Or, ahem, a playoff. Had to be said.) As it was, the human poll voters decided to change the rules on the fly. That's what irked me.