In this week's ESPN Insider feature, Brian Fremeau investigates whether Alabama is at a disadvantage facing a South Carolina team with an extra week to prepare. It turns out there is no measurable advantage or disadvantage for teams following a bye week, either in the win-loss column or in Game Efficiency. If the Gamecocks are going to take down the Crimson Tide this weekend, they'll have to beat them play-by-play and drive-by-drive. The calendar won't have anything to do with the outcome.

Posted by: Brian Fremeau on 08 Oct 2010

8 comments, Last at 11 Oct 2010, 9:40am by crack

Comments

I wouldn't go so far as to say that it doesn't mean anything at all. My own system indicates there's a slight advantage for bye weeks: well below home-field advantage (which is anywhere from 3 to 6 points, depending on who you ask), but more than zero.

Do you say it's not a factor because it shows up as a fairly small amount, less than what is statistically significant, or because your research shows exactly zero? I would think that it should be at least half a point or so, far from a huge factor, but not nothing either.
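A back-of-the-envelope power calculation shows why a real half-point effect could easily fail a significance test. This is my own illustrative sketch, not anything from Brian's article; the 14-point standard deviation for scoring margins and the 500-game sample size are assumptions picked for illustration:

```python
from math import sqrt, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def power(effect, sd, n, z_crit=1.96):
    """Approximate power of a two-sided two-sample z-test to detect an
    `effect`-point difference in mean scoring margin, given a per-game
    margin standard deviation `sd` and n games in each group."""
    se = sd * sqrt(2.0 / n)    # standard error of the difference in means
    z = effect / se            # standardized size of the true effect
    return (1 - norm_cdf(z_crit - z)) + norm_cdf(-z_crit - z)

# A 0.5-point bye-week edge, ~14-point margin SD (assumed),
# 500 post-bye games vs. 500 control games:
print(power(0.5, 14.0, 500))    # chance of detection is well under 10%
```

In other words, even if the true edge really were half a point, a sample of a few hundred games would almost never flag it as significant, so "not significant" and "exactly zero" are very different statements.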

I haven't done any analysis, but I'm wondering the same thing. In particular, the problem with looking at raw won-loss records is that it assumes the timing of bye weeks is random, i.e., that they occur without regard to who the next opponent is.

That's definitely not true in cases where both teams have a bye week; for example, Alabama and Auburn historically both take a week off before the Iron Bowl.

I don't know about cases where only one team has a bye, though. In the NFL, I assume bye-week scheduling is random, but colleges have some control over their schedules. Some scheduling shifts can even occur late; perhaps South Carolina had a game scheduled for the week before against some minor opponent and moved it to a different week so they could get a bye before Alabama. Again, I don't know.

You seem to be confused. If the difference is found to be not statistically significant, then by definition all you can say is that the difference is not statistically distinguishable from zero. In layman's terms, any possible difference cannot be measured and verified. There may, in fact, be a difference, but if there is, it cannot be teased out of the data. "I would think that" is no substitute for data analysis; statistical analysis is designed to correct for "I would think that."

Lack of statistical significance does NOT necessarily mean that the answer is zero. It could be due to:

1) Insufficient data. If there's only enough data to conclude it's a meaningful impact if it's worth at least 3 points (as an arbitrary example), then all that you can really conclude is that it looks to be no more than 3 points, NOT that it's zero or even approximately zero.

2) Confidence levels. How strict was the test? 95% confidence is fairly standard, but if Brian required 99% instead, then an effect that clears the more common 95% bar would still get labeled insignificant. A similar story applies to whether it's a two-sided or one-sided test. A two-sided test (i.e., one that could detect EITHER a negative OR a positive effect) is a tighter standard on the positive side than a one-sided test.

3) Even if it wasn't considered statistically significant, how close was it? If it had something like a 20% p-value, maybe the answer is that it MAY be a real effect, but there isn't enough evidence to conclude it. Maybe the real answer is that it merits further study, rather than that it's zero.
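To make points 2) and 3) concrete, here's a toy one-sample test of a post-bye record against a 50% baseline. The 310-290 record is made up purely for illustration, not taken from Brian's data; the point is how the one-sided and two-sided p-values differ and what a ~20% p-value looks like:

```python
from math import sqrt, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bye_week_test(wins, games):
    """Normal-approximation test of H0: post-bye win probability = 0.5."""
    p_hat = wins / games
    se = sqrt(0.25 / games)              # standard error under H0 (p = 0.5)
    z = (p_hat - 0.5) / se
    p_one_sided = 1 - norm_cdf(z)        # tests only for an advantage
    p_two_sided = 2 * (1 - norm_cdf(abs(z)))  # advantage OR disadvantage
    return z, p_one_sided, p_two_sided

# Hypothetical: teams coming off a bye went 310-290 (51.7%)
z, p1, p2 = bye_week_test(310, 600)
print(f"z = {z:.2f}, one-sided p = {p1:.2f}, two-sided p = {p2:.2f}")
```

With these made-up numbers the one-sided p-value lands around 0.2 and the two-sided p-value around 0.4: not significant by any standard threshold, yet a long way from proof that the effect is zero.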

You should also look at the first paragraph. Independently of Brian, I had also looked into the matter and came to a different conclusion. Therefore it's not just uninformed opinion.

Without having read the article yet, I would say one advantage is HEALTH. There's not much of a way to quantify it, since colleges don't file an official injury report with the NCAA. But think of how many guys play regularly with some minor ankle sprain, sore shoulder, etc. that is painful and can only be healed by rest. For those players, the bye week makes a difference, and there isn't much of a way to measure what those few extra days are worth in the stats.
I'll bet if you polled coaches and players, college and pro alike, on having more bye weeks during the season, a majority would be in favor.

I bet if you polled coaches, a majority would say that you have to establish the run to win. Coaches, even at the highest level, have been weighed down with traditional wisdom for decades. That traditional wisdom does not come from careful analysis of past events. It's just what Coach Joe taught them thirty years ago, and Coach Joe got it from Coach Bob thirty years earlier. These guys don't come from MIT or Caltech; for the most part, they're former jocks.