Dakota Raabe had quite a nice night on the PK and was rewarded with an empty-netter [Marc-Gregor Campredon]

OFFENSE

| | Corsi | House | Possession % |
| --- | --- | --- | --- |
| First Period | 15 | 5 | 41% |
| Second Period | 11 | 5 | 39% |
| Third Period | 12 | 4 | 41% |
| Overtime | n/a | n/a | n/a |
| TOTAL | 38 | 14 | 40% |

Analysis: Not Michigan’s best offensive game. In each period the Wolverines created a few nice looks at the net, but that was about it. They didn’t have a high volume of chances, nor did they overwhelm Wisconsin’s defense. Michigan’s offense was mostly buoyed by the power play; at even strength they could not generate much. This game looked a lot like the second PSU game from a couple of weeks ago, and the inverse of the OSU game last Friday: not a lot of organic offense, with the team leaning on other areas to pull out the victory. Jack Becker did have a nice circle route on which he powered through a couple of Badgers and was rewarded with a wrister that beat Jack Berry. He has really come on in the last month.

[After THE JUMP: establishing Lavigne's baseline and looking at how special teams won the game]

Now that football season is over, season rankings based on team or offensive/defensive performance come out occasionally. Enter the self-proclaimed college football professor. (I'm not here to dime the guy out; he's just what got me interested in working this up.) Landof10 reported the other day that this guy's annual ranking of defensive coordinator performances came out, and that this past year Don Brown had the 24th-best defensive performance in FBS, 6th-best in the B1G. At this point you probably don't even care where he has Harbaugh. Wise of you. For any coaches out there trying to poach Don Brown, read no further!

My first reaction was "Look, if Don Brown's 2017 defensive performance isn't in your top 10, we can't even talk." However, curiosity got the best of me, and in digging deeper I found that the statistics used don't measure what I would consider a defensive coordinator's performance at all.

Very briefly, he is including statistics that are already counted within other stats, such as number of sacks and number of tackles for loss. To compare this to basketball: a slam dunk gives you two points and an emotional boost (maybe a SportsCenter clip), but it's simply part of shot efficiency and total score, the same way that sacks and TFLs are part of defensive yards, efficiency, and defensive 3rd/4th-down conversion stops. It's just how you get the job done; the result is what matters. Additionally, he is deducting points for recruit quality, which makes no sense. A coach's performance should most certainly include the quality of recruits they bring to the program, not be penalized for it.

A Simplified Defensive Theory

A defensive coordinator's main off-field role is to recruit (and train) quality defensive players, study film, and develop game strategies. This should all feed directly into performance, so I don't believe it needs to be weighted against anything. Let's leave it off the report card, as it is included elsewhere in the overall performance indicators.

A defensive coordinator's main on-field role is to stop the offense from gaining any yardage, through either three-and-outs or turnovers gained, so as to provide maximum field position for the offense, with no points scored.

Barring the ability to completely stop the offense from gaining yardage and first downs, the defensive coordinator's role is to stop the offense from scoring points.

Applicable Scoring Indicators

While many relevant statistics are based on, say, yards per game or points per game, these do not account for "offensive interference". A team with an offense that cannot advance the ball is likely going to have a defense that either spends more time on the field or faces more possessions. As such, defensive performance needs to be measured per possession wherever it makes sense to do so. Based on this theory, I examined the main applicable defensive statistics for all FBS teams:

Average Defensive Yards per Possession

Defensive 3rd and 4th Down Combined Conversion %

Defensive 1st Downs Allowed per Possession

Points Allowed per Possession vs Power5 Quality Teams

Defensive Turnovers Forced per Game

To me these five indicators measure the true performance of a defensive unit, and other important performance factors are already baked into at least one of them. While the first three indicators are somewhat related (actually there is a correlation among the first four, including points allowed), I felt it was important to examine them all in totality, as outliers could reveal, say, teams that gave up huge plays fairly frequently.

Points Allowed per Possession vs Power5 Quality Teams was used instead of the more common "scoring defense" (Defensive points per game/possession), as games against non-Power 5 conference quality schools do not provide a clear picture of defensive coordinator performance. For example, a late season game against Furman might allow for an excellent defensive performance without having to show any new looks, while an early season MAC opponent may strategically allow a DC to try and not show any new formations, or to play around with positioning, provided they maintain a comfortable lead. Also, some schools schedule slightly more non-Power5 quality games than others. A final note on this statistic: for all FBS teams measured, all games vs P5 teams and teams with similar/reasonable quality were considered.
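Since several of these indicators are just season totals divided by defensive possessions faced, the normalization is simple arithmetic. A minimal sketch (the season totals here are invented for illustration, not anyone's real 2017 stats):

```python
# Minimal sketch of the per-possession normalization described above.
# The season totals below are invented illustration values.

def per_possession(total: float, possessions: int) -> float:
    """Normalize a season total by the number of defensive possessions faced."""
    return total / possessions

# e.g. a defense that allowed 3523 yards over 180 defensive possessions
print(round(per_possession(3523, 180), 2))  # 19.57
```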

It seemed most logical to rank these teams by one of the most important factors: yards or 1st downs per possession, defensive stop %, or points allowed per possession. I chose yards/possession based on my defensive theory above, and because Michigan came in first in three of these top four, with similar rankings for many others. Objective statistics below:

| Team | Yd/Pos | 3/4d%Cnv | 1stD/Pos | Pt/Pos | Trnvr/G |
| --- | --- | --- | --- | --- | --- |
| Michigan Rank | 1 | 1 | 1 | 7 | 18 |
| Michigan | 19.57 | 0.272 | 0.99 | 1.43 | 1.31 |
| Clemson | 20.50 | 0.323 | 1.10 | 1.13 | 1.43 |
| Wisconsin | 20.62 | 0.320 | 1.10 | 1.18 | 2.07 |
| Alabama | 20.72 | 0.348 | 1.22 | 1.04 | 1.71 |
| Northern Ill. | 22.11 | 0.332 | 1.11 | 1.72 | 1.69 |
| Indiana | 22.19 | 0.318 | 1.14 | 1.87 | 1.08 |
| Ohio St. | 22.64 | 0.339 | 1.26 | 1.50 | 1.71 |
| Washington St. | 22.72 | 0.298 | 1.15 | 2.09 | 2.15 |
| Mississippi St. | 22.89 | 0.310 | 1.01 | 1.80 | 1.62 |
| Virginia Tech | 23.44 | 0.281 | 1.13 | 1.28 | 1.46 |
| Michigan St. | 23.45 | 0.349 | 1.25 | 1.69 | 1.77 |
| Georgia | 23.65 | 0.326 | 1.25 | 1.37 | 1.33 |
| UTSA | 23.80 | 0.354 | 1.25 | 1.89 | 2.00 |
| Texas | 23.88 | 0.274 | 1.15 | 1.50 | 2.00 |
| Auburn | 23.91 | 0.345 | 1.25 | 1.55 | 1.36 |
| Central Mich | 23.98 | 0.360 | 1.24 | 2.00 | 2.38 |
| South Fla. | 24.26 | 0.364 | 1.33 | 2.08 | 2.00 |
| Penn St. | 24.48 | 0.359 | 1.29 | 1.45 | 1.92 |
| Washington | 24.53 | 0.403 | 1.39 | 1.39 | 1.85 |

I ended up taking the top 19 teams by yards given up per possession because it covered all of the top 10 teams for the relevant statistics and looked fairly pretty.

This is my attempt to do it right: accounting for offensive interference in true defensive performance

Moving past the issues with the college football professor's version, a main concern with using simple statistics to quantify defensive (or offensive) performance is that offensive and defensive performance are so intertwined. That is to say, the number of defensive possessions per game, defensive field position, average offensive turnovers lost per game, number of offensive punts per game, and opponent quality (here measured as the % of top-10 teams, by final rankings, played) will all reasonably factor into defensive points allowed, yet none is logically a true measure of defensive performance. Additionally, I argue that loss of defensive production from the previous year is a further factor in game performance, to an extent that may vary from year to year. For this I did not measure starters lost but total production lost, as I think that's more accurate.

Offensive interference indicators:

%top 10 opponents in schedule

%lost defensive production from 2016

Average defensive field position

Average offensive turnovers/game

Offensive punts/game

Opponent possessions per game (past 10)

To stress, this is not a ranking of team performance or of the best teams; it's an attempt to quantify how well defensive coordinators did, or how well an FBS defense did in spite of its offense. I have named these indicators "offensive interference indicators", though some are external interference indicators (opponent strength) rather than purely offensive:

| Team | %Top10 Opp | Lost DPrdctn | DFldPos | Trnvr/G | Pnts/G | Opp Pos/G |
| --- | --- | --- | --- | --- | --- | --- |
| Michigan | 0.231 | 0.78 | 0.2971 | 1.62 | 6.00 | 13.85 |
| Clemson | 0.143 | 0.38 | 0.2686 | 1.14 | 5.07 | 13.5 |
| Wisconsin | 0.143 | 0.32 | 0.2911 | 1.71 | 4.43 | 12.71 |
| Alabama | 0.214 | 0.41 | 0.2535 | 0.71 | 3.93 | 12.57 |
| Northern Ill. | 0.000 | 0.26 | 0.3183 | 1.85 | 6.62 | 15.31 |
| Indiana | 0.250 | 0.04 | 0.3075 | 1.67 | 7.17 | 15.33 |
| Ohio St. | 0.214 | 0.43 | 0.2896 | 1.36 | 3.64 | 13.29 |
| Washington St. | 0.000 | 0.28 | 0.3573 | 2.38 | 4.85 | 14.23 |
| Mississippi St. | 0.231 | 0.41 | 0.2976 | 1.69 | 4.23 | 13.38 |
| Virginia Tech | 0.077 | 0.21 | 0.2672 | 1.08 | 5.08 | 13.62 |
| Michigan St. | 0.154 | 0.48 | 0.2844 | 1.54 | 5.31 | 12.69 |
| Georgia | 0.267 | 0.15 | 0.2666 | 1.07 | 4.13 | 12.47 |
| UTSA | 0.000 | 0.26 | 0.2905 | 1.64 | 4.27 | 12.09 |
| Texas | 0.000 | 0.20 | 0.2786 | 1.38 | 6.46 | 15.31 |
| Auburn | 0.357 | 0.40 | 0.3295 | 1.43 | 4.07 | 13.36 |
| Central Mich | 0.000 | 0.26 | 0.3381 | 2.38 | 6.92 | 15.46 |
| South Fla. | 0.083 | 0.23 | 0.3002 | 1.08 | 5.33 | 14.83 |
| Penn St. | 0.077 | 0.32 | 0.2732 | 1.00 | 4.15 | 13.46 |
| Washington | 0.077 | 0.50 | 0.2809 | 0.85 | 3.54 | 12.15 |

I chose to use opponent possessions per game instead of average time played on defense for one main reason: higher time on defense correlates directly with poor defensive performance. If you are allowing a team to slowly and consistently march down the field, that is not an offensive interference indicator; it's an indicator of poor defensive performance. Conversely, higher defensive possessions per game can correlate either way and still allow for exceptional defensive performance, with the assumption that the defense is still going full speed for those shorter but more frequent possessions.

Once we accept (or don't) these six indicators as external or otherwise offensive interference, things start to become subjective. How much does each of them impact a defensive unit's performance indicators? I was specifically interested in how they would impact defensive points allowed per possession, as Michigan already appears to lead the way even excluding the offensive effect on yards, 1st downs, and 3rd/4th-down stop %. The first thing I did was to pull the points allowed per possession for the top dozen teams and compare their offensive interference indicators to Michigan's.

| Team | Avg Def Pt/Pos v P5Qual TM | Top10 Opp in Schedule | Lost Def Production | Avg DField Position | Off Trnovr/G | Off Punts/G | Opp Pos/G |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Alabama | 1.04 | 0.214 | 0.41 | 0.2535 | 0.71 | 3.93 | 12.57 |
| Clemson | 1.13 | 0.143 | 0.38 | 0.2686 | 1.14 | 5.07 | 13.5 |
| Wisconsin | 1.18 | 0.143 | 0.32 | 0.2911 | 1.71 | 4.43 | 12.71 |
| Virginia Tech | 1.28 | 0.077 | 0.21 | 0.2672 | 1.08 | 5.08 | 13.62 |
| Georgia | 1.37 | 0.267 | 0.15 | 0.2666 | 1.07 | 4.13 | 12.47 |
| Washington | 1.39 | 0.077 | 0.50 | 0.2809 | 0.85 | 3.54 | 12.15 |
| Michigan | 1.43 | 0.231 | 0.78 | 0.2971 | 1.62 | 6.00 | 13.85 |
| Penn St. | 1.45 | 0.077 | 0.32 | 0.2732 | 1.00 | 4.15 | 13.46 |
| Ohio St. | 1.50 | 0.214 | 0.43 | 0.2896 | 1.36 | 3.64 | 13.29 |
| Texas | 1.50 | 0.000 | 0.20 | 0.2786 | 1.38 | 6.46 | 15.31 |
| Auburn | 1.55 | 0.357 | 0.40 | 0.3295 | 1.43 | 4.07 | 13.36 |
| Michigan St. | 1.69 | 0.154 | 0.48 | 0.2844 | 1.54 | 5.31 | 12.69 |

What I observed is that, almost across the board, Michigan has stronger offensive interference indicators than these 11 other top teams by points allowed per possession. Of the six teams with better base points allowed than Michigan, only two had even one of the six indicators stronger than Michigan's; the rest had none. Of the five teams behind Michigan in base points allowed, only two had two of six stronger indicators; the other three had none. My hypothesis was then that Michigan would improve more significantly in points allowed relative to its peers regardless of any reasonable weighting. I played around with various simulations (more on that below), using a subjective assumption that 2sd from the mean in either direction could produce a 20% change in defensive points given up per possession, and ended up using the scoring mechanism below for two reasons: 1) it felt the most logical, and 2) it weighted more heavily toward the indicators that were more consistent between teams but have a direct impact on points per possession (field position and offensive turnovers/game):

| | %Top10 Opp | Lost DPrdctn | DFldPos | Trnvr/G | Pnts/G | Opp Pos/G-10 |
| --- | --- | --- | --- | --- | --- | --- |
| Average | 0.13 | 0.33 | 0.29 | 1.45 | 5.01 | 3.66 |
| Weight/1 | 0.13 | 0.12 | 0.22 | 0.25 | 0.10 | 0.18 |
| Multiplier | 1.00 | 0.35 | 0.75 | 0.17 | 0.02 | 0.05 |

Using opponent possessions per game did not seem entirely appropriate without an adjustment: the issue that translates from offensive interference is the "gas effect" (having to play an extra 1-2 possessions because the offense can't stay on the field; think back to the South Carolina game), which loses its linearity past a certain number of possessions. As such, I simply used possessions over 10 per game as the baseline. I weighted all six indicators to average out to one, and the multiplier is simply what is required to scale the weight against the original magnitude of each number.
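The diary doesn't spell out the exact arithmetic that turns these averages, weights, and multipliers into adjusted points per possession, so the sketch below is only one plausible reading: shrink the raw number by a weighted sum of each indicator's deviation from the field average. The function name, the adjustment form, and the output are my assumptions, not a reproduction of the diarist's spreadsheet.

```python
# Purely illustrative sketch of a deviation-from-mean weighting scheme like the
# one described above. The adjustment formula itself is an assumption.

AVERAGES    = {"top10": 0.13, "lost_prod": 0.33, "field_pos": 0.29,
               "off_to_g": 1.45, "punts_g": 5.01, "opp_pos_g10": 3.66}
WEIGHTS     = {"top10": 0.13, "lost_prod": 0.12, "field_pos": 0.22,
               "off_to_g": 0.25, "punts_g": 0.10, "opp_pos_g10": 0.18}
MULTIPLIERS = {"top10": 1.00, "lost_prod": 0.35, "field_pos": 0.75,
               "off_to_g": 0.17, "punts_g": 0.02, "opp_pos_g10": 0.05}

def adjusted_pts_per_pos(raw_pts_per_pos: float, indicators: dict) -> float:
    """Shrink raw points allowed per possession by a weighted sum of how far
    each offensive-interference indicator sits from the field average."""
    adj = sum(WEIGHTS[k] * MULTIPLIERS[k] * (indicators[k] - AVERAGES[k])
              for k in indicators)
    return raw_pts_per_pos * (1 - adj)

# Michigan's indicator values from the table above (Opp Pos/G minus the
# 10-possession baseline):
michigan = {"top10": 0.231, "lost_prod": 0.78, "field_pos": 0.2971,
            "off_to_g": 1.62, "punts_g": 6.00, "opp_pos_g10": 3.85}
print(adjusted_pts_per_pos(1.43, michigan))
```

With these particular weights the adjustment is small; the diarist's own scheme clearly moved Michigan's 1.43 further (to 1.09), so treat this only as the shape of the calculation, not its calibration.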

Some comments on these offensive interference indicators: Pnts/G is offensive punts per game. The theory is that if your offense is repeatedly punting, your defense is going to get backed up toward its own goal line through no fault of its own; the same goes for offensive turnovers and average DFldPos. (Remember that even an average difference of 4 yards of field position, i.e. the gap between Michigan and Alabama, is a mean that puts an opposing offense within field goal range at the start of a possession roughly once a game.) Ignoring that offensive interference indicator would incorrectly assume that those 3 points given up per game are entirely the fault of the defensive unit. Results below:

| Team | Def Yd/Pos | Def 3+4d Conv % | Def 1stD/Pos | Weighted Def Pt/Pos v P5Qual TM | Def Trnovr Forced/G |
| --- | --- | --- | --- | --- | --- |
| Michigan Rank | 1 | 1 | 1 | 1 | 18 |
| Michigan | 19.57 | 0.272 | 0.99 | 1.09 | 1.31 |
| Clemson | 20.50 | 0.323 | 1.10 | 1.19 | 1.43 |
| Wisconsin | 20.62 | 0.320 | 1.10 | 1.19 | 2.07 |
| Alabama | 20.72 | 0.348 | 1.22 | 1.18 | 1.71 |
| Northern Ill. | 22.11 | 0.332 | 1.11 | 1.65 | 1.69 |
| Indiana | 22.19 | 0.318 | 1.14 | 1.57 | 1.08 |
| Ohio St. | 22.64 | 0.339 | 1.26 | 1.43 | 1.71 |
| Washington St. | 22.72 | 0.298 | 1.15 | 1.94 | 2.15 |
| Mississippi St. | 22.89 | 0.310 | 1.01 | 1.58 | 1.62 |
| Virginia Tech | 23.44 | 0.281 | 1.13 | 1.57 | 1.46 |
| Michigan St. | 23.45 | 0.349 | 1.25 | 1.63 | 1.77 |
| Georgia | 23.65 | 0.326 | 1.25 | 1.51 | 1.33 |
| UTSA | 23.80 | 0.354 | 1.25 | 2.42 | 2.00 |
| Texas | 23.88 | 0.274 | 1.15 | 1.65 | 2.00 |
| Auburn | 23.91 | 0.345 | 1.25 | 1.25 | 1.36 |
| Central Mich | 23.98 | 0.360 | 1.24 | 1.72 | 2.38 |
| South Fla. | 24.26 | 0.364 | 1.33 | 2.26 | 2.00 |
| Penn St. | 24.48 | 0.359 | 1.29 | 1.77 | 1.92 |
| Washington | 24.53 | 0.403 | 1.39 | 1.78 | 1.85 |

Plugging these offensive interference indicators into defensive points allowed per possession, Michigan comes out on top under this model. As a side note, various weightings kept Michigan at or near the top four regardless of the logic I used for the six indicators, from an 8% to a 40% swing in defensive points per possession, and Michigan's ranking within the top four did not vary significantly when I assigned different weights to each of the six indicators. For example, Michigan still comes out on top if these six factors account for a 15% swing in points given up per possession, and is among the top four at 8-10%, making me fairly confident that Michigan's points given up per possession was within the top four of FBS, or at the very top, once offensive interference indicators are accounted for.

Confirmation Bias or just Confirmation

After publishing this originally, I got curious and went back once more, placing the unaltered defensive points per possession between defensive yards per possession and the weighted defensive points per possession. I argue that defensive yards per possession is a fairly accurate estimate of a defense's ability to stop an offense's movement toward points, and it isn't as influenced by offensive interference indicators such as field position or errant offensive turnovers in field goal range. My hope was therefore that weighted points per possession would correlate more closely with defensive yards per possession than unaltered points per possession does. This appears to be the case.

| Team | Def Yd/Pos | Unaltered Pt/Pos | Weighted Pt/Pos |
| --- | --- | --- | --- |
| Michigan | 19.57 | 1.43 | 1.09 |
| Clemson | 20.50 | 1.13 | 1.19 |
| Wisconsin | 20.62 | 1.18 | 1.19 |
| Alabama | 20.72 | 1.04 | 1.18 |
| Northern Ill. | 22.11 | 1.72 | 1.65 |
| Indiana | 22.19 | 1.87 | 1.57 |
| Ohio St. | 22.64 | 1.50 | 1.43 |
| Washington St. | 22.72 | 2.09 | 1.94 |
| Mississippi St. | 22.89 | 1.80 | 1.58 |
| Virginia Tech | 23.44 | 1.28 | 1.57 |
| Michigan St. | 23.45 | 1.69 | 1.63 |
| Georgia | 23.65 | 1.37 | 1.51 |
| UTSA | 23.80 | 1.89 | 2.42 |
| Texas | 23.88 | 1.50 | 1.65 |
| Auburn | 23.91 | 1.55 | 1.25 |
| Central Mich | 23.98 | 2.00 | 1.72 |
| South Fla. | 24.26 | 2.08 | 2.26 |
| Penn St. | 24.48 | 1.45 | 1.77 |
| Washington | 24.53 | 1.39 | 1.78 |

That Defensive Turnover Rate

I'm going to try to dig into this deeper, as was suggested. I did not actively play with this rate or consider it relative to the offensive indicators, as it is simply a function of how the defense gets the job done; it is baked into yards/possession, down statistics, and offensive field position (not covered here). However, Michigan and Indiana were dead last in defensive turnovers gained per game among the top 19 defensive teams (though Indiana had the top defensive returning production in FBS at 96%). My guess is that Don Brown's strategy with so many younger players was to keep it simple and play contain rather than attempt to make the big plays, sort of how he kept things simpler for Rashan Gary in 2016.

I'd say it worked, but I'd also hazard a guess that the turnover ratio is one of the things that improves significantly next year, which will likely result in better offensive productivity too. Perhaps Gary will develop a defensive-holding tell like throwing his arms up in the air to make it super obvious. Who knows. All in all, this defense got me excited in review. Can't wait for next year, and Go Blue!

(Thanks for reading, and feel free to provide comments, especially those of you more data/analysis oriented).

Analysis: Michigan did not have an exceedingly high volume of shots, but they did create a solid number of quality looks, especially when the game was within reach. As the game wound down, and Michigan nabbed their three-goal lead, they backed off and protected their blazingly hot goaltender. The second line of Warren-Norris-Slaker was the best line of the night. Probably the best thing to emerge from the weekend was a dangerous second line to complement the DMC line. It is still quite the drop to the listed third line (Sanchez-Raabe-Winborg), but the listed fourth line of Becker and the Pastujovs had a great night last night and looked relevant again tonight. Another robust test is looming next weekend for this growing offense.

Jack Becker had two tallies, including a great re-direct at the top of the crease [James Coller]

OFFENSE

| | Corsi | House | Possession % |
| --- | --- | --- | --- |
| First Period | 18 | 8 | 49% |
| Second Period | 20 | 8 | 48% |
| Third Period | 16 | 7 | 53% |
| Overtime | n/a | n/a | n/a |
| TOTAL | 54 | 23 | 49% |

Analysis: Going back a couple of years, Michigan has not outplayed Penn State at even strength. They pounded the Nittany Lions on the scoreboard two years ago, but that was mostly due to special teams. That was not the case tonight: they played a top-five Corsi team to a draw and generated better offensive chances with more consistency. I was not as high on Peyton Jones coming into this game, but he played very well and definitely kept his team in it until Michigan finally blew it open late. Michigan pressured the Lion defense and got into the house area all night. They also moved the puck from side to side very well and could have scored more, and much earlier, if not for a very nice game from Jones. Lastly: three even-strength non-DMC-line goals. Ye-uh!

[After THE JUMP: the defense holds and special teams take a step in the right direction]

Quick note: For those unfamiliar with the FSI, it is a weekly survey asking fans to rate their feelings about each game and the season so far on a 0-100 scale. To catch up check out my blog here: http://mgoblog.com/diaries/onefootin

Who has it better than us? Well, according to my calculations, more than half of the Big Ten has it better right now. And I’m going to bet you won’t like who’s on top.

Let’s take this in two parts.

The Outback Bowl

First, there was that bowl game. As Figure 1 makes clear, this game felt bad. In fact, at a satisfaction level of 17.6 on our 0-100 scale, it felt worse than every regular season game except the Michigan State game.

This isn’t too surprising. It was bad enough to lose when favored by 7 points against an uninspired-looking South Carolina team that had just fired its offensive coordinator. It got worse when Michigan, leading 19-3, managed to fumble at the 5. It bottomed out when it turned out that was just the beginning of the second half Errorpalooza. Watching Michigan self-immolate while the Gamecocks scored 23 unanswered points was deeply aggravating, to put it mildly.

Figure 1: Outback Bowl Game Satisfaction.

(On a scale of 0 to 100, where 0 is the worst you ever felt after a game and 100 is the best you ever felt after a game, where would you rate your feelings about the Outback Bowl?)

X-axis is game satisfaction and Y-axis is # of respondents

Adding insult to injury, the loss to the Cocks took most of the remaining mojo from the fan base regarding the season as a whole. Season satisfaction clocked in at 24.9 – its lowest point of the season. 8-5 doesn’t feel good, as it turns out.

Figure 2: Season Satisfaction after the Outback Bowl.
(On a scale of 0 to 100, where 0 means the season went horribly and 100 means the season went perfectly, how do you feel Michigan's season went?)

X-axis is season satisfaction and Y-axis is # of respondents

Calculating B1G Fans’ Season Satisfaction

Okay, now for part two. Michigan’s season was unsatisfying but perhaps – out of a morbid sense of curiosity – you are wondering how Michigan fan satisfaction stacks up against other fan bases around the league.

Modeling Satisfaction from Our Data

Since I did not survey non-Michigan fans directly I used a regression analysis of our Michigan fan data to come up with a formula for calculating satisfaction for other fan bases. This approach comes with clear limitations. First, since we only have one season of Michigan data we don’t even have a perfect model of how Michigan fans will react to all situations. Just to take a couple of examples, we have no data on how fans respond to an unexpected victory over a ranked opponent, nor any idea how season satisfaction would look during a season where Michigan outperformed overall expectations. For that reason, our regression model is certainly far from perfect.

Second, even if our model were perfect for Michigan fans, it is very likely that other fan bases would react somewhat differently to the same situations. Given historical circumstances (spoiler alert!), Purdue’s fan base is likely to be happier with a 7-6 record on the season than Michigan’s is with 8-5. And though all teams have rivalries, we probably shouldn’t assume that all fans feel the same about them. I am pretty convinced, for example, that Sparty and Buckeye fans get more satisfaction from beating Michigan than the other way around.

With these caveats in mind, I still think we can provide a pretty reasonable estimate of B1G fan base satisfaction based on how Michigan fans responded during the season. For Michigan fans, based on 2605 responses over 13 games, the basic equation for game satisfaction is: 49.63 + (1.03 x Margin of Victory/Defeat) + (0.28 x Margin vs. Vegas) – (20.8 x Surprise Loss).

Margin of Victory/Defeat, clearly, is just measured by how many points more/less Michigan scored than its opponent. This captures both whether a game is a victory or defeat as well as its intensity. Margin vs. Vegas is how many points more/less Michigan scored than its opponent relative to the Vegas line. This captures general fan expectations about how the game went, which as we have discussed in past weeks is a critical component of how people feel about the outcome of a game. Surprise Loss is a variable I threw in because it was clear that unexpected losses – i.e. where Michigan was favored to win by Vegas – hurt more than usual.

In English, the model assumes satisfaction is about 50 points on our 100-point scale and then slides things up or down based on whether Michigan won or lost, by how much, and by how much relative to expectations. An additional point of margin in a victory adds about one point to fan satisfaction (vice versa for a loss). For every touchdown by which Michigan beats the Vegas spread you can add another 2 points of satisfaction, while a surprise loss sucks about 21 points of satisfaction from the average fan.
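The game-satisfaction regression above can be transcribed directly. The example inputs below are hypothetical, not a row from the survey data:

```python
# Direct transcription of the game-satisfaction model described above.

def game_satisfaction(margin: float, margin_vs_vegas: float,
                      surprise_loss: bool) -> float:
    """Predicted fan satisfaction (0-100) for a single game."""
    return (49.63
            + 1.03 * margin                        # points scored minus allowed
            + 0.28 * margin_vs_vegas               # performance vs. the spread
            - 20.8 * (1 if surprise_loss else 0))  # lost while favored by Vegas

# Hypothetical example: a 10-point win that beat the spread by 3
print(round(game_satisfaction(10, 3, False), 1))  # 60.8
```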

According to the magic of statistics this formula explains 70% of the variation in individual game satisfaction ratings. In the land of predicting individual opinions, 70% is pretty darn good, especially since all we have is data about the games and we don’t have any information on the respondents (Imagine, for example, trying to predict presidential popularity from economic conditions but without any information on respondents’ political affiliations).

Table 1 below illustrates how well the formula does at predicting the typical fan's satisfaction compared to the average satisfaction measured by the survey for each game. Though the predicted satisfaction misses big in a couple of cases, overall it tends to come pretty close, with an average absolute difference of less than six points across all 13 games. After a few more seasons' worth of data, the predictions should get better.

Table One. Real vs. Predicted Michigan Fan Game Satisfaction

| Game | Actual Sat | Predicted Sat | Actual - Predicted |
| --- | --- | --- | --- |
| Florida | 80.9 | 74.5 | 6.4 |
| Cincinnati | 59.9 | 65.3 | -5.4 |
| Air Force | 62.9 | 61.2 | 1.7 |
| Purdue | 76.5 | 71.3 | 5.2 |
| Michigan State | 17.5 | 14.9 | 2.6 |
| Indiana | 51.6 | 56.5 | -4.9 |
| Penn State | 23.9 | 6.1 | 17.8 |
| Rutgers | 73.9 | 69.5 | 4.4 |
| Minnesota | 78.5 | 78.6 | -0.1 |
| Maryland | 73.5 | 81 | -7.5 |
| Wisconsin | 28.8 | 30.7 | -1.9 |
| Ohio State | 27.7 | 39 | -11.3 |
| Outback Bowl | 17.6 | 11.5 | 6.1 |
| Average diff | | | 5.8 |

The formula for season satisfaction is pretty similar. If you’ve been reading the diary this season you know that the average fan’s sense of the season is heavily tied to the game they just watched. As a result, assessments of the season varied a lot more on a weekly basis than they probably should have based strictly on the amount of new data coming in each week. The other significant variable in the season satisfaction formula, unsurprisingly, is the number of cumulative losses. Nothing says satisfaction like winning; nothing destroys it more than losing.

As a result, our season satisfaction formula after the 2017-18 season looks like this: 29.84 + (.62 x Game Satisfaction) – (3.388 x # Cumulative Losses). This model explains 73% of the variation in individual season satisfaction assessments over the 13 games of the season. Again, not too shabby. Table Two provides the summary.
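As a sanity check, the season model can be transcribed and run against the Outback Bowl row of Table 2 (game satisfaction 17.6, five cumulative losses):

```python
# Transcription of the season-satisfaction model: a baseline, plus a share of
# the most recent game's satisfaction, minus a penalty for each loss so far.

def season_satisfaction(last_game_sat: float, cumulative_losses: int) -> float:
    """Predicted fan assessment (0-100) of the season to date."""
    return 29.84 + 0.62 * last_game_sat - 3.388 * cumulative_losses

# Outback Bowl: game satisfaction of 17.6 with five losses on the season
print(round(season_satisfaction(17.6, 5), 1))  # 23.8
```

That 23.8 matches the predicted value in Table 2 for the Outback Bowl.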

Table 2 Real vs. Predicted Michigan Fan Season Satisfaction

| Game | Actual Sat | Predicted Sat | Actual - Predicted |
| --- | --- | --- | --- |
| Florida | 85 | 80 | 5 |
| Cincinnati | 77.2 | 67 | 10.2 |
| Air Force | 72.7 | 68.8 | 3.9 |
| Purdue | 76.7 | 77.3 | -0.6 |
| Michigan State | 40.5 | 37.3 | 3.2 |
| Indiana | 53.7 | 58.5 | -4.8 |
| Penn State | 33.7 | 37.9 | -4.2 |
| Rutgers | 62.9 | 68.9 | -6 |
| Minnesota | 69.1 | 71.7 | -2.6 |
| Maryland | 69.9 | 68.6 | 1.3 |
| Wisconsin | 36.3 | 37.5 | -1.2 |
| Ohio State | 36.8 | 33.5 | 3.3 |
| Outback Bowl | 24.9 | 23.8 | 1.1 |
| Average diff | | | 3.6 |

Who Has It Better Than Us? Season Satisfaction across the Big Ten

If you’re still with me, Table 3 brings home the sad fact: Michigan’s implosion in the Outback Bowl, combined with its five losses on the season, put Michigan fan satisfaction below all seven B1G teams that won their bowl games and even below Indiana, which lost to its rival Purdue to end its season.

Table 3 End of Season Fan Satisfaction in the B1G

| Team | Season Sat | Record (Ranking) | Final Game (Game Sat) |
| --- | --- | --- | --- |
| MSU | 70.2 | 10-3 (15) | Beat #18 WSU 42-17 (81.5) |
| OSU | 65.9 | 12-2 (5) | Beat #8 USC 24-7 (69.1) |
| Wisconsin | 63 | 13-1 (7) | Beat #10 Miami 34-24 (61) |
| PSU | 59 | 11-2 (8) | Beat #11 UW 35-28 (58) |
| Purdue | 56.1 | 7-6 | Beat Arizona 38-35 (75.2) |
| Northwestern | 50.1 | 10-3 | Beat Kentucky 24-23 (49.1) |
| Iowa | 49 | 8-5 | Beat Boston College 27-20 (58) |
| Indiana | 31.4 | 5-7 | Lost to Purdue 31-24 (40.7) |
| Michigan | 24.9 | 8-5 | Lost to South Carolina 26-19 (17.6) |
| Minnesota | 14.9 | 5-7 | Lost to Wisconsin 31-0 (14.2) |
| Rutgers | 9.5 | 4-8 | Lost to MSU 40-7 (10.9) |
| Nebraska | 2.74 | 4-8 | Lost to Iowa 56-14 (0) |
| Maryland | 2.74 | 4-8 | Lost to Penn State 66-3 (0) |
| Illinois | 1.2 | 2-10 | Lost to Northwestern 42-7 (8.4) |

There is plenty to quibble with about these satisfaction predictions. Looking at the final game satisfaction figures, for example, it seems to my eye that they are probably too low for teams that won a bowl game. For most fans, winning a bowl game is likely more satisfying than winning a regular season game for any given margin of victory and performance against the Vegas spread. And in particular I think the model clearly undervalues the impact of beating a highly ranked opponent in a bowl game, even in these cases where the B1G team was favored. As a result of this, those teams’ final season satisfaction ratings should probably be higher than they are predicted here.

The reason the model misses on this is simple: so far we have no Michigan bowl victories and zero victories over ranked opponents in our satisfaction database. Until we do we’re stuck guessing at how much those things affect the predictions. Likewise, since we only have one season’s worth of data we can’t model the effects of teams significantly outperforming (or underperforming) season expectations. Going 7-6 is worse than 8-5, but Boilermaker fans are looking at their 7 wins through a very different lens than Michigan fans are viewing 8 wins. Similarly, OSU is close to the top, but how satisfied can the Bucks really be at this point with a two-loss season? And what about Wisconsin? Was that a great season or was that like winning a silver medal and wishing you’d won the damn gold?

Looking at the results from 30,000 feet, however, they make sense. Thanks to the fact that game satisfaction is a big driver of how fans rate the season, the seven teams that won their bowl games generated higher season satisfaction scores than Michigan. It’s important to remember here that this is an analysis of fan satisfaction – the fact that the satisfaction rankings don’t mirror objective measures of season quality (i.e. win/loss records) is pretty much the whole point. Fans are emotional, irrational, and short-term thinking animals. We have the S&P to tell us how good teams are. We have the satisfaction index to have fans tell us how they feel about the teams.

For our grand finale, in case you want to compare Michigan’s roller coaster of satisfaction with others on a week-by-week basis, I leave you with the season trends for each of the B1G teams.

Analysis: This was not an offensive juggernaut by any means, but that is not what the game called for from Michigan. They scored super early on a nice wrister by Brendan Warren, again, then got a PP tally a few minutes later. While they did create a few more chances, Michigan was mostly content to control play and suffocate the game away, which they seemed to do starting in the mid-2nd period. Aside from trading PP goals in the 2nd, Michigan enjoyed a lot of possession and generally put the puck in safe places. In a series that usually requires goalz to win, this one did not, and Michigan played it well.