RealGM Analysis

If you want to justify your preseason projections in college basketball, the good news is there are a lot of ways to claim you are right. There are projected tournament seeds, margin-of-victory numbers, conference wins, and so on. I’m not necessarily interested in finding the right technique to convince you that my system is the best. I strongly believe that basing preseason projections on player stats and recruiting information is a worthwhile exercise. But I also think that there is a ton of information to be gleaned by watching off-season practices and talking to coaches. (Seriously, no one could have predicted a break-out season from Michigan’s sub-3-star guard Caris LeVert by just studying the numbers.) And I also believe that multiple statistical methodologies are useful. That’s why I am hoping that Ken Pomeroy compares the statistical preseason rankings again at some point in March.

But today I want to do a little internal evaluation. First, I want to show you some quick numbers that suggest that the stats might teach us something the polls are missing. Second, I want to show you which teams have exceeded or fallen below my best case and worst case scenarios. Finally, I want to talk about ways I can improve my simulation going forward.

No One Could Have Seen This Coming

I hated the refrain on some highlight shows this weekend that no one could have seen Virginia’s outright ACC title coming. That’s because I spent much of the off-season trying to emphasize how strong a statistical profile Virginia had. Virginia played great in the ACC last season and only missed the tournament because of some baffling early-season losses. Meanwhile, the Cavaliers’ key player loss, a highly inefficient point guard, seemed completely replaceable.

Between the stats of Joe Harris and Akil Mitchell, the breakout sophomore potential of Justin Anderson and Mike Tobey, and the high school rankings of PG London Perrantes and Malcolm Brogdon, the numbers suggested that Virginia would be a contender for the ACC title. My simulation model had Virginia 11th nationally in the preseason, and I don’t remember any other preseason publication having Virginia that high.

This year my model was right about Virginia, and as the next table shows, in most cases where my statistical model disagreed with the AP preseason poll, my model has done better at predicting a team’s current KenPom ranking.

-My model liked Florida more than the AP preseason poll, and even though they have been shaky at times, the Gators have great overall stats thanks to a 16-0 mark in the SEC.

-My model liked Wisconsin and Virginia more than the preseason polls, and both have Top 10 resumes.

-My model liked Iowa, and the margin-of-victory for the Hawkeyes has been solid. (Unfortunately, their resume is not nearly as good.)

-My model was also more accurate when it came to Creighton, a team I was shocked not to see in the AP preseason poll.

Now, my statistical projections were not always right. My model was more skeptical of Wichita St. than the polls were, and the Shockers have clearly outplayed that skepticism. But in the aggregate, this looks like a good year for my preseason projection model.

| Team | AP (Preseason) | Hanner Model (Preseason) | KenPom (Current) |
| --- | --- | --- | --- |
| Kentucky | 1 | 1 | 24 |
| Louisville | 3 | 2 | 5 |
| Michigan St. | 2 | 3 | 18 |
| Kansas | 5 | 4 | 9 |
| Florida | 10 | 5 | 4 |
| Duke | 4 | 6 | 3 |
| Ohio St. | 11 | 7 | 14 |
| Arizona | 6 | 8 | 1 |
| Oklahoma St. | 9 | 9 | 27 |
| Wisconsin | 20 | 10 | 10 |
| Virginia | 24 | 11 | 2 |
| Michigan | 7 | 12 | 13 |
| North Carolina | 12 | 13 | 22 |
| Syracuse | 8 | 14 | 11 |
| Memphis | 13 | 15 | 43 |
| Iowa | * | 16 | 12 |
| Marquette | 17 | 17 | 57 |
| Gonzaga | 15 | 18 | 21 |
| Creighton | * | 19 | 7 |
| UCLA | 22 | 20 | 16 |
| New Mexico | 23 | 21 | 36 |
| VCU | 14 | 22 | 17 |
| Wichita St. | 16 | 23 | 6 |
| St. Louis | * | 24 | 30 |
| Connecticut | 18 | 25 | 25 |
| Oregon | 19 | 33 | 29 |
| Notre Dame | 21 | 26 | 97 |
| Baylor | 25 | 35 | 38 |

*Others receiving votes

Even if my model has done well this year, that doesn’t necessarily prove it is superior in the long run. This is a very small sample, and with so many unproven players at the college level, uncertainty abounds. Realistically, there are a huge number of plausible outcomes for each team each season. And as the above table shows, both the polls and the statistical projections were wrong about a number of teams.

This general uncertainty is why I introduced the idea of a best case/worst case projection last fall. My idea was to simulate player performance and show a range of possible outcomes for each team. I clipped the highest 10 percent and lowest 10 percent of simulations and showed the 80 percent confidence interval for each team.
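That clipping step is simple to sketch in code. The simulation function below is a hypothetical stand-in (a noisy draw around a rating) for my actual player-level model, and the distribution parameters are made up for illustration:

```python
import random

def best_worst_case(simulate_rank, n_sims=10000, clip=0.10):
    """Run the season simulation n_sims times, sort the resulting
    rankings, and clip the top and bottom 10 percent to get an
    80 percent interval (lower rank number = better team)."""
    ranks = sorted(simulate_rank() for _ in range(n_sims))
    best = ranks[int(n_sims * clip)]            # 10th percentile rank
    worst = ranks[int(n_sims * (1 - clip)) - 1] # 90th percentile rank
    return best, worst

# Hypothetical team: simulated final ranking is noise around 20th.
random.seed(0)
simulate = lambda: max(1, round(random.gauss(20, 12)))
best, worst = best_worst_case(simulate)
```

The real model draws each player’s performance and aggregates to a team rating before ranking, but the interval construction works the same way.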

In theory, if I had modeled the season perfectly, teams’ rankings should have ended up between my best-case and worst-case scenarios 80 percent of the time. In fact, the numbers are not that good: as of Sunday, only 63 percent of teams sit between my best-case and worst-case scenarios. That was startlingly low to me. But since my goal is to improve the ratings for the future, let me talk about the outliers and see what we can learn from them.

Above the Best Case

Over-Achievers

| Team | Best Case (Preseason) | Worst Case (Preseason) | KenPom (Current) |
| --- | --- | --- | --- |
| Arizona | 3 | 19 | 1 |
| Virginia | 4 | 21 | 2 |
| Creighton | 9 | 33 | 7 |
| Wichita St. | 9 | 55 | 6 |
| Villanova | 16 | 62 | 8 |
| Cincinnati | 23 | 97 | 20 |
| Iowa St. | 28 | 97 | 26 |
| Massachusetts | 51 | 110 | 48 |
| Florida St. | 59 | 120 | 35 |
| Saint Joseph's | 57 | 133 | 53 |
| George Washington | 52 | 146 | 44 |
| Oklahoma | 54 | 167 | 31 |
| SMU | 70 | 134 | 19 |
| Vermont | 71 | 136 | 60 |
| Green Bay | 72 | 165 | 51 |
| Nebraska | 69 | 196 | 56 |

Most of the teams that exceeded expectations did so because of individual players having unexpectedly good years. For example, Cincinnati’s Justin Jackson improved his ORtg from 83 last year to 107 this season. And Florida St.’s Montay Brandon has seen his ORtg improve from 77 last year to 102 this season.

Sometimes players have played better than expected out-of-position. Oklahoma’s Cameron Clark has true guard skills, but by growing to 6’7”, he’s been able to hold his own as a post defender, allowing Oklahoma to reach unexpected heights this season. And Larry Brown has clearly worked wonders at SMU this season.

Overall, the list of over-achievers is about what I expected to see. Sometimes players do unexpected things and teams exceed even high expectations. However, I do think there are two interesting lessons in this over-achievers list:

Lesson 1: There is no dominant team this year

In the preseason, I said the best case for Arizona and Virginia was third and fourth, but with so many teams above them faltering, they have moved up to the top, partly by default. Now, I don’t say that to take anything away from Arizona or Virginia. Winning the Pac-12 and ACC is a tremendous accomplishment. But in most previous seasons, Arizona’s margin-of-victory would put them second or third nationally in the computers, not first.

In the preseason, I thought Kentucky, Michigan St., Louisville, or Kansas had a chance to be historically good. Michigan St.’s excuse has been a string of injuries. And even though players like Keith Appling and Branden Dawson are back in the lineup now, they aren’t playing like the stars they once were. Kentucky truly baffles me. They have no excuse for losing games like they did this week, including on the road to a bad South Carolina team. Kentucky simply has too much talent and depth to be playing this poorly.

Kansas and Louisville may still have a chance to take over the top spot in the rankings with an NCAA tournament run. But oddly, their computer numbers depend quite a bit on the formula you use this year:

Lesson 2: The margin-of-victory systems disagree quite a bit right now

ESPN’s BPI, Sagarin’s Predictor, and KenPom all use slightly different formulas. For example, BPI gives more weight to winning itself, rather than relying on scoring margin alone. And each system weights recent games differently.
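A toy example shows how the wins-versus-margin weighting alone can flip an ordering. The formula, teams, and margins below are all invented for illustration; none of the real systems is this simple:

```python
def rating(margins, win_weight=0.5):
    """Blend winning percentage with average scoring margin.
    margins: point margin per game (positive = win)."""
    win_pct = sum(m > 0 for m in margins) / len(margins)
    avg_margin = sum(margins) / len(margins)
    # Scale margin to roughly 0-1 so the two components are comparable.
    return win_weight * win_pct + (1 - win_weight) * (avg_margin / 30)

streaky = [3, 2, 5, 1, 4, 2]         # unbeaten, all narrow wins
dominant = [25, 30, -2, 28, -4, 27]  # two losses, huge winning margins

# A wins-heavy formula prefers the unbeaten team, while a
# margin-heavy formula prefers the dominant one.
wins_heavy = (rating(streaky, 0.9), rating(dominant, 0.9))
margin_heavy = (rating(streaky, 0.1), rating(dominant, 0.1))
```

With profiles as extreme as Wichita St.’s and Louisville’s this year, even small differences in that weighting produce visibly different rankings.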

But in a year like 2014, when teams have ridiculously long winning streaks (see Florida and Wichita St.) and ridiculous differences in schedule strength (see Kansas and Louisville), the net result is that the margin-of-victory systems don’t even agree on who is good.

| Team | ESPN BPI | Sagarin Predictor | KenPom |
| --- | --- | --- | --- |
| Arizona | 1 | 1 | 1 |
| Florida | 2 | 6 | 4 |
| Kansas | 3 | 4 | 9 |
| Wichita St. | 4 | 18 | 6 |
| Duke | 5 | 3 | 3 |
| Louisville | 6 | 2 | 5 |
| Virginia | 7 | 9 | 2 |

The good news is that even if no one agrees who is great, we should be in for a wide-open and outstanding tournament.

Below the Worst Case

While I was generally comfortable with my “best case” simulations, my “worst case” simulations were clearly off. This list of under-achievers is longer than my model should have allowed:

Under-Achievers

| Team | Best Case (Preseason) | Worst Case (Preseason) | KenPom (Current) |
| --- | --- | --- | --- |
| Kentucky | 1 | 13 | 24 |
| Michigan St. | 1 | 10 | 18 |
| Oklahoma St. | 3 | 19 | 27 |
| North Carolina | 4 | 21 | 22 |
| Memphis | 6 | 26 | 43 |
| Marquette | 6 | 29 | 57 |
| Notre Dame | 15 | 42 | 97 |
| Colorado | 11 | 62 | 65 |
| North Dakota St. | 19 | 58 | 71 |
| Purdue | 19 | 62 | 98 |
| Washington | 21 | 64 | 89 |
| Alabama | 16 | 81 | 101 |
| Boise St. | 23 | 70 | 72 |
| St. Mary's | 26 | 66 | 74 |
| La Salle | 20 | 79 | 111 |
| Utah St. | 27 | 89 | 126 |
| Georgia Tech | 34 | 106 | 141 |
| Boston College | 35 | 107 | 139 |
| Butler | 39 | 111 | 127 |
| Drexel | 40 | 108 | 128 |
| Wright St. | 51 | 111 | 135 |
| Washington St. | 48 | 135 | 203 |
| UAB | 42 | 149 | 150 |
| Florida Gulf Coast | 58 | 143 | 176 |
| Weber St. | 59 | 138 | 174 |
| Central Florida | 60 | 138 | 190 |
| Akron | 63 | 138 | 147 |

Luckily, there are two things I can fix:

Lesson 3: I need to simulate the possibility of missing players

I simulated player performance (from bad to good), but I did not simulate injuries and suspensions.

-In North Carolina’s case, if I had known that PJ Hairston was never going to play, my model would have had them lower.

-In Colorado’s case, the team has played admirably at times, but without Spencer Dinwiddie, the floor is obviously lower.

-In Drexel’s case, they were good enough to take Arizona to the wire, but once Damion Lee went down, the CAA title was suddenly out of reach.
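One way to fold availability risk into the player simulation is to draw missed games before drawing performance. This is a sketch of the idea, not my actual model; the 5 percent injury rate and the 5-to-30-game range of missed time are assumed numbers for illustration:

```python
import random

def simulate_minutes(expected_minutes, injury_rate=0.05, season_games=30):
    """Before simulating a player's performance, draw whether he misses
    significant time to injury or suspension, and discount his expected
    minutes accordingly."""
    if random.random() < injury_rate:
        games_lost = random.randint(5, season_games)  # assumed range
        return expected_minutes * max(0.0, 1 - games_lost / season_games)
    return expected_minutes
```

Run across thousands of season simulations, even a small per-player injury rate pushes the team’s worst-case scenario meaningfully lower, which is exactly what the Colorado and Drexel seasons suggest it should.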

Lesson 4: I need to allow more risk in the defensive projection

Our measures of defensive stats are poor and the models that project defense are not very good. Still, if anyone can explain to me how a Kentucky team that is one of the tallest in basketball history and supposedly has NBA players at every position cannot rotate defensively, please let me know.

Another contributing factor is this year’s change in the defensive rules, which has seemingly affected coaches differently. As I noted in a previous column, Boston College has really struggled to adapt to the new rules. Overall, far too many of these under-achieving teams are under-achieving because of unexpected defensive collapses, and this is a part of the model I can account for more accurately in the future.