1996-97

The Basics

How the regional sites are selected: Regional sites are pre-determined by the men's ice hockey committee. Cities/arenas make bids, and the committee selects the locations in advance. (Future sites)

How the teams are selected: There are six automatic bids that go to the conference tournament champions of the six Division I conferences. The remaining 10 teams are selected based upon ranking under the NCAA's objective system of Pairwise Comparisons.

The selection committee: One representative from each of the six conferences makes up the men's ice hockey committee (hereafter referred to as "The Committee"). Terms are four years long. The committee currently includes:

AHA - Brian Riley (Army coach, 2014-18)

Big Ten - Tom McGinnis (Minnesota AD, 2013-17)

ECAC - Jim Knowlton (Rensselaer AD - chair, 2013-17)

HEA - Kevin Sneddon (Vermont coach, 2010-14)

NCHC - Brian Faison (North Dakota, AD, 2013-17)

WCHA - Mel Pearson (Michigan Tech, 2013-17)

Automatic bids: The NCAA mandates that a conference receives an automatic bid to the NCAA tournament if it exists for at least two years and has at least six teams. There is no mandate on how a conference should award the automatic bids, though almost every conference in every sport awards it to the winner of that conference's postseason tournament. For a time, the hockey committee gave two automatic bids -- to the regular-season and conference tournament champion -- but did away with that practice in 2000-01, in part because the NCAA said it wasn't allowed, and in part because the added autobid earned by the MAAC (now Atlantic Hockey) reduced the available at-large slots.

Pairwise Comparison System

What it is: In the early 1990s, the NCAA Men's Ice Hockey Committee instituted a system designed to objectively compare teams to each other. The methodology has evolved over time, becoming progressively more precise until reaching its current form. The criteria used have also fluctuated somewhat over the years. Three are currently used.

The three criteria: The current criteria for comparing one team to another consist of:

RPI (with the Quality Win Bonus factored in)

Record vs. Common Opponents

Head-to-Head results

The most notable change to the selection criteria over the years has been the removal of Record in Last 16 (or 20) Games. In addition, as of 2013-14, the Record vs. TUC criterion was removed, and replaced with a sliding-scale "Quality Win Bonus."

How it's applied: Each "Team Under Consideration" (TUC) is compared to every other "Team Under Consideration" (see below for the TUC definition), using the three criteria. Within each "comparison," one point is awarded for winning each criterion. One point is also awarded for each head-to-head win. The team with the most "criteria points" at the end of this process wins that comparison. If the comparison ends in a tie, it is broken by determining which team has the better RPI. This procedure is repeated for every possible TUC pair. The final number represented in the Pairwise Comparison Rating chart is the number of "comparison wins" (PCWs) each team has.
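The procedure above can be sketched in a few lines. This is a minimal illustration, assuming the three current criteria (RPI, record vs. common opponents, head-to-head); all function names and team fields here are hypothetical, not anything the committee publishes.

```python
# Sketch of a single Pairwise comparison between two TUCs.
# Team data structure is invented for illustration.

def compare(team_a, team_b):
    """Return the winner of one Pairwise comparison."""
    a_pts = b_pts = 0

    # One point for the better RPI.
    if team_a["rpi"] > team_b["rpi"]:
        a_pts += 1
    elif team_b["rpi"] > team_a["rpi"]:
        b_pts += 1

    # One point for the better record vs. common opponents.
    if team_a["common_opp_pct"] > team_b["common_opp_pct"]:
        a_pts += 1
    elif team_b["common_opp_pct"] > team_a["common_opp_pct"]:
        b_pts += 1

    # One point for each head-to-head win.
    a_pts += team_a["h2h_wins"]
    b_pts += team_b["h2h_wins"]

    # A tied comparison is broken by the better RPI.
    if a_pts == b_pts:
        return team_a if team_a["rpi"] > team_b["rpi"] else team_b
    return team_a if a_pts > b_pts else team_b
```

A team's total of comparison wins over every other TUC is what the chart then ranks.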

How the teams are ultimately selected: Using the chart, the teams are listed in order on the basis of most "comparisons" won. After removing the teams that qualify automatically (by virtue of winning their conference tournament), the remaining top teams are selected to fill out the 16-team field. If there is a tie in the total number of comparisons won, that tie is also broken by comparing the two teams' RPI. (Note: That method of breaking ties is not outlined anywhere, and has simply been ascertained through experience and observation. Likewise, the ordering of teams in the chart -- based on total comparisons won -- is also not outlined anywhere. Other methods have been used in the past that, while practically amounting to the same thing, are not exactly the same. See below, and this article.)

History: To understand this further, it's important to know the history of the system. There came a time when the hockey community decided it wanted to take subjectivity out of the process. The Pairwise Comparison system was born. Originally, the system was designed as a way to objectively compare teams that were close in RPI, i.e. "on the bubble" of getting into the tournament. Once that bubble was ascertained by the committee (a subjective process in a sense, but not practically), the committee checked the individual comparisons among the teams, and figured out who was "winning the comparisons" against each other. It was only third-party sources -- after learning of this methodology's details -- that originally totaled up all the "comparison wins" and presented them in a chart in ranking form. This kind of chart was ultimately popularized by U.S. College Hockey Online, the first Internet-only college hockey media organization, which went on-line in 1996. Some time over the next seven years, life imitated art -- in other words, the committee's methods morphed, and it began to actually utilize the chart, as is, without doing any micro-observation of the individual comparisons. (See: article and article.)

Pairwise - Definitions

Team Under Consideration: As of 2013-14, the Record vs. TUC criterion has been removed, effectively making every team a TUC. Prior to that, a team under consideration was one which had an RPI of .500 or higher. There were other definitions in the past, such as "top 25 RPI teams." A team was once made a TUC by winning its conference tournament and becoming an automatic qualifier, but that is no longer the case, as of 2006.

RPI: The RPI was created by the NCAA in the late 1970s, originally to help the basketball selection committee. It's a method of adjusting for the varying strengths of schedule of the different teams. The number is computed from the following three components:

A team's own winning percentage (25%)

The average of the team's opponents' winning percentages (21%)

The average of the team's opponents opponents' winning percentages (54%)

Originally, the RPI was weighted 25-50-25, as it is in men's basketball. At one time, hockey experimented with making a team's own winning percentage comprise 35% of the RPI, which worked reasonably well when there were just four conferences that were generally comparable. But it wound up tilting the RPI too much in favor of strong teams from weak conferences -- particularly with the advent of "mid-major" conferences such as Atlantic Hockey (1999) and College Hockey America (2001) -- so the composition of the RPI was returned to 25-50-25. As of 2006, the RPI weights were changed to their current composition (25-21-54).

Home/Road weighting: For purposes of calculating a final RPI, games are weighted based upon whether they are home or road games. Road wins and home losses are weighted by a factor of 1.2, while home wins and road losses are weighted by 0.8. Unlike in basketball, this weighting applies to all components of the RPI. The weighting system was introduced in 2013-14.
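Putting the two pieces together, here is a sketch of the 25-21-54 combination and the home/road game weighting. The component values and game list are invented for illustration; a real calculation would apply the same weighting when building the opponents' components too.

```python
# Home/road weights as of 2013-14: road wins and home losses count 1.2,
# home wins and road losses count 0.8.
ROAD_WIN = HOME_LOSS = 1.2
HOME_WIN = ROAD_LOSS = 0.8

def weighted_win_pct(games):
    """games: list of (won, at_home) booleans -> weighted win percentage."""
    credit = total = 0.0
    for won, at_home in games:
        if won:
            weight = HOME_WIN if at_home else ROAD_WIN
            credit += weight
        else:
            weight = HOME_LOSS if at_home else ROAD_LOSS
        total += weight
    return credit / total

def rpi(own_wp, opp_wp, opp_opp_wp):
    """Combine the three components at their current 25-21-54 weights."""
    return 0.25 * own_wp + 0.21 * opp_wp + 0.54 * opp_opp_wp
```

Note that a road win (1.2 credit out of 1.2) is worth proportionally more than a home win (0.8 out of 0.8) only once it is mixed with other games in the denominator.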

Quality Win Bonus (QWB): A "Quality Win Bonus" was added for the 2013-14 season. For any win against a team in the top 20 of the RPI, a team is awarded "bonus points" on a sliding scale based on the opponent's rank from 1 to 20. In other words, a team is given a .050 RPI bonus for defeating the No. 1 team, sliding down to a .0025 bonus for defeating the No. 20 team. The total bonus for the season is divided by the number of games played (weighted for home-road) to give a final bonus figure. There was previously a vaguer bonus system, which applied to wins against non-league teams in the Top 15 of the RPI. That lasted from 2004-08 before being eliminated.
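A quick sketch of that scale. The .050 and .0025 endpoints come from the description above; the assumption here is that the scale is linear in .0025 steps between them, which is consistent with those endpoints but should be treated as an inference.

```python
# Quality Win Bonus sketch, assuming a linear .0025-per-rank scale.

def quality_win_bonus(opp_rank):
    """RPI bonus for a win over the team ranked opp_rank in the RPI."""
    if 1 <= opp_rank <= 20:
        return 0.0025 * (21 - opp_rank)  # rank 1 -> .050, rank 20 -> .0025
    return 0.0

def season_bonus(win_ranks, weighted_games):
    """Total bonus divided by the (home/road-weighted) games played."""
    return sum(quality_win_bonus(r) for r in win_ranks) / weighted_games
```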

Bad win tweak: A flaw of the RPI is that it can potentially decrease when a good team defeats a poor team. To compensate for this, if a team's victory would otherwise lower its RPI, that game is removed from the formula. This originally applied only to conference tournament games, but as of 2006 was modified to include all games.
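In code terms, the tweak amounts to recomputing the RPI with and without each victory, and dropping any win that lowers the overall number. This is only a sketch: the `rpi_of` callback and the game records are stand-ins, where a real version would apply the full weighted formula.

```python
# Sketch of the "bad win" tweak: drop any victory that lowers the RPI.

def apply_bad_win_tweak(games, rpi_of):
    """games: list of dicts with a 'won' flag; rpi_of: list -> RPI value."""
    kept = list(games)
    for game in games:
        if game["won"]:
            without = [g for g in kept if g is not game]
            if rpi_of(without) > rpi_of(kept):
                kept = without  # the win hurt the RPI, so remove it
    return kept
```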

Record vs. Common Opponents: As of 2012-13, two teams' records vs. common opponents are not compared as a straight win-loss percentage. Instead, a win-loss percentage is computed against each individual common opponent, and all of those percentages are averaged together. This helps smooth out situations where, for instance, one team beat up on the same opponent four times while the other team in the comparison was only 1-0 against that opponent. 4-0 vs. 1-0 was a big difference; under the new method, both go down as just 1.000.
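The averaging described above can be shown in a few lines. The records here are invented for illustration: each team played common opponents X and Y, with Team A going 4-0 against X and Team B 1-0.

```python
# Sketch of the per-opponent averaging used as of 2012-13.

def common_opp_pct(records):
    """records: {opponent: (wins, losses)} -> average of per-opponent pcts."""
    pcts = [w / (w + l) for w, l in records.values()]
    return sum(pcts) / len(pcts)

team_a = {"X": (4, 0), "Y": (1, 1)}  # 1.000 and .500 -> .750
team_b = {"X": (1, 0), "Y": (1, 1)}  # 1.000 and .500 -> .750
```

Under the old straight percentage, Team A would be 5-1 (.833) to Team B's 2-1 (.667); under the averaged method, both come out at .750 and the criterion is even.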

Seeding Process

Overview: There have generally been two sacrosanct philosophies when it comes to the seeding process: (1) teams that are hosting a regional must be placed in that region; (2) first-round games (and second-round games, if possible) between teams from the same conference are avoided. Other factors, such as maximizing gate revenue and limiting travel, have become de-emphasized since the tournament went from 12 to 16 teams in 2003.

How the seeds are determined: Since the advent of the objective system of Comparisons, there has always been a step-by-step methodology for determining the seeds. But since going to a 16-team tournament, the methodology has become highly straightforward. There was a time when the emphasis was more upon individual comparisons; now, the Pairwise Comparison chart, as described above, is used to rank the teams in a straight 1-16. (Note: This methodology is not outlined in the Ice Hockey manual; it has simply become the practice of the committee over time -- and was determined by the media via observation.) The teams are then grouped into four "bands" of four, with teams 1-4 given No. 1 seeds (Band 1), 5-8 given No. 2 seeds (Band 2), 9-12 given No. 3 seeds (Band 3), and 13-16 given No. 4 seeds (Band 4). Ties in the number of team-to-team comparisons won were, at one time, broken by looking at the individual comparisons among the teams in question. Now such a tie is generally broken by simply looking at the RPI.

No. 1 seeds: The No. 1 seeds are ranked 1-2-3-4, and then placed, in that order, in the region as close to home as possible.

The rest: For the remaining teams, the current practice no longer favors geography, but instead places a strong premium upon maintaining a "serpentine" order, i.e. 1 vs. 16, 2 vs. 15, 3 vs. 14, etc., with the second round set up to preserve, if possible, a 1 vs. 8, 2 vs. 7, 3 vs. 6, 4 vs. 5 bracket. The committee will mix and match teams within bands in order to preserve the two sacrosanct philosophies mentioned above, but will not move teams outside their band. Generally speaking, in order to avoid an intra-conference matchup, the committee prefers flip-flopping No. 3 seeds within their band to different regionals, as opposed to No. 2 seeds. Either way would work, but it has usually chosen the former.
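The serpentine pairings for a straight 1-16 ranking can be sketched as follows. This shows the idealized bracket before any within-band swaps for regional hosts or conference conflicts.

```python
# Sketch of the serpentine first-round and projected second-round pairings.

def serpentine(teams):
    """teams: list ranked 1..16 -> (first-round pairs, projected quarters)."""
    first_round = [(teams[i], teams[15 - i]) for i in range(8)]  # 1v16 ... 8v9
    quarters = [(teams[i], teams[7 - i]) for i in range(4)]      # 1v8 ... 4v5
    return first_round, quarters
```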

Frozen Four: The regional winners that will face each other in the national semifinals (the Frozen Four semis) are pre-determined prior to the start of the tournament, under the assumption that the four No. 1 seeds will advance. The region of the No. 1 overall seed is matched with the region of the No. 4 overall seed, and likewise for Nos. 2 and 3. This holds even if the No. 1 seeds are eliminated in the regionals.

Issues

Over-emphasis on the 1-16, 2-15 seeding: The Pairwise -- and KRACH, for that matter -- is not precise enough for the committee to confine itself so strictly to a 1-16 ordering of the teams based upon it. It's a good method for selecting teams, because at least an objective system, even if flawed, eliminates the problems with subjectivity. But in seeding, there's no need to be so locked into the numbers when they are so close; the sample sizes are simply too small. Even KRACH, though a pure method of ranking teams based on past results, cannot assure you that team 8 is a better team than team 9. While ordering the teams 1-16 is a nice conceptual starting point, the committee should not consider itself hamstrung by it. Even the concept of placing teams in "bands" of four -- where teams can be shuffled within a band, but not moved to a different band -- seems unnecessary. It doesn't make logical sense that it's OK to flip-flop teams 9 and 12, but not OK to flip-flop teams 8 and 9, if necessary. (See: article)

The "TUC Cliff": After eight years of CHN writing articles about the so-called "TUC Cliff," the committee decided to eliminate it as of 2013-14. It did so by removing the "Record vs. TUC" criterion, and its definition that cut off a TUC at ".500 RPI or better." That definition used to lead to some interesting, and sometimes drastic, fluctuations in the Pairwise depending on which teams were above or below the line. A subtle change in RPI might move a team by just one spot, but that subtle difference could cause a drastic difference in other teams' Record vs. TUC. For example, if Team A is 6-0 against Team B, and Team B is a TUC, then Team A gets a significant boost in that department. But if Team B drops off the TUC Cliff, then Team A loses those six wins and suddenly fares far worse in the Pairwise. Not only is that bad on the surface, but it also created situations where some teams could benefit by losing. To wit, in 2005, when Wisconsin defeated Alaska-Anchorage in Game 3 of their WCHA playoff series, it bumped UAA off the TUC Cliff, and Wisconsin suddenly got drastically worse in the Pairwise. From an NCAA standpoint, Wisconsin would have been better off losing the third game of the series. That's not the sign of a good system. (See: article)

The whole season: Which brings us to the classic argument ... Should the season be judged as a whole, or should more weight be given to the end of the season, or conference tournaments, for example? The Committee, and the hockey community as a whole, decided to remove the "Last 16" criterion from the Pairwise many years ago, not so much out of philosophical disagreement with the idea, but because the "Record in Last 16" was so skewed by strength of schedule. But there are some who believe the season should simply be judged as a whole, period. On a more fundamental level, should we be relying upon Pairwise components that have such small sample sizes? For example, "Record vs. Common Opponents" is often based upon a game or two. Perhaps it's better to live with this for the sake of factoring in things that are worth factoring in -- such as Head-to-Head and Common Opponents. Others will argue: just use the RPI to compare teams (or, better yet, KRACH), and only use the other criteria when the RPI (or KRACH) is very close.

Bad win tweak: While it's true that a team's RPI can go down for winning a game against a bad team, and while it's true that this illustrates a flaw in the RPI concept, the concept of removing that game from the formula in order to compensate is logically flawed. The RPI is meant to be taken as a whole -- a snapshot of the entire season once it's over. It's only because of the publicity given to it by media organizations such as this one, that anyone even notices the daily fluctuations in the RPI. (Consider, too, that bad teams' RPIs go up when losing to good teams.) That the RPI is a flawed method is apparent on many levels, but the way to compensate is to use a different method for adjusting for strength of schedule (like KRACH), not to bastardize a flawed method.

Deviations from the Pairwise: It's one thing to flip seedings around for a compelling reason. It's another thing to flip seedings around by subjectively ignoring the Pairwise criteria. This is what the committee did in 2005, for the first time. Even though Colorado College won its comparison with Denver, and therefore was second on the Pairwise chart to Denver's third, the committee decided to switch them because Denver won the head-to-head matchup, 3-2, including the WCHA title game. This hardly makes sense when the committee's own rules state that head-to-head is just one of four criteria. This seems like a small change, but it opens a huge can of worms that should scare anyone who believes we should be avoiding smoke-filled rooms, and anyone who believes the whole season matters. Why have the comparison system if the committee can simply decide to overrule it? (See: article)

Should we replace the RPI and/or Pairwise with KRACH?: We think so. (See: article and article)

Should we keep the system completely objective?: No matter the flaws in this system, or any system, it has been generally agreed upon by the hockey community that it is better than allowing committee members free rein to make subjective decisions. Even introducing just the appearance of bias is not worth the grief. At least this way, no matter the flaws, the system is out in the open and teams know what they have to do to make the tournament. Some have argued that the committee should be allowed subjectivity in cases that scream for it. For example, say a team loses its star goalie for 15 games, doesn't do well, then the goalie comes back and the team plays great but gets a No. 4 seed. Should the committee be able to move them up? After all, the basketball committee takes those kinds of things into consideration. The problem is, once you start opening things up like that, you don't know where it ends. Even the decision of how far to take it is subjective in and of itself. Everyone's definition of "common sense" is different.

Regional hosts staying home: This leads to a firestorm every year, particularly from those who don't know about the philosophy. Cries of favoritism come from the masses that such-and-such team again gets to play its NCAA games at, or near, its home arena.

Other misconceptions: Whether everyone agrees with this process for selecting and seeding the teams or not, the methodology is well-defined and transparent. There is no subjectivity in the selection process, other than the up-front subjectivity in the criteria that are used. The selections are not based upon polls. They are not based upon the whim or opinion of any committee member. ... There are many common misconceptions -- for example, that teams who win the conference tournament will (or should) get preferential treatment in seeding, or that teams who are playing well down the stretch will get preferential treatment. Those wins are simply factored into the process naturally, and are not given any subjective weight. Whether or not they should be is a matter of debate. But as it stands, they don't.