The power ratings for high school lacrosse are based on margin of victory calculations, including a 10-goal limit constraint, a correction factor for teams over- and underperforming their power rating, win-loss records, and championship bonus points. In addition to the power rating and national ratings calculations, information is provided on strength of schedule and quality wins, although the latter two are not used in the power rating formula.

The first criterion is margin of victory, which is based on the strength of an opponent (i.e., its power rating), the home-field advantage in points or goals, and the game score. Most computer rating programs are based on goal margins because goal margin provides the most informative measure of the relative strength of two opponents based on the outcomes of previously played games.

The computer program solves a set of equations, one for each game played, such that:

(1) P(i) - P(j) = Score(i) - Score(j) + HFA

where P(i) and P(j) are the power ratings for team(i) and team(j), Score(i) and Score(j) are their scores, and HFA is the home-field advantage, which is "+" if i is the home team and "-" if i is the away team. Equation 1 states that the difference in power ratings between two teams equals the goal difference that would result if they played on a neutral site (HFA = 0). In practice this equation is never satisfied exactly, and an error is produced for every game:

(2) Error = {P(i) - P(j)} - {Score(i)-Score(j)} - HFA

The objective of the algorithm is that for every team, the sum of these errors for all games played = 0.0. This means that Equation 1 is valid when averaged over all games a team plays.
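As a quick check of Equation 2, a single game's error can be computed directly. The ratings, scores, and HFA value below are made-up numbers for illustration only:

```python
# Per-game error from Equation 2, using hypothetical values.
def game_error(p_i, p_j, score_i, score_j, hfa):
    """Error = {P(i) - P(j)} - {Score(i) - Score(j)} - HFA."""
    return (p_i - p_j) - (score_i - score_j) - hfa

# Team i (home, rated 85.0) beats team j (rated 80.0) 12-8 with HFA = 1.5:
err = game_error(85.0, 80.0, 12, 8, 1.5)
print(err)  # (85 - 80) - (12 - 8) - 1.5 = -0.5
```

A negative error here means team i's margin was larger than its rating edge plus home-field advantage predicted.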

The calculation of P(i) is an iterative procedure: if the sum of errors for team i is greater than zero, then P(i) is reduced a small amount and all calculations are repeated. The P(i) values are adjusted over thousands of iterations until the sum of errors for every team equals 0.0. When this happens, the rating program has converged. This is nothing more than a trial-and-error procedure that stops when convergence is reached. The method is also referred to as a predictor-corrector procedure, because as new results become available, all ratings are updated.
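A minimal sketch of this iterative loop is shown below. The schedule, step size, iteration count, and HFA value are illustrative assumptions, not LaxPower's actual parameters:

```python
# Sketch of the trial-and-error convergence loop: each team's rating is
# nudged against its accumulated Equation-2 error until the errors vanish.
def rate(games, teams, hfa=1.5, step=0.01, iters=20000):
    """games: list of (home, away, home_score, away_score) tuples."""
    p = {t: 0.0 for t in teams}
    for _ in range(iters):
        # Accumulate each team's errors per Equation 2
        # (the sign flips for the away side).
        err = {t: 0.0 for t in teams}
        for home, away, hs, aw in games:
            e = (p[home] - p[away]) - (hs - aw) - hfa
            err[home] += e
            err[away] -= e
        # Reduce a rating slightly when its error sum is positive,
        # raise it when negative.
        for t in teams:
            p[t] -= step * err[t]
    return p

games = [("A", "B", 12, 8), ("B", "C", 9, 5), ("C", "A", 7, 10)]
ratings = rate(games, ["A", "B", "C"])
```

Because every game adds equal and opposite errors to its two teams, the ratings stay centered around zero; only the differences P(i) - P(j) are meaningful.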

Ten-Goal Limit

Games played within and outside a region are treated separately (see external correction factor). Head-to-head contests represent only one game, and the results of all games collectively are more important; so it is possible for one team to beat another and still have a lower power rating. The accuracy of this method is, on average, 3 goals for lacrosse: if you subtract the power ratings of two teams and add in the home-field advantage, the result will be within three goals of the actual score difference. The criticism of margin of victory is that it promotes poor sportsmanship by encouraging running up the score and penalizing teams that hold the score down.

The LaxPower system is such that, if the goal margin exceeds a threshold value, neither team gains nor loses rating points. In lacrosse, this threshold is 10 goals and is referred to as the 10-Goal Limit or TGL. In football, a 30-point limit is used. In addition, sometimes a team will win yet still overperform beyond its rating, so a correction factor in rating points is added to its score to acknowledge this. The same holds for a team that loses and also underperforms below its rating: that team loses additional points based on this correction factor. The purpose is to reward a team for winning and penalize a team for losing. Bonus points are also awarded to teams that win their state or certain conference championships.
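Applying the 10-Goal Limit before a game enters the rating calculation can be sketched as follows; the helper name and example scores are made up for illustration:

```python
# Sketch of the 10-Goal Limit (TGL): goals beyond the cap neither
# reward the winner nor penalize the loser.
TEN_GOAL_LIMIT = 10

def capped_margin(winner_score, loser_score, cap=TEN_GOAL_LIMIT):
    return min(winner_score - loser_score, cap)

print(capped_margin(18, 3))  # a 15-goal blowout counts as 10
print(capped_margin(9, 6))   # a 3-goal game counts in full
```

Capping the margin removes any rating incentive to run up the score past ten goals.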

Strength of Schedule

A second criterion is strength of schedule (SOS), which measures the relative difficulty of the opposition a team plays. There are three ways to compute SOS. The first is to take the average strength of all opponents based on margin-of-victory computer power ratings (PR). The second is to use a weighted (exponential) average of the margin-of-victory PRs. The third is to sum the last two components of the RPI formula described below. The results will vary, sometimes significantly, between the RPI method and the first two methods.
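The first SOS method, a plain average of opponents' power ratings, can be sketched as below; the ratings are made-up values for illustration:

```python
# Sketch of the simplest SOS method: the average of the
# margin-of-victory power ratings of a team's opponents.
def strength_of_schedule(opponent_ratings):
    return sum(opponent_ratings) / len(opponent_ratings)

sos = strength_of_schedule([85.0, 78.5, 92.0, 80.5])
print(sos)  # 84.0
```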

Win-Loss Percentage

A third criterion is win-loss percentage. Four methods are employed. Teams that win, regardless of schedule, receive consideration even if the schedule is weak. The percentage can be based on (1) total games, (2) conference or division games, (3) weighting games played later in the season more heavily than games played earlier, and (4) weighting games by where they were played, so that road wins and home losses count more than home wins and road losses.
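Methods (3) and (4) can be combined in one weighted percentage, sketched below. The specific weights (a late-season ramp and a road-win/home-loss multiplier) are illustrative guesses, not the actual formula:

```python
# Sketch of a weighted win-loss percentage: later games and
# road wins / home losses carry extra weight (weights assumed).
def weighted_win_pct(results):
    """results: list of (won, game_index, n_games, is_road) tuples."""
    earned = possible = 0.0
    for won, idx, n, road in results:
        w = 1.0 + idx / n                 # later games weigh more
        if (won and road) or (not won and not road):
            w *= 1.25                     # road wins and home losses count extra
        possible += w
        if won:
            earned += w
    return earned / possible

# A hypothetical 3-1 season; the road win in game 3 lifts the
# weighted percentage above the raw 0.750.
season = [(True, 1, 4, False), (False, 2, 4, True),
          (True, 3, 4, True), (True, 4, 4, False)]
pct = weighted_win_pct(season)
```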

Quality Wins

A fourth criterion is quality wins. A team receives additional consideration for defeating a highly ranked team. The points awarded are based on the rank of the opponent, such that the higher the rank, the more points received. The calculation can be based on poll rankings, RPI rankings, or margin-of-victory rankings.

Polls

Finally, the fifth criterion is poll results. Polls are based on expert input, and pollsters can consider factors not taken into account by computer ratings. Polls are often criticized, however, for being subjective and biased.

National Ratings

After calculating a state rating, results may be combined into a national rating by assigning an offset value to each team of each state. This value is called a ROM, or regional offset margin, and represents the relative strength of one state compared to another. It is based on games played between teams in different states.
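The ROM adjustment amounts to adding a per-state offset to each team's state rating; the offsets below are hypothetical values, not actual ROMs:

```python
# Sketch of combining state ratings into a national rating via a
# regional offset margin (ROM). Offsets are assumed for illustration.
rom = {"MD": 8.0, "NY": 6.5, "OH": 2.0}

def national_rating(state_rating, state):
    return state_rating + rom[state]

print(national_rating(85.0, "OH"))  # 87.0
```

Two teams with equal state ratings in different states can thus have different national ratings, reflecting interstate results.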

A more comprehensive formula with additional criteria can be found here. The current formula may be expanded in the future to contain additional criteria found in this more comprehensive algorithm.