First, the statement of it being "the single most useful statistic in hockey" is so flabbergasting, I honestly don't know how to even begin.

Second, the basis of it is said to be that both SV% and shooting % are "primarily luck-driven." Really? Maybe in the short term, but over any longer period, that's clearly false IMO. SV% is considered as important as, or more important than, any other statistic for judging goalie performance by those who specialize in this. Shooting % isn't one of my favorite stats, but to say it's primarily luck-driven also doesn't seem correct IMO.

There's a graph on the link in the OP for this thread, but I'm not sure what to conclude from it. If it shows regression to the mean over time, why does it start rising at some point, plateau (2000-2500 shots) and then fall (2500-3000 shots)? Shouldn't it continue rising as the sample size increases?
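To see what "regression to the mean" should look like numerically, here's a quick simulation (the "true" shooting% and SV% values are invented for illustration, not taken from real data): hold a team's underlying percentages fixed, simulate many samples, and watch how the spread of observed SV% + S% shrinks as the shot sample grows. If the graph were showing pure sampling noise, the spread should keep narrowing smoothly, not rise and then fall.

```python
import random

random.seed(1)

TRUE_SH = 0.09   # assumed "true" team shooting% (invented)
TRUE_SV = 0.91   # assumed "true" team SV% (invented)

def observed_pdo(n_shots):
    # One simulated sample: n_shots for and n_shots against.
    gf = sum(random.random() < TRUE_SH for _ in range(n_shots))
    ga = sum(random.random() < (1 - TRUE_SV) for _ in range(n_shots))
    return 1000 * (gf / n_shots + 1 - ga / n_shots)

spreads = {}
for n in (250, 1000, 3000):
    sims = [observed_pdo(n) for _ in range(2000)]
    mean = sum(sims) / len(sims)
    spreads[n] = (sum((x - mean) ** 2 for x in sims) / len(sims)) ** 0.5
    print(f"{n} shots: observed spread (1 SD) ~ {spreads[n]:.1f} points")
```

The spread narrows monotonically with sample size in this sketch, which is what a purely luck-driven stat would do.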

I don't know where to find all the relevant team 5v5 data (shooting %, SV%, shots F-A, etc.), so I'm looking at overall data here. Here are the examples given in the linked article on PDO:
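For reference, the stat being discussed is just on-ice SV% plus on-ice shooting %, conventionally scaled by 1000. A minimal sketch of the arithmetic (the season totals below are invented for illustration):

```python
def pdo(goals_for, shots_for, goals_against, shots_against):
    # PDO = shooting% + save%, conventionally scaled by 1000.
    shooting_pct = goals_for / shots_for
    save_pct = 1 - goals_against / shots_against
    return round(1000 * (shooting_pct + save_pct))

# Invented totals: 250 GF on 2500 shots, 216 GA on 2400 shots.
print(pdo(250, 2500, 216, 2400))  # shooting% .100 + SV% .910 -> 1010
```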

1) 2012 Minnesota- This seems to be the featured keystone example in the linked article, and the whole basis of it is that Minnesota's SV% was an unreasonably high .944 as of Dec. 11. That does seem quite high, but I'm not sure how much 5v5 SV% differs from overall SV%, so I'm not sure how large the deviation actually was. What else was happening besides "luck regressing to the mean"?:

A) The top 2 Minn. goalies each had .932 overall SV%s as of Dec. 11. It seems odd that they would have basically identical SV%s given that this is supposedly all driven by luck.

B) Backstrom's .932 overall SV% to that point was .018 better than the year-end league avg. of .914. In his first 3 seasons ('07-'09) his overall SV%s were .024, .012 and .015 better than league avg. How is .018 better than league avg. so different from some of his previous performances?

C) Backstrom was the starter or at least the biggest part of the tandem/trio. He misses large chunks of games in both January and March. This means he may have been rusty, still partially injured, or (looking at the data) possibly overworked upon his return. This also means the remaining goalie(s) may have been overworked during his absence.

D) They finished with an overall SV% + S% of 995, so they must have been very unlucky the last 2/3 of the season, right?

Attributing Minnesota's collapse purely to "luck regressing to the mean" seems quite simplistic and likely incorrect IMO. I'm going to quickly address the other examples, since they were basically glossed over in the linked article, and after the featured example, my impression of this metric, or at least the article supporting it, went from skeptical to "does not pass inspection."

2) 2012 St. Louis- They were supposedly unlucky at the start of the season before luck regressed to the mean. Maybe so, but according to the overall data, their SV% + S% was the fourth highest in the league at 1014, even with such an "unlucky" start. How does a stat that supposedly regresses to the mean fluctuate so wildly that it goes from unlucky to not just normal but very lucky? How does one draw valid conclusions from such a stat?

3) 2011 Dallas & New Jersey- Again, Dallas was supposedly very lucky at the start and New Jersey very unlucky. Yet Dallas finished 7th or 8th in overall SV% + S% at 1012 and New Jersey finished last at 983. I don't see where their luck regressed to the mean, at least enough to explain their different fates.

4) 2010 Colorado- Once again, Colorado was supposedly lucky at the start, but finished second with an overall SV% + S% of 1021. I don't see where their luck regressed to the mean, at least enough to explain their fall.

Other team examples of overall SV% + S% (which is supposedly random and driven by luck, at least the 5v5 version):

Vancouver was 1018, 1019, 1026 and 1019 the last 4 seasons.
Boston was 1036, 998, 1023 and 1019 the last 4 seasons.
Nashville was 1016 and 1024 the last 2 seasons.
Phoenix was 1006, 1010, and 1013 the last 3 seasons.
Rangers were 1006, 1013 and 1019 the last 3 seasons.

Islanders were 986, 989, 998, and 984 the last 4 seasons.
Columbus was 999, 994, 984, and 984 the last 4 seasons.
Toronto was 981 and 975, before improving to 997 and 998.
Edmonton was 989 and 990 before improving to 1007.
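Just taking the season-by-season numbers listed above at face value, a quick tabulation of multi-year averages (values copied straight from the lists above; coverage is whatever the post gives, not a full league sample):

```python
# Overall SV% + S% by season, copied from the lists above.
pdo_by_team = {
    "Vancouver": [1018, 1019, 1026, 1019],
    "Boston":    [1036, 998, 1023, 1019],
    "Nashville": [1016, 1024],
    "Phoenix":   [1006, 1010, 1013],
    "Rangers":   [1006, 1013, 1019],
    "Islanders": [986, 989, 998, 984],
    "Columbus":  [999, 994, 984, 984],
    "Toronto":   [981, 975, 997, 998],
    "Edmonton":  [989, 990, 1007],
}

# If the stat were pure luck, multi-year averages should cluster at 1000.
averages = {team: sum(v) / len(v) for team, v in pdo_by_team.items()}
for team, avg in sorted(averages.items(), key=lambda kv: -kv[1]):
    print(f"{team:10s} {avg:7.1f}")
```

The same teams sit persistently above or below 1000 year after year, which is hard to square with "primarily luck-driven."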

It's a patchwork of various factors, which are often muddled by a large dose of luck. That makes its use severely limited as either a measure of the total skill of a team's goalies/shooters, or as a measure of its luck. The individual version is even more prone to fluctuation and so basically even more useless. Why would I use the goalie's SV% over a very small sample as half of a metric for an individual skater, whether it's supposed to indicate the skater's skill or his luck?

I think if one used it on a team basis and compared it to how that team did in the past, it might indicate that either the team is getting better/worse or luckier/unluckier. However, as I said in a previous post, why wouldn't one just use GF/GA or some such metric that is at least as easy to find and is more directly indicative of performance?
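On the GF/GA point, here's a contrived illustration (all numbers invented) of why I'd prefer a goal-based metric: two teams can post the identical SV% + S% while having opposite goal differentials, because the percentages throw away shot volume entirely.

```python
def pdo(gf, sf, ga, sa):
    # SV% + S%, scaled by 1000.
    return round(1000 * (gf / sf + 1 - ga / sa))

# Invented totals: identical percentages, very different shot volumes.
team_a = (250, 2500, 216, 2400)   # outshoots its opponents
team_b = (200, 2000, 270, 3000)   # badly outshot

print(pdo(*team_a), pdo(*team_b))    # same value for both teams
print(250 / 216, 200 / 270)          # GF/GA: well above 1 vs. well below 1
```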

Quote:

Originally Posted by Czech Your Math

What is IPP?

I wouldn't call PDO an "advanced statistic", I'd call it a new statistic that is very simple and very misleading. It has two components, one of which the individual has much control over and may indicate his skill to a large degree, the other of which he has very little control over (except to prevent high quality shots) and therefore has little to do with his skill. It's like putting sauerkraut and Oreos together and calling it an advanced food.

Look at the shooting %s of Brett Hull or Stamkos. They jumped in their second seasons, but that doesn't mean they didn't improve further a season or two later. Of course such levels are not sustainable for most players over longer periods, but it doesn't mean it was mostly luck.

It's not the individual's shooting %, it's the team's shooting % with him on the ice. Essentially, it's the other team's goalies' SV%. Those are extremely different things, and that blows up most of your argument.