But if you are using statistics to predict some actual random phenomenon, like how well you will do on a test or what the strength of a steel rod is, then all bets are off, because you can never know the distribution with a confidence level of 100%.

Caveat lector, but consider steel strength. Suppose your steel foundry produces steel whose strength averages out to some value week after week. Then one week, you find a distressingly high number of substandard steel samples. The boss comes down, yelling his head off at you, saying you need to straighten up the operations because he's going broke.

If your operations are essentially unchanged, then next week you can expect an improvement in your test samples, because they'll regress toward the mean. If something really is different, then the poor test samples may be the result of a change in inputs, procedures, or whatever. But if nothing has changed, you can tell the boss that you'll see to it, and bet that next week he'll be giving you an attaboy for fixing the problem.
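You can see this play out in a quick simulation. The sketch below uses made-up numbers (mean strength 500 MPa, standard deviation 30 MPa, 20 samples per week) and assumes the process never changes; it picks out the worst weeks and shows that the week immediately following them lands back near the overall mean, with no intervention at all.

```python
import random

random.seed(42)

# Each "week" is the mean strength of 20 samples drawn from the same
# unchanged process (hypothetical numbers: mean 500 MPa, sd 30 MPa).
def week_mean(n=20, mu=500.0, sigma=30.0):
    return sum(random.gauss(mu, sigma) for _ in range(n)) / n

weeks = [week_mean() for _ in range(10_000)]

# Pair each week with the week that follows it.
pairs = list(zip(weeks, weeks[1:]))

# Among the distressingly bad weeks (bottom 5%), look at the next week.
cutoff = sorted(weeks)[len(weeks) // 20]
bad_weeks = [cur for cur, nxt in pairs if cur <= cutoff]
following = [nxt for cur, nxt in pairs if cur <= cutoff]

avg_bad = sum(bad_weeks) / len(bad_weeks)
avg_next = sum(following) / len(following)

print(f"mean of bad weeks:       {avg_bad:.1f}")
print(f"mean of following weeks: {avg_next:.1f}")
```

The following weeks average close to 500, not close to the bad weeks' average: the "fix" is pure selection effect, which is exactly why the boss hands out the attaboy.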

Suppose you are promoted and you now manage five factories. One of the factories comes back with steel samples that are horrible. Don't expect that factory to start producing samples closer to the five-factory average, because it may have a substandard process or substandard suppliers.

In the context where I learned it, the common mistake was to assume that an exceptionally good student will become more average because of regression to the mean. But that's not the case. A 4.0 student isn't going to slide down to a 3.0 as a result of the rest of the class having a 3.0 average. But if you see a 3.0 student score a 3.9 and a 3.8, you can expect him to be closer to his 3.0 average in the future—inasmuch as the variation in his test scores is the product of random error and not something systematic, such as a better study method.
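The student case can be sketched the same way. Assuming a toy model where each score is the student's true 3.0 ability plus random error (sd 0.4 here, both numbers invented for illustration), select the students who just posted two scores of 3.8 or better and look at their next test:

```python
import random

random.seed(1)

# Hypothetical model: each score is true ability (3.0) plus independent
# random error (sd 0.4), clipped to the 0.0-4.0 grade scale.
def score(true_ability=3.0, sd=0.4):
    return min(4.0, max(0.0, random.gauss(true_ability, sd)))

# Simulate many 3.0 students; keep those whose first two scores were 3.8+.
third_scores = []
for _ in range(300_000):
    first, second = score(), score()
    if first >= 3.8 and second >= 3.8:
        third_scores.append(score())

avg_third = sum(third_scores) / len(third_scores)
print(f"average third score after two 3.8+ tests: {avg_third:.2f}")
```

The third score averages right back around 3.0: the two high scores told you which students got lucky, not which students got better.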