Friday, October 30, 2009

Don't use regression to calculate Linear Weights, Part II

Last post, I wrote about how using regression to estimate Linear Weights values is a poor choice, and that the play-by-play (PBP) method is better. After thinking a bit more about it, I realized that I could have made a stronger case than I did. Specifically, that the regression method gives you an *estimate* of the value of events, but the play-by-play method gives you the *actual answer*. That is: a "perfect" regression, with an unlimited amount of data, will never be able to be more accurate than the results of the PBP method.

An analogy: suppose that, a while ago, someone randomly dumped some red balls and some white balls into an urn. If you were to draw one ball out of the urn, what would be the probability it's red?

Here are two different ways of trying to figure that out. First, you can observe people coming and drawing balls, and you can see what proportion of balls drawn turned out to be red. Maybe one guy draws ten balls (with replacement) and six are red. Maybe someone else comes up and draws one ball, and it's white. A third person comes along and draws five white out of 11. And so on. In all, maybe there are 68 balls drawn, and 40 of them are red.

So what do you do? You figure that the most likely estimate is 40/68, or 58.8%. You then use the normal approximation to the binomial distribution to figure out a confidence interval for your estimate.
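The first method's arithmetic is simple enough to sketch. Here's a minimal Python version of that estimate-plus-confidence-interval calculation (the function name and the 1.96 z-value for a 95% interval are my choices, not anything from the post):

```python
import math

def proportion_ci(successes, trials, z=1.96):
    """Point estimate and normal-approximation 95% CI for a proportion."""
    p_hat = successes / trials
    se = math.sqrt(p_hat * (1 - p_hat) / trials)
    return p_hat, p_hat - z * se, p_hat + z * se

# 40 red balls observed in 68 draws:
p, low, high = proportion_ci(40, 68)
print(f"{p:.3f}, 95% CI ({low:.3f}, {high:.3f})")  # 0.588, 95% CI (0.471, 0.705)
```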

That's the first way. What's the second way?

The second way is: you just empty the urn and count the balls! Maybe it turns out that the urn contains exactly 60 red balls and 40 white balls. So we now *know* the probability of drawing a red ball is 0.6.

If the second method is available, the first is completely unnecessary. It gives you no additional information about the question once you know what's in the urn. The second method has given you an exact answer.

That, I think, is the situation with Linear Weights. The regression is like observing people draw balls; you can then make inferences about the actual values of the events. But the PBP method is like looking in the urn -- you get the answer to the question you're asking. It's a deterministic calculation of what the regression values will converge to, if you eventually get the regression perfect.

-------

To make my case, let me start by (again) telling you what question the PBP method answers. It's this:

-- If you were to take a team made up of league-average players, and add one double to its batting line, how many more runs would it score?

That question is pretty much identical to the reverse:

-- If you were to take a team made up of league average players, and remove one double from its batting line, how many fewer runs would it score?

I'm going to show how to answer the second question, because the explanation is a bit easier. I'll do it for the 1992 American League.

Start with a listing of the play-by-play for every game (available from Retrosheet, of course). Now, let's randomly choose which double we're going to eliminate. There were 3,596 doubles hit that year; pick one at random, and find the inning in which it was hit.

Now, write out the inning. Write out the inning again, without the double. Then, see what the difference is in runs scored. (The process is almost exactly the same as how you figure earned runs: you replay the inning pretending the error never happened, and see if it saves you some runs.)

If we were to do this for each of the 3,596 doubles, we could just average out all the values, and we'd know how much a double was worth. The only problem is that sometimes it's hard to recreate the inning. For instance, Don Mattingly's double in the sixth inning on September 8:

How many runs score in this reconstructed inning? We don't know. If the second single advanced the runner to third, and the subsequent fly ball was deep enough, one run would have scored. Otherwise, it would be 0 runs. So we don't know which it would have been. What do we do?

The problem arose because we were picking innings randomly and dividing them into halves (the half before the double, and the half after the double), and that process creates an inconsistency in the hypothetical inning. The second half of the inning, in real life, started with one out and runners on second and third. The hypothetical second half started with one out and a runner on first. That's what created the problem.

So, since we're picking randomly anyway, why don't we throw away the *real* second half of the inning, and instead pick the second half of some *other* inning, some inning where there actually IS one out and a runner on first? That will always give us a consistent inning. And while it will give us a different result for this run of our random test, over many random tests, it'll all even out.

We might randomly choose Cleveland's fourth inning against the Royals on July 16. In that frame, Mark Whiten struck out and Glenallen Hill singled, which gives us our required runner-on-first-with-one-out. After that, Jim Thome singled, and Sandy Alomar Jr. grounded into a double play.

Grafting the end of that inning (single, double play) on to the beginning of the original inning gives us our "consistent" hypothetical inning:

Hypoth: (stuff to put a runner on first and one out) / ------ / Single / GIDP -- 0 runs scored.

Since the Yankees scored two runs in the original, unadulterated inning, and zero runs in the hypothetical inning, this run of the simulation winds up with a loss of two runs.

Now, there's nothing special about that Cleveland fourth inning: we just happened to choose it randomly. There were 6,380 cases of a runner on first with one out, and we could have chosen any of them instead.

The inning could have gone:

Out / Single / ------ / result of inning 1 of 6,380
Out / Single / ------ / result of inning 2 of 6,380
Out / Single / ------ / result of inning 3 of 6,380
Out / Single / ------ / result of inning 4 of 6,380
...
Out / Single / ------ / result of inning 6,380 of 6,380

If we run the simulation long enough, we'll choose every one of the 6,380 equally. And so, we'll wind up with just the average of those 6,380 innings. So we can get rid of the randomness in the second half of the inning just by substituting the average of the 6,380. Then our "remove the double" hypothetical becomes:

Out / Single / ------ / average of all 6,380 innings

And, with the help of Retrosheet, we find that after having a runner on first and one out, those 6,380 innings resulted in an average of 0.410 runs being scored. So now we have:

Actual: (stuff that put a runner on first with one out) / DOUBLE / other stuff that caused 2 runs to be scored

Hypoth: (stuff that put a runner on first with one out) / other stuff that caused 0.410 runs to be scored, on average

So, for this inning, we can say that removing the double cost 1.590 runs.

Now, the "actual" inning was again random. We happened to choose the Yankees' 6th inning on September 8. But we might have chosen another, similar, inning where there was a runner on first with one out, and a double was hit, and the runner held at third. This particular September 8 inning led to two runs. Another such inning may have led to six runs, or three runs, or no runs (maybe there were two strikeouts after the double and the runners were stranded).

So, what we can do, is aggregate all these types of innings. If we look to Retrosheet, we would find that there were 796 times where there were runners on second and third with one out. In the remainder of those 796 innings, 1129 runs were scored. That's an average of 1.418 per inning.
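These per-state averages are just means of the rest-of-inning run totals, grouped by base-out state. A sketch in Python, with a made-up handful of observations standing in for the full Retrosheet tabulation:

```python
from collections import defaultdict

def run_expectancy(observations):
    """Mean runs scored in the remainder of the inning, per base-out state.

    observations: iterable of (base_out_state, runs_in_rest_of_inning),
    one record per time the state occurred -- the kind of table you'd
    build from Retrosheet play-by-play files.
    """
    totals, counts = defaultdict(float), defaultdict(int)
    for state, runs in observations:
        totals[state] += runs
        counts[state] += 1
    return {state: totals[state] / counts[state] for state in totals}

# Made-up sample data, just to show the shape of the computation:
sample = [(("1st", 1), 0), (("1st", 1), 2),
          (("2nd+3rd", 1), 1), (("2nd+3rd", 1), 2)]
re_table = run_expectancy(sample)
# With the full 1992 AL data, re_table[("2nd+3rd", 1)] would be 1129/796, about 1.418
```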

So we can write:

Actual: stuff that put a runner on 1st with one out / DOUBLE leading to runners on 2nd and 3rd with one out / Other stuff leading to 1.418 runs scoring, on average.

Hypoth: stuff that put a runner on 1st with one out / ------ / Other stuff leading to 0.410 runs scoring, on average.

And so, we know that a double with a runner on first and one out, which ends with runners on 2nd and 3rd, is worth, on average, 1.008 runs.

Let's write this down this way, as an equation:

+1.008 runs = Runner on 1st and one out + Double, runner holds

We can repeat this analysis. What if the runner scores? Then, it turns out, the average inning led to 1.646 runs scoring instead of 0.410. So:

+1.236 runs = Runner on 1st and one out + Double, run scores

We can repeat this for every combination of bases and double results we want. For instance:

-0.117 runs = Runner on 1st and one out + Double, runner thrown out at home

+1.000 runs = Runner on 2nd and nobody out + Double

+1.212 runs = Runner on 1st and two out + Double, runner safe at home and batter goes to third on the throw

I'm not sure how many of these cases there are, but we can look to Retrosheet and list them all. At the end, we have a huge list of all possible combinations of doubles, and what they were worth in runs. We just have to average them, weighted by how often they happened, and we're done. We then have the answer.

As it turns out, the answer for the 1992 American League works out to 0.763 runs.

The answer is NOT an estimate based on a model with random errors that we have to eliminate. It's the exact answer to the question, the same way counting the balls in the urn gave us an exact answer.

Just to be absolutely clear, here's what we've shown:

Suppose we randomly remove one double from the 1992 American League. Then, we reconstruct the inning from the point of that double forward, by looking at the base/out situation before the double, finding a random inning with that same base/out situation, and substituting that new inning instead of what really happened.

If we do that, we should expect 0.763 fewer runs to be scored. If we were to run this same random test a trillion times, the runs lost will average out to 0.763 almost exactly.

If you try to answer this question by running a regression, to the extent that your estimate is different from 0.763, you got the wrong answer.

------

Anyway, the explanation above was a complicated way of describing the process. Here's a simpler description of the algorithm.

1. Using Retrosheet data, find every situation where there was a runner on second and no outs. It turns out there were 1,572 such situations in the 1992 AL. Count the total number of runs that were scored in the remainder of those innings. It turns out there were, on average, 1.095 runs scored each time that happened (1,722 runs scored in those 1,572 innings).

2. Repeat this process for the other 23 base-out states (two-outs-bases-loaded, one-out-runners-on-first-and-third, and so on). If you do that, and put the results in the traditional matrix, you get:

3. Find every double hit in the 1992 AL. For each of those 3,596 doubles, figure (a) the run value from the above table *before* the double was hit; (b) the run value for the situation *after* the double; and (c) the number of runs that scored on the play.

The value of that double is (b) - (a) + (c). For instance, a 3-run double with the bases loaded and 2 outs is worth 0.293 minus 0.745 plus 3. That works out to 2.548 runs.

4. Average out each of the 3,596 run values. You'll get 0.763.
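Steps 3 and 4 reduce to a one-line formula per event, plus an average. A sketch, using the bases-loaded example from step 3 (the run-expectancy numbers are the ones quoted above; the list of per-double values is illustrative, not the real 3,596 entries):

```python
def event_run_value(re_before, re_after, runs_on_play):
    """Run value of one event: RE(after state) - RE(before state) + runs scored."""
    return re_after - re_before + runs_on_play

# The 3-run double with the bases loaded and 2 outs, from step 3:
value = event_run_value(re_before=0.745, re_after=0.293, runs_on_play=3)
print(round(value, 3))  # 2.548

# Step 4: average the per-double values. A short illustrative list here;
# the real calculation averages all 3,596 doubles and lands on 0.763.
double_values = [2.548, 1.008, 1.236, -0.117]
average = sum(double_values) / len(double_values)
```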

It's that simple. You can repeat the above for whatever event you like: triples, stolen bases, strikeouts, whatever. Here are the values I got:

Anyway, my point in the original post wasn't meant to be "regression is bad." What I really meant was, why randomly pull balls from urns when Retrosheet gives you enough data to actually count the balls? This method gives you an exact number.

One objection might be that, to do it this way, there's way too much data to use, and so regression is a more practical alternative. But is it really better to use a wrong value just because it's easier to calculate?

Besides, you don't have to calculate them yourself -- they've been calculated, repeatedly, by others who can be referenced. In the worst case, they're close to the "traditional" weights, as calculated by Pete Palmer some 25 years ago. If you need a solid reference, just use Pete's numbers. They're closer than you'll get by running a regression, even a particularly comprehensive one.

32 Comments:

Disclaimer: when I say this gives you the "right answer," I'm exaggerating a tiny little bit. You could nitpick and say that there may be small features of real life this doesn't capture. For instance, when we figured out the value of the double that left runners on second and third with one out, we looked at *all* innings with runners on second and third with one out, not just those innings that got there via a double. If you believe that this makes a difference, and that there's a reason that getting to that stage via (out, single, double) is different than getting there via (out, single, single, stolen base), you may disagree with the results.

What I'm saying is that this gives the "right answer" if you agree with the assumptions built into the model.

Nice post, Phil. I was going to offer a quibble, but then you made it yourself in comment #1! And I agree that there's really no "parsimony" case for regression at this point, since the linear weights have been calculated. (And with the current cost of computing power, parsimony ain't what it used to be.)

It occurs to me that it should be possible to establish general LWs based on a league's OBP or R/G that would be quite accurate (and much easier than customizing for every season). The weight of the HR never really changes, but other events are worth a bit more when OBP is higher. Has anyone ever developed a model for establishing OBP-dependent LWs?

Guy, I think Tango has a calculator to create linear weights values, but I'm not sure if it's a formula or a simulation ... anyone else know if there's a method to predict LW values from the league stats?

I had originally left this comment, which I now realize is not quite right:

-----

At the risk of repeating myself too many times, here's one last way to think about it.

Suppose you decide to simulate an inning of a baseball game this way. First, you start with nobody on and nobody out. You then go to the play-by-play data for the 1992 American League, and randomly pick one of the 21,017 things that happened with nobody on and nobody out. Let's say it turns out to be a single to right field. So you have a runner on first and nobody out.

Now, you randomly pick one of the 5,526 things that happened with a runner on first and nobody out. Suppose it's a fly ball. You have a runner on first and one out.

Now, you randomly pick from things that happened with a runner on first and one out.

And so on, until the inning ends. At that point, you have a number of runs that scored.

If the inning has at least one double in it, pick one randomly (which is easy if there's only one). Take everything that happened up until the double, not including the double. Then repeat the randomizing process from that point on, creating a second, "control" inning. (You might actually wind up with another double, but that's OK.)

You should expect the control inning to yield 0.763 fewer runs than the original inning. Of course, that single test can't yield a fractional number of runs -- but if you repeat this enough times, the average will be 0.763, in the same way that if you toss a fair coin a few quintillion times, the average number of heads per toss will be 0.500.

-----

What's wrong is that if you don't take actual ends of innings, but simulate ends of innings, you might not get exactly 0.763. What you get depends on the idiosyncrasies of that particular season. It will be something probably very *close* to 0.763, but not necessarily exactly 0.763.
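For concreteness, the randomizing process described in that comment can be sketched as a tiny Monte Carlo: at each step, sample one of the things that happened from the current base-out state. The transition table below is a toy stand-in with made-up states and outcomes, not the actual 1992 AL frequencies:

```python
import random

# Toy transition table: state -> list of observed (next_state, runs_on_play).
# None marks the third out. Real code would tabulate these from Retrosheet
# play-by-play, one entry per observed event, then rng.choice() samples them
# with the observed frequencies.
transitions = {
    ("empty", 0): [(("1st", 0), 0), (("empty", 1), 0), (("empty", 0), 1)],  # single / out / HR
    ("1st", 0):   [(("1st", 1), 0)],
    ("1st", 1):   [(("1st", 2), 0)],
    ("1st", 2):   [(None, 0)],
    ("empty", 1): [(("empty", 2), 0)],
    ("empty", 2): [(None, 0)],
}

def simulate_inning(rng, start=("empty", 0)):
    """Play out one inning by sampling observed outcomes state by state."""
    state, runs = start, 0
    while state is not None:
        state, scored = rng.choice(transitions[state])
        runs += scored
    return runs

rng = random.Random(0)
mean_runs = sum(simulate_inning(rng) for _ in range(10000)) / 10000
```

With this toy table, a home run recurs from the empty-bases state with probability 1/3, so the long-run average works out to about 0.5 runs per inning.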

Guy and Phil, there are at least three non-PBP estimation techniques out there:

1) Tango's calculator, which is based on his Markov model

2) Estimates derived from a good non-linear run estimator (like BsR)--you can use the "+1" approach, which is just adding one (or some fraction) of each event to the league totals and finding the change in BsR, or you can use partial derivatives as I did in a BTN article (Nov 05 maybe?)

3) Both Tango and David Smyth have published some quick and dirty estimates of varying degrees of complexity--one was based just on league R/G, Tango had one based largely on league OBA, Smyth had one that used the league run scored/baserunner as a starting point. Maybe Tango will come over here and share some of his.

Thanks, p. The problem with (1) and (2) is that even though you and I may trust them, academic types might say that (a) Tango's Markov algorithm isn't public or proven to work, and (b) Base Runs (or whatever) hasn't been proven accurate enough to rely on, even using partial derivatives.

I mean, if some people refuse to use LW, which has been around for 25 years, and use regression instead, it's hard to see them using BaseRuns.

The PBP algorithm, on the other hand, speaks for itself.

Just from the point of view of those who want to run their own regressions ...

Phil, I wouldn't want to use any of those techniques in place of the empirical weights either. I was just referencing them as means of estimating the LW given the league stats. You are right, I'm sure, that those techniques would probably not be accepted by the folks who want to run regressions.

I found one of the quick-and-dirty techniques I was referring to, that relies only on R/G. It was published by David Smyth on the old FanHome (so I don't have a link):

I don't think anyone ever formally tested these, but everyone in the thread thought they looked reasonable. In any event, they answer Guy's question of whether R/G-based LW approximations have ever been explored.

As you can see, those formulas take the form of a linear regression y = mx + b.

In looking at the run expectancy tables, I wonder if there is some kind of bias (there might not be, or it might be really small, and there might be nothing anyone can do about it). But the table says something like 2.4 RE with bases loaded and no outs.

But a bad pitcher is more likely to get to that base/out situation. So it may be that everything else is not being held constant. Maybe all of the cases cancel each other out so that we also tend to have more good pitchers pitching with one out and no one on base. Just wondering if any of the run values could be affected by this issue.

Also, I think it is interesting to wonder why regressions generally come close to the "right" run values for the basic events, but doubles seem to be a lot farther off. Why would that be? Maybe there is something interesting to learn if we answered that. I think in the other post someone mentioned that teams that hit a lot of HRs hit a lot of doubles, but I don't know if anyone came up with a conclusive answer.

The balls are discrete units. When you dump over the urn and count the balls, you are counting in integers.

Runs are also discrete units. When a team scores runs, you are counting in integers.

If determining run values was EXACTLY equivalent to counting balls, you would always have integers.

When calculating linear weights, you are taking an average. The average you come up with has a standard deviation, from which you can calculate confidence intervals. These intervals tell you the precision of your calculated average. This seems a lot like the results of a regression, not counting integers.

When you count the number of balls in the urn, there are no averages, no standard deviations, and no confidence intervals. It is simply the number that you count.

Maybe you can explain again how this is the same as counting balls in an urn?

When you count linear weights by the PBP method, there is no standard deviation because there is no sample. You are using *all* the data to come up with an exact average, just as you can count 60 red balls and 30 white balls and get exactly 66.67% red, with no standard deviation.

Note that even though balls are integers, you can still have 66.67% red balls in the urn. And even though runs are integers, a double can be worth 0.763 runs.

When you count linear weights by the PBP method, there is no standard deviation because there is no sample. You are using *all* the data to come up with an exact average, just as you can count 60 red balls and 30 white balls and get exactly 66.67% red, with no standard deviation.

So, if any of the 6,380 innings that we averaged over had had different results, we would have ended up with the exact same answer?

For example, your 1129 runs in 796 innings could easily have been 1100 runs in 800 innings? I believe runs scored is generally a heavy-tailed distribution (someone told me this once, but I'll use Poisson for simplicity).

Would the variance in this situation not be ~1129, so +/- 65.0 (95%)? So maybe it's 1.33 instead of 1.42?

I'm sure there are some rare play-by-play events that have much higher variations...
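For what it's worth, the commenter's arithmetic checks out under the Poisson assumption. A quick sketch:

```python
import math

# Treat the 1129 runs as a Poisson count: variance roughly equals the mean, 1129.
sd = math.sqrt(1129)              # about 33.6
half_width = 1.96 * sd            # about 65.9, the "+/- 65" above
low_per_inning = (1129 - half_width) / 796
high_per_inning = (1129 + half_width) / 796
# low_per_inning is about 1.34, high about 1.50 --
# hence "maybe it's 1.33 instead of 1.42"
```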

There are two sources of randomness. First, there's randomness in the actual events on the baseball field. Then, there's randomness in how well your sample happens to match reality.

To do the urn analogy again: suppose Bud Selig comes along, and flips a coin 100 times. For each head, he puts a red ball in the urn. For each tail, he puts a white ball in the urn. As it turns out, he flipped 55 heads and 45 tails, so there are 55 red balls in the urn and 45 white balls.

Now, Robert Regression comes along. He pulls 10 balls out of the urn and winds up with 6 red. "Ha!" he says. "I suspect the urn has 60% red balls, plus or minus 15%."

Now Peter Play-by-Play comes along. He dumps out the contents of the urn and counts 55 red balls and 45 white.

Peter's estimate has no variation. Robert's does. That is completely independent of the fact that *the number of red balls in the urn* was itself chosen randomly. Neither Peter nor Robert takes that into account. Robert's confidence interval is ONLY for his sample matching the urn. It is NOT for what "should have" been in the urn.
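The Robert-versus-Peter distinction is easy to demonstrate: Robert's sampled estimate varies from trial to trial, while Peter's census of the same urn never does. A small illustrative simulation, using the 55/45 urn from the story:

```python
import random

urn = ["red"] * 55 + ["white"] * 45

# Peter: empty the urn and count. No sampling, no variance.
peter = urn.count("red") / len(urn)          # exactly 0.55, every time

# Robert: draw 10 balls and estimate. A different answer on different trials.
def robert(rng):
    return sum(ball == "red" for ball in rng.sample(urn, 10)) / 10

estimates = {robert(random.Random(seed)) for seed in range(50)}
# peter is a single number; estimates is a whole spread of values
```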

The analogy to this stuff:

The actual play on the field is Bud Selig stuffing the urn with a random number of red balls. Robert is the regression trying to estimate LW values. Peter is the play-by-play method actually counting the balls.

Sure, the 1129 runs could have been 1100. If it were, the PBP method would come up with different weights. But the fact remains that, GIVEN that it turned out to be 1129 runs, the PBP method gives pretty much the exact answer for how many runs a double was worth *the way the season turned out*.

But the fact remains that, GIVEN that it turned out to be 10 tails, the PBP method gives pretty much the exact answer for how many tails were flipped *the way the coins flipped*.

What good is that information? There were 10 tails, so what? It's not a statistical analysis, but rather an observation about what has happened. Is it a meaningful observation? Is it a useful observation? These are questions that would need to be asked.

I'm curious, what happens to these estimates if say we break the data into two groups - games on even days and games on odd days?

Finally, how would this PBP system deal with correlated innings? What if a DOUBLE in one inning resulted in fewer RUNS in the next inning? This system wouldn't see the "total cost/benefit" of the DOUBLE, as it cannot see past its vision (one inning). [If a pitcher is chased, or if the batter cycle changes because of a hit, etc.]

Anyway, I don't know enough about markov chains and baseball to really discuss these issues, so this is all I really have to say.

Patriot: Thanks for the info from David S. The values generally make sense, although I'm surprised that the BB value isn't more heavily dependent on R/G.

Cyril: My theory on the wrong regression coefficient for doubles is that power hitting teams hit "better" doubles, i.e. they advance more runners, and some of that value is then captured by the HR coefficient.

I think regression also tends to overvalue BBs. That's probably because BB rate is so correlated with OBP. So teams that get a lot of BBs have more valuable BBs, those with few BBs have less valuable BBs. That would make it appear the average marginal value of a walk is higher than it really is.

If power hitting teams hit "better" doubles, then my guess is that power hitters hit "better" doubles than other hitters. So whatever value we apply to a hitter will be biased. Some guys should get more than .8 for a double and some less. Is there a way to figure out how much each guy's double value should be altered?

When you count linear weights by the PBP method, there is no standard deviation because there is no sample. You are using *all* the data to come up with an exact average, just as you can count 60 red balls and 30 white balls and get exactly 66.67% red, with no standard deviation.

But what about the averages you are using to come up with your final answer? Here are portions of your explanation:

And, with the help of Retrosheet, we find that after having a runner on first and one out, those 6,380 innings resulted in an average of 0.410 runs being scored.

If we look to Retrosheet, we would find that there were 796 times where there were runners on second and third with one out. In the remainder of those 796 innings, 1129 runs were scored. That's an average of 1.418 per inning.

And so, we know that a double with a runner on first and one out, which ends with runners on 2nd and 3rd, is worth, on average, 1.008 runs.

...

All of these numbers are averages, with a certain degree of error that you don't describe. Since these numbers are used to come up with your final answer, your final answer still contains the error of these estimates. So it's not like simply counting the number of balls.

Here's a better way of describing the analogy, IMO:

Each red ball = 1 run.

You're trying to determine the composition of each run. So a red ball could be made up of 10 different wavelengths of red.

When you're trying to come up with the value of a double, you're trying to estimate the amount of Red wavelength 2 in the balls. However, each ball has a different amount of Red wavelength 2. So you take the average amount of Red wavelength 2 in these balls, and you come up with your best guess of how much Red wavelength 2 is in each ball. If you choose one ball at random, there's no guarantee that your estimated Red wavelength 2 will be in that ball.

Same thing with runs. Choose one run at random, and there's no guarantee that a double would have contributed X% of that run. It's not counting, it's estimating, and error is still present.

Exactly! Given that the PBP came out the way it did in the 1992 AL, the 0.763 is *exactly* what an *average* double was worth in the 1992 AL.

Here's one more analogy. Suppose the urn consists of 50% red and 50% white. And Bob draws out 6 red and 4 white balls.

We can say that Bob drew out 60% red balls. But there is no error in that figure, because it's not an estimate! He did draw out EXACTLY 60% red balls, plus or minus zero.

Maybe you're thinking about another question: if we ran the 1992 AL season again, and the PBP was different, how much different would the doubles estimate be, and what's a confidence interval for that?

That's a legitimate question, but a different one. And no regression based only on 1992 data can answer it.

It is NOT the case that the standard error from a regression on 1992 data gives you a confidence interval for what the double might be worth for other seasons. That standard error just tells you the confidence interval for the observed average for THAT season, the 1992 AL. It's a way of estimating the 1992 value of the double, which we KNOW will come out to 0.763.

In 2008, in MLB, in non bases loaded cases (NBL), the 2Bs ratio (per AB) was .0538. In bases loaded cases (BL), it was .0634. So the 2B rate was 18% higher in BL cases since .0634/.0538 = 1.18.

The HR rate in NBL cases was .0293. In BL cases it was .0289. So it was just 1.4% lower.

Is it possible that the base-out situation affects the batting event? We would have to look at lots more years, of course. But could this affect the run values of events, no matter how they get calculated?

1. Why not use a good simulator? Have an average team play thousands of games, then program its doubles to go up by 20 (or whatever number you choose). You could simply have 20 singles become doubles. Then instead of having 20 singles become doubles, have 20 singles become HRs. Then you can see how many more runs the team scores in each case.

2. You could find an 8-team league. Then have 8 events: 1Bs, 2Bs, 3Bs, HRs, BBs, batting outs, SB and CS. You have 8 team equations for runs and 8 unknowns (the run values for each event). I think an Excel program can do determinants or something like that to solve that. Might be interesting to see what run values you get.
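The second suggestion, eight team equations in eight unknowns, is just solving a square linear system. A sketch with a 2x2 toy version (two teams, two event types, all numbers invented), using plain Gaussian elimination instead of Excel:

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]   # augmented matrix
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Toy version: two teams, two events (say singles and doubles), season runs
# on the right-hand side. These counts are made up purely for illustration:
# 700*w1 + 150*w2 = 700 runs; 650*w1 + 180*w2 = 730 runs.
weights = solve([[700.0, 150.0], [650.0, 180.0]], [700.0, 730.0])
```

Note the catch the commenter's idea runs into: with exactly as many teams as events, the system is exactly determined and the "run values" soak up all the noise in those eight team-seasons; with more teams, you're back to least-squares regression.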

Consider none out, none on base.
Following a home run: if we refine the model so that the preceding home run defines a state different from the leadoff state, does the remainder of the inning have a different value?
Following a three-run home run: does the remainder of the inning have a value different from the leadoff state?

Consider two out, bases loaded.
Following a base on balls: if we distinguish that from two out, bases loaded (in general, or no runs scored), same as above ...

If the measured run expectancy value of a leadoff home run is greater than one, or that of a bases loaded walk is greater than one, or similarly for more complicated cases, then there is a followup question whether to attribute the refined RE values to batters. In other words, should we then refine the PBP linear weights model of batter value in the population studied, by including simple memory in the event definitions? or should we use those findings only to refine a Markov model of the half-inning? It's debatable.

That's all for now. Among other things, I'm not sure that comments on this page are still current.

Agreed that what you suggest would be something to study. You'd have to consider other things, like batting order ... you'd expect more runs in an inning where the number 1 leadoff hitter homers than in an inning where the number 6 leadoff hitter homers. And so forth.

Some of that is already built in, because home runs aren't evenly distributed through the batting order, but only some of it.

Last week I discussed the richer state space defined by the familiar base-out states and perhaps one preceding event (some memory). For example: bases empty, no outs, following a home run.

The state definition may be refined by specifying the batting position, too. For example: runner on first, two out, batter 8.

Recall my followup to self: Should we attribute the refined run expectancy values of batting events to the batters? In other words, should we incorporate any findings in the PBP-linear-weights measure of batter value in the population studied? Or use them only for other purposes such as simulation?

The example of batting position rather than preceding event may provide more insight into the nature of PBP linear weights as a method.

Suppose that two-base hits for batter 8 with runner on first and two outs generate fewer runs than do those for batter 4 with runner on first and two outs. Furthermore that difference in NL 2009 is greater than in AL 2009.

How should we decide whether to value doubles by all batters according to an average over all batting positions? And how should we decide whether to use an average over one modern league-season, or over both leagues, or over multiple seasons?

(Why) should we value doubles by Albert Pujols partly by reference to all half-innings with doubles by number 8 batters? If so, should we draw the line around one league-season? How should we decide?

The urn model fails here, in my opinion. It underwrites your point that PBP linear weights are generated by clerical work rather than modeling, the sense in which they are not a statistical matter at all. That simply means the modeling is in the definition of the urn and I suppose that should be largely a statistical process.

Phil, you have explained to Anon:

>> When you count linear weights by the PBP method, there is no standard deviation because there is no sample. You are using *all* the data to come up with an exact average, just as you can count 60 red balls and 30 white balls and get exactly 66.67% red, with no standard deviation. <<

Where there is no sampling (according to your model), I think it's best to say directly that you have observed every member of the population (according to your model). First, some people don't know enough about sampling to know the equivalence here. Second, it's misleading to emphasize that you are using all the *data*.

Working with the Retrosheet data can be time consuming, but well worth the challenges.

As the 2009 data is available, it would be revealing to see how actual behavior may have shifted in some areas since your initial study period.

Would you please provide an explanation of the SQL coding process you undertook against the Retrosheet EVENT file to create the run or win expectancy table within your post, which appears to also be nearly identical to the output you provided on your web site: