I think we should do something like what TangoTiger did here at the Cafe. As long as we all use the Lahman playerIDs, it should be fairly easy to judge how accurate each of us is. We could do just ERA and OPS like that study does, or better yet, all 10 of the fantasy cats, plus OPS just to see how well we judged the hitters' overall performance, since 5x5 doesn't have a tell-all sort of stat such as OPS (or [1.5*OBA + SLG] if ya wanna get technical).
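Just to make that formula concrete, here's a quick Python sketch; note the 1.5 weight on OBA is the variant from this post, not a standard stat, and the sample line is invented:

```python
# Two "tell-all" hitter stats: plain OPS, and the weighted variant
# from the post above (1.5*OBA + SLG). The weight is the poster's
# choice, not an official formula.

def ops(oba: float, slg: float) -> float:
    """Plain OPS: on-base average plus slugging."""
    return oba + slg

def weighted_ops(oba: float, slg: float, oba_weight: float = 1.5) -> float:
    """The [1.5*OBA + SLG] variant, weighting on-base skill more heavily."""
    return oba_weight * oba + slg

# Example: a .400 OBA / .500 SLG hitter
print(round(ops(0.400, 0.500), 3))           # 0.9
print(round(weighted_ops(0.400, 0.500), 3))  # 1.1
```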

Yeah, we all post them, and we'd also need a field for the playerID so we could organize it easily. I could post a list of all players and their Lahman playerIDs for people who don't know them. We could do it the day before the season starts or something, for those who don't want their rankings known because they may have a draft with some people in here.

Then after the season we check to see how we did. I'm not really sure how to go about that, but judging from your conversation with Erboes it sounds like you do, as long as we do the playerID thing.

To be honest, I don't think ranking every single player tells all that much. Everybody will hit on some and miss badly on some. The more interesting comparisons are usually those that pick the players who are very difficult to project and see how people do on that handful of players.

Tavish wrote:To be honest, I don't think ranking every single player tells all that much. Everybody will hit on some and miss badly on some. The more interesting comparisons are usually those that pick the players who are very difficult to project and see how people do on that handful of players.

Well in that case you can look at individual players from each person and see who got the tough ones right. Or, based on the results after the season, you could compute the standard deviation of how far everybody was from being right on each individual player, and see how many total standard deviations each person was from being right. So if you get close on one that nobody else did, you get more points (like me with Ensberg last year). If you miss big time on one that most people were close on, you don't get many points (like me with Loaiza last year).
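That scoring idea could be sketched something like this; here it's totaled as a penalty (lower is better) rather than points, and the forecasters and numbers are made up purely for illustration:

```python
# For each player, measure each forecaster's miss (|projected - actual|),
# express it in standard deviations of the whole group's misses on that
# player, and total per forecaster. A big miss on a player everyone else
# nailed costs a lot; a small miss on a player everyone botched costs little.
from statistics import stdev

# projections[forecaster][player] = projected HR; actuals[player] = real HR
projections = {
    "alice": {"Ensberg": 34, "Loaiza": 12},
    "bob":   {"Ensberg": 18, "Loaiza": 14},
}
actuals = {"Ensberg": 36, "Loaiza": 15}

def total_z_miss(projections, actuals):
    totals = {name: 0.0 for name in projections}
    for player, actual in actuals.items():
        misses = {name: abs(projs[player] - actual)
                  for name, projs in projections.items()}
        spread = stdev(misses.values()) or 1.0  # guard against zero spread
        for name, miss in misses.items():
            totals[name] += miss / spread
    return totals

print(total_z_miss(projections, actuals))
```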

I'm sure you guys know more about how to go about it than I do though, so you should probably be the ones coming up with ideas.

Predicting everyone would be a pain, but I'd go further than just predicting a handful of tough-to-predict players. I'd do something like predicting the top 100 or 200 players, and maybe include another 25 or 50 youngsters.

There are several different tests of prediction you could use. A common one is the Mean Squared Predictive Error, which is a fancy term for "take how far off you were, square it, add it up, and divide by the number of observations."
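Spelled out as code, that definition is just a few lines; the projected and actual numbers below are invented for illustration:

```python
# Mean Squared Predictive Error: square each miss, then average the squares.
# Squaring punishes big misses much harder than small ones.

def mspe(predicted, actual):
    """MSPE over paired lists of projected and actual values."""
    errors = [(p - a) ** 2 for p, a in zip(predicted, actual)]
    return sum(errors) / len(errors)

# e.g. projected vs. actual home runs for three players
print(mspe([30, 25, 10], [36, 20, 12]))  # (36 + 25 + 4) / 3, about 21.67
```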

GotowarMissAgnes wrote:Predicting everyone would be a pain, but I'd go further than just predicting a handful of tough-to-predict players. I'd do something like predicting the top 100 or 200 players, and maybe include another 25 or 50 youngsters.

There are several different tests of prediction you could use. A common one is the Mean Squared Predictive Error, which is a fancy term for "take how far off you were, square it, add it up, and divide by the number of observations."

Like Tavish said, it would be good to have some sort of weighted system where players who were tough to predict are given more weight. Or maybe not more weight, but if you get them right you get more than a normal "right" prediction, and if you get an easy one wrong, you get fewer points than a normal "wrong" prediction. Just an idea, and really any system would be fine by me.

Just a thought: What if you did two things: (1) took last year's average draft rankings and looked at how close a set of projections was for the top-120 players DRAFTED and (2) took the top-120 players based on their actual PERFORMANCE at the end of the 2004 season (rather than draft position) and looked at how accurately the projections called that. It's a smaller sample, but I think Test (1) would tell you whether the projections steered you away from high-priced busts and directed you to sound investments, and Test (2) would tell you whether the projections directed you to underpriced gems.
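A rough sketch of what those two tests might look like in code; the playerIDs, draft positions, and dollar values are made up, and the top-120 cutoff is shrunk to top-2 so the toy data fits:

```python
# Test (1): score projections on the top-N players by average draft position.
# Test (2): score them on the top-N players by actual end-of-season value.

def mean_abs_error(players, projected, actual):
    """Average dollar-value miss over a set of playerIDs."""
    return sum(abs(projected[p] - actual[p]) for p in players) / len(players)

# made-up sample: playerID -> average draft position, projected $, actual $
adp       = {"p1": 3,  "p2": 50, "p3": 120, "p4": 200}
projected = {"p1": 30, "p2": 18, "p3": 8,   "p4": 2}
actual    = {"p1": 25, "p2": 5,  "p3": 12,  "p4": 20}

N = 2  # stand-in for the top-120 cutoff
top_drafted    = sorted(adp, key=adp.get)[:N]                      # Test (1)
top_performers = sorted(actual, key=actual.get, reverse=True)[:N]  # Test (2)

print(mean_abs_error(top_drafted, projected, actual))     # dodged the busts?
print(mean_abs_error(top_performers, projected, actual))  # spotted the gems?
```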

A couple questions: how do you measure accuracy? I predicted Tejada would have 16 SBs and 110 RBIs; he wound up with 4 SBs and 150-something RBIs. Was my projection accurate because I predicted that he would be valuable, or inaccurate b/c I was so off in the individual categories? (Podsednik is another example; was someone who predicted .320 and 25 SBs for him accurate?) I'm wondering if you just convert everything to $$$ values and give credit for the overall assessment, or if you measure each category. At the end of the day, I care more about being steered toward good values than about you being right in 4 categories but overestimating home runs or SBs by 300%.

Also, how do you account for injuries? Most projections seem to handicap for . . . well, temporary handicaps. (I.e., injuries.) If I predicted Richie Sexson would have a lousy year, would my projections get more points/credit than someone who predicted a repeat of 2003?

I'd rather see how accurate you were in each category. If you're right on the dollar value, that's nice, and won't hurt you in your leagues because you got the value you expected, but it's not an indication that you know what you're talking about, and doesn't say much for future success IMO.