
OK, one last post tonight to try to put this punting thing to bed. I’ve noticed that a lot of the comments at Big Cat Country are focusing on the idea that a good punter’s longest punts are what we should look at. It’s not that the best case scenario shouldn’t be considered. In fact I argued that in salary cap terms, teams should think about the potential upside from their 3rd round picks. But the best case scenario is just one potential outcome. There are always a range of potential outcomes. Even the best punter in the league is going to have results sometimes that are below average.

I thought I would actually look at the best punter in the league to illustrate this fact. Shane Lechler is the highest paid punter in the league. He has a big leg.

But if you’re looking at Lechler, should you focus on his longest punts, or the average of his results?

Below is a graph that shows Lechler's punts based on the line of scrimmage when he kicked them. I show Lechler's net punt, and I also show the expected net punt from each yard line on the field. As you get closer to the other team's end zone, the net expectation for a punt goes down.

Lechler is actually better than average if you take his net punt and then subtract out what the average punter gets from each yard line on the field. But he’s only 2.5 yards per punt better than the average expectation. This analysis adjusts for field position in the way that the BCC commenters are saying is important. Lechler also isn’t significantly better when he’s closer to his own end zone. He’s still just about 2.5 yards per kick better than average. So in the places where the BCC commenters are saying Lechler can “flip the field”, Lechler is still just 2.5 yards per punt better in terms of net punt than what we would expect.
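As a sketch of that adjustment, here's how net-punt-over-expectation could be computed. The expected-net figures and individual punts below are invented for illustration, not Lechler's actual numbers.

```python
# Illustrative sketch: value a punter by comparing each net punt to the
# league-average expectation from the same line of scrimmage.
# All numbers here are made up for illustration.

def yards_over_expectation(punts, expected_net):
    """punts: list of (line_of_scrimmage, net_yards) tuples.
    expected_net: dict mapping line of scrimmage -> league-average net punt."""
    diffs = [net - expected_net[los] for los, net in punts]
    return sum(diffs) / len(diffs)

# Hypothetical league-average net punt by line of scrimmage
expected_net = {20: 39.0, 35: 38.0, 50: 33.0}

# A punter's individual results: (line of scrimmage, net yards)
punts = [(20, 42.0), (35, 44.0), (50, 31.0), (20, 37.0)]

print(yards_over_expectation(punts, expected_net))
```

The point of the metric is that a punter's raw net average is confounded by where he punts from; subtracting the positional expectation first removes that.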

The average is the central tendency. For every punt that Lechler kicks that might go 70 yards, he is balancing that out with a shorter kick so that his average is close to what the average kicker yields (although slightly above).

Psychologist Daniel Kahneman has done a number of experiments looking at whether humans can make good judgments about the most likely outcome. It turns out that we can't. We often focus on the best case scenario. We're drawn to the idea that the best punter we can find might be able to outkick the next guy by 10 yards per kick. But that's not realistic. It's not realistic for the same reason that municipal projects never finish on time or on budget: the projects are always bid with the best case scenario in mind. The Jaguars probably do feel like they took a guy who can net them an average of 50 yards per game in field position. But a quick review of historical outcomes shows that those expectations aren't reasonable.

For my guest post at Big Cat Country I looked at whether it made any sense to take a punter in the third round of the NFL draft. Go check out that post if you're interested in punters at all. In the comments of that post a number of people suggested that I had ignored the value that a punter might have in pinning the other team deep in its own territory.

I thought I would look at that issue. First I had to create a formula to figure out whether a punt should or should not be a touchback. I used results from 2000-2010 to create this graph that breaks down punts by likelihood of becoming a touchback, depending on field position.

I can then use the formula from that graph to analyze actual punts and see whether punters have repeatable ability to avoid touchbacks (when controlled for field position). When I do that, I am basically calculating Expected Touchbacks vs. Actual Touchbacks. Doing that, I get the following graph.
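A rough sketch of that Expected vs. Actual Touchbacks calculation. The probability curve here is a simple stand-in for the one fit from the 2000-2010 data, and the punt locations are invented.

```python
# Sketch: compare a punter's actual touchbacks to the number "expected"
# from field position alone. The probability function below is a toy
# stand-in for the curve fit from real punt data.

def touchback_probability(line_of_scrimmage):
    """Hypothetical touchback probability given the punting team's
    line of scrimmage (yards from its own goal line)."""
    # Closer to midfield and beyond -> higher touchback risk.
    return max(0.0, min(1.0, (line_of_scrimmage - 30) / 60))

def expected_touchbacks(lines_of_scrimmage):
    # Sum of per-punt probabilities = expected touchback count
    return sum(touchback_probability(los) for los in lines_of_scrimmage)

punts = [35, 45, 55, 60]   # where each punt was kicked from
actual_touchbacks = 1

exp = expected_touchbacks(punts)
print(round(exp, 2), "expected vs", actual_touchbacks, "actual")
```

A punter with fewer actual touchbacks than expected would look like he has pin-deep skill; the question the post asks is whether that gap persists.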

The trend line explains about 60% of the variance in touchback results, which is to say that field position explains about 60% of the variance in touchback results. But that’s not 100% either. So is the actual punter responsible for causing or preventing the touchbacks that can’t be explained by field position, or is it randomness at play?

It's probably randomness. The graph below shows an X, Y scatter where prior ability to prevent touchbacks is the independent variable. It has no explanatory power over future ability to prevent touchbacks. Just because a punter has had fewer touchbacks than you would expect based on field position in the past doesn't mean that will continue.
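The persistence test behind that scatter can be sketched like this. The touchbacks-over-expectation figures are invented, and the Pearson correlation is computed by hand.

```python
# Sketch of the persistence test: correlate each punter's
# touchbacks-over-expectation in one period against the next period.
# A correlation near zero means past "touchback avoidance" doesn't
# predict future avoidance. All numbers are invented for illustration.

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Touchbacks over expectation, period 1 vs period 2, one value per punter
period1 = [-2.0, 1.5, 0.5, -1.0, 3.0]
period2 = [0.5, -1.0, 2.0, 0.0, -0.5]

print(round(pearson_r(period1, period2), 3))
```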

This is something of a “fooled by randomness” issue. Sometimes it’s easy to mistake randomness for skill. Some might remember the performance of San Diego punter Mike Scifres in the 2009 playoffs. Scifres was lauded for his performance against the Colts. Below is an account from the game:

Scifres, passed over again this year in Pro Bowl voting, booted the ball six times last night for a 51.7 net average – an NFL playoff record for a punter with five or more punts. All six were downed inside the 20-yard line, also an NFL playoff record. Five times Scifres pinned the Colts inside 11 yards, and Indy had 6 yards in returns. Scifres had one booming drive of 67 yards.

But over his career, Scifres has about as many touchbacks as you would expect, based on field position. Over the long term he hasn’t shown any increased ability to pin the other team deep and avoid touchbacks. I have Scifres calculated for 44.05 “Expected Touchbacks” and he has 44 “Actual Touchbacks”. The skill he showed in the playoff game against the Colts may have just been randomness.

In all of this analysis, I only found one punter whose results looked like they deviated from expectation significantly. That was Shane Lechler, who causes touchbacks a lot more often than should be expected. Based on field position you would expect that he would have caused about 75 touchbacks in his career. He’s actually caused 129. The interesting thing is that Lechler is really bad at that part of the game and yet he’s the highest paid punter in the league.

I thought it might be fun to look at the draft day trades and compare the trades on the basis of the Jimmy Johnson Chart and also my chart that focuses on how many games started you can expect to get out of each pick. As a refresher, here’s my draft pick value chart shown versus the Jimmy Johnson Chart.

Browns move up to No. 3: Wanting to secure their top choice, the Cleveland Browns moved to the No. 3 pick in a deal with the Minnesota Vikings. Minnesota acquired the No. 4 pick, plus three additional draft choices (a fourth-round pick, No. 118 overall; a fifth-round pick, No. 139 overall; and a seventh-round pick, No. 211 overall). The Browns moved up one spot to select Alabama running back Trent Richardson, while the Vikings grabbed Southern California offensive tackle Matt Kalil with the No. 4 pick.

The table below breaks down the value of the picks exchanged.

Note that the JJ Chart is denominated in points, while my chart is denominated in “Games Started”. So you can expect to get about 85 career games started out of the fourth overall pick. The JJ Chart says that Cleveland won the trade by a pretty wide margin, and the FD Chart says that Minnesota won the trade by a pretty wide margin. Note that if I were to include position specific data, Minnesota would have blown Cleveland out of the water on this one. Offensive linemen tend to start a significant number of games more than a running back. Then if you consider salary cap issues and the fact that Minnesota is saving a lot of money on a left tackle and Cleveland is saving less money on a running back, it gets even worse for Cleveland.
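Here's how that pick-for-pick accounting might be sketched. The chart values below are rough placeholders, not the actual Jimmy Johnson or games-started figures.

```python
# Sketch of comparing a draft-day trade under two pick-value charts.
# Chart values are rough placeholders, not the real JJ or FD numbers,
# but they reproduce the qualitative disagreement described above.

jj_chart = {3: 2200, 4: 1800, 118: 58, 139: 37, 211: 7}   # points
gs_chart = {3: 85, 4: 85, 118: 35, 139: 30, 211: 15}      # career games started

def trade_value(chart, picks):
    return sum(chart[p] for p in picks)

browns_give = [4, 118, 139, 211]   # what Cleveland sent to Minnesota
browns_get = [3]

for name, chart in [("JJ", jj_chart), ("GS", gs_chart)]:
    gave = trade_value(chart, browns_give)
    got = trade_value(chart, browns_get)
    winner = "Cleveland" if got > gave else "Minnesota"
    print(f"{name} chart: gave {gave}, got {got} -> {winner} wins")
```

Notice that the JJ chart's steep drop at the top makes a one-spot move look expensive to match, while a games-started chart treats adjacent top picks as nearly identical, so the extra mid-round picks swing the deal.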

Again, the Jimmy Johnson Chart says that the team trading up won, my chart says that the team trading down won. My chart says that the expectation for the games started difference for the 5th and 7th picks isn’t that great. However, if you look at the actual picks, I think you could make the case that Jacksonville did alright with this trade.

Cowboys jump up eight spots to No. 6: The Dallas Cowboys have moved up eight spots to the No. 6 pick in a deal with the St. Louis Rams to select LSU cornerback Morris Claiborne. The Rams acquired the No. 14 pick (which St. Louis used to select LSU defensive tackle Michael Brockers) as well as a second-round pick (the No. 45 pick) from the Cowboys.

The JJ Chart says this one is about even. My chart says that the team trading down (STL) won, and by a sizable margin. The 2nd round pick that the Cowboys gave up is worth quite a bit. In terms of the actual players, I do think there’s something to be said for getting the best corner in the draft due to positional importance.

Eagles move up to No. 12 for defensive tackle: The Philadelphia Eagles traded up three spots to acquire the No. 12 pick from the Seattle Seahawks to select Mississippi State defensive tackle Fletcher Cox. Seattle picked up the No. 15 pick (which Seattle used on West Virginia outside linebacker Bruce Irvin) plus two additional picks (a fourth-round pick, No. 114 overall, and a sixth-round pick, No. 172 overall).

From a pick value standpoint, my chart says that SEA won. But from an actual pick standpoint, it doesn’t look like they really capitalized on the opportunity. PHI got Fletcher Cox, who many considered to be a top 10 quality player. To figure the pick value on this one, it might actually be appropriate to think about Cox in terms of what his inherent value is, not what other teams assigned to him.

Bengals deal out of No. 21 pick: After taking Alabama cornerback Dre Kirkpatrick with the No. 17 pick, Cincinnati traded out of the No. 21 pick in a deal with New England. The Patriots gave up the No. 27 pick as well as a third-round pick (No. 93 overall) to move up six spots. New England selected Syracuse defensive end Chandler Jones with the No. 21 pick. Cincinnati took Wisconsin guard Kevin Zeitler with the No. 27 pick.

The only comment I have here is that we were talking last night about CIN and how they seem to be increasingly making smarter decisions. I’m not even saying that they sharked NE here, but between their draft picks last year, the Carson Palmer trade, and then this trade, they are increasingly operating like the smarter franchises.

Patriots move up again: The New England Patriots moved up for the second time tonight when they acquired the No. 25 pick from the Denver Broncos. Denver acquired the No. 31 pick and a fourth-round pick (No. 126 overall). Denver later dealt those picks to Tampa Bay. The Patriots went defense once again with their draft selection, selecting Alabama linebacker Dont’a Hightower.

I think there's actually an important point to be made here. When NE trades down, they often are just swapping picks with a 5 or 6 pick difference, and they will sometimes get a future year number one pick in the deal. When they do that, they are taking advantage of teams discounting future year picks too heavily. But even when they trade up, they do it on terms that no other team gets. In terms of Games Started (my chart metric), this is the first trade where the team trading up got the better end of the deal.

Vikings jump back into first round: The Minnesota Vikings picked up a second first-round pick in a deal with the Baltimore Ravens. Minnesota acquired the No. 29 pick and in exchange the Ravens received a second-round pick (No. 35 overall) and a fourth-round pick (No. 98 overall). The Vikings selected Notre Dame safety Harrison Smith with the No. 29 pick.

Broncos move back again: The Denver Broncos, who traded out of the No. 25 pick, also traded out of the No. 31 pick in a deal with the Tampa Bay Buccaneers. Denver acquires a second-round pick (No. 36 overall) and a fourth-round pick (No. 101 overall). Tampa Bay picks up the No. 31 pick, which it used to select Boise State running back Doug Martin, plus a fourth-round pick (No. 126 overall).

No comment here except that Tampa Bay moved up in order to select a running back.

The graph below shows the average pick for the top 20 prospects from the mock draft contest, as well as a line showing the highest and lowest draft slot they appeared in. The three guys with the widest range seem to be Mark Barron, Stephon Gilmore, and Michael Brockers.

I figured I would bang out a few posts with some graphs based on the results of the mock draft contest. Here’s a graph that shows the top 20 players from the contest based on average pick number that they were selected at (not the total times they were drafted).

You can actually see some tier groupings from this graph. You’ve got the Luck/RGIII tier, the Kalil/TRich/Claiborne tier, and so on.

Translating wide receiver success from the college game to the pro game is sort of an analyst’s dream, primarily because there is so much work left to do. A model based only on draft position explains some amount of wide receiver success. If you add in some variables that I use, you can explain a little bit more of WR success. But even then you’re still a long ways from having a model that gets every receiver right. So there is still a lot of work to do. It’s a great challenge.

Below I show two different groups of wide receiver rankings. The only difference between the two rankings is that in one group I have taken out the impact of draft order. In the first group I show the impact of draft order because it does matter. Draft order is going to dictate opportunities at least, and it’s also likely that draft order reflects some amount of player evaluation that can’t be covered by the measures that I use.

Without further ado, here are my WR ranks based on some simple assumptions that I use for draft order. The variable that I solve for is a wide receiver’s percent of pro team fantasy points. The reason I do that is because receivers can end up in different situations. For instance, Larry Fitzgerald, Andre Johnson, and Calvin Johnson all varied in the fantasy points that they compiled, but in their first three years in the league they all caught about 35% of their teams’ fantasy points. So basically my dependent variable has been adjusted for pro team passing offense.

Some notes:

Market Share of YDs = the player’s share of college team passing yards

For my draft pick assumptions I just tried to be accurate within about 5 picks for the first round guys and within about 20 picks for the later round guys.

Proj. = the player’s projected share of pro team fantasy points. To get a sense as to how this year’s crop stacks up, consider that Calvin Johnson would have been projected to produce 35% of his pro team’s production. My top forecast player this year is Justin Blackmon, who is roughly projected in the Hakeem Nicks/Greg Jennings/Roddy White range. Which is to say that he is a good prospect, but located well below the Calvin Johnson/Larry Fitzgerald level of prospects.

Even if I assume that Michael Floyd is going at around the 10th pick, I still have him rated as the fourth WR. Floyd could certainly prove me wrong. He has some conflicting signals in that his 2010 season yielded more touchdowns. However, I only count the last season of college production because I've found that to be the best predictor.

I don't really know how accurate the predictions for Rueben Randle and Alshon Jeffery will be. I missed by a pretty wide margin on Julio Jones last year, and I think part of that might be related to having to play SEC cornerbacks. But even that shouldn't affect the percent of team yards that they caught. On that measure Randle looks very good at 39% of LSU's yards. Jeffery doesn't look so good on that measure.

Brian Quick’s receiving numbers are actually made up. I didn’t want to put his ASU numbers in there and I haven’t looked at small school receivers (Garcon/Colston/Cruz) enough to really try to translate small school receiver success. But I also didn’t want to leave him off the list. So I made shit up.

If we want to see what the list looks like without the impact of draft pick position (so basically just my raw ranking), it would look like this:

Some notes:

Jordan White probably isn’t the best WR in this class, but he’s also probably a lot better than teams will give him credit for. Consider White’s games against Big 10 opponents: Illinois – 132/1, Purdue – 265/1, Michigan – 119/0. Those are pretty impressive. I could see him being a sort of Austin Collie-type guy. Collie also flashed pre-draft with huge touchdown numbers. I would seriously keep an eye out for White over the next couple of years and look for him to end up in a situation where he’s the 3rd WR who might see soft coverage.

Primarily the difference between the two groups of rankings comes down to a player’s share of college team yards. That’s a measure that is often overlooked by teams when they draft WRs. My analysis shows that % of college team yards has a low correlation with a WR’s draft slot, but a higher correlation with pro success.

You might notice that all of the WRs have a higher “Proj” in this list. That’s because when I take out the impact of draft order, I’m removing a variable that subtracts from each player’s projection. Every 25 or so draft picks that a player drops subtracts about 1% from the projection of their share of pro team fantasy points.
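That adjustment can be written down directly. The base projection below is illustrative; only the roughly-1%-per-25-picks slope comes from the model described above.

```python
# Sketch of the draft-position adjustment described above: roughly 1% of
# projected share of pro team fantasy points per 25 picks dropped.
# The raw projection value is illustrative, not from the actual model.

def adjusted_projection(raw_share, overall_pick, penalty_per_25=0.01):
    """raw_share: projected share of pro team fantasy points before
    accounting for where the player was drafted."""
    return raw_share - (overall_pick / 25.0) * penalty_per_25

print(adjusted_projection(0.30, 5))    # early first-rounder
print(adjusted_projection(0.30, 100))  # fourth-rounder, same raw profile
```

Two receivers with identical raw profiles end up about four points of projected team share apart if one goes 5th overall and the other goes 100th, which is exactly the gap between the two ranking lists.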

You might look at the difference between the two lists as the value difference. You’ll be able to get the guys rated highly in the 2nd list for cheaper than you’ll be able to get the guys rated highly in the first list.

I’ve been working on an adjustment for QB quality that I actually left out of this iteration. However, that adjustment would raise the ranking of Mohamed Sanu for instance. Sanu played in an offense that completed about 53% of its passes, which no doubt hamstrings receivers in that offense. But the flipside of that is that when you start adjusting Sanu (and Stephen Hill) upwards, you have to adjust Kendall Wright downwards. Anyway, I’m still working on that.

You probably know that they’re equal length, but Line A probably still looks longer to you.

Let’s do this again with an example that I use in Game Plan. Quick, which of these receivers looks better?

Steve Johnson at Kentucky

Reggie Wayne at Miami

Maybe I’m crazy, but the guy in the Miami jersey looks a lot better to me.

Now let’s look at the two receivers a different way. Below is a table containing information for each player that teams would have been aware of pre-draft.

Player                          Steve Johnson   Reggie Wayne
Draft Year                      2008            2001
40 Time                         4.46            4.45
Weight                          210             198
Ht                              74              72
SOS                             6.09            5.71
School                          Kentucky        Miami (FL)
Drafted By                      BUF             IND
Overall Pick                    224             30
Share of College Tm TDs         0.33            0.38
TD/G                            1.00            0.91
Share of College Team Pass Yds  0.28            0.26
Y/G                             80.08           68.64
Y/R                             17.35           17.56

Whereas the guy in the Miami jersey looked better in the picture, the guy in the Kentucky jersey had better measurables. They played a similar schedule in terms of difficulty. They both averaged about 1 touchdown per game (although Johnson did slightly better than Wayne) and they both accounted for similar amounts of their team’s college production. They were similar size (although again, Johnson was bigger). The main area of difference is where they were drafted. Johnson went in the 7th round and Wayne went in the 1st round.

Now let’s look at their pro production. But because Johnson entered the league at a younger age than Wayne, we’ll just look at their production from age 23 to 25.

Reggie Wayne Production (Age 23-25)

Year     Age  G   GS  Rec  Yds   Y/R   TD
2001     23   13  9   27   345   12.8  0
2002     24   16  7   49   716   14.6  4
2003     25   16  16  68   838   12.3  7
SUMMARY       45  32  144  1899  13.2  11

Steve Johnson Production (Age 23-25)

Year     Age  G   GS  Rec  Yds   Y/R   TD
2009     23   5   0   2    10    5.0   0
2010     24   16  13  82   1073  13.1  10
2011     25   16  15  76   1004  13.2  7
SUMMARY       37  28  160  2087  13.0  17

Again, the two are pretty similar. They both averaged about 13 yards/reception. They both compiled about 2000 yards in the three seasons measured. Johnson was actually a little better in terms of touchdowns (as he had been in college).

I started this post with a common optical illusion that shows lines of identical length which appear to be of different lengths. I wanted to illustrate the point that sometimes we can't trust our eyes. Sometimes our eyes need help. In the case of the lines, the only way to know for sure whether they are actually the same length is to measure them. Like with a ruler. Measuring can be helpful. It's like giving your eyes some assistance.

Measuring would have also been helpful for the teams that took 30 wide receivers in the 2008 draft before Steve Johnson came off the board. Sometimes it’s not that easy to look at a guy and try to separate him from the uniform he’s playing in. That’s true whether he’s wearing the Kentucky uniform or the Miami uniform.

I sort of assume that some NFL teams use various forms of analytics in their draft process. I would be really shocked to find out that the Packers don’t use some form of analytics. The way that they won a Super Bowl in a year that they were decimated by injury was the first red flag for me that they are doing something a little different. Both Greg Jennings and Jordy Nelson are receivers that a statistical model would have liked very much coming out of college, and yet the Packers got both receivers in the 2nd round.

But I also assume that most teams are sort of blissfully unaware that analytics might provide any value. I've discussed in the past that the league's phrase "Stats are for Losers" puts it in sort of a weird anti-science place. It's also common to hear people say "Well, I don't think you could use stats for that. It's too complex." (Every time I hear someone say that, I think "Oh, because you've probably tried, right?")

One thing I’ve talked about is how NFL teams don’t optimize the way that they select receivers. While NFL draft position does have some explanatory power over receiver production, you also have to consider that the system is rigged in favor of the draft evaluators. The same coaching staff/front office that selects a player is also the same group of people that assign playing time. If there are mistakes in the draft evaluation, they are going to carry forward. So really the explanatory power that draft position has on receiver production isn’t very impressive.

One thing I did recently to try to account for the inherent bias in the system that favors draft evaluators was to look at the draft in terms of just wide receivers that had been picked right after each other.

To think about what I did in simple terms, imagine that I created a game that was going to pit an NFL draft evaluator (except we’ll use draft order as a proxy for the opinion of an evaluator) against a formula. The evaluator and the formula will be presented with two wide receivers and will have to try to guess which will have more production in the early part of their career. Each set of two wide receivers will only be guys who had been picked after each other.

For example, in the 2009 draft, the game would go like this:

Heyward-Bey vs. Crabtree

(evaluator picks DHB, algorithm picks Crabtree, algorithm wins)

Crabtree vs. Maclin

(both the evaluator and the algorithm pick Maclin)

Maclin vs. Harvin

(both pick Maclin)

Harvin vs. Nicks

(evaluator picks Harvin, algorithm picks Nicks, algorithm wins)

The evaluator would make its guess and the algorithm would rely on its formula for its guess. Then when the evaluator and the formula disagreed, we would tally up which was right and which was wrong.

But again, for our purposes we’ll just use draft order as a stand in for the evaluator. So can draft order beat a simple algorithm at a game of prediction when choosing between two wide receivers? It turns out it can’t. The algorithm would win.
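The game described above can be sketched in a few lines. The scores and production numbers are invented, and draft order is represented simply by which player in each pair was picked earlier.

```python
# Sketch of the head-to-head game: for each pair of back-to-back WR picks,
# "draft order" takes the earlier pick and the algorithm takes whichever
# player its score prefers. Only pairs where they disagree are tallied.
# Player data is invented for illustration.

def run_game(pairs, score, production):
    """pairs: list of (earlier_pick_player, later_pick_player).
    score: the algorithm's rating function.
    production: actual early-career pro output per player."""
    algo_wins = draft_wins = 0
    for early, late in pairs:
        algo_choice = early if score(early) >= score(late) else late
        if algo_choice == early:
            continue  # no disagreement, doesn't count
        if production[late] > production[early]:
            algo_wins += 1
        else:
            draft_wins += 1
    return algo_wins, draft_wins

production = {"A": 8.0, "B": 10.5, "C": 9.0, "D": 6.0}  # fantasy pts/game
score = {"A": 50, "B": 70, "C": 55, "D": 60}.get        # algorithm ratings

pairs = [("A", "B"), ("B", "C"), ("C", "D")]
print(run_game(pairs, score, production))
```

The key design choice is that agreements are thrown out: the contest only scores the cases where the algorithm and the draft board actually made different calls.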

Out of the 120 receiver pairs that I looked at over the last decade, my simple algorithm disagreed with the draft order in 49 instances. Out of those 49 disagreements over which player should be taken, the algorithm was right almost 60% of the time, compared with 40% for the draft order.

But the interesting thing is that when the algorithm disagreed with draft order and the algorithm was right, it was right by a larger margin than the draft order was right by when it won. The following graph shows this difference. When the algorithm was right, the receiver that the algorithm picked outperformed the receiver that the draft order picked by 2.84 fantasy points per game. But when draft order was right, the receiver that it picked outperformed the algorithm’s receiver by only 2.48 fantasy points/game.

The results are even more impressive for the algorithm if you consider that the algorithm is always taking the lower-picked receiver. Even though we are trying to isolate the effect of draft position, the algorithm only gets to win with lower-picked receivers, who are also going to have marginally fewer opportunities. The graph below shows that in the algorithm's wins, it is winning with a receiver that went off the draft board about 11 spots behind the competing receiver. In the draft board's 20 wins, its winning receiver went off the draft board 12 spots in front of the algorithm's receiver. Even though we are trying to level the playing field for the algorithm, it is still at a disadvantage when it does win, and an even greater disadvantage when it loses.

Even when we try to level the field in a wide receiver picking contest, an algorithm is still at a disadvantage against the NFL’s current evaluators, and yet the algorithm can still “outpick” the evaluators. This is another in the growing list of reasons why NFL teams should increase their use of analytics.

Sometimes things get a little emotional and you lose all sense of objectivity. That’s where I’m at with Marvin McNutt right now. You shouldn’t be listening to anything I say about him. Do not move him up your dynasty list of sleepers based on anything I say. My comments about McNutt should carry the weight of an overbearing coach/dad who can’t be an objective judge of his son’s talent. You might say that I am Mike Shanahan and Marvin McNutt is Kyle.

But while I'm telling you that you shouldn't listen to anything I say (seriously, I'm not joking), I'm still going to proceed with my Marvin McNutt/Hakeem Nicks comparison. They are similar in weight, speed, and height (McNutt is actually taller). They both caught 12 touchdowns in 13 games in their last year of college. They both caught more than 40% of the yards their college team threw. Nicks' touchdowns accounted for a larger percent of Carolina's output, but McNutt's share of Iowa's touchdown total was still impressive at over 40%.

I’m not saying that Marvin McNutt is Hakeem Nicks. But if scouts currently have him as a 4th/5th rounder, and McNutt is similar to Nicks, maybe he’s worth taking another look at. As I discuss in Game Plan, a similar exercise applied to Steve Johnson would have surfaced a similarity with Reggie Wayne. But Johnson went as a 7th rounder and Wayne was a first rounder.

Below is a clip showing Marvin McNutt’s career highlights. I tried to count the number of difficult hands catches he made but then I lost track (who else makes ridiculous hands catches on a regular basis?).

Then just for shits and giggles I show Hakeem Nicks’ college highlights below that.

I’ve just finished laying out Game Plan for paperback and it is available for purchase on CreateSpace. It will also be available for purchase through Amazon within a week or so (just a timing issue as CreateSpace pushes all of their titles to Amazon).

It is a little more expensive if you get the dead tree version of the book. It’s $5.99. That’s still relatively cheap, but nowhere near the $0.99 that you can get the Kindle version for.

For the various purchasing options for Game Plan, you can visit this page. You can also check out the things that people are saying about Game Plan on that page.

In an effort to drive click through rates to zero, I’m going to try to more regularly post on ideas that I read about. These “think” posts… that contain no mention of Tim Tebow (unless they do)… are really great at garnering zero clicks. I guess that puts me into the camp of either “arrogant assholes” or “masochistic idiots” in terms of website business savvy.

Today’s mental floss comes courtesy of Daniel Kahneman’s book “Thinking, Fast and Slow”. The book is sort of a collection of discussions of the various research that Kahneman has done over the years.

One chapter of the book focuses on what Kahneman calls “The Illusion of Validity”. We often assign skill to people even when their results can be better explained by luck. Our sense of the validity of predictions can be an illusion. The predictions aren’t actually any better than random chance.

Kahneman uses the example of picking stocks (which is for our purposes no different than picking football players). Here’s an excerpt from the book where Kahneman explains an instance where skill was assumed, but it was eventually found to be mere chance.

Some years ago I had an unusual opportunity to examine the illusion of financial skill up close. I had been invited to speak to a group of investment advisers in a firm that provided financial advice and other services to very wealthy clients. I asked for some data to prepare my presentation and was granted a small treasure: a spreadsheet summarizing the investment outcomes of some twenty-five anonymous wealth advisers, for each of eight consecutive years. Each adviser’s score for each year was his (most of them were men) main determinant of his year-end bonus. It was a simple matter to rank the advisers by their performance in each year and to determine whether there were persistent differences in skill among them and whether the same advisers consistently achieved better returns for their clients year after year. To answer the question, I computed correlation coefficients between the rankings in each pair of years: year 1 with year 2, year 1 with year 3, and so on up through year 7 with year 8. That yielded 28 correlation coefficients, one for each pair of years. I knew the theory and was prepared to find weak evidence of persistence of skill. Still, I was surprised to find that the average of the 28 correlations was .01. In other words, zero. The consistent correlations that would indicate differences in skill were not to be found. The results resembled what you would expect from a dice-rolling contest, not a game of skill.
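Kahneman's check is easy to reproduce with simulated data: rank 25 advisers over 8 years of pure luck, correlate every pair of years, and the average of the 28 correlations comes out near zero. This sketch uses random rankings rather than real adviser data.

```python
# Sketch of Kahneman's persistence check: rank advisers each year, then
# average the correlation over every pair of years (8 years -> 28 pairs).
# Rankings here are random, i.e. pure luck, so the average hovers near zero.
import random
from itertools import combinations

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(0)
n_advisers, n_years = 25, 8
# Each year is an independent random ranking of the 25 advisers
years = [random.sample(range(n_advisers), n_advisers) for _ in range(n_years)]

pair_corrs = [pearson_r(a, b) for a, b in combinations(years, 2)]
print(len(pair_corrs), round(sum(pair_corrs) / len(pair_corrs), 3))
```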

The new rookie wage scale offers ridiculous value for teams. Consider that none of the following players are making more than Pierre Garcon:

Cam Newton

Von Miller

AJ Green

Julio Jones

Ryan Kerrigan

Patrick Peterson

Aldon Smith

Blaine Gabbert (ok, fine, bad example. But the rest of the list should prove the point)

I purposely used the example of Pierre Garcon because I think the tradeoff between the cost of veterans and the cost of rookies is where the exploitable value of the rookie wage scale becomes important.

To look at this issue further, let’s set aside a mediocre WR who took advantage of the first year of a new collective bargaining agreement to get overpaid (that would be Garcon), and instead look at the median of the top 10 salaries at each position.

Position            Median of Top 10 Salaries
Quarterback         $15,056,714
Defensive End       $12,227,666
Wide Receiver       $10,928,536
Cornerback          $10,218,016
Defensive Tackle    $10,203,400
Tackle              $9,387,200
Linebacker          $9,034,071
Running Back        $8,634,928
Safety              $7,811,250
Guard               $7,173,333
Tight End           $6,439,683
Center              $5,484,940

Basically the top position in salary terms is quarterback and the lowest is center (not counting special teams).

Now let’s compare those salaries with a graph that shows an approximation of the rookie wage scale. As we’ll see, the very top of the scale is in the $5.5MM to $6MM range. You can basically get the best player in the draft each year for about top 10 center/tight end money. That’s ridiculous.

Teams should be exploiting the huge cost savings that the draft now offers.

To see how teams might combine the positional-value information in the table above with the wage scale graph, consider Cleveland’s choice at the fourth overall pick.

The rookie wage scale says that pick will make a little less than $5MM per season. But Cleveland can essentially choose the top player in the draft at every position except quarterback and tackle (Luck, RGIII and Kalil will be off the board). They could have top running back Trent Richardson, top cornerback Morris Claiborne, top receiver [insert your preferred WR], or top defensive end Melvin Ingram.

Below is a table that shows the difference in salary between the median salary for the top 10 at those four positions, and the 4th pick in the draft.

Position         Savings from Top 10 at Position
Defensive End    $6,800,000
Cornerback       $5,056,000
Wide Receiver    $4,587,000
Running Back     $3,275,000

As you can see, the savings at the running back position are the smallest. The savings at the defensive end position are the greatest. You could even look at the savings of $6.8MM as being equivalent to another good player.
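The arithmetic behind that table is just a subtraction. Here’s a sketch, using the top-10 medians from the earlier table and an assumed flat rookie cost of $5MM per year for the 4th pick (the article’s own table implies slightly different rookie figures by position, so the dollar amounts here are illustrative):

```python
# Median of the top 10 veteran salaries at each position (from the table above)
top10_median = {
    "Defensive End": 12_227_666,
    "Cornerback": 10_218_016,
    "Wide Receiver": 10_928_536,
    "Running Back": 8_634_928,
}

ROOKIE_PICK_4 = 5_000_000  # assumed per-year cost of the 4th overall pick

# Savings = what a top-10 veteran costs minus what the rookie costs
savings = {pos: sal - ROOKIE_PICK_4 for pos, sal in top10_median.items()}
for pos, s in sorted(savings.items(), key=lambda kv: -kv[1]):
    print(f"{pos:15s} saves ${s:,}")
```

The extremes come out the same either way: the defensive end savings are the largest and the running back savings the smallest.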

But because someone might argue that Trent Richardson is a lock to be a top running back, and maybe the other players aren’t locks to be tops at their position, let’s think about this another way.

If the 4th pick in the draft were a running back, there would be about 12 running backs who would make more than that player. But if the top pick were a defensive end, there would be about 24 players at that position that would make more than the player taken with the 4th pick. If the 4th pick is a wide receiver, there would be about 22 wide receivers making more than the 4th pick. If the 4th pick were a cornerback, then there would be about 29 corners who would make more than the 4th pick.

Basically, by taking a defensive end, corner, or receiver, the Browns (or any team picking in that spot) can build themselves a cushion. They’re taking the best player at the position in the draft, and yet all that player has to do to justify his salary is perform like an NFL starter. He doesn’t have to be top 10 at his position because the salary savings are so great. If the Browns were to take Justin Blackmon, for instance, he would be making about the same amount of money as Santana Moss. If the Browns took Morris Claiborne, he would be making about the same amount of money as Drayton Florence. But if the Browns took Trent Richardson, he would be making more than Ahmad Bradshaw and Ryan Mathews.

It’s not impossible that Richardson could outperform Bradshaw and Mathews. It’s just that if the Browns select a receiver, corner, or defensive end, the bar that player has to clear is much lower.

We’ve officially hit information overload on this year’s draft class. RGIII is now apparently a “miscreant” (an ironic label applied by Ryan Burns) and Kendall Wright is apparently fat. This all reminded me of the section from Game Plan that discusses whether human experts can beat simple algorithms in prediction contests. It turns out that they usually can’t, even when they have a significant information advantage (like knowing a wide receiver’s body-fat percentage).

The effect that bias has on the work of doctors and football scouts has actually been studied broadly, and a simple solution has been available for some time. It is (unfortunately) worth noting that experts are almost always reluctant to adopt it. The solution is to involve a simple formula, or algorithm, in the decision-making process.

As applied to football, an algorithm could be as simple as this:

Wide Receiver Production = (Player Weight / 40-Yard Dash Time) + (College Touchdowns per Game) + (College Yards per Game)

In fact an algorithm not much different than the one above would have been better at predicting wide receiver performance than NFL scouts have been.
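That toy formula translates directly into code. A sketch, with an entirely hypothetical prospect line (a real version would fit regression weights to each term rather than simply adding them, since raw yards per game dominates the unweighted sum):

```python
def wr_production_score(weight_lbs, forty_time, td_per_game, yds_per_game):
    """Toy projection: speed-adjusted size plus college scoring and yardage rates."""
    return weight_lbs / forty_time + td_per_game + yds_per_game

# Hypothetical prospect: 212 lbs, 4.50s forty, 1.0 TD/game, 90 yds/game
score = wr_production_score(212, 4.50, 1.0, 90.0)
print(round(score, 1))
```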

This revelation wouldn’t be a shock to the people who study the issue of experts vs. algorithms. A body of research exists suggesting that experts make no better assessments than algorithms do. For instance, in his book on bias, the psychologist Daniel Kahneman relates the story of how a simple formula has aided doctors in making assessments that have saved hundreds of thousands of infants’ lives. From Kahneman’s book “Thinking, Fast and Slow”:

A classic application of this approach is a simple algorithm that has saved the lives of hundreds of thousands of infants. Obstetricians had always known that an infant who is not breathing normally within a few minutes of birth is at high risk of brain damage or death. Until the anesthesiologist Virginia Apgar intervened in 1953, physicians and midwives used their clinical judgment to determine whether a baby was in distress. Different practitioners focused on different cues. Some watched for breathing problems while others monitored how soon the baby cried. Without a standardized procedure, danger signs were often missed, and many newborn infants died. One day over breakfast, a medical resident asked how Dr. Apgar would make a systematic assessment of a newborn. “That’s easy,” she replied. “You would do it like this.” Apgar jotted down five variables (heart rate, respiration, reflex, muscle tone, and color) and three scores (0, 1, or 2, depending on the robustness of each sign). Realizing that she might have made a breakthrough that any delivery room could implement, Apgar began rating infants by this rule one minute after they were born. A baby with a total score of 8 or above was likely to be pink, squirming, crying, grimacing, with a pulse of 100 or more—in good shape. A baby with a score of 4 or below was probably bluish, flaccid, passive, with a slow or weak pulse—in need of immediate intervention. Applying Apgar’s score, the staff in delivery rooms finally had consistent standards for determining which babies were in trouble, and the formula is credited for an important contribution to reducing infant mortality. The Apgar test is still used every day in every delivery room.
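The Apgar rule itself is only a few lines of code. Here’s a sketch of the scoring logic as described in the excerpt; the “monitor closely” label for middling scores is my own assumption, since the excerpt only defines the two endpoints.

```python
def apgar(heart_rate, respiration, reflex, muscle_tone, color):
    """Sum the five signs, each scored 0, 1, or 2, per Apgar's original rule."""
    signs = (heart_rate, respiration, reflex, muscle_tone, color)
    assert all(s in (0, 1, 2) for s in signs)
    return sum(signs)

def triage(score):
    if score >= 8:
        return "in good shape"
    if score <= 4:
        return "needs immediate intervention"
    return "monitor closely"  # assumption: excerpt doesn't define scores 5-7

print(triage(apgar(2, 2, 2, 1, 2)))  # total 9
print(triage(apgar(1, 0, 1, 0, 1)))  # total 3
```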

Simple algorithms, like the Apgar score, are useful because humans are subject to bias. Our brains are not very good at making complex assessments on their own. It’s difficult to know whether the assessments we’ve made in the past were any good; in fact, we’re more likely to remember only the good ones. Left without help, our assessments don’t get any better.

Again, there is a mountain of research showing that algorithms do at least as well as experts, and that algorithms often exceed the effectiveness of expert predictions. A 1996 paper from University of Minnesota researchers Paul Meehl and William Grove covered the body of research on this topic, tallying the studies that pitted human experts against algorithms. Meehl and Grove discuss one study that looked at whether a group of academic counselors could outperform a simple algorithm in predicting student grades. This is a problem similar to whether a football scout could outperform an algorithm in predicting player performance.

From the paper:

Sarbin compared the accuracy of a group of counselors predicting college freshmen academic grades with the accuracy of a two-variable cross-validative linear equation in which the variables were college aptitude test score and high school grade record. The counselors had what was thought to be a great advantage. As well as the two variables in the mathematical equation (both known from previous research to be predictors of college academic grades), they had a good deal of additional information that one would usually consider relevant in this predictive task. This supplementary information included notes from a preliminary interviewer, scores on the Strong Vocational Interest Blank, scores on a four-variable personality inventory, an eight-page individual record form the student had filled out (dealing with such matters as number of siblings, hobbies, magazines, books in the home, and availability of a quiet study area), and scores on several additional aptitude and achievement tests. After seeing all this information, the counselor had an interview with the student prior to the beginning of classes. The accuracy of the counselors’ predictions was approximately equal to the two-variable equation for female students, but there was a significant difference in favor of the regression equation for male students, amounting to an improvement of 8% in predicted variance over that of the counselors.

Note that the mountain of potential evidence that the counselors were given did not ultimately help them beat the very simple algorithm. The counselors and the algorithm tied when predicting performance of female students and the algorithm was a significant improvement over the counselors when predicting male student performance.

The evidence the academic counselors were provided with is similar to the amount of evidence NFL teams have about prospects before the draft: a player’s I.Q. (the Wonderlic score), background checks, hours of video, college stats, and the results from the NFL combine. And yet teams aren’t any better at selecting wide receivers than a simple algorithm containing just a few variables. It’s also worth noting that the results should actually be biased in favor of the team evaluations: teams get to assign playing time, which the algorithm has no control over.

The researchers Meehl and Grove then go on to summarize the 136 studies that they looked at (note they refer to the algorithm approach as the “actuarial” approach and they refer to the expert approach as “clinician”).

Of the 136 studies, 64 favored the actuary by this criterion, 64 showed approximately equivalent accuracy, and 8 favored the clinician. The 8 studies favoring the clinician are not concentrated in any one predictive area, do not over-represent any one type of clinician (e.g., medical doctors), and do not in fact have any obvious characteristics in common. This is disappointing, as one of the chief goals of the meta-analysis was to identify particular areas in which the clinician might outperform the mechanical prediction method. According to the logicians’ “total evidence rule,” the most plausible explanation of these deviant studies is that they arose by a combination of random sampling errors (8 deviant out of 136) and the clinicians’ informational advantage in being provided with more data than the actuarial formula.

Only 8 of the 136 studies came out in favor of the expert, and those studies didn’t seem to have anything in common. It also didn’t matter how much education or experience the human experts had. Even though the experts were always given more information than the algorithm, the score was still 64-8 in favor of the algorithm, with the other 64 studies resulting in ties.

Human experts will always have excuses as to why their judgment should be preferable to an algorithm, even if the experts can’t beat the algorithm when score is being kept. The most common excuse is probably that the mountain of evidence that experts can’t out-judge algorithms somehow does not apply to a certain field… like football for instance.

This might be a good time to return to the subject at hand, the NFL’s front offices. I’ve compared the NFL’s scouts to doctors to illustrate what I think are valuable points. First, human bias affects everyone, even the most educated among us. Doctors are significantly more formally educated in their field than NFL scouts are in theirs: doctors attend schools with a formal curriculum, while NFL scouts learn by watching others. But even the education of doctors doesn’t prevent them from making diagnostic errors, which studies have shown they make even when they are fully confident.

The ways to address the problems that are going to affect scouts might also be the same as the solutions that exist for the medical industry. The NFL’s front offices may want to engage in regular feedback on the effectiveness of their decision making. They may also want to seek support from computers (or algorithms), which aren’t affected by human bias. The goal of pursuing these two strategies is to reduce the impact that human bias has in the NFL decision making process.

Let’s look at one example of the way that human bias might make its way into the player evaluation process. Leading up to the NFL draft in the spring, it is common to hear scouts compare NFL prospects to active NFL players. But without much looking, you can often find player comparisons that rest solely on the fact that two players look similar: they might have played at the same college, or they might be the same race. Players from Georgia Tech are somehow amazingly similar to other players who previously played at Georgia Tech. White wide receivers are somehow amazingly similar to other white wide receivers. Linemen from the University of Iowa are compared to other linemen who also played at Iowa. Black quarterbacks are compared to other black quarterbacks.

For example, prior to the 2011 draft, North Carolina wide receiver Greg Little was compared to former North Carolina wide receiver Hakeem Nicks. Nicks was coming off of a very successful second year in the league and this may have made its way into the minds of NFL talent evaluators. From an article that appeared on a Cleveland news channel’s website:

Little’s strengths include solid speed, terrific hands and is a solid blocker. Scouts compare Little to former Tarheel and current New York Giants wide receiver Hakeem Nicks.

The problem is that Nicks and Little only look similar if they’re wearing the same college uniform. Nicks is about medium size for a receiver at 212 pounds; Little is huge at 230 pounds, and two inches taller. Nicks was an extremely accomplished wide receiver, having compiled 1,200 yards and 12 touchdowns in a year when the North Carolina passing offense was lackluster at best. Little’s best year had been less than 800 yards, with 5 touchdowns. Nicks averaged an amazing 18 yards per reception in his last year in college, while Little, used more like a wide receiver/running back combo player, averaged just 11 yards per reception. When Nicks was drafted, he had some of the largest hands that had been measured among wide receiver prospects; when Little was measured, he had some of the smallest. Only when they are wearing the same college uniform do they resemble each other!

But if it sounds like I am criticizing anyone who might have thought the two wide receivers were similar, I am not. It’s difficult to look at Greg Little in his UNC uniform and not immediately pattern-match him to another successful UNC receiver. When NFL front offices sit down to evaluate Greg Little, it takes effort to separate his on-field play from his physical appearance. The same kind of association takes place with any number of players.

This is where human bias could be balanced out by involving computers and algorithms in the evaluation process. The easiest step would be to run a simple regression to determine which player attributes have historically correlated with pro success. Before NFL scouts do anything, it would be useful to project player performance using the results of that regression. After that, scouts could use a numbers-based similarity process to generate a group of names the subject player might be comparable to. Teams have a number of valuable data points about players before they have to draft them, including height, weight, speed and college performance. These data points can be used to find similar players. But this should be done by a computer, not by a human. As we have seen, humans tend to pattern match, and the patterns are often irrelevant (like skin color or college team).
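A numbers-based similarity search of the sort described here can be very simple: z-score each measurable so no single one dominates, then find the smallest Euclidean distance. A minimal sketch with a tiny made-up database (the names and numbers are illustrative, not real combine data):

```python
# Each row: (height_in, weight_lbs, forty_time, yds_per_game, td_per_game)
database = {
    "Veteran A": (74, 198, 4.45, 70, 1.0),
    "Veteran B": (72, 212, 4.50, 100, 1.0),
    "Veteran C": (76, 230, 4.60, 65, 0.4),
}

def column_stats(rows):
    """Mean and standard deviation of each column (columns assumed non-constant)."""
    cols = list(zip(*rows))
    means = [sum(c) / len(c) for c in cols]
    sds = [(sum((x - m) ** 2 for x in c) / len(c)) ** 0.5 for c, m in zip(cols, means)]
    return means, sds

def most_similar(prospect, database):
    # Standardize over the database plus the prospect, then compare z-scored rows
    means, sds = column_stats(list(database.values()) + [prospect])
    def z(row):
        return [(x - m) / s for x, m, s in zip(row, means, sds)]
    pz = z(prospect)
    def dist(name):
        return sum((a - b) ** 2 for a, b in zip(z(database[name]), pz)) ** 0.5
    return min(database, key=dist)

prospect = (74, 202, 4.45, 80, 1.0)  # made-up prospect line
print(most_similar(prospect, database))
```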

For instance, when Buffalo Bills wide receiver Steve Johnson left college, it was probably difficult for scouts to imagine a good NFL wide receiver coming from the University of Kentucky, a school known more for its basketball program than its football program. This was perhaps one of the reasons Johnson wasn’t picked until the 7th round of the 2008 draft. But had scouts conducted a brief similarity exercise using a database of previous player information, they would have found that Johnson was very similar in a number of respects to Indianapolis Colts wide receiver Reggie Wayne. Both ran the 40-yard dash in about 4.45 seconds, and Johnson did so carrying about 10 pounds more than Wayne, which makes his time the more impressive of the two. Both were about 6’2” tall. Both caught about one touchdown per game in their final year of college. Wayne averaged about 70 receiving yards per game at Miami, while Johnson averaged about 80 at Kentucky. And even if Kentucky is not a football factory, it plays in the Southeastern Conference, which means Johnson’s stats were accumulated against tough opponents.

Had scouts considered how similar Johnson was to Wayne, they might have rated him more highly, and he might have gone better than a 7th round pick. Instead Buffalo got a relative bargain: Johnson had a breakout season in 2010 and followed it up with another solid year in 2011.