2015 NFL Draft: “Consensus” Top 200 Big Board Preview

Last year, we looked at what the consensus of the scouting community had to say about which players were the best prospects in the NFL draft. That exercise gave us a lot of tools: a unique way to “grade” drafts, a look at which teams were off-the-wall, and a sense of which evaluators were best at “predicting” the NFL draft.

The Huddle Report has a similar competition, grading mock drafts as well as top 100s. Draft Board Guru won their competition—their top 100 had the most players actually selected in the top 100. We’ll take a look at other ways to grade that process in a little bit as well.

We’ll take a look at last year’s results, then follow it with a preview of this year’s board. If you want to skip to this year’s board, click here.

2014 Review

Awards

“Gold Standard”

In this case, the Gold Standard doesn’t refer to whose rankings best predicted how the players will actually do, but whose rankings best captured the feeling of the evaluation community. There are a number of divergence tests to run on this sort of thing, but instead of boring you with the process, I’ll just let you know that they all came to the same conclusion.

DraftTek’s final board was the best at capturing the thoughts of the evaluation community as a whole, which means that if you only had one board to look at to see what people think of your favorite prospect, you’d check out the one they had at DraftTek. This year, their board is here.
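The article doesn’t spell out which divergence tests were run, but a simple stand-in is the mean absolute difference between a board’s ranks and the consensus ranks, assuming each board is a mapping of player to rank:

```python
def divergence(board, consensus):
    """Mean absolute difference between a board's ranks and the
    consensus ranks, over the players both have ranked.
    Lower = closer to the community's collective opinion.
    """
    common = set(board) & set(consensus)
    return sum(abs(board[p] - consensus[p]) for p in common) / len(common)
```

Under a measure like this, the Gold Standard goes to the board with the lowest score and the most divergent board has the highest.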

“Odd Duck”

A lot of people take pride in the fact that their evaluations are independent of others, and well they should. Many times, that produces unusual rankings, and gives their boards the best opportunity to surpass others when it’s finally graded—or fall well behind. If you want unique takes on the draft, you want the Odd Duck.

Last year’s Odd Duck didn’t have a close competitor. Kyle Crabbs at NDT Scouting produced the most divergent talent ranking for NFL prospects last year in his guide. The guide is available for purchase for $10.00. It’s thoughtful, well-organized and easy to read. Here’s a sample of 18 pages from this year’s guide, which includes a number of evaluations and an explanation of their methodology.

Best Prediction

The Huddle Report’s contest is very good, and a pretty handy reference for figuring out where the players in the top 100 will go. They look at the players in a board’s Top 100, then count up how many of those players actually went in the first 100 picks.

We’ll take it a step further, this time by figuring out how far away a player was actually picked from their spot on the board and adding up all those differences. We’re adjusting for how high a player is picked, so if a player ranked first goes fifth, it means a much bigger penalty than if a player ranked 75 goes 88th.
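A sketch of that scoring in Python; the exact weighting isn’t given in the article, so the logarithmic discount below is an assumption, not the actual formula:

```python
import math

def board_error(board, actual_picks):
    """Score a big board against the actual draft order.

    board: list of player names, index 0 = ranked first.
    actual_picks: dict of player name -> overall pick number.
    Lower is better. The log discount (an assumption) makes a miss
    near the top of the board cost more than the same-sized miss
    lower down, e.g. No. 1 going fifth vs. No. 75 going 88th.
    """
    total = 0.0
    for rank, player in enumerate(board, start=1):
        if player in actual_picks:
            miss = abs(actual_picks[player] - rank)
            total += miss / math.log(rank + 1)
    return total
```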

For this we’ll only include boards that ranked at least 100 players, a restriction we’ll apply in all phases this year, not just to the awards but also to the input into the rankings. That means the winner, Daniel Jeremiah, doesn’t count, because he only ranked 50 players.

The next best predictor was Mike Mayock at the NFL Network, whose score was actually fairly stunning. Despite the fact that he had Jadeveon Clowney as his second-ranked player (one of 11 rankers to do so out of 34), Greg Robinson as his third-ranked player and Blake Bortles as his 15th-ranked player, the rest of his rankings matched the board fairly well.

“Out of Sync”

Instead of calling the opposite award the “Worst Prediction,” we’ll call it Out of Sync, because most of these evaluators aren’t attempting to predict the Top 100 with their list of top 100 players, but simply to name the best 100 players. Kyle Crabbs won that award as well, and got there with some bold moves—ranking third overall pick Blake Bortles as his 49th-best player, second overall pick Greg Robinson as his tenth-ranked player and fourth overall pick Sammy Watkins as his 16th-ranked player.

Oh, and Mike Evans ranked 123rd.

Hypothesis-Testing

Evalucasting?

Last year, we divided the boards into “forecasters” and “evaluators” in order to separate the two approaches that seemed to have developed when ranking draft prospects. The first set of boards comes from what some people have started calling “Big Draft,” which is a reflection of dominant media narratives—we usually see them on TV on ESPN or the NFL Network, or online at those places in addition to CBS and on occasion Yahoo!

The idea was that those who are “plugged in” to the league are not necessarily better at evaluating players—although they could be, due to training, access (to coaches’ film or scheme) and resources (like former players)—but have information most other sources wouldn’t have, like off-field concerns and injury histories. They know how a player did in interviews and have some sense of what teams are thinking.

That means their access to the pulse of the NFL will influence their evaluations, either implicitly or explicitly, and result in a board that is reflective of NFL opinion.

There are a couple of things we could do here. First, we could run the Top 100 test we did above to see if the forecaster (or evaluator) board beat Mike Mayock.

After that, we could take a look at where the evaluator boards diverged from the forecaster boards, and see who was closer to the actual pick. We’ll count up “wins” for those who were closer to the actual pick, and create “winning percentages” for the Top 32, Top 50, Top 100 and Top 256.

So, was the forecaster board more accurate than Mayock? Yes, but by the tiniest margin. Mike Mayock’s average error was the equivalent of having the player who was picked 20th ranked 34th, which is actually pretty good on average.

The average error for the forecaster board was as if that 20th pick had been ranked 33rd. It’s a very small difference. A better example comes at the 100th pick: Mayock’s average error is as if the player who ended up going 100th had been ranked 141st, while the forecaster board would have ranked that player 139th.

Either way, the forecaster board is pretty good at predicting the draft from that perspective.

At the end of the day, however, it’s not really what you’re looking for when figuring out where a player will be drafted. If a player is ranked 35th, 38th and 32nd by different groups, there’s not a real controversy. Instead, it’s when a player is highly lauded by one set of draftniks and derided by the other that it’s really interesting.

Yes, I’m talking about Teddy Bridgewater. Sort of.

So, we’ll create win counts for those bigger differences for each group of players: Top 32, Top 50, Top 100 and Top 256. As an example, Teddy Bridgewater was ranked third by the “evaluators” and 18th by the “forecasters.” Bridgewater was drafted 32nd, so the forecaster boards “won.”
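The head-to-head tally described above can be sketched like this; the article only counts the bigger disagreements, and since that threshold isn’t specified, this version counts any disagreement:

```python
def tally_wins(evaluator, forecaster, actual, cutoffs=(32, 50, 100, 256)):
    """Count head-to-head 'wins' on players the two boards disagree about.

    evaluator / forecaster: dicts of player -> board rank.
    actual: dict of player -> overall pick number.
    Returns {cutoff: (evaluator_wins, forecaster_wins, ties)}, where a
    win means that board's rank was closer to the actual pick.
    """
    results = {}
    for cutoff in cutoffs:
        ev = fc = tie = 0
        for player, pick in actual.items():
            if pick > cutoff:
                continue  # only score players drafted inside this cutoff
            e_rank = evaluator.get(player)
            f_rank = forecaster.get(player)
            if e_rank is None or f_rank is None or e_rank == f_rank:
                continue  # no disagreement to score
            e_err, f_err = abs(e_rank - pick), abs(f_rank - pick)
            if e_err < f_err:
                ev += 1
            elif f_err < e_err:
                fc += 1
            else:
                tie += 1
        results[cutoff] = (ev, fc, tie)
    return results
```

For the Bridgewater example (evaluators 3rd, forecasters 18th, drafted 32nd), the forecaster error of 14 beats the evaluator error of 29, so the forecaster column gets the win.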

|         | Evaluator Wins | Forecaster Wins | Ties |
|---------|----------------|-----------------|------|
| Top 32  | 0              | 2               | 1    |
| Top 50  | 0              | 4               | 1    |
| Top 100 | 1              | 9               | 1    |
| Top 256 | 9              | 31              | 1    |

The difference is clear. The evaluators didn’t win once in the top 32 or the top 50, which is pretty compelling evidence that the forecasters do indeed forecast.

|         | Evaluator Win % | Forecaster Win % |
|---------|-----------------|------------------|
| Top 32  | 16.7            | 83.3             |
| Top 50  | 10.0            | 90.0             |
| Top 100 | 13.6            | 86.4             |
| Top 256 | 23.2            | 76.8             |

It’s a landslide. The tie, by the way, was Taylor Lewan—who was ranked 15th by the evaluators and 7th by the forecasters, and ended up going 11th, splitting the difference. There’s a good case for eliminating ties from the win percentage calculation, but there’s not much point: the difference is clear.
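The percentages in that table appear to credit each tie as half a win to each side (the Top 32 row, for example, gives the evaluators 0.5 of 3 decisions, or 16.7 percent). A quick check of that interpretation:

```python
def win_pct(wins, opp_wins, ties):
    """Win percentage with each tie counted as half a win per side."""
    games = wins + opp_wins + ties
    return round(100 * (wins + 0.5 * ties) / games, 1)

# the Top 32 row: 0 evaluator wins, 2 forecaster wins, 1 tie
print(win_pct(0, 2, 1))  # evaluator side -> 16.7
print(win_pct(2, 0, 1))  # forecaster side -> 83.3
```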

But Are They Right?

I think the best way to evaluate which board best predicts player performance is to take a look at the players they disagreed on and make a judgment call on each player’s performance versus the value of the spot he was projected to go.

Naturally, that would cause a lot of disagreement and is not foolproof. People disagree on what the value of a third-round pick is, and what teams should expect from them. Is a high-quality backup a good pick in the third? A bad starter? Sometimes those two aren’t distinct, but the second is judged more harshly. I would consider, for example, Mason Foster to be in that category.

Perhaps the best way to resolve the evaluative tension behind the value of some of these picks isn’t necessarily to use Approximate Value (my go-to way of answering these questions) but to index Approximate Value against Pro Football Focus scores to see what the expected PFF score for a pick should be. Then, we can use that as a rough guide to see who was closer on some of these controversial players.
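A rough sketch of that indexing idea, assuming historical (overall pick, grade) pairs are available; bucketing picks into 32-pick “rounds” is my simplification here, not a method from the article:

```python
from collections import defaultdict

def expected_grade_by_round(history):
    """Average grade for each 32-pick 'round', built from historical
    (overall pick, grade) pairs (hypothetical data).
    """
    buckets = defaultdict(list)
    for pick, grade in history:
        buckets[(pick - 1) // 32 + 1].append(grade)
    return {rnd: sum(g) / len(g) for rnd, g in buckets.items()}

def value_over_expected(pick, grade, baseline):
    """Positive = the player outperformed his draft slot."""
    return grade - baseline[(pick - 1) // 32 + 1]
```

With a baseline like this, a “controversial” player can be judged by whether he beat the expected grade for where he was actually taken, rather than by raw grade alone.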

That will have to wait, however, as these players get more time in the NFL. For now, let’s make snap judgments on the biggest disagreements.

| Player            | Evaluator | Forecaster | Winner     |
|-------------------|-----------|------------|------------|
| Mike Evans        | 16        | 7          | Forecaster |
| Justin Gilbert    | 26        | 9          | Evaluator  |
| Taylor Lewan      | 16        | 6          | Forecaster |
| Calvin Pryor      | 32        | 17         | Forecaster |
| Jason Verrett     | 18        | 37         | Evaluator  |
| Teddy Bridgewater | 3         | 18         | Evaluator  |
| Joel Bitonio      | 84        | 42         | Forecaster |
| Derek Carr        | 12        | 31         | Evaluator  |
| Jeremy Hill       | 96        | 54         | Forecaster |

Well, alright then.

2015 Preview

The Big Board

Well, if you’re here for the big board, here it is. It’s incomplete and needs about five more ranking groups to contribute in order to finish it. Tomorrow we’ll have it updated and complete, along with a complete analysis of the different ways to rank the boards (logarithmic point distributions, medians, means and trimmed means) and the differences between the evaluators and forecasters in this year’s draft.
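For reference, those four aggregation methods could look something like the following; the exact formulas, especially the point distribution, are illustrative stand-ins rather than the ones used to build this board:

```python
import math
import statistics

def consensus_score(ranks, method="mean"):
    """Combine one player's ranks across all contributing boards into
    a single score (lower = better); sort players by it to build the
    consensus board.

    ranks: list of the rank each board gave the player.
    """
    if method == "mean":
        return statistics.mean(ranks)
    if method == "median":
        return statistics.median(ranks)
    if method == "trimmed":
        # drop the single highest and lowest rank, then average,
        # so one extreme outlier board can't move a player much
        if len(ranks) > 2:
            return statistics.mean(sorted(ranks)[1:-1])
        return statistics.mean(ranks)
    if method == "log_points":
        # logarithmic scale: the gap between 1st and 10th matters
        # far more than the gap between 91st and 100th
        return statistics.mean(math.log(r) for r in ranks)
    raise ValueError(f"unknown method: {method}")
```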

Two notes before you see it. The first is that we have “roles” and “positions” to designate the likely fit for players to be drafted and the likely positions they’ll play. Roles are a narrower grouping than positions, so all 3-technique tackles, 5-technique tackles and nose tackles are “IDL,” or interior defensive linemen.

Sometimes, those roles and positions are the same, like for edge rushers and QBs. Here’s a cheatsheet:

I’ll explain the groupings and why they are the way they are tomorrow, along with the analysis of the rankings I mentioned earlier. Remember, Anthony Barr was an “edge” player last year, so the position groupings aren’t ironclad. Still, if you think a player’s role was misdefined, don’t hesitate to let me know.