On The 18-Game Injury Report

The study used injury-report data from the 2003 season through the 2007 season — the number of players who missed games each week — to form a line graph intended to show whether more players missed games as the season wore on. The graph indicated that the high point for players missing games with injuries — an average of a little more than three players out per team — came in Week 10. The low point of the regular season came in Week 17, the final game of the regular season, when an average of just over one player per team sat out.

In measuring how many players miss games, the study did not take into account the importance of a game. If a player has a moderate injury in the early or middle part of the season, his team may rest him for two or three weeks. But if the injury occurs just before the start of the playoffs, the team may tell him they need him back on the field quickly.

I haven't read the report, so I can't speak to the veracity of the NFL's data, which was compiled from injury reports and, according to Mike Reiss, "information from team trainers". I know that in compiling our injury database, we use the same sources of data. Our data scrubs out players who never had a serious shot at playing for the team or weren't expected to play (e.g. Willis McGahee in 2003, Kenechi Udeze last year) to try to get a measure of how teams are affected by injury in a given season.

Quickly, though, I discovered how the NFL had likely laid out its report and presented its data. I took the data from 2003 through 2007 and calculated, for each week, the number of players per game who were listed on the injury report and did not play. (We'll get to the injury report excuse in a second, but in general, the only player who goes unlisted on the injury report and then doesn't actually play is the third-string quarterback). I didn't include players listed on IR or PUP, since the data was compiled from injury reports, and those reports don't list players who aren't on the active roster (IR and PUP are separate lists).
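A rough sketch of that weekly tally, with hypothetical field names (this is not the actual structure of our database, just an illustration of the counting rule):

```python
# Count players who were listed on the weekly injury report but did not
# play, excluding IR/PUP (those are separate lists), averaged per team.
from collections import defaultdict

def missing_per_team_by_week(report_rows, num_teams=32):
    """report_rows: dicts with 'week', 'player', 'status', 'played'."""
    counts = defaultdict(int)
    for row in report_rows:
        if row["status"] in ("IR", "PUP"):
            continue  # not on the weekly injury report
        if not row["played"]:
            counts[row["week"]] += 1
    return {wk: n / num_teams for wk, n in counts.items()}

rows = [
    {"week": 1, "player": "A", "status": "Questionable", "played": False},
    {"week": 1, "player": "B", "status": "Probable", "played": True},
    {"week": 1, "player": "C", "status": "IR", "played": False},
]
print(missing_per_team_by_week(rows))  # {1: 0.03125}, i.e. 1 player / 32 teams
```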

Most of the results matched up well with the NFL's data; the line peaked right around Week 10, with a drop-off right after that until Week 17, when it peaked again, owing to an increase in games missed by players on teams with nothing to play for with regards to playoff positioning.

Of course, the IR and PUP lists do matter -- those players are just as unavailable to the team as a player listed on the active roster and out, albeit without taking up a roster spot. If we include them in the analysis, the data looks totally different, and in a bad way for the NFL:

Players Missing Per Team By Week, 2003-2007

Week    w/o IR, PUP    w/ IR, PUP    IR/PUP Totals
  1         2.36           4.05           1.69
  2         2.81           4.67           1.86
  3         2.95           4.94           1.99
  4         3.25           5.43           2.18
  5         3.28           5.58           2.30
  6         3.15           5.53           2.38
  7         3.15           5.83           2.68
  8         3.22           6.07           2.85
  9         3.16           6.26           3.10
 10         3.26           6.58           3.31
 11         2.85           6.49           3.64
 12         2.90           6.75           3.85
 13         2.83           6.94           4.11
 14         2.79           7.21           4.41
 15         2.95           7.68           4.73
 16         2.98           7.99           5.01
 17         3.45           8.75           5.30

When you include the absence of players on either IR or PUP, teams suffer steadily more injuries as the season goes along. Perhaps not coincidentally, players need to be taken off PUP by Week 10, or else they get placed on IR. With those two lists factored in, there's nothing to suggest that the actual health of players hits its nadir at Week 10. It does so at Week 17, after a steady climb throughout the season.

The issue of whether players are more likely to play "important games" is harder to decipher. The biggest problem, of course, is defining what an "important game" is -- is it games with direct playoff implications? Divisional games? Games in the second half of the season? Does a team at 0-2 consider their Week 3 game a "must-win" and have to include that? Realistically, in the NFL, there are only 16 games; every one is important. Considering that the number of players missing rises as the season goes along, even as the games they miss become more important, I'm not inclined to believe that players are more likely to miss any definition of "important" games.

Now, at Football Outsiders, one of the things I've developed is AGL (Adjusted Games Lost) -- a statistic that calculates the effects of injury on a team based, primarily, upon that very same injury report. Looking at historical data, we calculate the likelihood of a player participating in a given game based upon his status as listed on the injury report and his role on the team (e.g. whether he's a starter, a situational player, or a reserve).

It's exactly that perspective that needs to be kept in mind when analyzing the injury report. If you truly believe that players listed as Probable are going to play 75% of the time, Questionable 50% of the time, and Doubtful 25% of the time, well, you haven't been paying attention. If you actually look at the historical likelihood of playing for players with a given injury status and team role, the percentages are totally different.

Likelihood of Playing by Injury Status, 2001-2008

Status          Starters    Reserves
Probable         94.9%       80.3%
Questionable     61.9%       43.4%
Doubtful          9.6%        5.8%

In reality, when you look at the data for starting-caliber players (no one is saying there's going to be a betting scandal because the Jets listed Tim Dwight with an ankle injury instead of a hamstring injury), Probable players make it to the lineup 19 out of every 20 times, while Doubtful guys show up about one out of every 10 games.
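Those rates make the expected-games-lost idea easy to sketch. A simplified illustration using only the six figures from the table above (the actual AGL methodology is more involved than this):

```python
# Historical play rates by (injury status, team role), from the table above.
PLAY_RATE = {
    ("Probable", "starter"): 0.949, ("Probable", "reserve"): 0.803,
    ("Questionable", "starter"): 0.619, ("Questionable", "reserve"): 0.434,
    ("Doubtful", "starter"): 0.096, ("Doubtful", "reserve"): 0.058,
}

def expected_games_lost(listings):
    """listings: (status, role) pairs from one week's injury report.
    Returns the expected number of games lost to injury that week."""
    return sum(1.0 - PLAY_RATE[(status, role)] for status, role in listings)

week = [("Probable", "starter"), ("Questionable", "starter"),
        ("Doubtful", "reserve")]
print(round(expected_games_lost(week), 3))  # 0.051 + 0.381 + 0.942 = 1.374
```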

Since we've found that AGL has a significant correlation with the change in a team's performance from year to year, we build that methodology into our team predictions each year. If the injury report really bore no resemblance to reality, merely looking at the number of games missed by a team's players (or strictly a team's starters) in a given year would have a similar or superior relationship to AGL.

In reality, the difference is virtually nonexistent. When looking strictly at starters, AGL has a .32 correlation with wins in a given season, while games missed by starters has a correlation of .30. (If you include reserves, those figures fall to .19 and .16, respectively.) In other words, Reiss and Florio have a point -- the injury report is, after the fact, about as useful as a binary report indicating "played" and "did not play".
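For reference, the correlations above are ordinary Pearson coefficients. A self-contained sketch with made-up toy numbers, since the underlying team-season data isn't reproduced here:

```python
# Pearson correlation coefficient, computed from scratch.
import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical toy data: more adjusted games lost, fewer wins.
agl = [10, 25, 40, 55, 70]
wins = [11, 10, 8, 7, 5]
print(round(pearson(agl, wins), 3))  # about -0.993 for this toy set
```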

As for the 18-game schedule, though? Regardless of how information is reported or how the NFL spins their data, players are more likely to be injured and miss time in an 18-game season than they are in a 16-game season. The effects of injuries don't rise or fall at an artificial point in time of the season; they are cumulative, and will only continue to accumulate with each additional regular season week added to the schedule.

Posted by: Bill Barnwell on 01 Jun 2009


Comments

Another thing to consider with the IR list is that as it gets later in the season, the cost of putting a player on IR goes down. If a starter gets an injury in week 4 that puts him on the bench for 8 weeks, the team will likely keep him on the roster but inactive. That same injury in week 15 will probably put the player on IR so the team can pick up some depth. Since IR is being ignored by the NFL for this report, those injuries are ignored by the report. My guess is that week 10 or 11 is about the cutoff for when teams decide to keep players on the roster or push them to IR for those types of injuries.

Not sure who they are trying to fool with the fuzzy math...no football fan- even if the media did attempt to paint an abusive picture of the mistreatment of potential millionaires- is going to say, "I'd rather have a 16-game season"...as far as the NFLPA is concerned, most of the players aren't smart enough to see the fuzziness...those that are smart enough don't need a report to realize that injuries stack up as the season goes on.

Deception and complacency know no bounds in these days...I weep for the future.

I'd like to give the NFL the benefit of the doubt here, but this really looks like a transparent attempt on their part to rig data to support an outcome they want. Of course IR counts. In fact, as much as teams like to game the system, I'd say IR is a much more reliable indicator than the injury report. Teams lie on the injury report all the time. But if a guy is out for the year, he's out for the year. There's not much benefit to lying about it at that point.

Let's say a running back has a .5% chance of being injured enough to miss part of a game any time he touches the ball (once every 200 touches).
1 or 2 more games are just more chances that the .5% will happen.

But does the chance of being injured increase with 2 additional games? Does a player have a 3% chance of being injured at the end of the season? And a .1% chance early on?

Or are most injuries freak injuries, independent of any wear/tear on the body?
In which case, as long as players get paid the same per game, it wouldn't matter as much if they played 9 years of 144 games or 8 years of 144 games.

I would think, based on no real evidence other my opinion, that injuries would become more likely as the season wears on. I remember seeing something from Will Carroll some time back about the idea of cascade injuries. Essentially, it meant that injuries can beget other injuries. Example -- a player pulls a groin, plays through it, and then hurts his ankle because he's running differently than he normally would. As the lesser injuries mount during the season, and players play "hurt" (as opposed to "injured"), the chances of this kind of thing happening should go up.

As a former player who played through a torn hip muscle and lives with resulting pain in his knee and ankle, I don't even think this could be classified as a theory. Compensating for injuries leads to injuries.

But I think the question was more about a fatigued body (rather than an injured one) more likely to get hurt than a relatively fresh one. I'd say that's probably also true.

But players don't get paid the same. They get annual salaries, right? So for this to be worth their while, they'd need a 12.5% raise across the board, which presumably would be worked into minimum salaries (and existing contracts? if the cap went up 12%, you'd think contracts would too) ...

except the other problem is that it will cost rising players money on a game basis. 4 years of an 18-game schedule is 4.5 years of a 16-game schedule ... instead of being halfway through season 5, you're finishing season 4, and most likely getting less money early in your career ... which for a lot of players is all of their career. Think of it like office personnel getting a raise, well, every 18 months instead of every 16 months.

It might be a temporary financial benefit for players who have very short careers, but even veterans who are nearing the end of their careers aren't likely to benefit as much. I'm not sure GMs will automatically offer 12% more than they would have under the 16-game system. Sure, the minimum salaries will go up, but people making more than the minimum probably won't see all of that 12%.

Consider a 1 game NFL season (w/ no preseason games). What is your expectation for player availability? Something just shy of 100%, maybe 95%, b/c people do get injured in practice?

Now consider a 50 game season, 1 game per week. Tell me, Roger, what is your expectation for player availability in the latter half of that season -- is it lower than it was for the hypothesized 1 game season?

Next we'll see a study from the NFL that plots average career length against number of games per season. Lower left will be NFL at 3.6 yrs and 16 games. Middle will be NBA with 4.8 and 82. Upper right will be MLB at 5.6 and 162. Longer season begets longer careers! QED

There's another problem with IR's involvement in the analysis -- the fact that healthy players can be stashed on there. A team with a mid-quality player who gets an 8-week type of injury in week 2 is likely to stick them on the IR, which prevents them from playing the rest of the season, healthy or not.

The IR number can ONLY go up, and, as someone else pointed out, the cost of putting a player on IR near the end of the season for the sake of extending a roster spot for a prospect is marginal. So what looks like an increasing injury rate due to IR may just be an artifact of these two phenomena.

The solution? Permit teams to activate players from IR to the active roster under certain conditions (what those conditions are, don't ask me) the way a 15-day DL style has worked in baseball. After we've played a few seasons that way, we'll have a better picture of whether season length actually does affect injury prospects.

As far as I am aware it is for precisely the reason implied in the post above. High revenue teams with Machiavellian coaches were adding extra depth by pretending that guys were injured. If you bear in mind that the draft used to be a lot longer than seven rounds, there was too much scope for abuse of the system. Teams still do it. If they have strong rosters some teams will have a draft pick sit out the year and go through another offseason to see if they improve enough to make the team. In theory these players should be getting cut so other teams can decide if they would be an improvement to their rosters, but it doesn't always happen.

Bill, have you factored in the loss of the two preseason games? (I have no idea if the league even publishes injury lists for 'exhibition' games but I would have thought some players would be hurt in those games too.)

In all of the discussions regarding this topic, I am surprised that I have seen little or no discussion of the effect experienced in the CFL when it went to 2 preseason + 18 regular season games. It doesn't seem to be an issue for the players, who get a fraction of NFL money, and it's much better for the fans.

I think a more important question is whether the accumulation is linear or accelerating. Of course injuries are going to be cumulative; this is intuitive in my opinion.

The more important question is whether the rate of accumulation accelerates as the season goes on -- in other words, whether the plot of the data looks linear or superlinear. Without actually doing the plot, the data looks linear to me. And in my opinion a linear accumulation of injuries would be an argument in FAVOR of the longer season, because the risk of injury does not actually increase with games played. Rather, injuries simply accumulate at a steady rate as the year goes on, again as one would expect intuitively.

FWIW, I just did the plot in Excel and slapped some trendlines on there, just to see which one gave the highest R-squared value.

For "IR/PUP totals", the best fit trendline is polynomial, then exponential, then linear. For "w/IR, PUP", the best fit is polynomial, then linear, then exponential. All had R-squared in the >.95 range.

What that means, I'm not really sure. Probably that you can reasonably call the trend either linear or exponential, depending on your point of view. My gut feel, just from looking at the points on the graph, is that there is an increase in the slope of the graph in the second half of the season.
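For anyone who wants to reproduce the check without Excel, here's a sketch with numpy on the "w/ IR, PUP" column from the article's table:

```python
# Fit linear and quadratic trendlines to the "w/ IR, PUP" column
# and compare goodness of fit via R-squared.
import numpy as np

weeks = np.arange(1, 18)
with_ir = np.array([4.05, 4.67, 4.94, 5.43, 5.58, 5.53, 5.83, 6.07, 6.26,
                    6.58, 6.49, 6.75, 6.94, 7.21, 7.68, 7.99, 8.75])

def r_squared(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1 - ss_res / ss_tot

for degree in (1, 2):  # linear vs quadratic ("polynomial") fit
    coeffs = np.polyfit(weeks, with_ir, degree)
    r2 = r_squared(with_ir, np.polyval(coeffs, weeks))
    print(f"degree {degree}: R^2 = {r2:.3f}")
```

Both fits should land above the .95 mark reported above, with the quadratic edging out the linear.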

The difference may be semantic, however. For the sake of argument let's say there is a 1% chance per team each week of a player having an injury bad enough to land on IR. For these injuries nobody in their right mind could argue that there won't be more players on IR at the end of the year than the beginning of the year.

Eliminating this statistic, there would be two questions left for the league. Do "normal" injuries (defined as ones players can come back from in the same year) occur more often as the season progresses, and does the amount of time lost increase? The second (although related) question is whether season-ending injuries (say, those requiring rehabs of 6 months or more) occur more often as the season progresses. A hard definition of a "season-ending" injury would have to be agreed upon, since a given injury is more likely to end a player's season the less of the season remains.

I think distinguishing the data like this might give a very different "answer" than taking the data all lumped together.

I don't know that it matters if I'm a player whether the chance of a season ending injury stays the same for a given game throughout the year. Because the total chances of a season ending injury happening in a given season would still go up, potentially limiting the number of seasons played.

Theoretically, if the chance of severe injury is constant, you won't affect things like total games played in a career, or games missed due to injury, by making the season longer. But salaries, records, pro bowls, hall of fame voting, and career length, are largely season based, and all potentially changed by lengthening the season. Most of them change for the worse, while some are likely neutral.

Season records, for instance, would take adjustment (to contracts as well?); We already have pre-merger/post-merger, pre-PI change/post-PI change, 14 game season/16 game season, etc caveats. This would simply be another. So this would be a change, but probably not negative.

On the other hand, say you can become a free agent two seasons from now, and they switch to an 18 game schedule. Your odds of getting hurt for significant time prior to free agency and therefore losing a lot of money just went up. (Whether the odds of getting hurt on any particular play or game are constant.)

All of this brings up another potential source of data - injuries before and after the switch to the 16 game season. It was a very different game then, but it might be enlightening.

Between the first and second week the number of players on IR/PUP goes up 0.17 (1.86 - 1.69 = 0.17). In the last week (5.30) the number goes up 0.29 from Week 16 (5.01). In other words, nearly twice as many players are seriously injured in week seventeen compared to week one. The number of players hurt during a game gradually increases as the season goes on. I did a regression and found the correlation r = 0.73.

This makes intuitive sense: As the season progresses, players suffer accumulated wear and tear and become more vulnerable to serious injury. Adding two more weeks will have players receiving injuries at an increasing rate.
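The week-over-week additions can be read straight off the IR/PUP totals column in the article's table:

```python
# IR/PUP totals per team by week (weeks 1-17), from the article's table.
ir_pup = [1.69, 1.86, 1.99, 2.18, 2.30, 2.38, 2.68, 2.85, 3.10,
          3.31, 3.64, 3.85, 4.11, 4.41, 4.73, 5.01, 5.30]

# Week-over-week increases: how many more players land on the lists
# each week than came off them.
increments = [round(b - a, 2) for a, b in zip(ir_pup, ir_pup[1:])]
print(increments)  # starts at 0.17, ends at 0.29
```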

It's not as simple as you're making it. A 4-week injury doesn't end a player's season in week 1, but it does in week 15. And that late, teams are willing to sign street free agents who they may want to look at for the following season, or to fill out their playoff roster.

"The number of players hurt during a game gradually increases as the season goes on."

That may or may not be true, but THESE stats don't show that. If you look at the graphs of the data, and leave out PUP/IR, then the number of players missing per game gradually goes up for the first four weeks (as you might expect with players getting into "game" shape). But then after that it levels off and is basically flat until "crunch" time in the season at about week 10. For these "more important" games the number of players missing per game goes DOWN and then remains basically flat again until week 17 when there are a bunch of "meaningless" games.

Now when you look at PUP/IR the slope of the graph is pretty shallow for the first 6 weeks as teams evaluate how bad injuries actually are and decide whether to give the players a chance to come back. After 6 weeks, when the PUP-to-IR decision must be made, the slope of the line increases a bit, indicating more people per week are added to the list, but the slope remains basically constant (linear) after week 6 until week 17, when it doesn't matter anyway, and you see a small spike in the number of players on PUP/IR.

This all basically makes the NFL's "point" that injuries, according to injury list data, do not seem to be more common as the season goes on.

But using IL data in the first place is a bit of a farce. Just because players don't miss games doesn't mean they aren't "beat up". Anecdotal evidence suggests that players accumulate "minor" injuries as the season progresses and thus end up more "beat up" the longer the season is. But the NFL is trying to ignore this because there is no good way to quantify this perception (that I am aware of, anyway).

So, like I said, it basically comes down to semantics (about the definition of a game "missed" because of injury, which for the NFL are not counted if the player is not on PUP/IR) and smoke and mirrors.

So IR additions plateau around week 9. I guess the takeaway here is "Duh, NFL 2 more weeks means 2 more weeks of injuries. Though those two weeks will not really be any more disastrous than weeks 16 and 17 are now."

What you'd actually want is the number of new people injured each week, not just a total of injured players for the week. If someone is injured in week 2, and it's a 4-week injury, you don't need to count it for weeks 3-6 for this type of report. That would tell you if the total number of injuries per week goes up over the course of the season, or remains relatively constant.

It would clean up the data for things like Chuckv showed in 22, where you can't tell if twice as many people were injured in the last week, or if 4 times as many were and everyone else who was previously on the injured list came off it, so it doesn't look as bad as it really is. With the data as it is, you can't tell the difference.
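A quick sketch of the difference between the two counts, with made-up (week injured, weeks out) records:

```python
# New injuries per week vs. total players sidelined per week.
from collections import Counter

# Hypothetical records: (week the injury occurred, weeks sidelined).
injuries = [(2, 4), (2, 1), (5, 8), (10, 2)]

# What you'd want: how many NEW injuries occurred each week.
new_per_week = Counter(start for start, _ in injuries)

# What the running totals give you: how many players are out each week.
sidelined_per_week = Counter()
for start, duration in injuries:
    for wk in range(start, start + duration):
        sidelined_per_week[wk] += 1

print(new_per_week[2], sidelined_per_week[5])  # 2 new in wk 2; 2 out in wk 5
```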

A "loose" interpretation of the info would show a plateau in the number of players added at week 7, with aberrantly low additions in weeks 6 and 8. Worst case would have the plateau in week 11, in my opinion.

This bolsters the NFL's phony case. But it doesn't answer two key questions that the NFL would probably like to avoid: 1) how the players "feel" at the end of the season (which might actually "plateau" as well at some point, but who really knows except the players themselves), and 2) what the long-term effects are from all the accumulated "minor" damage, and how two more "real" games and two fewer "exhibition" games would affect the players long term.

This discussion is quite fun actually (if you set aside the fact that we're talking about peoples' long term health).

I'd be interested to see how Bill's IR per week data splits out by injury type.

We expect that as the season goes on, we will see players added to IR with injuries with shorter recovery timetables. But is it also possible that certain types of injuries are more likely to occur later in the season? This may be obscured in injury reports, but few teams have a motivation to obscure the nature of the injury when adding a player to IR.

Also, setting IR aside for a moment, we may expect the frequency of stress fracture injuries to increase as the season goes on. Does the data bear this out?

I agree that more games means more injuries, but this isn't necessarily a huge problem. Simply stating that more injuries will occur isn't enough to reject the proposed 18 game season.

Further, I like this alternative: Add two games, and one bye week for each team, effectively adding three weeks to the regular season schedule. Teams will have more time to recover from injuries, dampening the effect of the longer season, the NFL will have a longer regular season and more revenue, and fans will have more meaningful games to watch.

Yes, simply stating that more injuries will occur is not enough to reject the 18 game proposal. Even stating that injuries will occur at a higher rate in those last two games (which is really what's being argued here) doesn't necessarily provide enough justification to reject it.

I like the idea of an extra bye week later in the season to offset this. The downside would be that the regular season would now be 3 weeks longer, and so either they're playing "real" games in 95 degree August heat, or the Super Bowl is at the beginning of March.

At the end of the day, this isn't going to come down to whoever can fudge their data the best. It'll come down to money. Everyone (even the people who put this report together for the NFL) knows intuitively that there will be more injuries in a longer season. The question is, how much money is it worth to the players to agree to take this extra pounding, and are the owners willing to pay it? And as someone else said, the other concern is free agency. If "free agency after 4 years" changes from 64 games to 72 games, that's half a season longer that the player has to play before cashing in. The players are not just going to agree to that.

I guess I'm the only one who sees 16 games as already being plenty. The NFL needs to come up with a different way to increase revenues. Players are already literally killing themselves for this game. 18 games of that kind of intensity is insane.

The primary revenue driver for the NFL is TV sales, right? Ticket sales are a drop in the bucket relative to TV Ad Revenue.

Why not lengthen the season by adding additional bye weeks?

Each bye week will add an extra week of 4 slots for televised games without any additional stress on the players' bodies.

In addition, the additional rest may actually be good for the players. I could deal with 2-3 bye weeks, maybe even 4. Yes, it stinks when you can't watch your team that week, but it gives you the opportunity to see other teams and broaden your interest in the sport.

Surprisingly, even though I just endured the worst. season. ever., the Lions' bye week was, once again, less interesting for me than the rest of the season. (The Sunday after Thanksgiving has always been the day I look forward to the least, because the Lions are guaranteed not to play then.)

I still watch NFL games then - I have the package, so I'll watch either RZC or games of particular interest - but it's not the same when there is no result for my team.

Of course other people may feel differently ...

If you work at the stadium, or in another business that depends on game-day traffic, you might not like the idea of additional bye weeks.

Would advertisers want to spend more money to cover the same number of games? Of course the NFL would sell it as weeks/hours/slots/whatever, but it's still the same number of games ... that one I don't know.