The Latest

Friday, March 30, 2012

If you follow the TV industry at all, you know that the Nielsen ratings for regular series on the broadcast networks are like a fast-flowing river, and the individual shows are just fish trying desperately to swim upstream. Maybe that water hits an occasional rock and splashes up for a couple seconds, but the decline on the whole feels inevitable. How much of it is the networks' "fault" and how much of it is the rise of alternative options? Debatable. Either way, it's real.

To some extent, this grim reality hurts the integrity of historical ratings. TVByTheNumbers has this thing they call the "Gunsmoke Rule" which essentially says that any ratings beyond the previous season are meaningless for comparison purposes. Sometimes I feel like they just wave that around to save themselves the trouble of looking something up, but the general principle is certainly correct; as ratings decline, so too do standards for renewal and cancellation and pull-me-from-the-schedule-right-now, and this is drastic enough that the raw number standards for network decision-making are completely different within a few short years. Just one example: CBS pulled fall 2006 newbie Smith from the schedule after it got a 2.8 demo in its third episode. Five years later, the occupant of the same timeslot, Unforgettable, premiered to a 2.9, has averaged about a 2.1 for most of 2012, and will air (at least) a full season.

All this collective declining is meaningful and needs to be reported. At this rate, eventually ratings are going to get low enough to seriously challenge the broadcast model. No denying that here. But we're not there yet, and sometimes I think all the talk about series lows and the collective decline misses the trees for the forest. For now, broadcast TV ratings are still a system that largely operates by the same set of rules. It's just that the standards for "hit" and "flop" shift to match the collective declines. There are other things happening in this system beyond "everything's down."

Since it's basically the same system, that means there should be a way to put those old, much higher ratings on a level playing field with those of today. Enter what I'm calling the A18-49+.

That name borrows from the world of baseball, where sabermetrics has stats that adjust for the run production of the league as a whole. For example, ERA+ compares a pitcher's earned run average with the league's average ERA for that season. That allows for a relatively good comparison of pitchers across eras in which collective offensive production was hugely different.

That's what A18-49+ does too. It sets the larger trend off to the side and creates a world in which every season always averages out to 100. Over the last few weeks, I've compiled all the broadcast A18-49 numbers over the last six regular seasons, and I've used those to come up with the TV ratings version of "league average": the average of all original entertainment programming across the big four networks during the regular broadcast season. It's a little unfortunate that I only have enough numbers to come up with a good "league average" for six seasons out of the history of TV. Still, the landscape here is changing a lot more quickly than in baseball, so it seems potentially pretty useful.

The formula for A18-49+ is simple:

A18-49+ = 100 * (show A18-49 average) / (league A18-49 average)

This will output a number similar to ERA+. 100 = league average, and the bigger the better.
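As a quick sketch of the arithmetic, here it is in Python. The league averages used below are hypothetical placeholders for illustration only, not the actual figures from my data; the show numbers are the Smith and Unforgettable averages mentioned above.

```python
def a18_49_plus(show_avg, league_avg):
    """Index a show's A18-49 average against that season's league average.

    100 = exactly league average; bigger is better, like ERA+.
    """
    return round(100 * show_avg / league_avg)

# Hypothetical league averages for illustration (not the real figures):
# suppose the 2006-07 league average were 3.5 and 2011-12's were 2.4.
print(a18_49_plus(2.8, 3.5))  # Smith's 2.8 in 2006-07 -> 80
print(a18_49_plus(2.1, 2.4))  # Unforgettable's 2.1 in 2011-12 -> 88
```

Note how the index captures the point of the Smith/Unforgettable comparison: under these assumed league averages, the show with the lower raw rating comes out ahead once the era is factored in.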

This is not the kind of number that can get thrown organically into daily ratings conversation the way I do with my timeslot adjustments. It's more of a big-picture thing. But there's a lot of stuff that the A18-49+ can help illustrate, and I'll be exploring much of it each Friday over the next several weeks. Some of these will be more about scheduling than ratings, but combining it all, I hope it'll make for one of the most comprehensive looks at the shifting TV landscape over the last few years.

For now, I leave you with this introductory look. Three fairly distinct types of shows emerge from looking at this. Not every show fits neatly into one of the three categories, particularly across the whole six-year period, but quite a few do.

First, there are the shows that "swim upstream." In other words, they hold up from a raw numbers standpoint over time and, as a result, grind out an increasing amount of value relative to the ever-declining league average. A few shows, like the two below, fall into the sweet spot where they actually decline a bit in raw numbers but gain in A18-49+. Of course, the shows that actually grow in raw numbers (NCIS, Modern Family, The Big Bang Theory) have risen even more precipitously.

Not every show can blame the general decline of broadcast. Some like to say, "So what if it's down? Everything's down!" But some shows are just getting older and would be on the decline even if there were no external factors pulling down collective broadcast ratings. I say that these shows "swim downstream."

Then there are those reliable staples of primetime that never seem to get any real attention and amazingly decline at almost exactly the rate of primetime as a whole every single year. These shows "ride the current." Many of these are the kinds of shows you'd expect:

America's Funniest Home Videos

Season     A18-49+
2006-07    70
2007-08    72
2008-09    72
2009-10    73
2010-11    70
2011-12    68

Dateline Fri

Season     A18-49+
2006-07    56
2007-08    60
2008-09    52
2009-10    55
2010-11    56
2011-12    56

Cops

Season     A18-49+
2006-07    53
2007-08    61
2008-09    55
2009-10    57
2010-11    57
2011-12    54

There are some other, slightly more prominent shows that have been remarkably consistent in recent years: animated shows and CBS' reality franchises.

Perhaps the most amazing member of this "ride the current" group is the biggest show on TV. American Idol has been down in raw numbers every year that A18-49+ data is available, but its complete dominance of the relative landscape in the second half of the aughts has remained almost exactly the same...

...or at least it has until this season. More on Idol in future posts!

Sometime in the near future (probably this weekend), I'm gonna begin installing the A18-49+ numbers by season on the War of 18-49 pages, so stay tuned. Next Friday, we'll dive back into this by trying to define commonly bandied-about words like "hit" and "flop" from an A18-49+ standpoint.

3 comments:

I was thinking about this same thing a lot. I like your approach, because it's an easy way to calculate "adjusted" A18-49 ratings using publicly available data you've already gathered. It's also the best effort I've seen so far, though admittedly the only one I've seen :) Anyway, once again you're the first to explore some field, props.

However, I think the best approach would be to use the amount of money advertisers paid per A18-49 point. If I remember right, adage.com's survey after this season's upfronts came out with: a broadcast primetime 30-second ad was bought at $48K per point, on average. The same survey showed $44K a year ago. Thus, 2010-11 season ratings should be adjusted by the formula (44 * show 18-49 average / 48) to make them comparable to 2011-12 season ratings. Ditto for each other season.

Here we come to the problem... I doubt adage (or anyone else) has actual numbers for more than a few seasons, let alone those numbers being publicly available. So this perhaps remains just an idea.

Even better would be to use averages of the actual money paid (upfront + make-good + scatter). Again, it's highly unlikely anyone has those numbers. Except for the networks themselves, of course; they surely have it, but for their own shows only, and they have no incentive to make those numbers public.

Interesting idea. My first thought was that it might not really be worth all that extra data entry, since the CPM typically rises almost exactly enough to offset the ratings declines. As long as broadcast remains healthy, it wouldn't really change these numbers. But the upside is you could get a better sense of when broadcast becomes NOT healthy... in other words, when the declines become greater than the CPM increases.

I guess another option would be to take ratings out of it entirely and index each show's ad rate against the average ad rate of that season. That would have its own flaws when trying to translate it back to ratings, but it could be a good historical measurement of each upfront.

CPM rises as much as ratings decline because broadcast networks are still able to convince big advertisers that they need the medium with the highest reach. That much is true. But in the process they also manage to convince buyers that their shows must be expensive productions of about $3M per hour (yes, I know: comedies cost less; older shows with expensive stars and genre shows with lots of special effects cost more) to accomplish that goal.

If CPM rose at a slower rate, broadcast networks would turn an eye to cheaper productions or co-productions (already The Firm, and soon Hannibal, are following The Borgias and some other cable shows financed that way) before taking more drastic measures (like a change of ownership or bankruptcy). If CPM rose faster (or ratings held steady or dropped more slowly), then networks would order shows that cost even more and/or try to recoup lost territory (Friday, Saturday) by ordering and airing more originals.

But my idea is very rough; it doesn't go that deep. It uses a convenient fact: the cost of an hour of programming has held pretty steady over the last decade (at least for the big four networks, and not counting shows with inflated actor salaries), while CPM continues to rise. That's the real reason 2006's Smith was cancelled as soon as it fell below a 3.0 A18-49 rating, while nowadays pretty much everything averaging over 2.0 is renewed (and on NBC, even below it).

If the data were available (which I doubt), then one single number (cost of a 30-second ad per A18-49 rating point) would suffice for an entire season. As opposed to calculating a "league average," which can lead to dilemmas (like whether or not to include unscripted shows) or even errors (like calculating simple averages over all shows instead of weighted averages).