FCC opens up inquiry on Arbitron radio ratings gadget

The FCC wants Arbitron to make good on its promises to improve the accuracy of …

It's not exactly the high-powered investigation that media activists wanted, but the Federal Communications Commission is launching a Notice of Inquiry on whether Arbitron's Portable People Meter (PPM) underestimates minority radio listeners. And to raise the stakes on the probe, FCC Commissioner Jonathan Adelstein added this codicil to the inquiry announcement:

"If the Commission does not conclude that PPM is in fact reliable and accurate," Adelstein warned on Monday, "or if there are still many unanswered questions, the Commission may have to reconsider whether its reliance on Arbitron's market definitions and audience ratings calls into question the reliability and integrity of the Commission's own analysis that uses Arbitron information."

This is no mild threat. The FCC depends on Arbitron stats to determine radio license market areas, and the Department of Justice uses them in some instances for antitrust enforcement. But it's unclear what metrics the FCC would rely on if it walked out on Arbitron. And keep in mind that the Commissioner making the threat, Adelstein, is scheduled to leave the Commission and take charge of a rural broadband deployment program over at the Department of Agriculture.

50 by 2010

As we've reported, a half-dozen minority broadcast advocacy groups have repeatedly complained that Arbitron's new gizmo for measuring radio station audiences is seriously out of whack. A device about the size of a BlackBerry, the PPM will eventually replace Arbitron's old diary system, in which participants write down which stations they listen to during the day. The company sends out over 2.5 million diary forms to participants every year.

The PPM, currently being tried out in various markets, is carried by consumers like a mobile phone. The device detects inaudible codes embedded in radio broadcasts, registering every station the participant listens to or is simply near. At this point, Arbitron says it wants to replace its diary method with the PPM in the 50 most important radio markets by next year. It has already rolled out the new system in over half a dozen markets across the country, from New York City to Riverside-San Bernardino.

Critics of the PPM, including the FCC's own Advisory Committee on Diversity, charge that Arbitron's PPM methodology is flawed in about a dozen ways. Only about five or six percent of PPM samples consist of cell-phone-only households, they say, when nearly a fifth of African-American and Hispanic homes rely on mobiles. They also complain that PPM market sample sizes are much smaller than the diary cohorts, and that while the technology captures listener exposure, it doesn't measure other forms of consumer loyalty to a radio station.

Arbitron has been trying to head off a formal FCC investigation of its PPM for quite some time, but it has also had to contend with lawsuits against the system backed by the state attorneys general of New York, Maryland, and New Jersey.

The Garden State has been particularly up close and personal about this matter, with its United States Senator Robert Menendez meeting with the firm, and the head of the New Jersey Broadcasters Association complaining about PPM's "erratic deployment." NJBA CEO Paul Rotella noted in a letter to the FCC last year that Arbitron deployed 347 PPMs to Middlesex County (pop. 732,000), but only 96 to Monmouth County (pop. 588,000). "This represents 261% greater PPM sample size in Middlesex," he observed, "which only has a 25% greater population!"
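
Rotella's percentages check out. A quick sketch, using only the meter and population counts from his letter (his "25% greater population" rounds up slightly):

```python
# Check the arithmetic in Rotella's letter: PPM meters vs. population
# for the two New Jersey counties he compares.
middlesex_ppm, middlesex_pop = 347, 732_000
monmouth_ppm, monmouth_pop = 96, 588_000

meter_gap = (middlesex_ppm / monmouth_ppm - 1) * 100  # % more meters
pop_gap = (middlesex_pop / monmouth_pop - 1) * 100    # % more people

print(f"{meter_gap:.0f}% more meters, {pop_gap:.1f}% more people")
# -> 261% more meters, 24.5% more people
```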

Arbitron has settled these suits by promising to improve its sample recruitment techniques, especially among cell-phone-only households, and to make certain that PPM recruits use the new technology correctly, among other reforms. But the FCC's inquiry clearly wants to see whether the company has made, or will make, good on its promises. "Have these improvements resolved the problems in whole or in part?" the NOI asks. "Are the commitments made by Arbitron to improve PPM methodology in the settlement markets and voluntarily in others sufficient to cure the problems cited by commenters?"

What critics of the PPM system had been hoping for was an FCC "Section 403" investigation of the matter; Section 403 is the portion of the Communications Act that allows the agency to initiate probes on broadcasting issues. That's also what Adelstein wanted. Section 403 probes are tougher than NOIs: they include provisions for witness testimony and document production overseen by an administrative law judge.

Instead, the agency has launched this inquiry—a somewhat lower profile affair that will last for 60 days following its publication in the Federal Register.

What are we doing here?

In their public statements, Adelstein's fellow Commissioners seemed less clear about what the FCC can do about this matter besides investigate it. "We do not regulate Arbitron," interim Chair Michael Copps conceded in his public comment, but "anything that affects media diversity and minority ownership... affects our ability to do our job." Copps also praised Arbitron for "trying to improve its ratings methodology and for committing significant resources to that effort."

The lone Republican on the FCC was far more circumspect. "I expect to pay particular attention to analyses of the Commission’s authority to take any further action in this arena," Robert M. McDowell warned.

As for Arbitron, the company says it is glad that a Section 403 investigation is not on the menu. "An open proceeding can foster dialogue, education and an exchange of ideas among parties holding differing viewpoints," a spokesperson for the firm told us, "while a closed investigation would likely lead to 'freezing' the parties into a litigation-like adversarial posture." But even this relatively mild inquiry will keep the heat on the company to improve its controversial new metrics machine.

If grapefruit are 20% likely to be red inside, how many oranges come from a given area? Why do people shoot any and all credibility they might possibly have by choosing two entirely unrelated statistics and pretending they are in lock step? The system is being tested in New York, Maryland, and New Jersey. It doesn't matter if 15% of the African-American & Hispanic households sampled are cellphone-only while only 5-6% of the overall sample, not broken out by race, is cellphone-only... they are entirely different things, and the representation is accurate based on the two numbers he provided.

I see the biggest issue both ways being when someone is "forced" to listen to a station they don't want to for a significant period of time. The best example I can think of is the boss playing his favorite station at work, so everyone has to listen to it. In the diary method, you might not list it because you're not choosing to listen, but the PPM will note that you DO listen to that station all day. Technically more accurate, but it doesn't account for external influence. I don't understand why they can't run both systems for a while to see if there is really that big of a difference.

If you're an advertiser with products or campaigns aimed at a particular racial group, then surely race is relevant? If race is relevant, then significant deviations from the statistical 'norm' on the part of particular groups might be relevant to the question of whether their methodology for selecting households is a good one?

This is a witch hunt. There have been grassroots efforts by minority groups to get people to blindly fill out the diaries based on their ethnic "loyalty" instead of their actual listening habits. Now that the People Meters are in place, it's showing how skewed those diaries have been. Ratings equal money. To that effect, ANY measurement tool should be scrutinized.

Originally posted by hpsgrad: If you're an advertiser with products or campaigns aimed at a particular racial group, then surely race is relevant? If race is relevant, then significant deviations from the statistical 'norm' on the part of particular groups might be relevant to the question of whether their methodology for selecting households is a good one?

Or am I misunderstanding something here?

Yes, you're missing something. The 6% statistic is for all participants; the 15% statistic is for African-American & Hispanic participants only. The two numbers are expected to differ, since the group of all participants contains non-Black/Hispanic participants as well. They would only match if the sample population were 100% Black/Hispanic.

It's like saying that 20% of grapefruit are red, but in a sample of citrus fruit sold at the grocery store only 6% of all citrus fruit sold were red grapefruit... then taking those numbers and claiming there is some problem with the method used to count citrus fruit. The problem isn't that the numbers inaccurately represent red grapefruit; the reduced percentage comes into play because limes, oranges, lemons, tangerines, tangelos, and every other citrus fruit under the sun were part of the "all citrus fruit" grouping used for the 6% number.
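
The citrus arithmetic above can be sketched with toy numbers (all counts here are invented for illustration): a subgroup rate and an overall rate measure different things, so 20% vs. 6% is not by itself a contradiction.

```python
# Invented counts for a basket of citrus fruit at the store.
counts = {"grapefruit": 300, "orange": 400, "lemon": 200, "lime": 100}
red_grapefruit = 60  # 20% of the 300 grapefruit are red inside

subgroup_rate = red_grapefruit / counts["grapefruit"]  # 60/300  = 0.20
overall_rate = red_grapefruit / sum(counts.values())   # 60/1000 = 0.06

print(subgroup_rate, overall_rate)  # -> 0.2 0.06
```

Same 60 red grapefruit in both ratios; only the denominator changes.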

If you look at the number of males 18-25 who are cellphone-only, the results are going to be way more than 6% as well. That doesn't mean the 6% number is wrong, unless the area is populated entirely by males 18-25. Like zerocommazero said, it's a witch hunt, and the diversity committee isn't beyond using intentionally twisted/bad math to make a point.

I realize that the numbers aren't expected to match. I also realize that other groups may have similar issues. The thing I don't understand is why the change is irrelevant. I'd think that these differences might matter if listening preferences split along these ethnic or age lines, because the Arbitron measurements, while measuring a legitimate and real value, might not be measuring the correct value.

If the issue were real, the FCC diversity committee would have chosen the right statistic as a comparison. If they had said that 20% of Black & Hispanic households are cellphone-only, but that among the Black & Hispanic households who participated only Y% are cellphone-only, there might be some credibility. Instead of doing that, they intentionally chose the wrong statistic and tried to link it to some other statistic... they automatically shot their credibility in the face by fudging numbers and hoping nobody notices. They took it even further by switching between fractions and percentages in the same comparison to further hide the number fudging.

The fact that they did all that doesn't only hurt their credibility in this case either. The next time the diversity committee has something to say, it's automatically taken in suspect because they've shown themselves to not have a problem with fudging numbers to create an issue where none exists. Take it a step further and it harms the credibility of any similar issues that come up in the future even if there is a real issue.
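
The like-for-like check described above, comparing one group's cellphone-only rate in the population against that same group's rate inside the panel, can be sketched as a tiny helper. The rates and the noise tolerance below are hypothetical, not Arbitron's figures:

```python
def undersampled(pop_rate: float, panel_rate: float, tol: float = 0.02) -> bool:
    """True if the panel's rate trails the population rate by more than tol.

    Real work would use a proper significance test on the sample sizes
    rather than a fixed tolerance; this only illustrates the comparison.
    """
    return pop_rate - panel_rate > tol

print(undersampled(pop_rate=0.20, panel_rate=0.05))  # -> True (badly short)
print(undersampled(pop_rate=0.20, panel_rate=0.19))  # -> False (within tolerance)
```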

As I understand it, the PPM depends on a land-line telephone in the household to transmit its findings. If there is a group whose proportion of land-line telephone use differs greatly from that of the overall population, this could cause a problem of over- or under-representation. This is one of the complaints people had about the PPM.

Given this, presenting the different proportions of land-line use between two different groups seems relevant, rather than deliberately misleading. What am I misunderstanding here? I've had several semester of statistics and probability classes, so I'm even more interested in finding my error than I might otherwise be.

How can 6% of the total participants using a device that requires a landline be cellphone only households? Your reasoning doesn't add up at all. I'm going to go out on a limb and say that a landline is probably not required for the device to work since 6% lines up with the general population.

I'm honestly not sure how things work. I haven't found a clear discussion of the complaints, never mind the mechanism by which the PPM works. I'm not claiming that there definitely is a problem, I'm claiming that I don't understand why people are so certain that there isn't a problem, and trying to figure out what information will help me find out what's going on.

If you've got information about the details of the PPM, and arbitron's selection process, I'd love to see them. Similarly, if you've got links to detailed criticism of the complaints about Arbitron's methods, I'd be quite happy to read them. So far the ars technica threads have been less than helpful.

I put "how does the Arbitron PPM work" into Google and it pointed to this on Arbitron's site.

There are FAQ PDFs for the individual cities... New York's has this entry in it:

quote:

13. How many meters will be placed in the Metro and where?

The in-tab target is 3,878 meters in the New York Metro (including the embedded PPM markets of Nassau-Suffolk (Long Island) and Middlesex-Somerset-Union), with a panel that consists of persons six years of age and older from landline telephone households and cell-phone-only households. The in-tab target for Nassau-Suffolk (Long Island) is 1,080 panelists and the in-tab target for Middlesex-Somerset-Union is 694 panelists. All panelists will be included in the New York Metro report, weighted to their proper population percentage. Panelists are recruited using a random digit dial (RDD) telephone frame, as is used for the Diary.

18. Do you have control over panel demographics?

There are several ways in which we exercise control over panel demographics in order to have a representative sample. We stratify our sample by geography (including High-Density Black and Hispanic areas) to ensure a representative starting sample. In Philadelphia, these stratifications are geography, race/ethnicity and the presence of 18- to 24-year-olds. We monitor each person's compliance on a daily basis, and noncompliance triggers phone contact from an Arbitron panel relations specialist and other coaching.

Originally posted by bicarb: I put "how does the Arbitron PPM work" into Google and it pointed to this on Arbitron's site.

There are FAQ PDFs for the individual cities... New York's has this entry in it...

Thanks for the links. Unfortunately, they don't seem to shed much light on the criticisms of the PPM, or to provide the details of their methodology (which appear to be something you have to pay for access to).

I found this information about the PPM and telephone lines on their website:

quote:

SPI among 18-34 has improved, due to higher incentives, a higher sampling rate, the introduction of a new cellular hub that doesn’t require a phone company visit to the panelist’s household, PPM carry accessories, a panelist Web site and increased distribution of PPM travel chargers.

This is from the minutes of a Nov. 2008 meeting. It implies that complaints about cell-only households being potentially under-represented were reasonable when they were filed in September.

No other information I found on the website is directly useful. I haven't spent too terribly much time on it, however, and given the state of discussion here at Ars, I probably won't. There are too many people who seem content with 'this is crap from special interests' to make me want to keep digging this stuff out. We'll see what this NOI turns up. If it's all smoke and mirrors and lies, that should become obvious fairly quickly. If the criticisms have some validity, even if Arbitron has already taken them into account, that should become clear as well.

You couldn't even be bothered to type a search on how the PPM works into Google, yet you're complaining that people aren't doing enough fact checking to prove the point you're trying to advance, that the diversity committee's claims have merit? There has been plenty of evidence, including the deliberately incorrect statistics chosen for comparison, to suggest that they are pushing an agenda rather than actual issues. It was fairly obvious that it was all smoke and mirrors when they complained that cellphone-only households were 6% of all participants while Black and Hispanic households in the area are one fifth (20%!) likely to be cellphone-only; fudging numbers and picking deliberately misleading, unrelated statistics goes a long way toward showing you have no merit to whatever claim you're trying to support with them.

Remember... this is the same diversity committee that didn't think Sirius/XM had a diverse enough station lineup. They aren't exactly a group with a track record of being grounded in reality.

fwiw, I think you're misreading what I wrote, but fair enough; it wasn't very clear. No, I'm not tempted to continue this conversation. Since I'm evidently an idiot for not understanding the 'obvious' problem with the statistics, and since you don't seem to think the timeline evidence I found is worthy of any mention at all, I think it's clear that I'm too moronic to contribute anything at all.

Thanks for explaining that I'm an idiot instead of addressing my ignorance; it certainly made the discussion shorter and more worthwhile.

Matthew Lasar / Matt writes for Ars Technica about media/technology history, intellectual property, the FCC, or the Internet in general. He teaches United States history and politics at the University of California at Santa Cruz.