from the network-shenanigans dept

While filing a net neutrality complaint is now easier than ever, actually identifying violations may not be. In the new age of interconnection, usage caps, overages, and pay-to-play zero rating deals, less technical users simply may not understand when they're being screwed by their ISP, as these violations aren't nearly as ham-fistedly obvious as outright blocking or throttling of services. That's why the Open Technology Institute’s MLAB recently introduced the Internet Health Test, which runs user connections through a bevy of speed and performance tests to determine whether or not ISPs are engaged in any shenanigans.

Last October, MLAB released a study (pdf) that strongly supported Netflix, Level3 and Cogent's claims that ISPs were intentionally letting peering points to transit operators saturate in order to force companies like Netflix to start paying for direct interconnection. In short, neutrality advocates believe ISPs moved the net neutrality fight to the edge of the network, using interconnection to extract from content companies the pound of flesh they've long stated was their end goal.

The problem is both sides of the equation (whether that's Netflix or AT&T) keep most of their data on interconnection (and the deals they strike) private for competitive reasons, meaning that while signs (and thirty years of history) pointed to ISP skulduggery, actually proving it is difficult. It's apparently becoming less difficult with the new consumer connection data being collected by MLAB, which the Guardian this week claimed proves big ISPs are intentionally degrading network performance across some networks:

"The study, conducted by internet activists BattlefortheNet, looked at the results from 300,000 internet users and found significant degradations on the networks of the five largest internet service providers (ISPs), representing 75% of all wireline households across the US...In Atlanta, for example, Comcast provided hourly median download speeds over a CDN called GTT of 21.4 megabits per second at 7pm throughout the month of May. AT&T provided speeds over the same network of ⅕ of a megabit per second. When a network sends more than twice the traffic it receives, that network is required by AT&T to pay for the privilege."

This is, claims Tim Karr of consumer advocate group Free Press, proof positive that ISPs are up to no good:

"For too long, internet access providers and their lobbyists have characterized net neutrality protections as a solution in search of a problem,” said Karr. “Data compiled using the Internet Health Test show us otherwise – that there is widespread and systemic abuse across the network. The irony is that this trove of evidence is becoming public just as many in Congress are trying to strip away the open internet protections that would prevent such bad behavior."

The problem? While the Guardian report references a "new study," no study has actually been released that I could find (MLAB apparently just shared some selective data with the Guardian). That brings us back to the fact that the biggest problem here continues to be a lack of transparency on the part of all the players involved. But it's pretty hard to claim a "study" proves much of anything when there's no actual study, suggesting some over-eagerness on the part of consumer advocates here.

Shortly after the Guardian piece, MLAB did post a blog entry that sheds a little more light on the data they're collecting, but it's worth noting that while MLAB engineers are confident in saying these slowdowns are due to business choices and not network capacity, they're not yet willing to definitively state why some transit routes suffer more than others:

"It is important to note that while we are able to observe and record these episodes of performance degradation, nothing in the data allows us to draw conclusions about who is responsible for the performance degradation. We leave determining the underlying cause of the degradation to others, and focus solely on the data, which tells us about consumer conditions irrespective of cause."

Still, despite some of the breathless rhetoric in the Guardian piece, neutrality advocates haven't yet obtained the AT&T-lawyer-proof silver bullet that indisputably proves large ISPs have been up to no good. I'm not entirely sure this can even be accomplished without access to raw, confidential ISP data and internal correspondence that may or may not even exist (how do you "prove" Verizon intentionally isn't upgrading a port?). Sure, most people can study AT&T and Verizon's behavior over the last thirty years and conclude that yes, this sort of thing would certainly be in their jackassery wheelhouse, but proving it is kind of important if you want these kinds of claims to be taken seriously.

Still, the fact that people are crunching and closely analyzing the data, combined with the new and novel threat of a regulator that's not asleep at the wheel, appears to have many of these companies magically and suddenly getting along famously. This suggests, contrary to broadband industry doomsday prognostications, that the net neutrality rules are having a positive impact on consumers, business interests, and the Internet at large.

from the eat-it dept

Look, I probably don't have to tell you Techdirt readers this, but I'm a strange sort of cat. I could go into all the reasons why I'm odd, but whenever I try to explain to people how non-normal I am, I usually just reveal this little bit of truth: I hate chocolate. No, I don't not-love chocolate. Nor do I dislike chocolate. I fucking hate it, nearly as much as I hate how low I appeared on this ingenious bit of sleuthing a commenter did in determining which Techdirt writers swear the most (a list which I insist is fucking bullshit, by the way). That said, everyone else loves chocolate, of course, so I'm sure they and many others were thrilled to see so many well-respected publications blaring headlines recently about how chocolate can help reduce weight. I'd show you a bunch of links to those stories put forth by supposedly well-respected journalism outlets and scientific journals that make heavy claims about peer-reviews and fact-checking, but I can't because most of those stories have been pulled. Why?

Because the whole thing was a bullshit hoax put on by a journalist to make the point that, at least when it comes to studies around diet and health, the journals and the media that report on their papers are largely full of crap. Go read that entire thing, because it's absolutely fascinating, but I'll happily give you the truncated version. John Bohannon, who has a Ph.D. in the molecular biology of bacteria and is also a journalist, conspired with a German reporter, Peter Onneken, to see how badly they could fool the media into running BS headlines. They did this by turning John Bohannon into Johannes Bohannon (obviously) and creating a website for The Institute of Diet and Health, which isn't actually a thing. Then they conducted a very real study with three groups: one group eating a low-carb diet, one group eating its regular diet, and one group eating a low-carb diet plus a 1.5oz bar of dark chocolate daily. After running background checks on the groups, conducting blood tests to screen for disease and eating disorders, and hiring a German doctor and statistician to perform the study, away they went. The results?

Onneken then turned to his friend Alex Droste-Haars, a financial analyst, to crunch the numbers. One beer-fueled weekend later and... jackpot! Both of the treatment groups lost about 5 pounds over the course of the study, while the control group’s average body weight fluctuated up and down around zero. But the people on the low-carb diet plus chocolate? They lost weight 10 percent faster. Not only was that difference statistically significant, but the chocolate group had better cholesterol readings and higher scores on the well-being survey.

Bam, results! Not just results, but results the media would absolutely love to sink their idiotic teeth into. The problem? Well, the method for running the entire study was bullshit.

Here’s a dirty little science secret: If you measure a large number of things about a small number of people, you are almost guaranteed to get a “statistically significant” result. Our study included 18 different measurements—weight, cholesterol, sodium, blood protein levels, sleep quality, well-being, etc.—from 15 people. (One subject was dropped.) That study design is a recipe for false positives.

Think of the measurements as lottery tickets. Each one has a small chance of paying off in the form of a “significant” result that we can spin a story around and sell to the media. The more tickets you buy, the more likely you are to win. We didn’t know exactly what would pan out—the headline could have been that chocolate improves sleep or lowers blood pressure—but we knew our chances of getting at least one “statistically significant” result were pretty good.

Bohannon goes into some of the gory math, and it really is fun to read, but this is pretty easy to understand. With a small enough sample size and testing for as wide a range of results and factors as possible, you absolutely expect to find greater variance than if your study tested for fewer factors or had a larger sample size. It's simple: people are different, and testing fewer people makes those differences appear statistically more significant.
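Bohannon's lottery-ticket arithmetic is easy to check yourself. Here's a minimal Python sketch (the function name and setup are mine, matching the 18-measurement, 5%-significance design described above, and treating the measurements as independent, which real, correlated measurements are not; that changes the exact number but not the moral):

```python
import random

def chance_of_false_positive(n_tests: int, alpha: float = 0.05) -> float:
    # Under the null hypothesis (chocolate does nothing), each independent
    # test still comes out "significant" with probability alpha.
    return 1 - (1 - alpha) ** n_tests

# Analytic answer for 18 measurements at the usual 5% threshold:
print(round(chance_of_false_positive(18), 2))  # → 0.6

# Monte Carlo sanity check: simulate 100,000 null "studies", each making
# 18 measurements, and count how often at least one comes up significant.
random.seed(42)
trials = 100_000
hits = sum(
    any(random.random() < 0.05 for _ in range(18))
    for _ in range(trials)
)
print(round(hits / trials, 2))
```

In other words, a no-effect study with that design hands you a headline-ready "significant" result roughly 60% of the time.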

Anyway, the team then went to the International Archives of Medicine, which Bohannon identifies as a "fake journal" publisher. In other words, pay enough Euros and your "study" gets "published," all without the bothersome time-waster known as peer review. Not that IAM doesn't claim to be peer-reviewed. It certainly does make that claim, but after payment was accepted, Bohannon found that their study had been accepted without change. And, keep in mind, this study was designed to be bad. So, once the study had been published, it was time for the PR machine to swing into action.

Take a look at the press release I cooked up. It has everything. In reporter lingo: a sexy lede, a clear nut graf, some punchy quotes, and a kicker. And there’s no need to even read the scientific paper because the key details are already boiled down. I took special care to keep it accurate. Rather than tricking journalists, the goal was to lure them with a completely typical press release about a research paper. (Of course, what’s missing is the number of subjects and the minuscule weight differences between the groups.) But a good press release isn’t enough. Reporters are also hungry for “art,” something pretty to show their readers. So Onneken and Löbl shot some promotional video clips and commissioned freelance artists to write an acoustic ballad and even a rap about chocolate and weight loss. (It turns out you can hire people on the internet to do nearly anything.)

Onneken wrote a German press release and reached out directly to German media outlets. The promise of an “exclusive” story is very tempting, even if it’s fake. Then he blasted the German press release out on a wire service based in Austria, and the English one went out on NewsWire. There was no quality control. That was left to the reporters.

And it didn't take the reporters long to pick up this crap-on-a-stick and run with it like children after the ice cream truck. Not all of the stories have been pulled; some are still up. The Daily Star covered the paper, for instance, as did the Times of India, international editions of The Huffington Post, and some television news programs. Men's Health was going to go with a story in September, though that probably won't run now. Shape Magazine wasn't so lucky, with its story appearing in the June issue, in print. And remember, this is all bullshit. None of it is real. How does something like this happen?

The answer is lazy "journalists."

When reporters contacted me at all, they asked perfunctory questions. “Why do you think chocolate accelerates weight loss? Do you have any advice for our readers?” Almost no one asked how many subjects we tested, and no one reported that number. Not a single reporter seems to have contacted an outside researcher. None are quoted. These publications, though many command large audiences, are not exactly paragons of journalistic virtue. So it’s not surprising that they would simply grab a bit of digital chum for the headline, harvest the pageviews, and move on. But even the supposedly rigorous outlets that picked the study up failed to spot the holes.

Now, there is some humor in all of this, but also danger. It's one thing to claim that chocolate leads to weight loss and have the media run wild with it, but we all know that fad diets and exciting health claims rain down on us in buckets, and I think it's safe to say that not all of them are as harmless as Bohannon's. The average person hasn't done much thinking about the validity of these studies that they read about in the media; they simply trust the media to do the fact-checking. The media, it appears, largely trusts the journals to do the reviews and fact-checking. Except some (many?) of those journals don't. The whole thing harkens back to one of the funnier moments in the Anchorman movie, when the main character makes a ludicrous claim about women's brains being smaller than men's, and then punctuates the statement with a smirk, saying, "It's science." As far as much of the media reporting goes, it might as well be "science."

Strangely, Bohannon notes that readers of the articles were apparently more skeptical than the authors.

There was one glint of hope in this tragicomedy. While the reporters just regurgitated our “findings,” many readers were thoughtful and skeptical. In the online comments, they posed questions that the reporters should have asked.

“Why are calories not counted on any of the individuals?” asked a reader on a bodybuilding forum. “The domain [for the Institute of Diet and Health web site] was registered at the beginning of March, and dozens of blogs and news magazines (see Google) spread this study without knowing what or who stands behind it,” said a reader beneath the story in Focus, one of Germany’s leading online magazines. Or as one prescient reader of the 4 April story in the Daily Express put it, “Every day is April Fool’s in nutrition.”

If we've reached a time when readers are more skeptical than the reporters, that's a massive problem for journalism, but perhaps a delightful sign for the spread of skepticism and inquiry amongst the public. Either way, look with a critical eye the next time you hear about that fad diet or health food claim.

“Call of Duty increases risk of Alzheimer’s disease”, said the Telegraph. “Video game link to psychiatric disorders suggested by study”, reported the Guardian. The Daily Mail posed the problem as a question, “Could video games increase your risk of Alzheimer’s?”, reminding us that whenever a news headline asks a question, the answer is no.

We know that when science news is hyped, most of the hype is already present in the press releases issued by universities. This case is no exception -- the press release was issued by the Douglas Mental Health University Institute, and unsurprisingly it focuses almost entirely on the tenuous link to Alzheimer’s disease.

Tenuous is being exceptionally kind in this case. The study in question, published in the Proceedings of the Royal Society B, barely focused on any link between gaming and the disease, in fact. Instead, the team of Canadian researchers was simply studying the difference in brain-wave activity between groups of gamers and non-gamers. Specifically, they noticed a significant difference among gamers in the activity of one type of brain wave, the N2pc, which can have an effect on attention. So, how did we get from that to a link to Alzheimer's? Were there clinical tests done? Was the team of researchers even in any way focused on the most famous form of dementia?

No. Instead, the article describes the methodology for reaching the conclusion of a link thusly:

1. The type of learning shown by the gamers has been associated in previous studies with increased use of a brain region called the caudate nucleus

2. Increased use of the caudate nucleus can be associated with reduced volume of the hippocampus

3. Reduced volume of the hippocampus can be associated with increased risk of Alzheimer’s disease

That's three, three associations of mere correlation at best, with not even a shred of evidence for causality. And from that we get not only press reports of a link, which I can understand because the major media groups in Western culture have proven to be more interested in sensationalism than stuff that actually exists, but university institutions pushing out press releases to feed the hounds? That's not only wrong, it's borderline character-assassination on the wider gaming industry. Sadly, even some on the research team have gotten in on the act, likely in the hopes of generating press coverage of the study.
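Even granting every link in that chain, stacking correlations dilutes the signal fast. In the friendliest possible case, a linear Gaussian chain where each step carries a correlation of 0.5 (a toy model of my own construction for illustration, not the study's data), the implied end-to-end correlation is only 0.5 × 0.5 × 0.5 ≈ 0.13, and in general correlation isn't even transitive. A quick stdlib-only simulation:

```python
import random
import statistics

def corr(xs, ys):
    # Pearson correlation, population version (stdlib only).
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (len(xs) * statistics.pstdev(xs) * statistics.pstdev(ys))

random.seed(1)
n = 50_000
noise = (1 - 0.5 ** 2) ** 0.5  # keeps every variable at unit variance

# Chain a -> b -> c -> d, each link correlated at 0.5 with the previous one
a = [random.gauss(0, 1) for _ in range(n)]
b = [0.5 * x + random.gauss(0, noise) for x in a]
c = [0.5 * x + random.gauss(0, noise) for x in b]
d = [0.5 * x + random.gauss(0, noise) for x in c]

print(round(corr(a, b), 2))  # each individual link: about 0.5
print(round(corr(a, d), 2))  # end of the chain: about 0.5 ** 3 ≈ 0.13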

The press release also includes a statement from the lead researcher that is a clear exaggeration. Dr Gregory West is quoted as saying “we also found that gamers rely on the caudate nucleus to a greater degree than non-gamers”. Actually they didn’t find this at all, because their study didn’t measure activity in the caudate nucleus. Instead it measured a type of behaviour that previous studies have associated with activity in the caudate nucleus. There is a world of difference between these two, and readers would do well to take these latest claims with a generous helping of salt.

No, man! Salt intake is associated with water retention, which is associated with bloating, and weight-gain can be a factor in spousal infidelity, therefore salt leads to my wife cheating on me if I take these grains you prescribe!

from the so-why-do-we-need-information-sharing? dept

To hear politicians and the media talk about things, "cybersecurity" threats are some sort of existential threat that can only be stopped by giving the government more information and more control over our data. There is, of course, little to actually support that notion. And two new studies show that (as has been the case for decades) the real threats come not from super-sophisticated hacking technology and tools, but from fallible end users and IT folks who don't do a very good job locking doors (hat tip: WarOnPrivacy):

But two deeply researched reports being released this week underscore the less-heralded truth: the vast majority of hacking attacks are successful because employees click on links in tainted emails, companies fail to apply available patches to known software flaws, or technicians do not configure systems properly.

In fact, the real problem tends to be that people are still easily fooled by phishing emails:

In the best-known annual study of data breaches, a report from Verizon Communications Inc to be released on Wednesday found that more than two-thirds of the 290 electronic espionage cases it learned about in 2014 involved phishing, the security industry's term for trick emails.

Because so many people click on tainted links or attachments, sending phishing emails to just 10 employees will get hackers inside corporate gates 90 percent of the time, Verizon found.

And, then, of course, if the IT staff hasn't done much to secure things inside the gates, the hackers get the run of the place.
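Verizon's "ten employees, 90 percent" figure is just the arithmetic of independent chances. Assuming each targeted employee clicks with a probability of roughly 20% (my illustrative number; Verizon's published open and click rates vary by report), the odds of at least one bite grow quickly with the number of targets:

```python
def breach_probability(click_rate: float, n_targets: int) -> float:
    # Probability that at least one of n_targets recipients clicks,
    # assuming each clicks independently at the same rate.
    return 1 - (1 - click_rate) ** n_targets

# Ten phishing emails at a ~20% per-recipient click rate:
print(round(breach_probability(0.20, 10), 2))  # → 0.89

# The attacker's cure for a cautious workforce is simply more emails:
for n in (1, 5, 10, 25):
    print(n, round(breach_probability(0.20, n), 2))
```

Which is why phishing campaigns are cheap to run and nearly impossible to stop by filtering alone: the attacker only needs one click.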

Stopping phishing is definitely a difficult problem, but it's difficult to see how giving the NSA more of our data solves it. Of course, none of this should be new or surprising if you spend any time at all in online security circles. "Social engineering" has always been the most effective way to get into systems. But hyping up the fact that people are gullible and can be tricked into giving up their passwords isn't very sexy and doesn't get big companies and governments to shovel hundreds of millions of dollars at solutions. Freaking people out about sophisticated technology (that isn't nearly as effective) being used to launch hack attacks seems much sexier (and more profitable).

from the shooting-themselves-in-the-foot dept

For many years, we've noted that while some in the legacy entertainment industry seem to think that there's a "battle" between "Hollywood" and "Silicon Valley" it's a very weird sort of war in which one of those parties -- Silicon Valley -- keeps supplying more and more "weapons" to the other party to help it adapt and succeed in a changing world. There are many examples of this, but the clearest is with the VCR, which the MPAA fought hard to outlaw in the 1970s and 1980s. The MPAA's Jack Valenti famously said in 1982 that "the VCR is to the American film producer and the American public as the Boston strangler is to the woman home alone." It was just four years later that home video revenue surpassed box office revenue for Hollywood. It wasn't the Boston strangler, it was the savior. Similar stories can be told elsewhere. The legacy entertainment industry has sued over MP3 players and YouTube, yet has now (finally) embraced online music and video years later than it should have.

And yet, that same legacy industry keeps trying to do everything to hamstring innovation that will only help it. A few years ago, we wrote about a fantastic post (sadly now gone from the internet) by Tyler Crowley, talking about the entrepreneur's view of innovation options and how many areas are welcoming for innovation -- which he described using the analogy of islands:

For tech folks, from the 35,000' view, there are islands of opportunity. There's Apple Island, Facebook Island, Microsoft Island, among many others and yes there's Music Biz Island. Now, we as tech folks have many friends who have sailed to Apple Island and we know that it's $99/year to doc your boat and if you build anything Apple Island will tax you at 30%. Many of our friends are partying their asses off on Apple Island while making millions (and in some recent cases billions) and that sure sounds like a nice place to build a business.

But what about Music Biz Island? Not so much:

Now, we also know of Music Biz Island which is where the natives start firing cannons as you approach, and if not stuck at sea, one must negotiate with the chiefs for 9 months before given permission to dock. Those who do go ashore are slowly eaten alive by the native cannibals. As a result, all the tugboats and lighthouses (investors, advisors) warn to stay far away from Music Biz Island, as nobody has ever gotten off alive. If that wasn't bad enough, while Apple and Facebook Island are built with sea walls to protect from the rising oceans, Music Biz Island is already 5 ft under and the educated locals are fleeing for Topspin Island.

As we pointed out, this leads to the legacy entertainment companies poisoning the well that contains the innovation water it desperately needs.

There's a parallel to this in terms of copyright laws. As the legacy entertainment industry keeps pushing for more draconian copyright laws, it only serves to scare more investors away. When we got good results, like the ruling in the Cablevision case saying that cloud-based services were legal, the result was a huge growth in investment in cloud services -- in contrast to much lower investment in Europe, where the laws were a lot more ambiguous.

A new study from Fifth Era and Engine takes this finding even further, highlighting how bad or vague copyright laws are seriously scaring off investment in necessary platforms and innovation. A big part of this appears to be worries about absolutely insane statutory damages awards. The study surveyed tons of investors around the globe and found an obvious concern about investing in areas where lawsuits could so easily destroy platforms:

In all eight countries surveyed, early stage investors view the risk of uncertain and potentially large damages as of significant concern as they look to invest in [Digital Content Intermediaries]. 85% agree or strongly agree that this is a major factor in making them uncomfortable about investing in [Digital Content Intermediaries].

And they're very specific about how the direct concern involves music and videos and the threat of a lawsuit that could simply put those companies out of business:

88% of worldwide investors surveyed said they are uncomfortable investing in [Digital Content Intermediaries] that offer user generated music and video given an ambiguous regulatory framework.

This is really unfortunate on a number of different levels:

First, it limits the necessary innovation in services and business models that are likely to create the success stories of tomorrow. We need more experiments and platforms that allow places for artists and creators to create, promote, connect with fans and make money for their efforts. Yet if the legacy industry is scaring away all the investors, that's not going to happen.

Second, it locks in the few dominant players of today. Want to build the next YouTube? Good luck. You'll need lots of money to do so, but you're less likely to get it at this stage. The legacy players keep hating the big successful platforms, but don't realize that their own moves lock those players in the dominant positions.

Third, without competition in these spaces and platforms, content creators are less likely to get the best deals. When the legacy industry basically allows one player to become dominant, then it can set terms that are more in its favor. This is what so many from the legacy content industry are complaining about today -- without recognizing that their own actions regarding copyright law have helped create that situation.

Of course, many in those legacy industries actually see this sort of thing as a feature not a bug of pushing for greater copyright protectionism. They think -- ridiculously -- that by hamstringing innovation and investment they get to hold onto their perch longer. This is just wrong. It's trying to hold back the tide, while driving fans to alternative and often unauthorized platforms instead. Rather than supporting the innovation they need, pushing for bad copyright laws only helps to alienate the innovators the industry needs the most and the biggest fans whose support the content industry needs to thrive.

from the 'we've-tried-nothing-and-we're-all-out-of-ideas!' dept

The 2014 Best Places To Work in the Federal Government Survey, published by Stier’s group, ranked DHS dead last among large agencies.

[...]

Many DHS employees have said in the annual government “viewpoint” survey of federal employees that their senior leaders are ineffective; that the department discourages innovation, and that promotions and raises are not based on merit. Others have described in interviews how a stifling bureaucracy and relentless congressional criticism makes DHS an exhausting, even infuriating, place to work.

Beyond the problems listed here, there are a great many reasons why it might suck to work for the DHS. To begin with, the agency is actually a Frankensteinian monstrosity consisting of 22 agencies, all with their own ideas on how to run things and nearly all of them with their own sets of problems.

The DHS is in the (relatively) newly-minted business of securing the homeland against all comers -- mostly terrorists of the foreign and domestic varieties. Whether it's done out of paranoia or just the overwhelming need to look busy every time the national budget nears a vote, the DHS has gone overboard in its assessments of potential threats. The shorter of the two lists it has compiled by this point would be titled "Not Terrorists." Over the years, the DHS has conjectured that terrorists are hiding in food trucks, using hotel side entrances, exercising their First Amendment rights, possibly years away from graduating high school… etc.

The DHS also presides over the TSA, a security agency in name only that seems mostly interested in patting down mastectomy patients, running their brusquely officious hands over pre-teens, dumping breast milk and other "explosives precursors" into nearby garbage cans and feeling completely threatened by words printed in foreign languages.

Then there's ICE (with its own morale problems), the IP-focused Keystone Kops whose antics -- including yanking websites away from owners without a word of explanation and returning them years later without an apology, raiding lingerie shops for dangerously unlicensed panties, and struggling to come up with excuses for denying FOIA fee waiver requests -- are only outpaced by the imaginary rights vendettas of the City of London police.

That would be enough to depress anyone, especially the good employees who started out with ideals and enthusiasm but are now forced to answer question after question after question about why working for the nation's largest group of unhinged conspiracy theorists is a bit of a downer. The DHS has dumped a lot of money into divining the sources of its employees' unhappiness. But it seems more interested in spending money than fixing the problems.

The first study cost about $1 million. When it was finished, it was put in a drawer. The next one cost less but duplicated the first. It also ended up in a drawer.

So last year, still stumped about why the employees charged with safeguarding Americans are so unhappy, the department commissioned two more studies.

Yes, if anything's going to fix morale, it's going to be periodic questioning of employees who know their last several answers went completely ignored. Will the latest studies be titled "NO REALLY GUYS THIS TIME WE'RE LISTENING?"

To hear people like new DHS head Jeh Johnson tell it, the agency has never been more interested in improving morale.

Johnson and Deputy Secretary Alejandro Mayorkas have “personally committed themselves to improving the morale and workforce satisfaction across the Department of Homeland Security,” said Ginette Magana, a DHS spokeswoman. “They are directly engaging with employees, listening to their concerns, working diligently to improve employee recognition and training, and are focused on strengthening the skills and abilities of every employee.” She said the studies “comprise a first step in a comprehensive process dedicated to tangible results.”

Yeah, but what about all the other "first steps" currently tucked away in drawers, presumably still in mint condition? How many "first steps" and empty promises are DHS employees expected to suffer through before they finally wander away from the metaphoric disinterested, lying spouse they call an employer? "No, really. This time will be different, honey. I SWEAR."

As it stands now, DHS employees pretty much have to stick guns in their mouths before someone will start paying attention to their morale issues.

Three years ago, officials in the department’s office of health affairs, which provides expertise on national security medical issues, began to wonder about the health of one of their own programs. In response to low scores on the viewpoint survey, officials had set up a program, DHSTogether, aimed at making DHS “one of the best places to work in the Federal government.”

The DHS spent over a million dollars on yet another study to find out why this study-prompted "Togetherness" wasn't working. The National Academy of Sciences' Institute of Medicine arrived at this alarming conclusion.

The report, released in September 2013, concluded that DHSTogether had been starved of money and support from DHS leaders and devolved into little more than an ineffective suicide prevention program.

The DHS apparently didn't feel like talking anyone down, so it buried this report as well.

And the vicious cycle of studies will continue. On top of the two recently-commissioned studies, the agency plans to add a "follow-up" survey to its annual "viewpoint survey," and plans to follow up government contractor ICF's morale study with yet another study once that one's completed.

Clearly, bureaucracy -- especially the combined bureaucracy of 22 agencies forced by terrorism-driven, knee-jerk lawmaking to live together under one superagency's roof -- generates more questions than answers. And clearly, in the DHS's case, the questions are the only part that matters.

from the murder-death-kill dept

If you've been reading Techdirt for any decent length of time, you already know that the science behind whether violent video games cause real-life violence is hardly settled. With that said, it's also true that those making the claim are the ones who have to prove their case: the burden of proof on those who claim there is a link between real violence and gaming violence is much higher than on those of us who claim no link exists. So, when the most recent work and its researchers come out to again suggest that there is simply no link between violence and gaming, it's worth highlighting, particularly considering the antagonistic approach new, younger researchers are taking against the old guard and their reaching methodologies.

Stetson University psychologist Christopher Ferguson is one of the chief antagonists. In their drolly titled 2013 commentary, "Does Doing Media Violence Research Make One Aggressive?," Ferguson and his colleague, German researcher Malte Elson, invite readers to contemplate a thought experiment as a way to think about the plausibility of the "monkey see/monkey do" theory. "Take 200 children and randomize 100 to watch their parents viciously attack one another for an hour a day, the other 100 to watch a violent television program an hour a day," they suggest, "then assess their mental health after one month is over." Surely they are right when they assert that "to suggest the mental health outcomes for these children would be even remotely identical is absurd." As the thought experiment makes clear, ordinary folks do recognize that people, including children, can distinguish between real and fictional violence and will react accordingly.

The thought experiment reduces the violent media concept to an absurd level, surely, but that only serves in this case to highlight what the sandbagged claims of some researchers are attempting to hide: people are smarter than they're given credit for. The moment you acknowledge that even the youngest children can make distinctions between real-life violence and fictional violence, the game is almost entirely lost from the get-go. All that's left to do is to find that fictional violence doesn't also magically make children, or adults, violent in real life, and the game is a rout and we can all go home. If only there were some kind of published metric that would allow us to show that as violent media has become more prevalent, people have actually become less violent in real life.

In October 2014 the Villanova psychologist Patrick Markey and colleagues published a study comparing trends in onscreen violence to America's murder and aggravated assault rates between 1960 and 2012. They report that movie violence has dramatically increased in the past 50 years, and that depictions of gun violence in PG-13 movies have tripled in the last 27 years. Controlling for possible confounders such as age shifts, poverty, education, incarceration rates, and economic inequality, they report, "Contrary to the notion that trends in violent films are linked to violent behavior, no evidence was found to suggest this medium was a major (or minor) contributing cause of violence in the United States." In November 2014, the FBI reported that the violent crime rate has fallen by nearly 50 percent over the past 20 years.

Who wants to suggest that movies, games, and television were more violent fifty years ago? Yeah, I didn't think so. This is the point we've been making for years. Setting aside the single, percussion-like occasions when some horrific violent act occurs, like, say, Sandy Hook, where is all this violence? Movies and television have been getting progressively more violent as time has marched on, but violent crime keeps dropping. And, don't think I'm forgetting that video games are a more recent thing, compared with movies and television.

In the December 2014 Computers in Human Behavior, a team of researchers at the University of Queensland in Australia used the standard 15-minutes-of-play format widely adopted by video aggression researchers to assess whether playing ultra-violent, violent, and nonviolent video games had any post-play effect on two measures of pro-social behavior. In one, players are paid $5, asked to fill out a brief questionnaire about a local children's charity, and told they can donate some money on their way out. In the second, players are told that they are choosing the level of difficulty of a puzzle that another subject has to finish in a limited time in order to earn money. The hypothesis was that the more violent the game, the harder the puzzle and the lower the charitable donations would be. Instead, the researchers reported that there was no difference among the three groups with regard to pro-social behavior, although the players of the ultra-violent games did donate more. "There is now growing reason to suspect that playing violent video games does not impact prosocial behavior in a normal population," concluded the researchers.

Again, when the burden of proof is higher on the side making the claim that a link between violence and video games exists and the side claiming no link exists continues to bury them with good, solid data, it's probably time to give this up and move on to the next moral panic.

from the who-can-you-trust-if-you-can't-trust-the-phone-company dept

On the heels of Obama's surprise support of Title II-based net neutrality rules last month, we noted that the broadband industry's anti-Title II talking points (primarily that it will kill network investment and sector innovation) were not only just plain wrong, but also getting more than a little stale. That's a problem for the industry given the increasingly bipartisan support of real net neutrality rules and the groundswell of SOPA-esque activism in support of Title II. As such, the industry's vast think tank apparatus quickly got to work on new talking points to combat net neutrality rules that actually might do something.

The first product of this renewed effort is this study by the AT&T-funded Progressive Policy Institute. The study's central thesis is that if Title II net neutrality regulations are passed, the nation will be awash in $15 billion in various new federal and state taxes and fees:

"We have calculated that the average annual increase in state and local fees levied on U.S. wireline and wireless broadband subscribers will be $67 and $72, respectively. And the annual increase in federal fees per household will be roughly $17. When you add it all up, reclassification could add a whopping $15 billion in new user fees on top of the planned $1.5 billion extra to fund the E-Rate program. The higher fees would come on top of the adverse impact on consumers of less investment and slower innovation that would result from reclassification."
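The headline number is just per-subscriber fees multiplied across the country, so it's easy to sanity-check. The sketch below redoes that arithmetic with hypothetical round subscriber counts -- the counts are my own illustrative assumptions, not figures from the PPI study -- to show how a roughly $15 billion total falls out of the per-subscriber estimates:

```python
# Back-of-the-envelope aggregation of PPI's per-subscriber fee estimates.
# The per-subscriber fees come from the study quoted above; the subscriber
# and household counts are hypothetical round numbers for illustration only.
WIRELINE_FEE_PER_YEAR = 67      # PPI: annual state/local fees per wireline sub
WIRELESS_FEE_PER_YEAR = 72      # PPI: annual state/local fees per wireless sub
FEDERAL_FEE_PER_HOUSEHOLD = 17  # PPI: annual federal fees per household

wireline_subs = 90e6   # assumed, not from the study
wireless_subs = 100e6  # assumed, not from the study
households = 120e6     # assumed, not from the study

total = (WIRELINE_FEE_PER_YEAR * wireline_subs
         + WIRELESS_FEE_PER_YEAR * wireless_subs
         + FEDERAL_FEE_PER_HOUSEHOLD * households)
print(f"implied annual total: ${total / 1e9:.1f} billion")
# → implied annual total: $15.3 billion
```

The point is less the exact total than how directly it scales with the assumed per-subscriber fees: every worst-case assumption in the PPI model flows straight into the headline figure.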

Like a well-oiled machine, the cable, phone and broadband industry got to work pushing its study across all the major news outlets over the last week. AT&T CEO Randall Stephenson quickly took to NBC (skip to 1:05) to claim the average household broadband bill would increase by $19 a month under Title II (note amusingly that he got the study numbers his own company helped pay for wrong). The cable industry also not-so-subtly took to using this graphic in ads proclaiming Title II will result in vicious price hikes for everyone:

You'll probably be surprised to learn that the broadband industry had to resort to conflation, data cherry picking and a parade of worst-case scenarios to get these numbers. On the state level, Internet access has long received a Congressional exemption that's set to expire December 11 -- an issue totally unrelated to the Title II push. Congress can make sure the exemption is extended, keeping state sales taxes far away from broadband access. If they don't, again, it has nothing to do with Title II. Realize this, and nearly all of the PPI's estimate of $15 billion in new taxes as a direct result of Title II goes up in smoke right out of the gate.

In an e-mail conversation I had about the study with Free Press Research Director Derek Turner, he argued that PPI is also predicting a very worst-case scenario on federal taxation that's simply not going to happen:

"The FCC could decide to forbear from requiring federal USF contributions for one. And whether or not the FCC does that, adding broadband into the USF mix wouldn't impact the overall size of the fund. That is, if broadband revenues were assessed but the fund size stayed constant, consumers would pay on broadband but, as a result, they'd pay less on their other services like wireless and wired voice. PPI asserts that consumers would pay more on aggregate than they do now (i.e. by adding broadband to the mix, their numbers imply that the burden for the fund will shift towards consumers from businesses), but the report out today offers no explanation of why the contribution percentage would tilt that way."
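Turner's fixed-fund point is simple arithmetic: spreading the same fund over a larger revenue base lowers the contribution factor, shifting the burden between services rather than adding to it. A minimal sketch, using made-up round numbers (the fund size and revenue figures below are illustrative assumptions, not real USF data):

```python
# Illustrates the fixed-fund argument: if the USF's size stays constant
# while broadband revenues join the contribution base, the contribution
# factor falls, so consumers pay on broadband but less on voice.
# All dollar figures are hypothetical round numbers, in $ billions.
FUND_SIZE = 8.0           # assumed fixed fund size

voice_revenue = 50.0      # assumed assessable voice revenue
broadband_revenue = 50.0  # assumed broadband revenue

# Today: only voice revenues are assessed.
factor_before = FUND_SIZE / voice_revenue
# After reclassification: broadband joins the base, fund size unchanged.
factor_after = FUND_SIZE / (voice_revenue + broadband_revenue)

print(f"contribution factor before: {factor_before:.0%}")  # prints 16%
print(f"contribution factor after:  {factor_after:.0%}")   # prints 8%

# Total consumer burden is unchanged either way: it's always FUND_SIZE.
total_before = factor_before * voice_revenue
total_after = factor_after * (voice_revenue + broadband_revenue)
```

Under these assumptions the aggregate consumer burden is identical before and after; for it to rise, as PPI implies, the FCC would have to let the fund grow or tilt contributions toward consumers, which the report doesn't justify.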

Of course, the pretense that the broadband industry cares about how much your bills increase is also laughable, given the industry spends a large part of each day trying to figure out creative ways to pad your bill. This includes rate hikes, usage caps and a wide variety of fees imposed below the line to jack up the advertised rate post-sale. These fees range from entirely bogus, non-government-mandated "regulatory recovery fees" (pure-profit fees imposed to offset ambiguous government regulation despite a decade of deregulation) to new "broadcast TV fees" that simply bury a portion of programming costs below the line. They're all a variety of false advertising, but they highlight how the biggest increases to below-the-line charges and fees are coming from the industry itself.

The reality the broadband industry doesn't want to acknowledge is that very little changes for it under Title II if carriers aren't engaged in bad behavior. The broadband industry is fighting Title II solely to protect potential revenues generated from abusing uncompetitive markets. That this self-serving behavior is being dressed up as concern about the size of your broadband bill is the industry's best comedic work to date. Perhaps this slightly edited (by Mike) version of the NCTA ad is a bit more accurate:

from the good-to-see dept

For many years we've vocally criticized a very questionable line of argument made by various lobbyists and (tragically) the US Commerce Department: that if you look at so-called "IP-intensive industries" and see that those industries employ lots of people and are often profitable, it therefore means that stronger copyright, patent and trademark laws are good for the economy. There are all sorts of problems with this argument, highlighted simply by the fact that the single largest source of employment listed in one of these studies for an "IP-intensive" industry is grocery stores. It's somewhat comical to believe that grocery stores employ 2.5 million people because of trademark law.

When challenged on this, the US Commerce Department's incredibly weak defense of this kind of argument (i.e., "Steve Jobs had patents, Steve Jobs made cool things, ergo, patents are important to innovation") was certainly troubling.

Thankfully, a new report by Eli Dourado and Ian Robinson at the Mercatus Center at George Mason University does a nice job dismantling most of the claims in this series of reports that suggest that lots of jobs in "IP-intensive industries" automatically mean "strong intellectual property laws are good." The report starts out by simply highlighting the problem of generally using "jobs" as a proxy for "good for society." That's a fallacy, and often a problematic one for innovation (where disruption may initially destroy a bunch of jobs, but, in the long term, create many new opportunities).

Perhaps most fundamentally, jobs are not ends in themselves, and counting the number of jobs created is therefore not the best way to evaluate a policy. As Bryan Caplan notes, “Economists have been at war with make-work bias for centuries. [19th-century French economist Frederic] Bastiat ridicules the equation of prosperity with jobs as ‘Sisyphism,’ after the mythological fully employed Greek who was eternally condemned to roll a boulder up a hill.” Economic progress, Bastiat says, is defined by an increasing ratio of output to effort—indeed, economic nirvana is achieved when there is high output and zero labor effort.

Lawmakers could create jobs by requiring that construction projects be performed with spoons instead of shovels or tractors. Such a policy, however, would reduce worker productivity and decrease total economic output. Consequently, this spoon mandate would not promote economic progress.

Likewise, some of the jobs created by IP may harm the economy instead of helping it. Suppose IP laws necessitated that every firm hire 10 additional IP lawyers, but otherwise left output unchanged. IP could be said to create millions of additional jobs, but these would be jobs that reduced real output per worker, jobs that moved society further away from economic nirvana. They should be reckoned as economic costs of IP, not economic benefits. If (counterfactually) this were the only effect of IP, then abolition of IP would mean that the effort of the heretofore unproductively employed IP lawyers could be redirected to more productive uses.

But of course, the bigger issue, as we've outlined, is the silly argument that these jobs exist because of strict intellectual property laws. Not a single one of these studies has looked at how the jobs change with changes in the law. It simply is ridiculous to naturally assume that most of these jobs (like the grocery store point above) exist because of the current laws.

As a reductio ad absurdum, consider the blogging “industry.” As a matter of law, all authors are automatically, without registration or any other formal notice, bestowed with a copyright in their blog posts. Since the entire output of the blogosphere is copyrighted, under IPUSE’s methodology it would qualify as an IP-intensive industry (if it were considered an industry). Nevertheless, it seems clear that copyright protection accounts for at best a tiny sliver of bloggers’ output—the vast majority of blogs are accessible without a paid subscription, and many bloggers do not attempt to monetize their posts (with ads, say) at all.

If some industries resemble blogging—for example, if copyrights are automatically awarded but not relied on, or if patenting is done for primarily defensive purposes, or if trademarks exist but are rarely relied on by consumers—then IPUSE and the other reports that rely on simplistic counts of IP grossly overstate the number of jobs due to intellectual property. For these industries, IP intensity is not a reliable indicator of IP dependence.

On a similar note, for years we've pointed to CCIA's reports that did such a great job highlighting this fallacy. By using the identical methodology to define "fair use intensive industries," CCIA showed that, based on the copyright industry's own bogus methodology, fair use is clearly "more important" than copyright, since it employs more people -- and thus, if we were to believe the original reports, we should clearly expand fair use (massively, since it's so limited today). The whole point of the report was to mock the silly claims about "copyright intensive industries" -- and, amazingly, those who supported one report were horrified by the fair use report, attacking the methodology without the self-awareness to recognize they were mocking their own preferred methodology.

As the report also notes, many of these other reports on how "important" IP is assume that intellectual property is the sole incentive for these jobs and related innovations and progress. That's just silly. As the new report notes, there are lots of possible impacts that these studies don't even remotely account for:

As a general matter, intellectual property law can overprotect as well as underprotect. When it overprotects, it creates jobs without a corresponding increase in real output, it creates jobs by destroying other jobs that are not accounted for, and at the margin it accounts for very little of the actual output created by supposedly IP-intensive industries.

The report then goes on to explore each of the three key areas -- copyrights, patents and trademarks -- to show why the assumptions underlying many of the reports simply don't hold up under scrutiny. It's a useful addition to counteract the bogus studies that make the bogus correlation argument based on the broadly defined "IP-intensive industries." If I have one complaint about it, it's that the report doesn't go as far as I expected, based on the title: "How Many Jobs Does Intellectual Property Create?" When I started reading the paper, I expected an attempt to actually look for some sort of causal methodology to determine such a figure, rather than just a dismantling of the arguments of those other reports. It's still a useful bit of research and analysis, but the title overpromises a bit.

from the just-the-facts dept

One of the common mantras is that patents are indispensable, particularly for smaller companies, in order to prevent inventions from being appropriated. If that is true, then presumably innovative companies are patenting like mad in order to protect their inventions. But is that really the case? Since the necessity of patents is so "obviously" true, like so many other dogmas in the area of intellectual monopolies, people rarely look at the data to see whether it holds up. However, there is some research in this area, such as this 2012 paper from the UK, which explored the extent of patenting by companies over the last decade or so (pdf). Here are the main results:

One of the most puzzling findings in the empirical analysis of firms' patenting behaviour is the low proportion of patenting firms in the population of registered companies. Our investigation of this phenomenon in the UK finds that only 1.6% of all registered firms in the UK patent and that even among those that are engaged in some broadly defined form of R&D, only around 4% have applied for a UK or European patent during our period of analysis (1998-2006).

Perhaps famously "inventive" high-technology sectors employ them more than traditional markets? Or maybe this is just a UK thing? Well, yes, but only to a certain extent:

In our data, even in high-tech manufacturing sectors, which arguably produce the most patentable inventions, the share of patenting firms in the UK does not surpass 10%. Restricting the high-tech sector to R&D-doing firms that also innovate, the share of patenting firms increases only to 16%. Findings for the US are similar: Balasubramanian and Sivadasan (2011) find that only 5.5% of US manufacturing firms own a patent. Moreover, shares of patenting firms differ dramatically across sectors -- even within the manufacturing industry; for example, in the UK, manufacturing of chemicals and chemical products has a share of around 10% of patenting firms whereas publishing and printing has a share of only around 1%. This suggests that (a) some firms do not automatically patent all of their patentable inventions, (b) some firms avoid the patent system altogether, either because of its cost or because patenting is perceived to yield no additional benefit, and (c) some innovations involve inventions that are not patentable.

The rest of the paper explores the data in detail, and seeks to come up with some explanations as to why patents are not used as a matter of course. As you might expect, there's no simple answer; instead, it seems to be due to a complex mix of factors. But what is not in doubt is the fact that companies making things -- that is, those who aren't patent trolls -- do not regard patents as indispensable as some proponents would have us believe.