from the well-that's-convincing dept

We recently looked at the mysterious report coming out of Australia that suggested unauthorized file sharing was costing the economy $900 million per year. But after we dug into the details, it became clear that this number was a complete fabrication, as it applied the same totally debunked methodology used in last year's TERA report in Europe.

However, reader Ivan lets us know that the author of the Australian report, Emilio Ferrer, is now defending the report and its results. Except his defense seems to suggest he doesn't realize what everyone is complaining about. He focuses on the $900 million number, and notes that of course that's not the actual loss number, but says that's okay, because he also offered other "ballpark" figures and the actual number must be somewhere in there:

"I've applied the methodology to countries that yield the highest and lowest level of impact," Ferrer said.

"There are variables that are difficult to measure but whether you apply to the lowest country or the highest country the impact is between $500 million and $2 billion."

But Ferrer said such "technical" analysis was indicative only of a "ballpark" figure.

"When you apply business modeling and the average and end up concluding the impact was $900 million, of course the answer is not $900 million but we try to deal with that by looking at the range.

"The conclusion is there is a significant loss to economic activity and therefore jobs as a consequence of internet piracy."

But all of this ignores the main point: that the basic methodology he used for any of those calculations wasn't sound. People aren't complaining about the results. They're complaining about the methodology itself. And he doesn't seem to get that, because he never defends the methodology at all.

I mean, look, I could use some bizarre fantasy methodology to claim that every time someone falsely claims that file sharing is theft, a puppy gets kicked. And then, if people challenged me on that methodology, I could just say I didn't mean a puppy got kicked every single time, but that the methodology suggests somewhere between 0.5 and 2 puppies get kicked each time. But, of course, that's stupid. If the underlying methodology makes no sense, then the results make no sense. Giving a range doesn't make it any more justified to rely on a bogus and debunked methodology.