from the hope-it-works dept

We've written a few times about the patent troll "Personal Audio" which claims to hold a patent (US Patent 8,112,504) that it says covers podcasting. With this patent it's been busy shaking down the creators of a bunch of popular podcasts. The company's owner and his lawyer recently asserted (somehow, without laughing) that their patent was "the roadmap" for podcasting, even though no one involved in the early days of podcasting was even remotely aware of the patent.

It's taken some time, but the EFF has been targeting this particular patent, and has now officially challenged it, seeking to have it invalidated.

"Bad patents like this one slow down innovation—exactly the opposite of what the patent system was intended to do," said EFF Senior Staff Attorney Julie Samuels, the Mark Cuban Chair to Eliminate Stupid Patents. "We are thrilled to challenge this bad patent and make the world safer for creators and podcasters."

The process for challenging a patent like this is slow and convoluted, but hopefully this patent can be busted, and podcasters won't have to live in fear of a shakedown letter from Personal Audio.

from the 'targeted-killing' dept

At this point, both the NSA's extensive surveillance programs and the CIA's controversial drone "targeted killing" program (read: assassination program) are known and widely discussed. But the two had not been fully connected until now. There had been some reports from the various reporters who have access to the leaked Snowden documents that the NSA was involved, but the details are finally coming out. The first article comes via Barton Gellman at the Washington Post, detailing the NSA's extensive involvement in helping the CIA find targets to kill, including Hassan Ghul, who was taken out by a drone strike a year ago.

In the search for targets, the NSA has draped a surveillance blanket over dozens of square miles of northwest Pakistan. In Ghul’s case, the agency deployed an arsenal of cyber-espionage tools, secretly seizing control of laptops, siphoning audio files and other messages, and tracking radio transmissions to determine where Ghul might “bed down.”

The e-mail from Ghul’s wife “about her current living conditions” contained enough detail to confirm the coordinates of that household, according to a document summarizing the mission. “This information enabled a capture/kill operation against an individual believed to be Hassan Ghul on October 1,” it said.

The file is part of a collection of records in the Snowden trove that make clear that the drone campaign — often depicted as the CIA’s exclusive domain — relies heavily on the NSA’s ability to vacuum up enormous quantities of e-mail, phone calls and other fragments of signals intelligence, or SIGINT.

The NSA likes to talk about how its focus is on counter-terrorism operations in the form of finding out about potential terrorist activities in order to stop them. It likes to pretend that it isn't so involved in offensive actions. However, the reporting here suggests a different story altogether. The NSA is a key part of the assassination program.

While it may be a good thing to track down terrorists working to attack the US, the potential for abuse of these kinds of programs is serious. Once again, what becomes clear is that the NSA will apparently do everything possible to get access to the information it wants:

“But if you wanted huge coverage of the FATA, NSA had 10 times the manpower, 20 times the budget and 100 times the brainpower,” the former intelligence official said, comparing the surveillance resources of the NSA to the smaller capabilities of the agency's IOC. The two agencies are the largest in the U.S. intelligence community, with budgets last year of $14.7 billion for the CIA and $10.8 billion for the NSA. “We provided the map,” the former official said, “and they just filled in the pieces.”

In broad terms, the NSA relies on increasingly sophisticated versions of online attacks that are well-known among security experts. Many rely on software implants developed by the agency’s Tailored Access Operations division with code-names such as UNITEDRAKE and VALIDATOR. In other cases, the agency runs “man-in-the-middle” attacks in which it positions itself unnoticed midstream between computers communicating with one another, diverting files for real-time alerts and longer-term analysis in data repositories.

Through these and other tactics, the NSA is able to extract vast quantities of digital information, including audio files, imagery and keystroke logs. The operations amount to silent raids on suspected safe houses and often are carried out by experts sitting behind desks thousands of miles from their targets.

The reach of the NSA’s Tailored Access Operations division extends far beyond Pakistan. Other documents describe efforts to tunnel into systems used by al-Qaeda affiliates in Yemen and Africa, each breach exposing other corridors.
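The "man-in-the-middle" technique described above is simple to illustrate in miniature. Below is a purely illustrative Python toy (an in-memory "channel," not real networking, and nothing resembling the NSA's actual tooling): an interceptor positioned between two parties silently copies traffic while relaying it unmodified, so neither endpoint notices anything amiss.

```python
class Channel:
    """A toy communication channel: delivers messages via a handler."""
    def __init__(self, handler):
        self.handler = handler

    def send(self, message):
        return self.handler(message)


def make_interceptor(real_handler, log):
    """Wrap a channel handler so every message is copied before delivery.

    Neither endpoint can tell the difference: the reply comes back
    unchanged, but the interceptor keeps a full transcript.
    """
    def intercept(message):
        log.append(message)           # silently record the traffic
        return real_handler(message)  # relay unmodified to the real recipient
    return intercept


# Alice thinks she is talking directly to Bob...
def bob(message):
    return f"Bob received: {message}"

transcript = []
# ...but the channel has been quietly rewired through an interceptor.
alice_to_bob = Channel(make_interceptor(bob, transcript))

reply = alice_to_bob.send("meet at noon")
print(reply)        # -> Bob received: meet at noon
print(transcript)   # -> ['meet at noon'] -- the eavesdropper's copy
```

The point of the sketch is the asymmetry: the conversation works exactly as the participants expect, which is what makes the interception invisible.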

It appears that the attacks are quite effective as well:

The operations are so easy, in some cases, that the NSA is able to start downloading data in less time than it takes the targeted machine to boot up. Last year, a user account on a social media Web site provided an instant portal to an al-Qaeda operative’s hard drive. “Within minutes, we successfully exploited the target,” the document said.

Now, to some extent, you can argue that these are exactly the kinds of activities we'd expect the NSA to undertake: breaking into terrorists' communications in order to track them down. But, as the report also notes, the main operative caught through this system, Ghul, was actually in CIA custody for years before being released... only to have the NSA then go through this elaborate process to re-find him and take him out with a drone.

Oh, and not just take him out... but then use the NSA to find out for sure that he was dead:

Even after Ghul was killed in Mir Ali, the NSA’s role in the drone strike wasn’t done. Although the attack was aimed at “an individual believed to be” the correct target, the outcome wasn’t certain until later when, “through SIGINT, it was confirmed that Hassan Ghul was in fact killed.”

The NSA and its supporters will undoubtedly spin this to show how good it is that the NSA has these kinds of capabilities, allowing them to track down and dispatch terrorists. But it remains concerning how this level of spying and power (all the way down to assassinations) can easily be combined and used in ways that are even more questionable.

from the urls-we-dig-up dept

Cars used to be fairly simple mechanical devices that gave drivers the freedom to zip around a city, but now cars are much more technologically advanced gadgets -- getting smarter and connecting with all kinds of other things (e.g., sensors, phones, other cars, the internet). Pretty soon, cars could become our artificially intelligent personal servants, helping us out with our daily tasks like KITT, but without the turbo boost. Here are just a few steps toward every driver getting their own car sidekick.

from the no-that's-not-scammy-at-all... dept

Over the past few months, we've written about how patent troll Lodsys sued app developer Todd Moore because he called Lodsys a "patent troll." Stunningly, Lodsys' lawyer more or less admitted this to Moore's lawyer, which allowed Moore to hit Lodsys with an anti-SLAPP motion for trying to stifle free speech. Lodsys quickly ran away, as it does semi-frequently. If you don't recall, Lodsys, which got some patents from Intellectual Ventures (though it's unclear if IV still gets a cut of any proceeds), likes to claim that a very large percentage of mobile apps infringe on its patents, and has spent the past few years sending demand letters to shake down app developers.

Recently, at a lunchtime talk in DC, Todd Moore spoke about the legal battle with Lodsys, and apparently added a fascinating tidbit that I had not seen reported anywhere before. As reported by Rob Pegoraro, who was in attendance, Lodsys apparently demanded that Moore pay up via a Swedish bank account:

Moore noted the fundamental asymmetry of patent trolling, saying he could only fight because he had pro bono representation: "Most people who can't get a free lawyer like me will settle."

He also described some blunt bargaining by Lodsys--"how much will you give us so we go away?"--with the funds to be deposited in a Swedish bank account to avoid U.S. taxes.

So not only is Lodsys demanding payment over a highly questionable patent -- and sometimes going after people for exercising their free speech rights -- but, according to Moore's statements at this event, the company may also be trying to dodge US taxes to boot. It makes you hope that Lodsys is on the list of 25 companies that the FTC plans to investigate in detail to understand how they operate.

from the but-that-won't-happen dept

There's been some recent chatter over a Reuters report highlighting that both of the top two officials at the NSA, director Keith Alexander and deputy director Chris Inglis, are retiring in the next few months. Lots of people are misreading this, believing that this is something new, and suggesting that both were either pushed out, or are doing this in response to all of the Snowden revelations. That's simply not true. Alexander's retirement has been widely reported since at least June (and has been covered in a number of other publications as well). Both retirements were planned long ago, and appear to be exactly on schedule, rather than as any reaction to things happening in the news.

This is unfortunate, as it really does seem like there should be some punishment for the widespread excesses and abuses that have been revealed by Snowden. However, what is important to recognize is that this does present a real opportunity for the President to reshape the NSA. It seems unlikely that this will happen, but the President has said that he wants to rebuild the trust of Americans in the NSA and the wider intelligence community, and the choices he makes for who will lead the NSA are a real opportunity to at least take a step in that direction. No one actually expects him to, say, pick a civil liberties activist, but there are people out there who have experience in the intelligence community and who also have shown a respect and appreciation for privacy and civil liberties. Furthermore, finding someone who can present the case for reform -- one which recognizes that "collect it all" is not just bad policy, but bad for actually finding useful information -- would be a big step forward.

from the no-expectation-of-privacy-and-no-right-to-sue dept

Privacy activist group EPIC has taken a novel approach to challenging the NSA's bulk records collection. Rather than working its way up through the circuit courts, it has appealed to the Supreme Court directly, asking it to find that the NSA has exceeded its authority by collecting data on American citizens.

Arguing that no lower court would have the authority to rule upon the legality of that FISC order, EPIC took its plea directly to the Supreme Court. Its filing in July asked the Court to rule that the FIS Court has wrongly claimed authority for its global data-gathering under a 2001 federal law. That law gave the FIS tribunal the power to issue electronic surveillance orders to produce "tangible things" during an investigation of potential threats to national security.

EPIC asked the Supreme Court either to vacate the FIS Court order to Verizon or to bar its further enforcement, contending that the compelled "production of millions of domestic telephone records . . . cannot plausibly be relevant to an authorized investigation" of potential terrorist activities.

The government has filed a brief arguing that EPIC's complaint should be routed through lower courts first. The government's rebuttal leans heavily on procedural arguments, first pointing out that only the federal government itself or the entity receiving the FISC orders can challenge these orders. In addition, the government points out that the law creating the FISA Court does not provide protection to third parties like EPIC.

It also argues (as it has successfully in the past) that EPIC can't prove it has suffered harm from the collection of its phone data.

Further, the government contended, EPIC has not offered proof that it could satisfy the requirements of the Constitution's Article III as a party with a specific claim to an injury as a result of government action.

Notably, the government isn't arguing that EPIC can't prove its metadata was obtained. Snowden's first leak eliminated that issue. Instead, it's arguing that no citizen or entity other than the entity the records were obtained from has standing to sue or otherwise challenge FISA court orders.

But the government has gone even further, playing both sides of the issue in order both to continue acquiring the bulk records and to prevent anyone from challenging the collection. The government wants to enjoy all of the benefits of the bulk collection without suffering any of the drawbacks. So far, this has paid off. But its arguments are inconsistent (to put it mildly), and a recent court case involving a convicted terrorist may test their limits.

The government's response (PDF), filed on September 30th, is a heavily redacted opposition arguing that when law enforcement can monitor one person's information without a warrant, it can monitor everyone's information, "regardless of the collection's expanse." Notably, the government is also arguing that no one other than the company that provided the information—including the defendant in this case—has the right to challenge this disclosure in court.

The FISA court has agreed with at least part of this, stating that rights do not suddenly appear because a collection deemed legal for one person (like phone metadata) is used to gather data on many people.

The government's opposition to a new trial relies heavily on a recently declassified opinion from the Foreign Intelligence Surveillance Court, which concluded that "where one individual does not have a Fourth Amendment interest, grouping together a large number of similarly situated individuals cannot result in a Fourth Amendment interest springing into existence ex nihilo."

But that same argument should work against the government's claim that no single person or entity (other than the company handing over the data) has standing, especially in this case. Just as certainly as rights do not "spring into existence," standing doesn't suddenly disappear because the collection is untargeted. If the government can use the argument that a collection of millions of records is no more illegal than the collection of a single person's records, then it would seem reasonable that every person who "provides" these collectible records to third parties would have standing to challenge these disclosures.

What the government is doing in the case of Basaaly Moalin -- the convicted terrorist mentioned above -- is highly hypocritical.

The government has always argued that there's no reasonable expectation to privacy in information handed to a third party like your phone or Internet provider, commonly referred to as the "third-party doctrine." But [EFF staff attorney Hanni] Fakhoury says that in this case, the government is taking an even more aggressive stance. In essence, its argument is that "these records aren't even Moalin's to begin with so he can't complain."

Fakhoury disagrees "with the idea that the user has no standing to challenge the use of evidence that says something about him" and thinks the government undermines its own argument about who has standing to contest the evidence. "[T]hey want to use the phone records to prove a fact about Moalin but then claim that these records aren't his."

The government needs this win very badly, as it's using Moalin's case to prove the necessity of the Section 215 bulk records collection. But it wants to do so by arguing that someone who can assert direct harm from this collection (Moalin is in jail, after all) doesn't have standing.

The government wants an unchallenged bulk collection and is throwing down every argument it can in order to head off possible challenges, either to the collection itself or to the evidence it provides. The end result is a very thorough abuse of the Third Party Doctrine that, so far, has allowed intelligence agencies to reap all the benefits and suffer none of the consequences. If the government wants to argue that collecting from everyone is no different than collecting one person's records, then it shouldn't be able to turn around and claim that no one has standing to challenge the collected data -- either as evidence or on constitutional grounds.

from the government-induced-xenophobia dept

One of the three co-creators of the RSA algorithm, Adi Shamir (the "S" in RSA, and an Israeli citizen), has been effectively locked out of attending the NSA-sponsored Cryptologic History Symposium, thanks to a combination of bureaucratic inefficiency and the US government's ongoing paranoia about all things terrorism.

I needed a new J1 visa, and I filed the visa application at the beginning of June, two and a half months before my planned departure to the Crypto conference in mid August. I applied so early since it was really important for me to attend the Crypto conference – I was one of the founders of this flagship annual academic event (I actually gave the opening talk in the first session of the first meeting of this conference in 1981) and I did my best to attend all its meetings in the last 32 years.

Despite this early start, it took four months before the visa was finally stamped in his passport -- on September 30th, to be exact, narrowly beating the government shutdown that would likely have prevented his visit entirely.

And it wasn't just Shamir who had difficulty securing a visa. The US government's reluctance to approve visas for certain people appears to have had a deleterious effect on other foreign scientists as well. Shamir quotes a letter from the head of Israel's Weizmann Institute of Science stating that more scientists are choosing to "opt out" rather than deal with the laborious approval process.

“I’m allowing myself to write you again, on the same topic, and related to the major difficulties the scientists of the Weizmann Institute of Science are experiencing in order to get Visa to the US. In my humble opinion, we are heading toward a disaster, and I have heard many people, among them our top scientists, saying that they are not willing anymore to visit the US, and collaborate with American scientists, because of the difficulties. It is clear that scientists have been singled out, since I hear that other ‘simple citizen’, do get their visa in a short time.”

After Shamir's paper was accepted by the symposium, he contacted the NSA, hoping that it could intervene to get his visa approved in time to make the conference.

In July 2013 I told the NSA-affiliated conference organizers that I was having some problems in getting my visa, and gently asked whether they could do something about it. Always eager to help, the NSA people leaped into action, and immediately sent me a short email written with a lot of tact:

“The trouble you are having is regrettable…Sorry you won’t be able to come to our conference. We have submitted our program and did not include you on it.”

Such helpful folks at the NSA. "That sucks for you. We'll just cross your name off the list." Shamir says he's never seen one of his accepted papers treated so cavalierly in his 35 years of attending conferences. (I would imagine he himself hasn't been treated that cavalierly either.) Perceiving this to be a dead end (and not feeling like attending an event where it seemed he "wasn't wanted"), Shamir scheduled an appearance at MIT -- only to be contacted far too late to change plans with a "reinvitation" to the NSA-sponsored event.

Shamir is clearly irritated by this lumbering bureaucracy and a visa process that has succumbed to intelligence agency/administration paranoia that clearly perceives foreigners, especially those in scientific fields, to be a "threat," rather than the non-harmful, non-dangerous human beings they are. If active terrorists only make up a very slim percentage of the world's population, why does the government continue to treat a large percentage of certain non-US citizens as potential threats?

Shamir's final paragraph takes aim at the painful visa process and adds a hilarious slam against the "dangerous foreigners" mentality that overrides logic and common sense in certain government agencies.

Clearly, no one in the US is trying to see the big picture, and the heavy handed visa bureaucracy you have created seems to be collapsing under its own weight. This is not a security issue – I have been to the US close to a hundred times so far (including some multi-year visits), and had never overstayed my visas. In addition, the number of terrorists among the members of the US National Academy of Science is rather small. As a friend of the US I am deeply worried that if you continue to delay visas in such a way, the only thing you will achieve is to alienate many world-famous foreign scientists, forcing them to increase their cooperation with European or Chinese scientists whose countries roll the red carpet for such visits. Is this really in the US best interest?

Shamir's criticism is dead-on. The overriding mentality post-9/11 throws everything out and starts at square one, even if it's someone like Shamir, who has visited the country hundreds of times. Apparently, every time someone visiting on a visa exits the country and returns to their homeland, they're opening themselves up to radicalization by our nation's unquantifiable and unverifiable "enemies." The post-9/11 climate of fear doesn't allow anyone to build up a track record of successful, peaceful, non-terrorist-related visits to the US. It's a blank slate every time.

As Shamir points out, the long-term repercussions of this mindset will be a reduction in cooperation and shared knowledge which will likely result in the US falling behind other countries in terms of technological and scientific advancement. Just as certainly as we view certain bigot-heavy areas of our country as "backwards," our country's xenophobic, supposedly "anti-terrorist" policies will soon see our country viewed as the world's Birmingham, Alabama.

from the and-the-problem-with-existing-laws-is-what? dept

Two alleged cyberbullies have been arrested in Florida following the suicide of 12-year-old Rebecca Sedwick, but not as a result of hastily erected cyberbullying laws. Not that Lakeland could have been blamed for rushing some legislation into existence. The trigger for the arrests was reportedly a callous Facebook post in which one of the girls, 14-year-old Guadalupe Shaw, admitted to bullying Sedwick.

On the strength of that post, Shaw was arrested and is facing charges for "felony aggravated stalking." Another unnamed 12-year-old was picked up and is facing the same charge.

Officials have presumably secured previous posts from the two arrestees aimed at Sedwick that justify the felony charges. This message alone would not qualify as a felony or misdemeanor.

(2) A person who willfully, maliciously, and repeatedly follows, harasses, or cyberstalks another person commits the offense of stalking, a misdemeanor of the first degree, punishable as provided in s. 775.082 or s. 775.083.

(3) A person who willfully, maliciously, and repeatedly follows, harasses, or cyberstalks another person and makes a credible threat to that person commits the offense of aggravated stalking, a felony of the third degree, punishable as provided in s. 775.082, s. 775.083, or s. 775.084.

The difference between the two is the existence of a "credible threat." Messages sent to Sedwick from other students -- some from Shaw herself -- included phrases like "You should die" and "Why don't you go kill yourself."

Witnesses told investigators that Shaw harassed Sedwick by calling her ugly, told her to "drink bleach and die," and suggested that she should kill herself.

Prosecutors hoping to make these charges stick may have trouble turning suggestions into threats, but as stated earlier, there may be additional posts, not yet made public, that contain actual threats. Even so, the felony charge is only third degree, one step up from a misdemeanor.

But Shaw's parents told ABC News Tuesday that they regularly look at their daughter's account — and would never allow her to write anything so vile.

"I would check her Facebook every time she would get on it," said Shaw's mom, who wasn't identified.

"If we saw something that was not right, we would've addressed it and it would've ended right then," her dad added.

Shaw’s attorney also said the girl denies stalking Sedwick, a former classmate at Crystal Lake Middle School in Lakeland, and isn’t responsible for the online message.

So, there's that. The claim seems to be a bit unlikely, especially if investigators have collected posts made from the account over a lengthy time period. Even if Shaw didn't make the post that resulted in her arrest, it's going to be a stretch to assert that all negative posts directed at Sedwick from her account were a result of hacking. The latest post was the trigger for law enforcement, but the aggravated stalking charge relates to "repeated" actions. Even so, it's still an obstacle prosecutors will need to surmount.

That the teens are being charged under existing law indicates the "need" for separate cyberbullying laws is overstated. Existing statutes are capable of addressing the most harmful bullying behavior. Cyberbullying laws, at least to date, have tended to swap targeting clearly criminal behavior for targeting merely unpleasant behavior, not all of which is bullying and much of which is protected expression.

That the outcome of this will be unsatisfactory for those seeking justice for Sedwick's suicide (most likely no time served for either arrestee) doesn't prove the existing laws don't go far enough. As tragic as Sedwick's suicide was, and as reprehensible as the behavior was that led to it, attempting to prosecute people for someone else's choice is a problematic area that no legislator should be willing to rush into. But so many have, applying heated emotions to a process that tends to result in many harmful unintended consequences if not dealt with rationally.

The W3C has decided that "content protection" for Web video falls within the scope of its HTML Working Group. This means the controversial Encrypted Media Extensions (EME) proposal will continue to be part of that group's work product, and may be included in the W3C's HTML5.1 standard. If EME goes on to become part of a W3C recommendation, you can expect to hear DRM vendors, DRM-locked content providers like Netflix, and browser makers like Microsoft, Opera, and Google claiming that they can now offer W3C standards-compliant "content protection" for Web video.

The EFF offers a chilling glimpse of where EME could take us:

A Web where you cannot cut and paste text; where your browser can't "Save As..." an image; where the "allowed" uses of saved files are monitored beyond the browser; where JavaScript is sealed away in opaque tombs; and maybe even where we can no longer effectively "View Source" on some sites, is a very different Web from the one we have today. It's a Web where user agents – browsers -- must navigate a nest of enforced duties every time they visit a page. It's a place where the next Tim Berners-Lee or Mozilla, if they were building a new browser from scratch, couldn't just look up the details of all the "Web" technologies. They'd have to negotiate and sign compliance agreements with a raft of DRM providers just to be fully standards-compliant and interoperable.

Rather ironically, given that EME may well lead to the official closing-down of much of the open Web, Tim Berners-Lee recently wrote an article entitled "The many meanings of Open," which includes the following section:

The W3C community is currently exploring Web technology that will strike a balance between the rights of creators and the rights of consumers. In this space in particular, W3C seeks to lower the overall proprietary footprint and increase overall interoperability, currently lacking in this area.

Techdirt readers will immediately recognize the framing here: people who use the Web are either active "creators" or passive "consumers." Since the needs and desires of those groups are in opposition, somehow they have to be "balanced." This is exactly how the copyright industry presents the online world when it demands rights that can be used against the public as part of that "balance." It's curious to see Berners-Lee adopt this formulation to justify putting DRM into HTML5. The same thinking came up in a W3C blog post that Berners-Lee wrote around the same time:

So we put the user first, but different users have different preferences. Putting the user first doesn't help us to satisfy users' possibly incompatible wants: some Web users like to watch big-budget movies at home, some Web users like to experiment with code. The best solution will be one that satisfies all of them, and we're still looking for that. If we can't find that, we're looking for the solutions that do least harm to these and other expressed wants from users, authors, implementers, and others in the ecosystem.

Again, there is the idea that the desires of people who want an open Web -- the ones who want to "experiment," by examining the underlying HTML code, say -- and those who want to watch "big-budget movies at home" are somehow in opposition and have to be "balanced." That's nonsense. The standards currently underlying the Web are open, with no direct support for DRM -- although companies can and do add it in various non-standard ways. And people are already able to watch films at home, so there is no need to destroy the open Web to make that possible. Elsewhere in the same post we learn perhaps the real reason why Berners-Lee and the W3C want to take this step:

if content protection of some kind has to be used for videos, it is better for it to be discussed in the open at W3C, better for everyone to use an interoperable open standard as much as possible, and better for it to be framed in a browser which can be open source, and available on a general purpose computer rather than a special purpose box. Those are key arguments for the decision that this topic is in scope.

Leaving aside the dubious initial premise -- there is no evidence that DRM is necessary, and Apple's decision to drop it for music indicates quite the contrary -- this suggests that going along with demands for adding DRM to HTML5 is about the W3C's fear of becoming marginalized by Hollywood studios as they make more of their films available online.

Perhaps the W3C should worry less about its own position and more about the users it claims to put first. After all, the net effect of creating an official standard for interoperable DRM will be to make it easier for copyright companies to adopt it -- there won't even be the present barriers and friction caused by incompatible ad hoc systems that might make them think twice about adding it. Instead, DRM is likely to become the default on most online products, placing more obstacles in the way of users' fair-use rights, particularly users who are visually impaired, who will find it harder to access these materials at all if such DRM becomes commonplace.

from the stop-blaming-others dept

Just last month, we talked about two studies released by Hollywood. The first was one from the MPAA itself, which (as happens all too often) focused heavily on blaming Google for its supposed problems in monetizing the online world. It used some highly questionable methodology to suggest that people doing searches ended up downloading unauthorized flicks. This was released in coordination with a Congressional hearing on copyright, in which the message pushed by the MPAA's sister organization, the RIAA, was "what we really need is for Google to help us return to our former glory." Oddly, that same day, NBC Universal released the second version of its own "piracy" study, which aimed to show how "big" the problem was. However, as we noted with both this and the original version of that study, when you look at the data, it shows pretty clearly that the "problem" is one that Hollywood has made for itself. That is, when good, convenient and reasonably priced offerings hit the market, an awful lot of video watching moved to those authorized offerings. It's when those offerings were missing entirely that the amount of unauthorized access seemed to shoot up.

Inspired, in part, by the thinking about these studies and claims, Jerry Brito from the Mercatus Center teamed up with Eli Dourado and Matt Sherman to launch a new site called piracydata.org, which attempts to collect (and visualize and -- most importantly -- make available) data that shows whether or not the most "pirated" works each week are available for legal access. It's still at a small sample size so far, but the initial results don't speak well to Hollywood's claims that it's adapted to the digital era.

The Walking Dead was pirated 500,000 times within 16 hours despite the fact that it is available to stream for free for the next 27 days on AMC's website and distributed in 125 countries around the world the day after it aired. Our industry is working hard to bring content to audiences when they want it, where they want it, but content theft is a complex problem that requires comprehensive, voluntary solutions from all stakeholders involved.

Now, if you're not the MPAA and so tied up and confused by unauthorized access, you might look at that information and realize that putting it on AMC's website was probably the mistake. That's not where people look for stuff these days. Yes, they made it free, but they didn't make it convenient, meaning they didn't put it in a form that some portion of consumers want, and watching it directly on AMC's website appears to take a large group out of their natural flow. That's something that Hollywood could learn from, but it never does. It just points the blame finger.

However, the data continues to be fairly overwhelming that the "piracy problem" is a problem of Hollywood's own making. It could solve it if it wanted to, by focusing on making more content more widely available, in more convenient ways and at better prices. Yet, instead, it wants to blame everyone else and order them around to "fix" a problem of its own making.

from the most-massive-wrist-slap-to-date dept

A year-long review of a police shooting in Cleveland has finally concluded. The investigation stems from a police pursuit late last year that resulted in the deaths of both suspects in the vehicle, who were on the receiving end of 137 bullets fired by Cleveland police officers.

A state investigation previously concluded there was a systemic "refusal to look at the facts," and handed the case over to prosecutors. In August, East Cleveland's mayor said prosecutors were considering filing charges against the cops involved in the shooting, but as of this month the shooting is still being investigated.

Both suspects were killed by the barrage of gunfire. The driver, Timothy Russell, was shot 23 times. His passenger, Malissa Williams, was shot 24 times. No weapons or casings were found inside the vehicle.

The chase began when an officer thought he heard gunfire coming from Russell's car. Another witness on the scene thought it might have just been the vehicle backfiring. Either way, it led to a 23-minute chase involving five dozen police vehicles and nearly 100 officers and supervisors. Both suspects had criminal records, which may have influenced their decision to flee.

The pursuing police were ordered to stop by their supervisors but overrode this decision because they thought a police officer had been wounded. In order to right this perceived wrong, officers chased Russell at speeds of up to 120 mph before stopping him in a middle school parking lot. Thirteen officers then fired 137 shots, a majority of them in just over 20 seconds.

An initial review of the chase found 75 patrol officers violated orders, but the disciplinary hearings reduced that number to 64 officers. All but one received a suspension, with the longest being 10 days, McGrath said.

None of the violations was so serious it warranted termination. Some of the officers received a written warning.

Police previously announced punishments for 12 supervisors stemming from the chase. One sergeant was fired. A captain and lieutenant were demoted, and nine sergeants were suspended.

Additional charges most likely await the thirteen officers who fired 137 shots into a single vehicle, including one officer who managed to squeeze off 49 rounds in less than 20 seconds. The DOJ's investigation also hangs overhead, but it could be another year or two before it reaches any conclusions. What's been handed down so far barely amounts to a slap on the wrist for the 63 officers being punished. The maximum suspension is only 10 days. Their supervisors appear to have fared worse, with one firing and two demotions.

The police officers' union has (of course) defended the actions of the thirteen shooters.

The union has said the shootings were justified because the driver tried to ram an officer.

One wonders if the union feels every bullet fired was "justified" or just the 47 kill shots. One also wonders how many stray shots (with only about a third of the shots hitting the targets) went wandering into the nearby neighborhood. The state AG's animated reconstruction (above) indicates some remedial gun safety training might be wise, as the officers form (more than once) a semi-circle, firing shots in the direction of each other. (That this hail of gunfire took place at night made it even more dangerous for everyone involved.)

For the rank-and-file, the punishments being handed down are too light to discourage insubordination and unsafe pursuits in the future. For some cops, ignoring supervisors' orders in order to "avenge" one of their own is always justified, and any resulting punishments are worn as badges of honor. But make no mistake: this pursuit wasn't about justice or any higher duty. It was a squad of officers looking to exact revenge as self-appointed judges, jurors and executioners. Nothing else explains the massive number of shots fired or the dozens of officers facing (minimal) suspensions for directly disobeying orders.

from the 1.-Eliminate-revenue-stream-2.-Gripe-about-it-3.-???-4.-Profit! dept

David Byrne, former lead singer of the Talking Heads, has pulled "as much of his catalogue" as he can from Spotify. Why? Because it's the thing to do these days. Abused math and sins of omission have led to headlines declaring Spotify to be the worst ripoff since the major labels, paying only pennies for millions of plays. Many artists have done the same. Some are insulted by the low payouts. Others believe streaming will cannibalize their sales.

Byrne's editorial for the Guardian names a few of these artists -- the Black Keys, Aimee Mann, Thom Yorke, etc. These artists have withheld their music from the super-popular streaming service simply because they've deemed the payout too low, and the risk of losing sales too high, to take part in the way people listen to music today. It's a rather strange about-face for Byrne, who previously lauded Radiohead's pay-what-you-want experiment and other efforts along the same lines.

Along the way to what I feel is the simplest, most succinct point to be made in this Spotify "debate," Dave Allen also points out how Byrne (along with Thom Yorke and others) is clouding the issue by couching the discussion of dispassionate themes (economic and technological shifts) in emotional language ("fairness," "ethical internet"). Critics of Spotify insist the royalty payouts are too low -- proof, supposedly, that the streaming service is evil -- despite the fact that these payouts amount to 70% of Spotify's revenue.

As for the decline of the recording industry -- which Allen takes pains to point out is not the same thing as the "music industry" -- it's been a long time coming. This doesn't mean music is dying -- only the industry that attached itself to musicians in a remora-like fashion and sucked as much income out as possible over the past several decades is dying. As Allen puts it in his post, the recording industry simply made it possible for artists to "pay off the mortgage, but never own the house" by providing advances in exchange for copyright control.

The industry was set up to fail -- an untenable construct that began to disintegrate upon the first sign of friction. What the industry considers to be right and fair and normal -- selling music to make money -- is nothing more than a blip on the timeline, as no less an old-school artist than Mick Jagger has stated.

[T]here was a small period from 1970 to 1997, where people did get paid, and they got paid very handsomely and everyone made money. But now that period has gone.

So if you look at the history of recorded music from 1900 to now, there was a 25 year period where artists did very well, but the rest of the time they didn't.

But here's the ultimate point:

One thing is certain: When artists remove their music from Spotify they are simply ensuring that they will receive zero royalties from that service. They will also ensure that they are not in a service that provides massive distribution of their work that is not a walled garden like FM radio is. And remember, not all artists are popular therefore not all artists receive the same amount of royalties from airplay or from streaming services. It is worth noting that Spotify has one billion playlists created by users. Musicians are not the only creators. Internet users most likely make far more content to post to the Web for free than all musicians combined. It’s a societal phenomenon that can’t be denied.

The first sentence is key. As has been stated here before, you cannot achieve a positive result simply by removing a negative. Labels and movie studios may spend tons of money fighting piracy, but that doesn't budge the needle towards a purchase. If millions of people are happy "renting" their music through streaming services (or YouTube), you can't push them towards a purchase by removing your music. They'll likely just find someone else to listen to, and when that artist tours or runs a Kickstarter or whatever, it's the artists they've been listening to that will receive that additional support.

All these artists are doing is shutting down a revenue stream under the mistaken impression that they'll pick up the money elsewhere. It may only be pennies, but it's pennies they don't need to lift a finger to collect. For every artist that has pulled their music from Spotify and pointed to first week sales as "proof" that ditching the world's most popular streaming service "works," there's another list of artists that have sold just as much without resorting to cutting out streaming revenue.

Giving music fans fewer reasons to use Spotify is also short-sighted. As a streaming service, its upside is limited only by the number of users. If enough artists pull out, the service loses some of its ability to attract users. Artists should want millions more to join, as that's the most efficient way to increase royalty payouts. More users means more money. If Spotify manages to find a way to attract more paying users, the payouts will grow accordingly.
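As a rough illustration of that arithmetic -- a sketch using the commonly described pro-rata payout model, where the only figure taken from the article is the ~70% of revenue that goes to royalties, and every other number is made up -- an artist's check scales directly with the size of the paying user base:

```python
# Hypothetical figures, purely illustrative of the pro-rata streaming model.
def artist_payout(monthly_revenue, artist_streams, total_streams,
                  pool_share=0.70):
    """Royalty pool (~70% of revenue) split by each artist's share of streams."""
    pool = monthly_revenue * pool_share
    return pool * artist_streams / total_streams

# Same artist, same 0.1% share of all listening. If the paying user base
# doubles (doubling both revenue and total streams), the artist's check
# doubles too -- more users really is more money.
before = artist_payout(10_000_000, 2_000_000, 2_000_000_000)
after = artist_payout(20_000_000, 4_000_000, 4_000_000_000)
assert abs(after - 2 * before) < 1e-6
```

The per-stream rate stays flat in this sketch; it's the total pool, and with it each artist's cut, that grows as paying users join.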

Cutting Spotify out doesn't make sense, no matter how small the checks are. You can't "force" sales, especially not when your attitude fails to sync with a majority of your potential customers.

[Bonus: here's some Shriekback for your listening pleasure, just in case your estimation of Dave Allen needed to be increased...]

from the a-failure-of-knowledge dept

While there's plenty of attention being paid to Lavabit's temporary re-opening for the sake of letting people export their accounts, a much more interesting issue is the recent development in the legal case. Lavabit has filed its latest brief, and it raises some interesting discussions about the details of the case. From my reading, Lavabit makes a very strong argument that the government has no right to demand the production of Lavabit's private SSL keys, as it's an overreach far beyond what traditional wiretapping laws allow. Lawyer Orin Kerr's analysis argues that Lavabit's case is weak, mainly contending that the federal government can subpoena whatever it wants, and if that conflicts with your business model: too bad. Lavabit argues that complying with the government's order is oppressive because it would effectively mean committing fraud on all its customers:

[T]o comply with the government’s subpoena would have either required Lavabit to perpetrate a fraud on its customer base or shut down entirely. That is the key point, and the resulting harm goes far beyond a mere inconvenient search for records. Just as requiring a hotel owner to install glass doors on all its hotel rooms would destroy the hotel’s business, Lavabit cannot exist as an honest company if the government is entitled to take this sort of information in secret. Its relationship with its customers and business partners depends on an assurance that it will not secretly enable the government to monitor all of their communications at all times. If a mere grand jury subpoena can be used to get around that (in secret, no less), then no business—anywhere—can credibly offer its customers a secure email service.

But Kerr points out that this is a "really weak argument":

This strikes me as a really weak argument. Lavabit is essentially claiming that its anti-government business model trumps the subpoena power. That is, it is arguing that the subpoena is “oppressive” precisely because it would work: It would allow the government to conduct the surveillance it is allowed to conduct under the Pen Register statute.

Further, Kerr argues that to accept Lavabit's argument would mean that any company that announces an "ideology or business strategy" that opposes government surveillance could then resist legitimate government subpoenas simply by arguing that they are oppressive and abusive.

I respect Kerr and always look forward to his legal analysis, but I think he's wrong at a variety of levels here, and, tragically, the judge in the case seems to have the same confused view of what Lavabit is actually arguing (though one could argue that is actually the fault of Lavabit for not making its case clearly). Lawyer Scott Greenfield does a good job explaining why Kerr has mischaracterized Lavabit's defense -- first noting that being pro-privacy is hardly being "anti-government," as Kerr implies, and then pointing out that Lavabit's argument isn't that the government's demand for its private keys was oppressive merely because of its business model, but because it would put Lavabit out of business -- which is not the same thing.

This isn't really a fair characterization of Lavabit's point. Initially, the argument is that revelation of the private key would be the ruination of the business. By exposing every customer to government disclosure, and covert disclosure at that, the government would take a viable business, making money and delivering a service as businesses are allowed to do in America, and destroy it. Poof, company gone. Business gone. Revenue gone. Wham, bam, thank you, Ladar.

But there's an even bigger point in here, which I think Kerr misses entirely, and Greenfield skips over: from a technology standpoint, what the government is demanding of Lavabit is absolutely oppressive and abusive. And, for that, it helps to look at Ed Felten's discussion of the case, in which he notes that the judge and other DOJ supporters in this case (including, it would seem, Kerr) are basically arguing that "If court orders are legitimate, why should we allow engineers to design services that protect users against court-ordered access." But Felten points out that requiring "court ordered access" is tantamount to requiring a massive vulnerability to insider attacks:

To see why, consider two companies, which we’ll call Lavabit and Guavabit. At Lavabit, an employee, on receiving a court order, copies user data and gives it to an outside party—in this case, the government. Meanwhile, over at Guavabit, an employee, on receiving a bribe or extortion threat from a drug cartel, copies user data and gives it to an outside party—in this case, the drug cartel.

From a purely technological standpoint, these two scenarios are exactly the same: an employee copies user data and gives it to an outside party. Only two things are different: the employee’s motivation, and the destination of the data after it leaves the company. Neither of these differences is visible to the company’s technology—it can’t read the employee’s mind to learn the motivation, and it can’t tell where the data will go once it has been extracted from the company’s system. Technical measures that prevent one access scenario will unavoidably prevent the other one.

Insider attacks are a big problem. You might have read about a recent insider attack against the NSA by Edward Snowden. Similar but less spectacular attacks happen all the time, and Lavabit, or any well-run service that holds user data, has good reason to try to control them.

Now, go back to the judge's order or Kerr's analysis, and revisit it with what Felten pointed out, and you realize how far off-base both the judge and Kerr are in their analyses. Lavabit didn't design its system to be set up the way it was because it was "anti-government," but rather because it wanted to create secure email that protects against a variety of different kinds of attacks, both insider and outsider. That's why it found the government's request so "abusive" and "oppressive." Not because of an ideological disagreement, but rather because of the technological reality that handing over Lavabit's private keys absolutely wrecks any real security of Lavabit's system, which is Lavabit's entire business.
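To make that technological point concrete, here is a minimal sketch -- not Lavabit's actual code, using Python's `cryptography` package as a stand-in for real TLS machinery -- of why a server's long-term private key is so sensitive. In classic RSA key exchange, every session key a client ever sends to the server is encrypted to that one key, so whoever obtains the private key can unlock recorded traffic from any user, not just the one named in a court order:

```python
# Illustrative sketch: a single long-term RSA key protects every session.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
import os

# The server's long-term key pair -- the thing the government demanded.
server_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# A client picks a fresh session key and encrypts it to the server's
# *public* key, as in classic (non-forward-secret) TLS RSA key exchange.
session_key = os.urandom(32)
wire_blob = server_key.public_key().encrypt(
    session_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)

# Anyone who later obtains the private key -- government or drug cartel,
# the math cannot tell the difference -- recovers the session key from a
# recorded copy of the wire traffic, and with it the user's messages.
recovered = server_key.decrypt(
    wire_blob,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
assert recovered == session_key
```

This is exactly Felten's point in code form: the technology has no way to distinguish a court-ordered decryption from a malicious one, so any system that permits the first necessarily permits the second.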

So, while Kerr and the judge in the case seem to think it's a mere ideological issue, that's simply not true. It's a technological issue, on which Lavabit's entire business was based. If Kerr and the judge are correct, then, as Felten properly notes, it becomes effectively illegal to build a really secure communications system. That seems positively ridiculous, especially in a time when we're told (by the very government agency that wants to do all this spying) that we need better online security to protect against attacks.

from the sometimes-it-requests-changes dept

The FISA Court is still trying to shed its reputation as a rubber stamp. In a new letter, responding to questions from Senator Grassley, the court explains that, while it's true that it eventually approves basically all requests that the intelligence community brings before it, that doesn't take into account the changes it requires. It starts out by noting, as it has previously, that the "over 99%" approval rate covers only "final applications" and doesn't fairly account for the fact that it frequently asks for substantial changes. In response to that earlier statement, Senator Grassley had asked how often it asks for such changes, and the court sent back the following late last week:

During the three month period from July 1, 2013 through September 30, 2013, we have observed that 24.4% of matters submitted ultimately involved substantive changes to the information provided by the government or to the authorities granted as a result of Court inquiry or action. This does not include, for example, mere typographical corrections. Although we have every reason to believe that this three month period is typical in terms of the historic rate of modifications, we will continue to collect these statistics for an additional period of time and we will inform you if those data suggest that the recent three months were anomalous.

Of course, by July 1st, it's pretty clear that the FISC knew it was going to be under substantial scrutiny over these efforts, and the whole "rubber stamp" discussion had already been widespread in the press. While the court insists that this rate is probably no different than in the past, the real question is what the rate actually was in the past... and there, the FISC admits, it has no idea. In responding to the request for historical data, it notes:

FISC [is] just beginning a practice of collecting statistics on the rate at which such modifications occur.

In other words, back before there was any public scrutiny, nobody much bothered to monitor these things and make sure that the FISC wasn't just a rubber stamp -- and there's no real way to go back and check.

Also, there's the whole question of how do you define "substantial." While the FISC at least admits the bar is higher than typographical corrections, it also admits that the rating is somewhat in the eye of the beholder:

It should be noted, however, that these statistics are an attempt to measure the results of what are, typically, informal communications between the branches. Therefore, the determination of exactly when a modification is "substantial," and whether it was caused solely by the FISC's intervention, can be a judgment call.

Last night, the video was "debuted" in New York City, as the film was broadcast on the side of a building from Washington Square Park in Manhattan. While regular readers here won't find anything too surprising or new, it's a good quick summary of what's going on, and is useful for catching people up on the situation if they haven't been following it.

“We followed the law, we follow our policies, we self-report, we identify problems, we fix them,” he said. “And I think we do a great job, and we do, I think, more to protect people’s civil liberties and privacy than they’ll ever know.”

Yes, by collecting pretty much every bit of data they can on everyone. That protects their privacy and civil liberties? How? By trampling the 4th Amendment? I don't think so. The whole "self-report... identify problems" claim is also hogwash. As we've noted, much of that "self-reporting" came years after the fact, and it's almost certain that plenty of other abuses have never been caught or reported.

Then there's General Alexander trying to claim he supports more transparency and that the American people need to know what's going on. I know. Stop laughing. He really said it:

“Given where we are and all the issues that are on the table, I do feel it’s important to have a public, transparent discussion on cyber so that the American people know what’s going on,” General Alexander said. “And in order to have that, they need to understand the truth about what’s going on.”

Of course, in the very same interview he insisted that this discussion that we're now having has done "significant and irreversible damage" to national security. So... he wants to have an open discussion and tell people what's going on, but solely on his own terms, and if anyone else brings up anything, we're all at risk.

He insisted that it would have been impossible to have made public, in advance of the revelations by Mr. Snowden, the fact that the agency collected what it calls the “business records” of all telephone calls, and many other electronic communications, made in the United States.

Why? This is a serious question, because it wouldn't have been impossible at all. The government could easily have said (as it's trying to now, after Snowden revealed it) that it's doing this in a manner that (it believes) doesn't compromise our privacy, and for a good reason. And then let us have a public debate to see if people believe that or think the government is full of it. That's what transparency is about.

The NY Times actually does a decent job, at some points, of highlighting the ridiculousness of Alexander's answers, such as with this tidbit:

But he said the agency had not told its story well. As an example, he said, the agency itself killed a program in 2011 that collected the metadata of about 1 percent of all of the e-mails sent in the United States. “We terminated it,” he said. “It was not operationally relevant to what we needed.”

However, until it was killed, the N.S.A. had repeatedly defended that program as vital in reports to Congress.

Yup. The same way they continue to insist the telephone records are "vital" despite not actually showing how they've been necessary in stopping a single terrorist attack on the US.

At this point, you have to wonder what Alexander thinks he's accomplishing with each of these interviews or talks. It just seems like this strained, repetitive "but, really, I'm not such a bad guy, you just have to trust me!!!" exclamation over and over again that doesn't give us any reason to actually trust him. In fact, nearly all of the evidence that's come out from Snowden has actually shown (over and over and over again) why Alexander shouldn't be trusted at all.