Privacy from private parties – Digital Age Defense
http://www.digitalagedefense.org/wp
On regulation of technology

Comments on DMCA Section 512: First Amendment issues with counter-notifications and repeat infringer policies (and more)
http://www.digitalagedefense.org/wp/2016/04/08/dmca-section-512-first-amendment-issues-with-dmca-requirements/
Fri, 08 Apr 2016 20:13:33 +0000

The following is Section III.C of the comment I submitted in the Copyright Office’s study on the operation of Section 512 of the copyright statute.

Questions #16 and #17 more specifically contemplate the effectiveness of the put-back process articulated at subsection 512(g). As explained in Section III.B, this mechanism is not effective for restoring wrongfully removed content and is little used. But it is worth taking a moment here to further explore the First Amendment harms the DMCA works upon both Internet users and service providers.[1]

It is part and parcel of First Amendment doctrine that people are permitted to speak, and to speak anonymously.[2] Although that anonymity can be stripped in certain circumstances, there is nothing about an allegation of copyright infringement that should cause it to be stripped automatically. Particularly in light of copyright law incorporating free speech principles,[3] this anonymity cannot be more fragile than it would be in any other circumstance where speech is subject to legal challenge. The temptation to characterize all alleged infringers as malevolent pirates who get what they deserve must be resisted; the DMCA targets all speakers and all speech, no matter how fair or necessary to public discourse that speech may be.

And yet, with the DMCA, not only is speech itself more vulnerable to censorship via a copyright infringement claim than it would be via other types of allegations,[4] but so are the protections speakers depend on in order to be able to speak at all.[5] Between the self-identification requirements of subsection 512(g) put-back notices and the ease of demanding user information with subsection 512(h) subpoenas, which need not be predicated on actual lawsuits,[6] Internet speakers on the whole must fear the loss of their privacy whenever anyone dares to assert an infringement claim, no matter how illegitimate or untested that claim may be. Given the ease of concocting an invalid infringement claim,[7] and the lack of any incentive not to,[8] the DMCA gives all-too-ready access to the identities of Internet users to the people least deserving of it, and at the expense of those who most need it.[9]

Furthermore, the DMCA also compromises service providers’ own First Amendment interests in developing the forums and communities they would so choose. The very design of the DMCA puts service providers at odds with their users, forcing them to be antagonistic toward their own customers and their own business interests as a condition of protecting those interests. Attempts to protect their forums or their users can expose them to tremendous costs and potentially incalculable risk, and all of this harm flows from mere allegations that never need be tested in a court of law. The DMCA forces service providers to enforce censorship compelled by a mere takedown notice, compromise user privacy in response to subsection 512(h) subpoenas (or devote significant resources to trying to quash them), and, vis-à-vis Questions #22 and #23, disconnect users according to termination policies whose sufficiency cannot be known until a court decides otherwise.[10]

The repeat infringer policy requirement of subsection 512(i)(A) exemplifies the statutory problem with many of the DMCA’s safe harbor requirements. A repeat infringer policy might only barely begin to be legitimate if it applied to the disconnection of a user after a certain number of judicial findings of liability for acts of infringement the user had used the service provider to commit. But at least one service provider has lost its safe harbor for not permanently disconnecting users after merely a certain number of allegations, allegations that had never been tested in a court consistent with the principles of due process or the prohibition on prior restraint.[11]

In no other context would we find these sorts of government incursions against the rights of speakers constitutional, robbing them of their speech, anonymity, and the opportunity to further speak, without adequate due process. These incursions do not suddenly become constitutionally sound just because the DMCA coerces service providers to be the agent committing these acts instead.

[5] See McIntyre, 514 U.S. at 341-42 (1995) (“The decision in favor of anonymity may be motivated by fear of economic or official retaliation, by concern about social ostracism, or merely by a desire to preserve as much of one’s privacy as possible. Whatever the motivation may be, at least in the field of literary endeavor, the interest in having anonymous works enter the marketplace of ideas unquestionably outweighs any public interest in requiring disclosure as a condition of entry.”).

[6] Compare Fed. R. Civ. P. 45(a)(1)(A)(ii) (“Every subpoena must … state the title of the action and its civil-action number.”), with 17 U.S.C. § 512(h) (lacking any similar requirement or other mention that the subpoena be predicated on a commenced civil action). Note that many jurisdictions explicitly forbid pre-litigation discovery. See, e.g., Cal. Code Civ. Proc. § 2035.010(b) (“One shall not employ the procedures of this chapter for the purpose … of identifying those who might be made parties to an action not yet filed.”). Many jurisdictions further require careful testing of a plaintiff’s claims before stripping Internet speakers of their anonymity. See, e.g., Krinsky v. Doe, 72 Cal.Rptr.3d 231, 241-246 (discussing standards for determining whether a plaintiff can be allowed to unmask an anonymous speaker).

[9] The abusive practices of many extortionate copyright plaintiffs illustrate why judicial oversight is required before Internet users are stripped of their privacy protection. See, e.g., AF Holdings, LLC v. Does 1-1058, 752 F. 3d 990, 992 (D.C. Cir. 2014) (describing the affairs of copyright plaintiffs who built a business on using subpoenas to discover people from whom they could then demand settlement payments to avoid litigation, despite the putative plaintiffs not having a valid copyright to sue upon).

[11] Id. The court in this case also required the service provider to terminate users regardless of the impact on users forced to exist in the modern world without broadband Internet connectivity. To the extent that this holding was drawn from a fair reading of the statute, the statute itself is the problem: while perhaps in the 20th Century the consequences of losing Internet access were negligible, in the 21st Century we know they are not. There may not be many other options for broadband access available to terminated users, and the cumbersome nature of the DMCA combined with expansive theories of secondary liability does little to encourage investment by new market entrants.

Question #12 asks if the notice-and-takedown process sufficiently protects against fraudulent, abusive, or unfounded notices and what should be done to address this concern. Invalid takedown notices are most certainly a problem,[1] and the reason is that the system itself causes them to be a problem. As discussed in Section II.B, the notice-and-takedown regime is inherently a censorship regime, and it can be a very successful one, because takedown notice senders can simply point to content they want removed and use the threat of liability as the gun to the service provider’s head to force its removal, lest the service provider risk its safe harbor protection.

Thanks to courts under-enforcing subsection 512(f), they can do this without fear of judicial oversight.[2] But it isn’t just the lax subsection 512(f) standard that allows abusive notices to be sent without fear of accountability. Even though the DMCA includes put-back provisions at subsection 512(g), we see relatively few instances of them being used.[3] The DMCA is a complicated statute, and the average non-lawyer may not know these provisions exist or know how to use them. Furthermore, trying to use them puts users in the crosshairs of the party gunning for their content (and, potentially, for them as people) by forcing them to give up their right to anonymous speech in order to keep that speech from being censored. All of these complications are significant deterrents to users effectively defending their own content, content that will already have been censored (these measures would only allow the content to be restored after the censorship damage has been done).[4] Ultimately there are no real checks on abusive takedown notices apart from whatever review and rejection the service provider is willing and able to risk.[5] Given the enormity of this risk, however, such review cannot remain the sole stopgap keeping this illegitimate censorship from happening.

Continuing on, Question #13 asks whether subsection 512(d), addressing “information location tools,” has been a useful mechanism to address infringement “that occurs as a result of a service provider’s referring or linking to infringing content.” Purely as a matter of logic the answer cannot possibly be yes: simply linking to content has absolutely no bearing on whether that content is or is not infringing. The entire notion that a service provider could be liable simply for knowing where information resides stretches U.S. copyright law beyond recognition. That sort of knowledge, and the sharing of that knowledge, should never be illegal, particularly in light of the Progress Clause, upon which copyright law is predicated and authorized, and particularly when the mere act of sharing that knowledge in no way itself directly implicates any exclusive right held by a copyright holder in that content.[6] Subsection 512(d) exists entirely as a means and mode of censorship, once again blackmailing service providers into the forced forgetting of information they once knew, irrespective of whether the content they are being forced to forget is ultimately infringing or not. As discussed in Section II.B above, there is no way for the service provider to definitively know.

[2] See Rossi v. Motion Picture Association of America, 391 F.3d 1000, 1004 (9th Cir. 2004) (finding that “the ‘good faith belief’ requirement in subsection 512(c)(3)(A)(v) encompasses a subjective, rather than objective standard.”). With regard to Question #28, this standard is a very low bar for a takedown notice sender to hurdle, and it has made effective redress for people whose speech has been wrongfully removed all the more elusive.

[4] The problem of takedown abuse is particularly acute during campaign seasons, when politically-motivated takedown requests can suppress the most effective and cheapest means of communicating political messages for which timeliness is of the essence. See Center for Democracy and Technology, Campaign Takedown Troubles: How Meritless Copyright Claims Threaten Online Political Speech (Sept. 2010), https://www.cdt.org/files/pdfs/copyright_takedowns.pdf.

[5] The “takedown-and-staydown” regimes contemplated by Question #10 would only exacerbate the effects of this censorship.

[6] In other words, sharing a link to content is not the same thing as making a copy of that content.

And that’s why so much outrage is warranted when bullies try to strip speakers of their anonymity simply because they don’t like what those speakers have to say, and why it’s even more outrageous when these bullies succeed. If anonymity is so fragile that speakers can be so easily unmasked, fewer people will be willing to say the important things that need to be said, and we all will suffer for the silence.

We’ve seen on these blog pages examples of both government and private bullies making specious attacks on the free speech rights of their critics, often by using subpoenas, both civil and criminal, to try to unmask them. But we’ve also seen another kind of attempt to identify Internet speakers, and it’s one we’ll see a lot more of if the proposal ICANN is currently considering is put into place.

In that case the critic had selected a domain incorporating Carreon’s name in order to best get his point about Carreon’s thuggery across, which the First Amendment and federal trademark law allowed him to do. When he registered the domain name he also paid extra to avail himself of the registrar’s proxy WHOIS service in order to maintain his anonymity by keeping his identifying details hidden – a service that up to now domain registrars have been permitted to offer. Unfortunately, the registrar immediately caved to Carreon’s pressure and disclosed the critic’s identifying information, thereby eviscerating the privacy protection the critic expected to have, and depended on, for his commentary.

It was a denial of anonymity that never should have happened, but under the new ICANN proposal this sort of exposure of speakers’ identifying information will only happen more often as ICANN seeks to make the privacy protections of the WHOIS proxy service less available and more flimsy, particularly in cases where IP owners dislike the speech taking place at that domain.

It is a proposal that is extraordinarily glib about its consequences for any Internet speaker preferring not to be dependent on another domain host for their online speech. First, it naively presupposes that the identifying information of a domain name holder would only ever be used for litigation purposes, when we sadly already know that this presumption is misplaced. As this letter to ICANN points out (linked to from the independently expressive domain name “icann.wtf”), people objecting to others’ speech often use identifying information about Internet speakers to enable campaigns of harassment against them, sometimes even threatening life and limb (for example, by “swatting”).

Secondly, it presupposes that even if this identifying information were to be used solely for litigation purposes, a lawsuit is a negligible thing for a speaker to find themselves on the receiving end of, when of course it is not. In the case of Carreon’s critic, he was fortunate to be able to secure pro bono counsel, but not everyone can, and having to pay for representation can often be ruinously expensive.

Thirdly, it presupposes that there is somehow an IP-related exemption to the First Amendment, when there most certainly is not. Speech is speech, and it is all protected by the First Amendment. Attempts to carve out exemptions from its protections for speech that somehow implicates IP should not be tolerated, particularly when the consequences to discourse are just as damaging when speech is chilled by IP owners as when it is chilled by anyone else seeking to suppress what people may say.

It is important to hold fast to this all-speech-is-protectable principle especially because, fourthly, an IP owner’s objection to certain speech does not magically make that objection valid. Remember, Carreon’s critic was ultimately vindicated, but only after he had lost his anonymity, which Carreon was far too easily able to destroy. In fact, Carreon’s example shows how, in light of this potential for abuse, we should actually be strengthening the ability of intermediaries to resist demands to unmask their users, not making them more vulnerable to this pressure, as ICANN currently proposes.

Also, continue to watch how this issue develops and be prepared to let appropriate U.S. government representatives know that they need to ensure that all of the First Amendment protections online discourse depends on, including the right to speak anonymously, remain protected, particularly in instances like this one when they are under such direct threat. Sadly this is not the only example of how online free speech is under fire, but that’s a subject for other blog posts another day . . . .

A law requiring deletion
http://www.digitalagedefense.org/wp/2013/09/29/a-law-requiring-deletion/
Sun, 29 Sep 2013 19:45:20 +0000

This past week California passed a law requiring website owners to allow minors (who are also residents of California) to delete any postings they may have made on the website. There is plenty to criticize about this law, including that it is yet another example of a legislative commandment cavalierly imposing liability on website owners with no contemplation of the technical feasibility of how they are supposed to comply with it.

But such discussion should be moot. This law is precluded by federal law, in this case 47 U.S.C. Section 230. By its provisions, Section 230 prevents intermediaries (such as websites) from being held liable for content others have posted on them. (See Section 230(c)(1)). Moreover, states are not permitted to undermine that immunity. (See Section 230(e)(3)). So, for instance, even if someone were to post some content to a website that might be illegal in some way under state law, that state law can’t make the website hosting that content itself be liable for it (nor can that state law make the website delete it). But that’s what this law proposes to do at its essence: make websites liable for content others have posted to them.

Some might argue that the intent of the law is important and noble enough to forgive it these problems. Unlike in generations past, kids today truly do have something akin to a “permanent record” thanks to the ease of the Internet to collect and indefinitely store the digital evidence of everyone’s lives. But such a concern requires thoughtful consideration for how to best ameliorate those consequences, if it’s even possible to, without injuring important free speech principles and values the Internet also supports. This law offers no such solution.

Deal v. Spears
http://www.digitalagedefense.org/wp/2013/05/13/deal-v-spears/
Tue, 14 May 2013 00:37:16 +0000

One of the cases I came across when I was writing an article about Internet surveillance was Deal v. Spears, 980 F. 2d 1153 (8th Cir. 1992), a case involving the interception of phone calls that was arguably prohibited by the Wiretap Act (18 U.S.C. § 2511 et seq.). The Wiretap Act, for some context, is a 1968 statute that applied Fourth Amendment privacy values to telephones, in a way that prohibited both the government and private parties from intercepting the contents of conversations taking place through the telephone network. That prohibition is fairly strong: while there are certain types of interceptions that are exempted from it, these exemptions have not necessarily been interpreted generously, and Deal v. Spears was one of those cases where the interception was found to have run afoul of the prohibition.

It’s an interesting case for several reasons, one being that it upheld the privacy rights of an apparent bad actor (of course, so does the Fourth Amendment generally). In this case the defendants owned a store that employed the plaintiff, whom the defendants strongly suspected – potentially correctly – was stealing from them. In order to catch the plaintiff in the act, the defendants availed themselves of the phone extension in their adjacent house to intercept the calls the plaintiff made on the store’s business line to further her crimes. Ostensibly such an interception could be exempted by the Wiretap Act: the business extension exemption generally allows business proprietors to listen in on calls made in the ordinary course of business. (See 18 U.S.C. § 2510(5)(a)(i)). But here the defendants didn’t just listen in on business calls; they recorded *all* calls the plaintiff made, regardless of whether they related to the business, and, by virtue of the calls being automatically recorded, without the telltale “click” one hears when an actual phone extension is picked up, which would have put the callers on notice that someone was listening in. This silent, pervasive monitoring of the contents of all communications put the monitoring well beyond the statutory exemption that might otherwise have permitted a more limited interception.

“[T]he [defendants] recorded twenty-two hours of calls, and […] listened to all of them without regard to their relation to his business interests. Granted, [plaintiff] might have mentioned the burglary at any time during the conversations, but we do not believe that the [defendants’] suspicions justified the extent of the intrusion.”

“[T]here is a vast difference between overhearing someone on an extension and installing an electronic listening device to monitor all incoming and outgoing telephone calls.”

And so the defendants, hapless victims though they seemed to have been in their own right, were found to have violated the Wiretap Act.

But Deal v. Spears is a telephone case, and telephone cases are fairly straightforward. The statutory language clearly reaches the contents of those communications made with that technology, and all that’s really been left for courts to decide is how broadly to construe the few exemptions the statute articulates. What has been much harder is figuring out how to extend the Wiretap Act’s prohibitions against surveillance to communications made via other technologies (i.e., the Internet), or to aspects of those communications that seem to apply more to how they should be routed than to their underlying message. However, privacy interests are privacy interests, and no amount of legal hairsplitting alleviates the harm that can result when any identifiable aspect of someone’s communications can be surveilled. There is a lot that the Wiretap Act, both in terms of its statutory history and subsequent case law, can teach us about surveillance policy, and we would be foolish not to heed those lessons.

More on them later.

Privacy priorities
http://www.digitalagedefense.org/wp/2013/03/27/privacy-priorities/
Wed, 27 Mar 2013 14:30:52 +0000

I’ve written before about the balance privacy laws need to strike with respect to the data aggregation made possible by the digital age. When it comes to data aggregated or accessed by the government, law and policy should provide some firm checks to ensure that such aggregation or access does not violate people’s Fourth Amendment right “to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures.” Such limitations don’t forever hobble legitimate investigations of wrongdoing; they simply require adequate probable cause before the digital records of people’s lives can be exposed to police scrutiny. You do not need to have something to hide in order not to want that.

But all too often when we demand that government better protect privacy it’s not because we want the government to; on the contrary, we want it to force private parties to. Which isn’t to say that there is no room for concern when private parties aggregate personal data. Such aggregations can easily be abused, either by private parties or by the government itself (which tends to have all too easy access to it). But as this recent article in the New York Times suggests, a better way to construct the regulation might be to focus less on how private parties collect the data and more on the subsequent access to and use of the data once collected, since that is generally from where any possible harm could flow. The problem with privacy regulation that is too heavy-handed in how it allows technology to interact with data is that these regulations can choke further innovation, often undesirably. As a potential example, although mere speculation, this article suggests that Google discontinued its support for its popular Google Reader product due to the burdens of compliance with myriad privacy regulations. Assuming this suspicion is true — but even if it’s not — while perhaps some of this regulation vindicates important policy values, it is fair to question whether it does so in a sufficiently nuanced way so that it doesn’t provide a disincentive for innovators to develop and support new products and technologies. If such regulation is having that chilling effect, we may reasonably want to question whether these enforcement mechanisms have gone too far.

Meanwhile public outcry has largely been ignoring much more obvious and dangerous incursions into their privacy rights done by government actors, a notable example of which will be discussed in the following post.

Follow, not lead
http://www.digitalagedefense.org/wp/2013/02/20/follow-not-lead/
Wed, 20 Feb 2013 18:37:01 +0000

At an event on CFAA reform last night I heard Brewster Kahle say what to my ears sounded like, “Law that follows technology tends to be ok. Law that tries to lead it is not.”

His comment came after an earlier tweet I’d made:

I think we need a per se rule that any law governing technology that was enacted more than 10 years ago is inherently invalid.

In posting that tweet I was thinking about two horrible laws in particular, the Computer Fraud and Abuse Act (CFAA) and the Electronic Communications Privacy Act (ECPA). The former attempts to forbid “hacking,” and the latter ostensibly tried to update 1968’s Wiretap Act to cover information technology. In both instances the laws as drafted generally incorporated the attitude that technology as understood then would be the technology the world would have forever hence, a prediction that has obviously proven false. But we are nonetheless left with laws like these on the books, laws that hobble further innovation by enshrining in our legal code what is right and wrong when it comes to our computer code as we understood it in 1986, regardless of whether, if considered afresh and applied to today’s technology, we would still think so.

To my tweet a friend did pose a challenge, however: “What about Section 230? (47 U.S.C. § 230).” This is a law from 1996, and he has a point. Section 230 is a piece of legislation that largely immunizes Internet service providers from liability for content posted on their systems by their users – and let’s face it: the very operational essence of the Internet is all about people posting content on other people’s systems. However, unlike the CFAA and ECPA, Section 230 has enabled technology to flourish, mostly by purposefully getting the law itself out of the way of the technology.

The above are just a few examples of some laws that have either served technology well – or served to hamper it. There are certainly more, and some laws might ultimately do a bit of both. But the general point is sound: law that is too specific is often too stifling. Innovation needs to be able to happen however it needs to, without undue hindrance caused by legislators who could not even begin to imagine what that innovation might look like so many years before. After all, if they could imagine it then, it would not be so innovative now.

On regulating privacy and technology
http://www.digitalagedefense.org/wp/2012/05/19/on-regulating-privacy-and-technology/
Sun, 20 May 2012 02:28:32 +0000

There’s no discussing technology law without discussing how it implicates privacy. But privacy is such a broad concept; to discuss it in any meaningful way requires a definition with more detail.

I see there being (at least for purposes of the sort of discussion on this site) two main types of privacy: privacy from the government, and privacy from other individuals. And when it comes to regulating the intersection of privacy and technology, these two types of privacy require very different treatment.

It’s not that privacy is unimportant in either sphere. Knowledge is power, and knowledge of the details of people’s lives gives power over them. It therefore makes sense for law to regulate when and how that knowledge can be collected and used. But that regulation does not necessarily mean outright prohibition. It’s important to balance the reasons for and against information collection for both types of privacy — but that balance will be different for each.

Privacy from the government protects people from government intrusion in their affairs. This protection is important because information is the fuel the state uses to prosecute. Of course, sometimes the government does have a legitimate need to seek this information. Crimes do happen, and with probable cause to believe a particular person is culpable, the state may legitimately seek out informational evidence to prove it. But how we decide when those needs are legitimate, and how we allow technology to be deployed in furtherance of those needs, is something we need to carefully consider. Law should ensure that these exceptions are drawn no more expansively than necessary in order not to expose people to undue state scrutiny and prosecution.

Privacy from other individuals prevents those other individuals from leveraging the information they glean in a harmful way. But regulation of how this information is collected and used requires more nuance than the more absolute prohibition against government access to private data. For these issues we need to define such things as who is doing what data collection, under what pretense, for what purpose, with what notice, and for what benefit. We can’t regulate it all with a sledgehammer without inviting more problems.

While we may wish the technologies of today to handle privacy better, we would certainly not want to wish them away entirely. We would not even be faced with these technologies’ downsides if we didn’t also benefit from their tremendous upsides, and if we don’t regulate carefully we risk destroying the latter while trying to deal with the former. We have only been able to get the benefits of these technologies because people were free to develop them as their imagination saw fit. But heavy-handed regulation will prevent innovators from developing the next exciting tools, better tools that might even be able to mitigate some of the privacy problems of the current generation. We need a regulatory regime that allows this future to develop, for who can innovate with the threat of liability hanging over their head? Privacy regulation of this type therefore needs a delicate touch, to minimize the harmful effects technology may cause without causing harm to its promise.

Preventing user sharing
http://www.digitalagedefense.org/wp/2012/02/02/preventing-user-sharing/
Thu, 02 Feb 2012 19:28:22 +0000

I’ve written before about Netflix petitioning Congress to modify the Video Privacy Protection Act (VPPA) to allow for users to easily share what they are watching to social networks. Right now users can easily share what books they are reading and what music they are listening to, but because the videos they stream may be covered by this videotape-era law, Netflix is concerned it could run afoul of it if it allowed for similarly easy sharing.

But as Susan Crawford notes in this article, Netflix’s attempt to harmonize privacy law vis-à-vis the sharing of what people are streaming with the sharing of what they are reading or listening to may be backfiring: harmonization may well occur, not by making it easier to share video but rather by making it harder to share those other media too.

Privacy advocates in favor of tightening these regulations make a valid point about the importance of privacy when it comes to what culture people consume. Whether justifiably or not, people are frequently judged, and harshly, for what cultural works they enjoy, meaning they can suffer real harm when these details are divulged. Ensuring that these details are not divulged without their consent therefore seems a valid policy objective.

But can it be taken too far? We want people to be in control of what they share, but how much control do they really need? And what harm do we cause by mandating, through law, how websites give them this control? According to Crawford's article, Congress is essentially debating the coding and UI architecture of these websites, which seems like an unfortunate bit of micromanaging. True, on the one hand, concrete guidance on how a website should be built might provide more tangible protection to people looking to build compliant websites than fuzzy regulatory language a court might later need to interpret during some expensive litigation. But on the other hand, Congress's expertise is not in web development. Micromanaging how websites may be coded risks mandating that they be coded badly, both now and in the future, as the technology can't evolve beyond what the law has specified.

It's also important to remember that these laws don't just apply to well-funded companies with in-house counsel who can (at least theoretically) enforce compliance. They apply to individual people building their own interesting websites and applications in their spare time. Overregulation risks turning these individuals into scofflaws, especially if they don't realize the laws apply to them. Sure, ignorance of the law may be no excuse, but is it really good public policy that every web coding manual now needs a chapter telling developers how the law governs what they may code? Especially because, once they are aware, their choice is either to ignore the law (something public policy generally does not want to encourage) or to not bother making the next cool tools and useful Internet resources, not even ones that might better protect user privacy. It simply won't be worth the legal risk. Consequently Congress may indeed succeed in securing users' data, because there will simply be fewer places for them to share it.

Webkinz privacy complaint filed with FTC
http://www.digitalagedefense.org/wp/2011/12/14/webkinz-privacy-complaint-filed-with-ftc/
Wed, 14 Dec 2011 16:12:14 +0000

From the Los Angeles Times, the Campaign for a Commercial-Free Childhood has filed a complaint with the FTC alleging deceptive and unfair trade practices by the Webkinz website. The organization accuses the children's site and its corporate parent Ganz of violating the Children's Online Privacy Protection Act, which restricts the collection and maintenance of children's personal information, by failing to link to its privacy policy from its home page and by writing that policy in "vague, confusing and contradictory" language.

According to the complaint, Webkinz asks children to provide their first name, date of birth, gender and state of residence during registration, urging the users “it is important to use real information.” As the child navigates the animated website, dubbed Webkinz World, Webkinz monitors the child’s activity by depositing software to track his or her movements through the site, the complaint said.

As the children play in Webkinz World — which is aimed at children ages 6 to 13 and enables users to play games and interact with other members — Ganz allows third parties to track their activities for behavioral advertising purposes, the advocacy group alleges.

Ganz says parents can “easily opt out” of having their children view ads, noting it is “committed to being highly responsible in our approach to advertising.” But ads continue to appear on the site, even after parents have opted out, according to the complaint. In fact, the complaint said, ads are incorporated into Webkinz games such as “Wheel of Wow,” which attracts some 4 million players a month.

The Campaign for a Commercial-Free Childhood alleges that Ganz's privacy policy is deceptive because it states that the information it gathers from children during the registration process could not be used to identify the child offline. It further alleges that the practice of using software — "cookies" and "web beacons" — to track children's activities and serve them targeted ads without parental consent "contravenes FTC guidance on behavioral advertising" and amounts to an unfair trade practice.
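To make the mechanism described in the complaint concrete, here is a minimal, purely hypothetical sketch (in Python; the names and URLs are invented, and none of this is Ganz's actual code) of how a "cookie" and a "web beacon" work together: each page embeds an invisible tracking image, and the tracker's server ties every request from the same browser to one persistent visitor ID, building a log of movements through the site.

```python
import uuid
from http.cookies import SimpleCookie

# Hypothetical tracker: each page embeds an invisible "web beacon" such as
#   <img src="https://tracker.example/pixel.gif?page=/games/wheel-of-wow">
# The server answers with a 1x1 image plus a persistent "cookie", so every
# later beacon request from the same browser carries the same visitor ID.

def handle_beacon(cookie_header: str, page: str, log: list) -> str:
    """Record one page view; return the Set-Cookie header to send back."""
    jar = SimpleCookie(cookie_header or "")
    if "visitor_id" in jar:
        visitor = jar["visitor_id"].value   # returning browser: reuse its ID
    else:
        visitor = uuid.uuid4().hex          # first visit: mint a fresh ID
    log.append((visitor, page))             # the behavioral-tracking record
    # A Max-Age of one year makes the ID persist across browsing sessions
    return f"visitor_id={visitor}; Path=/; Max-Age=31536000"

# Two page views from the same browser collapse into one profile:
views: list = []
header = handle_beacon("", "/games/wheel-of-wow", views)
visitor_id = header.split(";")[0].split("=")[1]
handle_beacon(f"visitor_id={visitor_id}", "/adoption-center", views)
```

Nothing in this log identifies a child by name on its own, which is why the registration data the complaint describes (first name, birth date, gender, state) matters: joined with the cookie ID, the "anonymous" browsing log becomes a behavioral profile of an identifiable child, the combination the advocacy group objects to.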

Video Privacy Protection Act amendment proposed
http://www.digitalagedefense.org/wp/2011/12/11/video-privacy-protection-act-amendment-proposed/
Sun, 11 Dec 2011 21:06:52 +0000

During the 1987 Supreme Court confirmation hearings for Robert Bork, a Washington newspaper published his video rental history, which it had obtained from his local store. Fearing what their own records, if published, would disclose, Congress passed the Video Privacy Protection Act, forbidding disclosure of such records without the customer's consent.

Since then, however, video rental stores have given way to online rental and streaming services, and privacy norms have shifted with the rise of social media. While in the 1980s it may not have dawned on anyone to publicly share what movies they watched, today some people like to make such information known via social media. Online video services would like to let them do so easily, but their ability may be limited by this law. Consequently Netflix, a large online video service, has been lobbying to amend it. The open question is whether the amendment updates the law sufficiently to empower users to share when and how they want, or whether, as currently proposed, it has the effect of decimating the law's basic privacy protections.

“It really is meant to empower the consumer to be able to share with their friends,” says David Hyman, the general counsel of Netflix. He says the bill simply updates an outmoded law so that it matches the way we live now. “It really kind of levels the playing field in social media.”

But some privacy scholars and advocates are warning that the bill actually diminishes a person’s ability to select what to share — and with whom — on a case-by-case basis. If the Senate passes the bill as currently written, they say, the revised law would undermine consumers’ control over information collected about them even as it empowers companies to create and share more detailed customer profiles. Netflix isn’t lobbying for a mere amendment, they argue; it wants Congress to dismantle a gold standard among privacy statutes.

“They are not trying to modernize the law,” says Marc Rotenberg, executive director of the Electronic Privacy Information Center, a public interest research group in Washington. “They are trying to gut the law.” At stake, he argues, is not the ostensible sharing of a person’s video viewing history, but rather the larger issue of meaningful consent.

…

People prefer frictionless sharing, a convenience hindered by the current law, says Christopher Wolf, a lawyer who is co-chairman of the Future of Privacy Forum, a Washington research group that receives financing from Google, Facebook and other digital media companies.

Moreover, Mr. Wolf says, the law restricts video services that seek to integrate with social networks like Facebook even as some music sites have already introduced sharing.

“Companies should not be exposed to hundreds of millions in damages just because particular hoops weren’t jumped through,” he says. “If people can share what they are listening to on Spotify, why shouldn’t they be able to share what videos they are watching?”

Still, video viewing remains a delicate area for many people because movie choices may open a window to a person’s religious or sexual preferences.

“Do you want your conservative friends to know that you watched a hyperviolent ‘Saw’ movie or movies about the gay experience like ‘Brokeback Mountain’?” says Kevin Bankston, a senior staff lawyer at the Electronic Frontier Foundation, a digital civil rights group in San Francisco. “Do you want your liberal friends to know you watch an enormous amount of religious movies?”

Any amendment, he argues, should preserve a person’s ability to choose what to share, case by case, rather than ceding control by giving a general waiver to a company.

“You should have the option to decide what goes on your wall,” he says.

For its part, Netflix argues that the market should determine what level of privacy protections services like it need to provide. If people demand case-by-case control, that’s what the company will need to provide. Otherwise, blanket consent should be sufficient, if that’s what customers accept.

As with many proposed technology laws, however, the problem may not be in the proposals themselves but in the paucity of care and attention paid to passing them. Netflix may have a point that the law shouldn't create an artificial barrier to people easily communicating whatever personal details they might want to communicate, especially when only certain types of information are singled out. On the other hand, privacy experts are correct in cautioning against the harms of over-disclosure. Perhaps these interests can be balanced, but not when the proposals are being rushed through Congress.

Perhaps ironically the article quotes Bork himself on this point.

“If you are going to enact change to a statute,” Judge Bork said, “you have to debate the question of whether the costs outweigh the benefits.”

Michigan man facing hacking charge for accessing wife's email
http://www.digitalagedefense.org/wp/2011/12/08/michigan-man-facing-hacking-charge-for-accessing-wifes-email/
Fri, 09 Dec 2011 06:57:57 +0000

From the Detroit Free Press, Leon Walker is facing a five-year felony charge after accessing his now-ex-wife Clara Walker's Gmail account to see whether she was having an affair. A 1979 Michigan law prohibits accessing a computer system without consent.

Walker and his attorneys, Leon Weiss and Matthew Klakulak, said the law was never intended for domestic matters, but was designed to prevent identity theft and the theft of trade secrets.

Earlier this year, the attorneys asked the appellate court to throw out the charges. On Tuesday, three appellate judges peppered Klakulak with questions, asking why Walker’s actions weren’t unlawful hacking.

Klakulak said the law was “ambiguous” and wasn’t intended for “ridiculously innocuous conduct” like peeping at a family member’s Gmail account.

But Judge Pat Donofrio said Walker's actions appear to fall squarely under the law as it was written.

“Your client is being charged with securing intellectual property — her e-mail, accessing her intellectual property,” he said.

Klakulak also argued legislators never intended the law to be used for snooping spouses and that if it’s used as such, it could criminalize activities such as parents monitoring their children’s online activities.

CarrierIQ and its implications – UPDATED
http://www.digitalagedefense.org/wp/2011/12/06/carrieriq-and-its-implications/
Tue, 06 Dec 2011 20:47:23 +0000

Mashable has an excellent summary of CarrierIQ, including what it is and why there is an uproar about it. The even shorter version is that CarrierIQ, software made by a company of the same name, runs in the background of many smartphones and tablets, tracking performance and relaying that information to the wireless carrier. The initial controversy flared up when a researcher, Trevor Eckhart, noticed CarrierIQ, observed what it was doing, and wrote a paper questioning whether that behavior was acceptable, whereupon the company threatened him with a lawsuit to keep him quiet. Thanks to the intervention of the EFF, that situation resolved. But we are still left with the basic question of whether what CarrierIQ is doing is in any way acceptable, legally or otherwise.

Regarding the legal question, as the Mashable summary notes, some think what CarrierIQ is doing may amount to an unlawful interception under the existing US Wiretap Act. Dating back to 1968, with a few major updates since, the Wiretap Act made it civilly and criminally impermissible for either state or private actors to intercept communications. If CarrierIQ is doing what has been alleged, including logging keystrokes and relaying the information it captures to another party, it may well be doing exactly the sort of interception of communications the Wiretap Act prohibits.

As will inevitably be discussed on this blog further, the Wiretap Act has substantial weaknesses in how it applies to electronic, rather than traditional telephonic, communications. Updated in 1986 to specifically incorporate electronic communications, the statute is now bloated with language that doesn’t clearly and obviously apply to the electronic communications of the 21st century. Properly amending it so it does extend the basic privacy protections incorporated in the original Wiretap Act to these communications is a topic most certainly relevant to the discussion here.

But the CarrierIQ situation raises an interesting issue that could easily be lost in the discussion: when, if ever, is it reasonable for communications to be intercepted? Even the original Wiretap Act has a maintenance exception, allowing for a telephonic provider to essentially eavesdrop on communications to the extent necessary to maintain the function of the network. What would such an exception reasonably mean in the digital age?

In answering that question it may be worth thinking about the CarrierIQ example and what feels so wrong about it. Much of the criticism seems to boil down to people being upset that more of what they were communicating was being captured and shared than they were aware of. It's not just a privacy issue; it's also a transparency issue. People are also unhappy to find that they had so little control over their own devices, as this software was installed not only without their knowledge but in a way that made it difficult to discover and remove.

These would all seem to be valid objections and ones that future regulation should take into account. But perhaps not at the complete expense of legitimate network maintenance concerns. The right law will understand the realities of the technology well enough to allow for minimal and carefully defined exceptions to make sure the technology can continue functioning while protecting the historically-recognized import of communications privacy.

The proposals would bolster significantly the EU’s powers on combating data protection breaches, such as when companies sell customer data to third parties without authorisation or fail to adequately protect information held by social networks and “cloud computing” services.

Companies would have 24 hours to notify data protection authorities and the affected parties in cases where private data are compromised, as happened this year when the details of 77m Sony PlayStation accounts were hacked.

By ensuring the rules also apply to foreign groups’ European subsidiaries, the new rules will force global companies to strengthen their data policies.

The article mentions, but does not discuss in further detail, that measures covering the “right to be forgotten” are included in these proposals, which are expected to take two years to be finalized and then approved by member states, who, the article notes, may not be eager to cede jurisdiction on privacy matters to the EU.

The terms of the FTC’s proposed settlement apply only to Facebook. But to paraphrase noted legal scholar Bob Dylan, companies that want to stay off the law enforcement radar don’t need a weatherman to know which way the wind blows. What practical pointers can your business take from the Facebook case and other recent FTC actions dealing with consumer privacy?

1) Promises, promises. Not making any privacy promises? Think again. Reread your privacy policy to see just what you’re telling customers and visitors you do with their information. And take a look at the privacy settings and other controls you offer. Like any other advertising claim, what you say about how you handle people’s info has to be truthful, not deceptive, and backed up with objective proof.

2) Legal-ease. Now that you have your privacy policy in front of you, show it to a real person — your receptionist, the guy in the warehouse, a member of your family. If they’re not clear on what it says, chances are your customers aren’t sure either. Yes, run it past Legal, but like the rest of your site, your privacy policy should be clear, direct, and easy to understand. Keep geek-speak and legal mumbo jumbo to a minimum.

3) Attitudes, not platitudes. “We at Acme Industries use every means to protect your privacy and never share your information without your permission.” Some retailers lace their privacy policies with lofty language, but don’t back their words up with actions. Remember: Statements like that aren’t just yadda yadda. They’re promises you have to keep. For example, the FTC settled a case with a company that claimed “We are committed to maintaining our customers’ privacy,” and yet failed to protect personal information from a well-known and easily preventable form of hack attack.

4) Color my world. Let’s face it: A lot of privacy policies mumble “Don’t read me.” The type is tiny and the text is dense. They’re often formatted in snooze-inducing shades of grey, in contrast to the eye-catching graphics on parts of the website designed to sell something. So here’s a crazy idea: How about giving your creative team a crack at rebooting the look of your privacy policy? A little color here, a bigger font there. Why not give it a shot?

5) Ch-ch-ch-changes. For security-minded customers, your information practices may be a key factor in their decision to do business with you. But what if you collected info from them under one set of rules and now want to change what you do? Wise marketers call customers’ attention to the proposed change and get their express OK first. Just editing what you say in your privacy policy won’t alert them to what you plan to do.

6) Time for a tech tune-up. If it’s been a while since you wrote your privacy policy, reconsider it in light of new technology you’ve put in place. What was true back in the day may not be the case if you’ve introduced a mobile app, switched service providers, or made other changes to your business.

7) Natural resources. You’ve got a business to run, so save time and money by using free resources from the FTC. Bookmark the Business Center’s Privacy & Security portal for the latest on law enforcement and plain-language compliance suggestions. Visit OnGuardOnline.gov for tips from the federal government and the technology industry.