Privacy from government – Digital Age Defense
On regulation of technology

Tech policy in the time of Trump (cross-post)
Sat, 17 Dec 2016
http://www.digitalagedefense.org/wp/2016/12/17/tech-policy-in-the-time-of-trump/

The following was recently published on Techdirt, although with a different title.

Regardless of what one thinks about the apparent result of the 2016 election, it will inevitably present a number of challenges for America and the world. As Mike wrote about last week, these challenges will touch on many of the tech policy issues often discussed here. The following is a closer look at some of the implications (and opportunities) with respect to several of them, given the unique hallmarks of Trump and his proposed administration.

The problem here is that our previous decades of relative political stability have allowed attitudes to become a bit too casual about the importance of free speech as an escape valve against tyranny. But now that the need to speak out is so critical for so many, perhaps we will all be a little less glib about it.

One area where we need to be less glib is copyright. While I would not be surprised to see Trump do something damaging in this space (probably in furtherance of Trump TV), copyright policy has always cut across party lines, and saner policy has in the past had the support of several GOP members of Congress, some of whom may still be in office. The silver lining here is that now that the need to preserve free speech is so apparent, it may become easier to point out how copyright policy interferes with it. For instance, because President Trump, or anyone supporting him in government or otherwise, can so easily cause criticism of him to disappear simply by sending a takedown notice, or have people cut off from their online services on the mere allegation of infringement (as they effectively could right now thanks to recent jurisprudence on DMCA Section 512(i)), opposing voices are extremely vulnerable. As the opposition party, Democrats in particular need to start realizing how IP rights in general (copyright, but also trademark and other quasi-IP monopolies like publicity rights) have been providing censors with enormous leverage over other people’s speech. Now that these levers can be used against them and their constituencies, perhaps they will be more likely to see the problem and finally push back against it (or at least stop actively trying to make the situation even worse).

Mass surveillance/encryption. The problem with the policy debates on mass surveillance to date is that they have tended to get bogged down by the assumption that the government was inherently good and that all the spying it did was in furtherance of protecting its people. Until now many of those who disagreed with that assumption have largely been marginalized. Now, however, it appears that millions of people will have serious doubts about the motivations of the chief executive. It is therefore going to be much harder for surveillance advocates to push the “trust us” argument when the incoming government has already indicated its strong desire to punish its internal enemies. Libertarians were already alarmed by the power of the surveillance state, and more Democrats may start seeing things their way pretty soon. The opportunity here is that there is now a new framing to help people see what a significant constitutional violation and danger this surveillance represents.

Encryption raises the same issues, and, as with mass surveillance, the public and even other members of Congress may soon come to the painful realization of how important it is for them and the public to have robust, workable, non-backdoored encryption available too. After all, as we saw with Nixon, it is not unprecedented for a President to spy on his political adversaries. But this time Trump can leverage the NSA to do it.

Net Neutrality/Intermediary immunity. There are (at least) two other policy areas where the importance of continuing to protect free speech principles remains evident. Regarding net neutrality, there’s little reason to believe Trump will have anything positive to contribute along these lines, unless he decides it is to his business advantage. But what has also become apparent from this election is the tremendous damage consolidated mass media can cause to democracy. Politics is too important to be left to just a few outlets to tell us about, yet without net neutrality that’s the situation we will be left with.

The danger posed by homogeneous media is also why bolstering the protection of internet intermediaries is so important. Their protection is what helps ensure that a diversity of voices can be heard. The unfortunate reality is that there will likely be a lot of calls by people unhappy with this election and its fallout to limit those voices, particularly those whose message is most divisive, and with them also the platforms that facilitate their speech. But it will be important to hold fast to the intermediary-shielding principles that have to date largely protected platforms from liability for their users’ content. It’s only by leaving them free to operate without fear of liability that they are best able to voluntarily refuse the most awful content and remain available for the most good. Neither is the case if the government effectively takes that decision away from them with the threat of punitive law, particularly when that law will inevitably reflect the government’s own agenda regarding what it considers worthwhile content.

Internet governance. With regard to Internet governance, at least the TPP appears to be dead and with it its speech-chilling provisions. Trump claims to detest free trade treaties, and in this regard his presidency may be helpful for innovation policy, which has been poorly served by US trade representatives trying to bind the United States into secretly negotiated international trade agreements that undermine key American liberties by imposing crippling limitations and liability on tech businesses and other platforms. On the other hand, from time to time international accords are helpful and even necessary for technology businesses to continue to thrive, innovate, and employ people worldwide. (See, e.g., the former Safe Harbor rules.) Unfortunately Trump’s presidency appears to have precipitated a loss of credibility on the world stage, creating a situation where it seems unlikely that other countries will be as inclined to yield to American leadership on any further issues affecting tech policy (or any policy in general) as they may have been in the past.

The bigger concern with respect to Internet governance, however, is whether tech policy advocates from America will be taken seriously in the future, if we go back on previous promises developed in thorough processes involving all stakeholders. It was already challenging enough to convince other countries that they should do things our way, particularly with respect to free speech principles and the like, but at least when we used to tell the world, “Do it our way, because this is how we’ve safely preserved our democracy for 200 years,” people elsewhere (however reluctantly) used to listen. But now people around the world are starting to have some serious doubts about our commitment to internet freedom and connectivity for all. So we will need to tweak our message to one that has more traction.

Our message to the world now is that recent events have made it all the more important to actively preserve those key American values, particularly with respect to free speech, because it is all that stands between freedom and disaster. Now is no time to start shackling technology, or the speech it enables, with external controls imposed by other nations to limit it. Not only can the potential benevolence of these attempts not be presumed, but we are now facing a situation where it is all the more important to ensure that we have the tools to enable dissenting viewpoints to foment viable political movements sufficient to counter the threat posed by the powerful. This pushback cannot happen if other governments insist on hobbling the Internet’s essential ability to broker these connections and ideas. It needs to remain free in order for all of us to be as well.

Questions #16 and #17 more specifically contemplate the effectiveness of the put-back process articulated at subsection 512(g). As explained in Section III.B, this mechanism is not effective for restoring wrongfully removed content and is little used. But it is worth taking a moment here to further explore the First Amendment harms the DMCA wreaks on both Internet users and service providers.[1]

It is part and parcel of First Amendment doctrine that people are permitted to speak, and to speak anonymously.[2] Although that anonymity can be stripped in certain circumstances, there is nothing about the allegation of copyright infringement that should cause it to be stripped automatically. Particularly in light of copyright law incorporating free speech principles,[3] this anonymity cannot be more fragile than it would be in any other circumstance where speech was subject to legal challenge. The temptation to characterize all alleged infringers as malevolent pirates who get what they deserve must be resisted; the DMCA targets all speakers and all speech, no matter how fair or necessary to public discourse this speech is.

And yet, with the DMCA, not only is speech itself more vulnerable to censorship via copyright infringement claim than it would be for other types of allegations,[4] but so are the necessary protections speakers depend on to be able to speak.[5] Between the self-identification requirements of subsection 512(g) put-back notices and the ease of demanding user information with subsection 512(h) subpoenas, which also do not need to be predicated on actual lawsuits,[6] Internet speakers on the whole must fear the loss of their privacy should anyone choose to assert an infringement claim, no matter how illegitimate or untested that claim may be. Given the ease of concocting an invalid infringement claim,[7] and the lack of any incentive not to,[8] the DMCA gives all-too-ready access to the identities of Internet users to the people least deserving of it, and at the expense of those who most need it.[9]

Furthermore, the DMCA also compromises service providers’ own First Amendment interests in developing the forums and communities they would so choose. The very design of the DMCA puts service providers at odds with their users, forcing them to be antagonistic to their own customers and their own business interests as a condition of protecting those interests. Attempts to protect their forums or their users can expose them to tremendous costs and potentially incalculable risk, and all of this harm flows from mere allegations that need never be tested in a court of law. The DMCA forces service providers to enforce censorship compelled by a mere takedown notice, compromise user privacy in response to subsection 512(h) subpoenas (or devote significant resources to trying to quash them), and, vis-à-vis Questions #22 and #23, disconnect users according to termination policies whose sufficiency cannot be known until a court decides they are not.[10]

The repeat infringer policy requirement of subsection 512(i)(A) exemplifies the statutory problem with many of the DMCA’s safe harbor requirements. A repeat infringer policy might only barely begin to be legitimate if it applied to the disconnection of a user after a certain number of judicial findings of liability for acts of infringement the user had used the service provider to commit. But at least one service provider has lost its safe harbor for not permanently disconnecting users after only a certain number of allegations, even though those allegations had never been tested in a court consistent with the principles of due process or the prohibition on prior restraint.[11]

In no other context would we find these sorts of government incursions against the rights of speakers constitutional, robbing them of their speech, anonymity, and the opportunity to further speak, without adequate due process. These incursions do not suddenly become constitutionally sound just because the DMCA coerces service providers to be the agent committing these acts instead.

[5] See McIntyre, 514 U.S. at 341-42 (1995) (“The decision in favor of anonymity may be motivated by fear of economic or official retaliation, by concern about social ostracism, or merely by a desire to preserve as much of one’s privacy as possible. Whatever the motivation may be, at least in the field of literary endeavor, the interest in having anonymous works enter the marketplace of ideas unquestionably outweighs any public interest in requiring disclosure as a condition of entry.”).

[6] Compare Fed. R. Civ. P. 45(a)(1)(A)(ii) (“Every subpoena must … state the title of the action and its civil-action number.”), with 17 U.S.C. § 512(h) (lacking any similar requirement or other mention that the subpoena be predicated on a commenced civil action). Note that many jurisdictions explicitly forbid pre-litigation discovery. See, e.g., Cal. Code of Civ. Proc. § 2035.010(b) (“One shall not employ the procedures of this chapter for the purpose … of identifying those who might be made parties to an action not yet filed.”). Many jurisdictions further require careful testing of a plaintiff’s claims before stripping Internet speakers of their anonymity. See, e.g., Krinsky v. Doe, 72 Cal.Rptr.3d 231, 241-246 (discussing standards for determining whether a plaintiff can be allowed to unmask an anonymous speaker).

[9] The abusive practices of many extortionate copyright plaintiffs illustrate why judicial oversight is required before Internet users are stripped of their privacy protection. See, e.g., AF Holdings, LLC v. Does 1-1058, 752 F.3d 990, 992 (D.C. Cir. 2014) (describing copyright plaintiffs who built a business on using subpoenas to discover people from whom they could demand settlement payments to avoid litigation, despite the putative plaintiffs not having a valid copyright to sue upon).

[11] Id. The court in this case also required the service provider to terminate users regardless of the impact on those users of being forced to exist in the modern world without broadband Internet connectivity. To the extent that this holding was drawn from a fair reading of the statute, it rests on an outdated premise: perhaps in the 20th century the consequences of losing Internet access were negligible, but in the 21st century we know they are not. There may not be many other options for broadband access available to terminated users, and the cumbersome nature of the DMCA combined with expansive theories of secondary liability do little to encourage investment by new market entrants.

Unlike Smith v. Obama and other similar cases, which argued that even collecting “just” telephonic metadata violated the Fourth Amendment, Jewel involves surveillance that collected communications in their entirety. It didn’t just catch the identifying characteristics of these communications; it captured their entire substance.

The Electronic Frontier Foundation originally filed this case in 2008 following the revelations of whistleblower Mark Klein, a former AT&T technician, that a switch installed in a secret room at AT&T’s facilities was diverting copies of all the Internet traffic passing through its systems to the government. This, the EFF argued in a motion for summary judgment, amounted to the kind of warrantless “search and seizure” barred by the Fourth Amendment.

As in Smith v. Obama, this surveillance necessarily implicates the Sixth Amendment in how it violates the privacy of communications between lawyers and their clients. But because the surveillance involves the collection of the content of these communications, it also inherently violates the Fifth Amendment right against self-incrimination.

And so a different rule applies. In various cases, including Miranda v. Arizona, which many people are familiar with, the Supreme Court has held that when the disclosure of one’s testimony is involuntary there is no requirement to expressly invoke one’s testimonial privilege. Here the disclosure of people’s testimony to the government is completely involuntary and so the Fifth Amendment should have prevented the capture of their information, whether they had been aware of it happening or not.

In Smith v. Maryland the Supreme Court held that it did not violate the Fourth Amendment for the government to acquire records of people’s calls. The government only violates the Fourth Amendment when it invades, without a warrant, an expectation of privacy that society recognizes as reasonable. But how could there be an expectation of privacy in the phone number a person dialed, the Supreme Court wondered. How could anyone claim the information was private if it had been voluntarily shared with the phone company? Deciding that it could not be considered private, the court found that no expectation of privacy was being invaded by the government’s collection of this information, and therefore that the collection could not violate the Fourth Amendment.

The problem is, in the Smith v. Maryland case the Supreme Court was contemplating the effect on the Fourth Amendment raised by the government acquiring only (1) specific call information (2) from a specific time period (3) belonging only to a specific individual (4) already suspected of a crime. It was not considering how the sort of surveillance at issue in this case implicated the Fourth Amendment, where the government is engaging in the bulk capturing of (1) all information relating to all calls (2) made during an open-ended time period (3) for all people, including (4) those who may not have been suspected of any wrongdoing prior to the collection of these call records. What Smith is arguing on appeal is that the circumstances here are sufficiently different from those in Smith v. Maryland such that the older case should not serve as a barrier to finding the government’s warrantless bulk collection of these phone records violates the Fourth Amendment.

In particular, unlike in Smith v. Maryland, in this case we are dealing with aggregated metadata, and as even the current incarnation of the Supreme Court has noted, the consequences of the government capturing aggregated metadata are much more harmful to the civil liberties of the people whose data is captured than the Supreme Court contemplated back in 1979. In U.S. v. Jones, a Fourth Amendment decision issued in 2012, Justice Sotomayor observed that aggregated metadata “generates a precise, comprehensive record” of people’s habits, which in turn “reflects a wealth of detail about [their] familial, political, professional, religious, and sexual associations.” One of the reasons we have the Fourth Amendment is to ensure that these associations are not chilled by the government being able to freely spy on people’s private affairs. But when this form of warrantless surveillance is allowed to take place, they necessarily will be.
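
The inferential power of aggregation that Justice Sotomayor described is easy to demonstrate. Here is a minimal sketch (the call records, numbers, and labels below are entirely hypothetical, invented only for illustration) of how repeated associations become visible from metadata alone, without any access to a call's content:

```python
from collections import Counter

# Entirely hypothetical call-detail records: (caller, callee, timestamp).
# No call content appears anywhere -- this is metadata only.
records = [
    ("555-0101", "defense-attorney", "2013-06-01T09:00"),
    ("555-0101", "defense-attorney", "2013-06-02T17:30"),
    ("555-0101", "health-clinic",    "2013-06-03T22:00"),
    ("555-0202", "pizza-shop",       "2013-06-01T19:00"),
]

# Aggregate: count how often each caller contacts each callee.
associations = Counter((caller, callee) for caller, callee, _ in records)

# The repeated association with a defense attorney is plainly visible.
print(associations[("555-0101", "defense-attorney")])  # prints 2
```

Scaled up to every call record of every person over an open-ended period, this same trivial aggregation yields exactly the kind of "precise, comprehensive record" of familial, political, professional, religious, and sexual associations the Jones concurrence warned about.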

While it’s bad enough that any associations are chilled, in certain instances that chilling implicates other Constitutional rights. The amicus brief by the Reporters Committee for Freedom of the Press addressed how the First Amendment is undermined when journalists can no longer be approached by anonymous sources because, if the government can easily discover evidence of their conversations, the sources effectively have no anonymity and will be too afraid to reach out. Similarly, the brief I wrote discusses the impact on the Sixth Amendment right to counsel when another type of relationship is undermined by this surveillance: that between lawyers and their clients.

It is because the lack of privacy can prejudice the client that these privacy protections exist. Their existence independently suggests that there is a legitimate expectation of privacy in lawyer-client relationships that society recognizes as reasonable, and that when the government invades that privacy, doing so without a warrant therefore violates the Fourth Amendment. But it’s because the client can be so prejudiced that the surveillance at issue here is so constitutionally problematic for reasons beyond just the Fourth Amendment: it also violates the Sixth Amendment.

The Sixth Amendment guarantees the right to counsel. This right has been interpreted to mean “effective” counsel. But lawyers cannot provide effective counsel when clients are not assured of sufficient secrecy within their relationship to induce their candor. As the Supreme Court has noted several times, clients need to be able to trust in the privacy of their communications with their attorneys in order to engage in the “full and frank” conversation necessary to fully apprise their lawyers of all the facts and circumstances needed to put on an effective defense.

But when the government can easily vacuum up evidence of these communications they are no longer private. When the government can look at the call records of an attorney and determine who might have turned to him for help, or whom he might have contacted when preparing a defense, the sphere of privacy the attorney-client relationship depends on to form and function effectively evaporates. The government is now a witness to these conversations, a third party that has inserted itself into the attorney-client relationship and whose presence necessarily chills it. When this chilling happens the Sixth Amendment right to counsel is undermined, and for this reason, too, the government’s program of indiscriminate, warrantless collection of nearly everyone’s phone records, including those documenting calls between lawyers and those seeking their help, cannot be allowed to continue.

Six degrees of Mohamed Atta
Mon, 30 Sep 2013
http://www.digitalagedefense.org/wp/2013/09/30/six-degrees-of-mohamed-atta/

Have you met me? Are we acquainted, either in real life or on social media? Have we even just had a passing exchange at some point via email? If so, congratulations – you are now connected to a 9/11 mastermind, and the NSA probably knows it.

I am not, of course, a 9/11 mastermind, nor have I been personally acquainted with anyone who was. In fact, by the time I learned of this connection, 9/11 had long since happened and Mohamed Atta was long since dead. But I have a friend in Germany who has a friend who was at the same college in Hamburg that Atta attended, and now, by the NSA’s logic, we are all tainted by the association.

Which is complete and utter nonsense, of course. Mere acquaintance (even when not so attenuated) is not a proxy for influence. Even friendship itself is not a proxy for influence. Relationships between people are many and nuanced and the simple knowledge of one person by another (or even a close social or familial tie) in no way connotes endorsement of every, or even any, aspect of one’s life by the other. Unfortunately the NSA doesn’t seem to realize this (or, more likely, doesn’t care).

It’s also just as wrong to attempt to derive meaning from these connections because it turns out we are all connected. There is a reason we play “Six Degrees of Separation”: it reveals the miracle of how upwards of seven billion people, spread out over a planet surface of nearly 200 million square miles, nonetheless share this pretty small world after all.

Obviously some people share in it more constructively than others. There are some who would choose to do violence to it. But not all of us, or even most of us, and in mapping all of our connections so indiscriminately we are all treated with the same suspicion and surveillance as the few actual bad actors. The NSA might argue that such surveillance of our interconnectivity is necessary to “discover and track” these bad actors, but by putting all of our lives under such scrutiny the NSA presumptively treats the innocent as equally guilty by association.
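
The "six degrees" point can be made concrete with a small sketch. Assuming a toy acquaintance graph (the names and links below are invented to mirror the chain described above), a simple breadth-first search shows how mechanically hop-based contact chaining sweeps in people with no meaningful connection to a target:

```python
from collections import deque

# Illustrative only: a tiny, hypothetical acquaintance graph.
# Real contact chaining operates on call and email metadata at vast scale.
graph = {
    "me": ["friend_in_germany"],
    "friend_in_germany": ["me", "friend_of_friend"],
    "friend_of_friend": ["friend_in_germany", "atta_classmate"],
    "atta_classmate": ["friend_of_friend", "atta"],
    "atta": ["atta_classmate"],
}

def within_hops(graph, start, max_hops):
    """Return everyone reachable from `start` in at most `max_hops` hops (BFS)."""
    seen = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if seen[node] == max_hops:
            continue  # don't expand past the hop limit
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen[neighbor] = seen[node] + 1
                queue.append(neighbor)
    return seen

# Three hops out from "atta" already reaches my friend in Germany;
# one more hop reaches "me". Reachability is all this measures.
print(within_hops(graph, "atta", 3))
```

Nothing in this computation distinguishes a conspirator from a classmate’s friend’s friend; it records only that a path exists, which, in a small world, it almost always does.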

Techdirt posts of the week (annotated)
Sun, 28 Jul 2013
http://www.digitalagedefense.org/wp/2013/07/28/techdirt-posts-of-the-week-annotated/

I was asked to write the “Posts of the Week” for Techdirt this past weekend and used it as an opportunity to convey some of the ideas I explore here to that audience. The post was slightly constrained by the contours of the project — for instance, I could only punctuate my greater points with actual posts that appeared on Techdirt last week — but I think it held together coherently, and I appreciated the chance to reframe some of the issues Techdirt was already exploring in this way.

It’s one thing for the government to pass laws that give citizens rights vis-à-vis each other. We see how the grant of these rights plays out, for instance, with most intellectual property law, and on that front this past week Techdirt covered one of the latest rounds in the Prenda saga, a powerful sanctions ruling by Judge Chen awarding the defendant the attorney fees he sought, with some devastating findings as to the abusiveness of the litigation overall. It’s no picnic being on the receiving end of a civil suit, especially not one as unjustified as those promulgated by Prenda or the defamation suit filed by an AIDS denialist against one of his critics — lawsuits that stand to be incredibly chilling to people using this amazing communications technology, the Internet.

Some commenters got confused and thought I was speaking to government creation of rights in a human rights sort of sense. Above I was merely referring to the creation of rights-as-causes-of-action that people can sue others for damages for violating, as in copyrights, rights guaranteed by tort law, contractual rights, etc. These are the types of things one private party would sue another for in civil court, and doesn’t need to call the police to help vindicate.

The state AGs argue that the intermediaries do, in fact, have something to do with the creation of the content, because but for the intermediaries that content would never have gotten posted. But that observation proves too much: but for Internet intermediaries absolutely no content gets posted on the Internet, and if we make it possible (and, indeed, likely) for an intermediary to be held liable (and in this case criminally liable) for all the content it intermediates, it will be too dangerous a proposition for an intermediary to enable even non-wrongful content to appear on its systems. As a consequence, whole swathes of legitimate content and ideas we currently get to enjoy on the Internet won’t be available.

And lest we think this chilling effect is a purely hypothetical concern, we saw it play out this week with KTVU demanding YouTube (an intermediary) remove the video of its anchors incorrectly listing the Asiana pilots’ names over the air, despite the fact that it remains (particularly as KTVU fires the people it believes responsible) an issue of public concern worthy of discussion. KTVU has this sort of leverage over YouTube because liability for intellectual property is already exempted from Section 230, meaning that the intermediary now has little choice but to remove content others posted through it in order to avoid having to potentially bear liability for it, even when such demands for deletion are nothing but a textbook case of censorship.

The KTVU example illustrates the failure of the Digital Millennium Copyright Act. Unlike Section 230, which completely immunizes the intermediary for the content it hosts as long as it was created by others, the DMCA conditions its “immunity” (which is actually a safe harbor from copyright infringement remedies, rather than immunity from suit) on the intermediary doing certain things, including, most importantly, removing content that a putative copyright owner claims violates its rights. In contrast, no such removal must happen for the intermediary to benefit from Section 230’s protections, which means that legitimate content is less vulnerable to attack by its critics and thus can remain available online. Not so with the DMCA, which, as the KTVU situation exemplifies, is ripe for abuse. At minimum the DMCA and its jurisprudence are in need of reform to alleviate the abuse problem, but there probably always will be attempts to game it, and thus the DMCA serves as a cautionary tale for why Section 230’s pure immunity is so important to maintain. For even when it comes to allegations of criminal wrongdoing in content (rather than copyright), we see that the government often gets those wrong too.

Of course, maybe there are good reasons certain Internet content should somehow be illegal, either civilly or criminally (for instance, maybe this Bitcoin outfit really was running a Ponzi scheme), and, subject to constitutional limitations, as a democratic society we can make the law criminalize whatever we want it to in deference to those reasons. On a practical level, however, sometimes this lawmaking goes better than others, and last week Techdirt documented some of the highs and lots of the lows in lawmaking. One significant low is the continued development of the Trans-Pacific Partnership, an ultimately law-creating treaty that, rather than being driven by the priorities of the legislature constitutionally tasked with making law, is instead being negotiated in secret by the executive branch. Even if the TPP were to turn out to be the most wonderful treaty affecting US law in the most wonderful of ways, it fails utterly as an example of representative democracy, and the legal liabilities it stands to create are therefore of little legitimacy.

This phenomenon of having the executive branch negotiate the US into treaties that will require changing its laws in order to comply in a way that Congress was unwilling to do legislatively on its own political initiative is known as “policy-laundering.” These treaties force Congress to either amend its laws — at times even those with criminal sanctions — or allow the US to be in breach of the treaty and subject to economic sanctions.

But even if these programs are consistent with either their enabling statutory language or previous Fourth Amendment case law, it is not at all clear that they are consistent with either the spirit or bare language of the Fourth Amendment:

The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.

Over time, on a case by case basis specific to the facts before them, courts have whittled away at what we might understand the Fourth Amendment to protect. Which is unfortunate, because on its face it would appear to protect quite a bit of personal privacy from government intrusion, except under very narrow circumstances. But as we learn more about these surveillance programs we see how, even if “legal,” they intrude upon that privacy, and in a way that essentially destroys all vestiges of it for everyone, criminal (or foreign) or not.

Digital photos and movies of kittens
Ebooks that I’ve read and writings I’ve written
Torrents of email my ISP brings
These are a few of my tangible things

Cookies and beacons and Photoshop doodles
Cell calls and VoIP calls and searches with Google
Web browsing hist’ry and cloud-based storings
These are a few of my tangible things

Data that’s sitting in unsalted hashes
Logfiles that show when my laptop it crashes
GPS footprints and TCP pings
These are a few of my tangible things

When my job bites
When some news stings
When I’m feeling sad
I simply relate all my thoughts and feelings
For the NSA to grab

Perhaps not what Rodgers and Hammerstein had in mind, but likely not what the Founding Fathers had in mind either.

A shielding law (cross-post)
http://www.digitalagedefense.org/wp/2013/06/16/a-shielding-law/
Sun, 16 Jun 2013

While originally I intended this blog to focus only on issues where cyberlaw collided with criminal law, I’ve come to realize that this sort of analysis is advanced by discussing the underlying issues separately, even when they don’t implicate criminal law or even technology. For example, discussion of how copyright infringement is being criminally prosecuted is aided by discussion of copyright policy generally. Similarly, discussions about shield laws for bloggers are advanced by discussions of shield laws generally, so I’ve decided to import one I wrote recently on my personal blog for readers of this one:

Both Ken @ Popehat and “Gideon” at his blog have posts on the position reporter Jana Winter finds herself in. To briefly summarize, the contents of the diary of the alleged Aurora, CO, shooter ended up in her possession, ostensibly given to her by a law enforcement officer with access to it and in violation of judicial orders forbidding its disclosure. She then reported on those contents. She is not in trouble for having done the reporting; the problem is, the investigation into who broke the law by providing the information to her in the first place has reached an apparent dead end, and thus the judge in the case wants to compel her, under penalty of contempt that might include jailing, to disclose the source who provided it, despite her having promised to protect the source’s identity.

In his post Gideon makes a compelling case for the due process issues at stake here. What’s especially notable about this situation is that the investigation isn’t just an investigation into some general wrongdoing; it’s wrongdoing by police that threatens to compromise the accused’s right to a fair trial. However you might feel about him and the crimes for which he’s charged, the very fact that you might have such strong feelings is exactly why the court was motivated to impose a gag order preventing the disclosure of such sensitive information: to attempt to preserve an unbiased jury who could judge him fairly, a right he is entitled to by the Constitution irrespective of his ultimate innocence or guilt, and one the police have no business trying to undermine.

Ken goes even further, noting the incredible danger to everyone when police and journalists become too chummy, as perhaps happened in the case here. Police power is power, and left unchecked it can often become tyrannically abusive. Journalists are supposed to help be that check, and when they are not, when they become little but the PR arm for the police, we are all less safe from the inherent danger that police power poses.

But that is why, as Ken and Gideon wrestle with the values of the First Amendment versus those of the Fifth and Sixth, the answer MUST resolve in favor of the First. There is no way to split the baby such that we can vindicate the latter interests here without inadvertently jeopardizing these and other important interests further in the future.

Ken began with a personal anecdote that shaped his view, so I will include mine. On my watch as editor of the high school newspaper, we accepted, under condition of anonymity, a letter confessing to an act of politically-motivated criminal mischief. (More specifically, the source of the letter claimed to have ripped up the “no parking” signs and painted very real-looking parking spaces on the pavement in order to protest a much-loathed-by-students policy forbidding students from parking on the streets neighboring the high school.) Neither the underlying defiant act, nor the letter, sat well with school officials. Enraged with embarrassment that this crime had happened under their noses, together with the town police they went on the warpath to find the culprits. The miscreant(s) had woken the bear, and he was hungry for fresh meat, even if it was that of journalists. I was called into the principal’s office and (erroneously) threatened with charges of perjury if I did not divulge the source of the letter. (Important note: as powerful as public officials may be, their power does not necessarily correlate with their correctness.) I refused and got a lawyer instead.

Would the world have ended if I’d divulged the source? Maybe not. Maybe no one would have even gone to jail. But here was an issue relevant to the community that we were able to fully report on only with the help of the source. (Indeed, many students wanted to know what had transpired, because seeing the spaces and no signs, they’d parked in them and then gotten tickets.) If as a journalist I couldn’t get that sort of assistance because my promises of anonymity were meaningless, there would be a lot less I could report on – no matter how much the community really needed to know it. Which brings us back to the situation in Aurora.

Ken and Gideon are likely right that in this instance the divulging of the diary’s contents was a craven abuse of police power and position – and one that potentially represents real harm to the due process rights of the defendant. But I don’t think there is a way we could except this particular situation from the shield law (“shield law” being the term for the law generally permitting journalists to protect the identity of their sources, also sometimes referred to as “newsman’s privilege” or something similar) without doing some violence to the shield law’s durability and utility in other ways.

Since the 1970s we have seen the journalist’s privilege to protect a source as a qualified one that can be balanced against other compelling state interests. Even the Colorado shield law statute makes clear the privilege is not absolute. But great care must be taken not to back away from it too easily – and subsequent jurisprudence supports this view – for the very reasons Ken and Gideon contemplate for why they may be tempted to do so here: because police power can so easily be abused.

It wasn’t just abused today, in this instance, but may also be tomorrow in many others, and we need to be able to know about it. But we are much less likely to when sources are chilled from coming forward and informing journalists about the things the public needs them to report on. Today, yes, it seems the anonymous police source has sought the shield of anonymity simply to protect himself from the consequences of having done something both highly illegal and gravely wrong. But what if tomorrow an anonymous police source seeks the shield of anonymity to protect himself when he does something that might similarly be illegal but, on balance, nonetheless right? Like, for instance, whistleblowing on other police abuse?

Whenever the shield law is asserted it’s never really about that particular situation; it’s always about being able to assert it in future situations, and that ability is undermined when the assertion can so easily be countermanded with post hoc judicial review. Both sources and reporters need a way to anticipate whether the promise of anonymity will be real or illusory, and the more frequently and more easily the promise is punctured the more illusory it will become. True, the Colorado shield law statute does contemplate situations under which the shield might be made to yield, but for the shield to retain any meaning these situations must be defined as narrowly as possible, practically to the point of never, even in the face of extremely compelling countervailing reasons. The privilege cannot be denied based on the merit of the reporter’s story, for no one is fit to arbitrate that worth. It cannot be denied based on the specific crime revealed by the information the source divulged, nor based on any crime potentially committed when the source divulged it, for no amount of journalist testimony will ever provide a cure for those crimes, and it’s sometimes only the promise of anonymity that lets us know such a crime had even occurred. And it cannot be denied based on the interest, no matter how valid or important, that might potentially be jeopardized by the privilege’s assertion, for that is never the only interest in play.

First Amendment-enabled protections like shield laws provide an escape valve from the tyranny that abusive police actions present. If, as Ken and Gideon ably argue, we need to ensure we have some defense against this power, then we need to make sure that important safety measures such as the newsman’s privilege remain in place, as potent as ever, to protect us.

Pervasive surveillance
http://www.digitalagedefense.org/wp/2013/05/14/pervasive-surveillance/
Tue, 14 May 2013

This blog post has been prompted by news that the Department of Justice had subpoenaed the phone records of the Associated Press. Many are concerned about this news for many reasons, not the least of which being that the revelation suggests that, at minimum, the Department of Justice violated many of its own rules in how it did so (i.e., it should have reported the existence of the subpoena within 45 days, maybe 90 on the outside, but here it seems to have delayed a year). The subpoena of the phone records of a news organization also threatens to chill newsgathering generally, for what sources would want to speak to a reporter if the government could be presumed to know that these communications had been taking place? For reasons discussed in the context of shield laws, reporters can’t do their information-gathering-and-sharing job if the people they get their information from are too frightened to share it. Even if one were to think that in some situations loose lips do indeed sink ships and it’s sometimes bad for people to share information, there’s no way the law can differentiate, presumptively or prospectively, which situations are bad and which are good. In order for the good situations to happen – for journalists to help serve as a check on power – the law needs to give them a free hand to discover the information they need to do that.

But the above discussion is largely tangential to the point of this post. The biggest problem with the story of the subpoena is not *that* it happened but that, for all intents and purposes, it *could* happen, and not just because of how it affected the targeted journalists but because of how it would affect anyone subject to a similar subpoena for any reason. Subpoenas are not search warrants, where a neutral arbiter ensures that the government has a proper reason to access the information it seeks. Subpoenas are simply the form by which the government demands the information it wants, and as long as the government only has to face what amounts to a clerical hurdle to get these sorts of communications records there are simply not enough legal barriers to protect the privacy of the people who made them.

In 1968, when Congress passed the Wiretap Act, there was a strong public policy impetus to protect the privacy of communications, and the result is a fairly strong law that generally protects their contents from both the government and private parties (at least when it comes to telephonic communications; we will save for another day the question of whether the contents of other communications, such as Internet communications, are similarly protected). But this law has been interpreted to protect pretty much only the *content* of the communications. Other aspects of the communications, like their basic addressing information, are not covered by the same prohibitions against their access. And at first glance, this exception may seem to make sense: how could the phone company connect your call without knowing whom you want to talk to? That information – the number itself – cannot possibly be considered private since you have to share it in order to make the call. And, in the end, it’s just a number. Unlike the contents of communications, which are the private business shared only between the people communicating, the basic number doesn’t really convey any information about what was said.

Or does it? To return to the Associated Press story, the whole reason the government wanted all those phone records was because it was trying to find the source of certain information. It didn’t need to listen in to the phone calls for the privacy of those calls to be compromised; it simply needed to know who made them for that privacy interest to be violated. Ah, but what privacy interest, the government would say, pointing to Smith v. Maryland, 442 U.S. 735, 744 (1979), wherein the Supreme Court held that government access of call records was not a search because the caller had “voluntarily conveyed numerical information to the telephone company.” This supposition of non-privacy is certainly debatable to the extent that this “third party doctrine” applies to even a single call record, but it most certainly is suspect when it comes to government access of multiple call records, as was the case with the AP. In this case the government apparently got *all* call records for an extended period of time, thereby learning everything about who AP reporters were communicating with, whether they were talking to the target of the government investigation or not.

And that’s where yesterday’s post about Wiretap Act jurisprudence comes in. One of the lessons from Deal v. Spears and the Sixth Circuit US v. Jones case also cited was that wholesale, indiscriminate, undetectable surveillance of communications offends the privacy standards laid out by the Fourth Amendment and statutorily enshrined in the Wiretap Act. Under both the Fourth Amendment and the Wiretap Act surveillance of some communications could still be acceptable, but only on a case by case basis when the specific communication warranted it. The defendants in Spears still could not listen to all calls, even if they would have had the right to listen to some calls. Nor could the defendant in Jones.

So, you might say, in those cases we are dealing only with the content of phone calls, which are clearly covered by the Wiretap Act. But the Wiretap Act’s principles have been extended beyond basic telephony before, when the situation appeared to warrant doing so. Video surveillance, for example, was not contemplated back when the original Act was passed in the late Sixties, but eventually the courts came to find that the Wiretap Act reached it. See, for instance, U.S. v. Taketa, 923 F.2d 665, 677 (9th Cir. 1991), finding that “the silent, unblinking lens of the camera was intrusive in a way that no temporary search … could have been.”

What we see in these cases, as well as in Justice Sotomayor’s concurrence in a different US v. Jones case, is an emerging judicial sense that the pervasiveness of a form of surveillance can implicate a privacy interest in a way that a single access of not-strictly-private information may not. In other words, it is access to this data in bulk that violates Fourth Amendment principles, precisely because it is done in bulk. As, indeed, it logically should. The Fourth Amendment allows for government access of private information when there is a particular need for it. But wholesale access to information hardly satisfies the Fourth Amendment’s requirement for particularization, and as we see in the case of the AP, it can reveal to the government in aggregate what it never could have discovered in the singular.
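
To make concrete how aggregation changes the privacy calculus, consider a toy sketch in Python. Every record, name, and threshold below is hypothetical, invented purely for illustration; the point is that even with no call *content* at all, trivial counting over bulk call-detail records surfaces who talks to whom, how often, and when:

```python
from collections import Counter
from datetime import datetime

# Hypothetical call-detail records: metadata only, no content.
# Each tuple is (caller, callee, timestamp) -- the kind of
# "addressing information" Smith v. Maryland leaves unprotected.
records = [
    ("reporter-1", "official-A", datetime(2013, 4, 1, 22, 15)),
    ("reporter-1", "official-A", datetime(2013, 4, 3, 23, 5)),
    ("reporter-1", "official-A", datetime(2013, 4, 8, 22, 40)),
    ("reporter-1", "pizzeria", datetime(2013, 4, 2, 18, 0)),
    ("reporter-2", "official-A", datetime(2013, 4, 5, 21, 30)),
]

# In isolation each record reveals only a number dialed.
# In bulk, simple analysis yields a relationship map.
pair_counts = Counter((caller, callee) for caller, callee, _ in records)
late_night_calls = [r for r in records if r[2].hour >= 21]

top_pair, count = pair_counts.most_common(1)[0]
print(top_pair, count)        # the most frequent pairing and its frequency
print(len(late_night_calls))  # how many calls happened after 9 p.m.
```

A single row in that table tells the government almost nothing; the aggregate practically names the source. That asymmetry is exactly what the particularization requirement is supposed to guard against.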

Deal v. Spears
http://www.digitalagedefense.org/wp/2013/05/13/deal-v-spears/
Tue, 14 May 2013

One of the cases I came across when I was writing an article about Internet surveillance was Deal v. Spears, 980 F. 2d 1153 (8th Cir. 1992), a case involving the interception of phone calls that was arguably prohibited by the Wiretap Act (18 U.S.C. § 2511 et seq.). The Wiretap Act, for some context, is a 1968 statute that applied Fourth Amendment privacy values to telephones, and in a way that prohibited both the government and private parties from intercepting the contents of conversations taking place through the telephone network. That prohibition is fairly strong: while there are certain types of interceptions that are exempted from it, these exemptions have not necessarily been interpreted generously, and Deal v. Spears was one of those cases where the interception was found to have run afoul of the prohibition.

It’s an interesting case for several reasons, one being that it upheld the privacy rights of an apparent bad actor (of course, so does the Fourth Amendment generally). In this case the defendants owned a store that employed the plaintiff, whom the defendants strongly suspected – potentially correctly – of stealing from them. In order to catch the plaintiff in the act, the defendants availed themselves of the phone extension in their adjacent house to intercept the calls the plaintiff made on the store’s business line to further her crimes. Ostensibly such an interception could be exempted by the Wiretap Act: the business extension exemption generally allows business proprietors to listen in on calls made in the ordinary course of business. (See 18 U.S.C. § 2510(5)(a)(i).) But here the defendants didn’t just listen in on business calls; they recorded *all* calls that the plaintiff made, regardless of whether they related to the business or not, and, by virtue of being automatically recorded, without the telltale “click” one hears when an actual phone extension is picked up, which would have put the callers on notice that someone was listening in. This silent, pervasive monitoring of the contents of all communications put the monitoring well beyond the statutory exception that might otherwise have permitted a more limited interception.

[T]he [defendants] recorded twenty-two hours of calls, and […] listened to all of them without regard to their relation to his business interests. Granted, [plaintiff] might have mentioned the burglary at any time during the conversations, but we do not believe that the [defendants’] suspicions justified the extent of the intrusion.

[T]here is a vast difference between overhearing someone on an extension and installing an electronic listening device to monitor all incoming and outgoing telephone calls.

And so the defendants, hapless victims though they seemed to have been in their own right, were found to have violated the Wiretap Act.

But Deal v. Spears is a telephone case, and telephone cases are fairly straightforward. The statutory language clearly reaches the contents of those communications made with that technology, and all that’s really been left for courts to decide is how broadly to construe the few exemptions the statute articulates. What has been much harder is figuring out how to extend the Wiretap Act’s prohibitions against surveillance to communications made via other technologies (i.e., the Internet), or to aspects of those communications that seem to relate more to how they should be routed than to their underlying message. But privacy interests are privacy interests, and no amount of legal hairsplitting alleviates the harm that can result when any identifiable aspect of someone’s communications can be surveilled. There is a lot that the Wiretap Act, both in terms of its statutory history and its subsequent case law, can teach us about surveillance policy, and we would be foolish not to heed those lessons.

More on them later.

The high cost of the Golden Gate Bridge cashless tolls
http://www.digitalagedefense.org/wp/2013/03/27/high-cost-of-the-golden-gate-bridge-cashless-tolls/
Wed, 27 Mar 2013

I was interviewed yesterday about my concerns for the new Golden Gate Bridge toll system. Like an increasing number of other roadways, as of this morning the bridge will have gone to all-electronic tolling and done away with its human toll-takers, ostensibly as a cost-cutting move. But while it may save the Bridge District some money on salaries, at what cost does it do so to the public?

With the toll-takers bridge users could pay cash, anonymously, whenever they wanted to use the bridge. Fastrak, the previous electronic toll system, has also been an option for the past several years, offering a discount to bridge users who didn’t mind having their travel information collected, stored, and potentially accessed by others in exchange for some potential expediency. But now bridge users will either have to use Fastrak, or agree to have their license plates photographed (and thereby have their travel information collected, stored, and potentially accessed by others) and then compared to DMV records in order to then be invoiced for their travels.

As an aside, there are plenty of valid logistical concerns about this new system. Even for locals the options for paying are myriad and confusing, and the change-over has happened on such a rapid timeline that the public may not have realized that this rather drastic change to the toll system is in fact upon them. Furthermore, the Golden Gate Bridge draws countless tourists, including many from other countries, and while the privacy implications may be smaller for people making one-off trips across the bridge, the administrative hassles of paying for them may in effect impose an extra tax on these tourists. There’s also the question of what happens if people don’t pay the tolls: for in-state drivers the DMV can freeze their auto registrations, but for out-of-state drivers the only leverage will be to send them to collections, a “solution” which raises significant due process concerns as well as increased enforcement costs for the Bridge District.

But the biggest problem with the change is its privacy implications. With the change the only convenient way to use the bridge now is to set up an account, either with a Fastrak transponder or by association with a license plate, that is connected to a credit card. But either arrangement means the Bridge District, a state actor, or the Fastrak administration, another state actor that is in charge of all the electronic toll collections, will be able to develop a detailed inventory of bridge crossings associated to identifiable travelers. Moreover, per the privacy policy, this data can be retained for up to 4.5 years after the billing cycle has completed and the balance finally satisfied. As I told the reporter today, there are plenty of Law & Order episodes where the cops used EZPass (New York’s Fastrak equivalent) records to figure out whom they wanted to drag down to the police station. That real life California cops might do the same is hardly a fanciful fear.

Thankfully it does appear that there is a way to avoid having one’s travels collected in this database, but it’s an extremely cumbersome and inconvenient alternative. One must go to the Fastrak Customer Service Center, of which there is only one (in San Francisco, on The Embarcadero at Broadway), and buy a Fastrak transponder with a $20 deposit and $50 pre-load minimum. When the balance drops (which, at $5 per toll, will happen quickly) people can then add more cash to top off their accounts, but the locations for doing so are few and far between and not particularly well-identified anywhere on the Bridge District website.
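
The burden of that anonymous option is easy to quantify with back-of-the-envelope arithmetic, using only the figures above ($20 deposit, $50 minimum pre-load, $5 per crossing):

```python
# Back-of-the-envelope cost of the anonymous cash option,
# using the figures from the post.
DEPOSIT = 20   # one-time transponder deposit, in dollars
PRELOAD = 50   # minimum cash pre-load, in dollars
TOLL = 5       # cost per bridge crossing, in dollars

upfront_outlay = DEPOSIT + PRELOAD     # cash required just to get started
crossings_per_topup = PRELOAD // TOLL  # crossings before the next cash errand

print(upfront_outlay)        # dollars needed up front
print(crossings_per_topup)   # crossings per $50 top-up
```

Seventy dollars up front for ten crossings, followed by a separate in-person errand every ten trips: that is the ongoing price of staying out of the database.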

Will people avail themselves of this option? Perhaps. Arguably everyone should, but it simply may be too much of a burden to have to make such large cash outlays as well as have to frequently run a separate and distinct errand in order to continue to pay cash anonymously. Thus for all intents and purposes, there is no real alternative to having one’s travels tracked, and all the privacy risks such tracking represents remain present, potent, and unmitigated. As I told the reporter, if cash-less tolling is the way of the future we will need to have a long conversation about what that means for the privacy rights of drivers. Right now they are being compromised just so the toll authorities can (arguably) save a buck, and that’s a bad bargain the public should not be forced to pay for.

Privacy priorities
http://www.digitalagedefense.org/wp/2013/03/27/privacy-priorities/
Wed, 27 Mar 2013

I’ve written before about the balance privacy laws need to strike with respect to the data aggregation made possible by the digital age. When it comes to data aggregated or accessed by the government, law and policy should provide some firm checks to ensure that such aggregation or access does not violate people’s Fourth Amendment right “to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures.” Such limitations don’t forever hobble legitimate investigations of wrongdoing; they simply require adequate probable cause before the digital records of people’s lives are exposed to police scrutiny. You do not need to have something to hide in order not to want that.

But all too often when we demand that government better protect privacy, it’s not government intrusion we want checked; on the contrary, we want the government to force private parties to protect privacy. Which isn’t to say that there is no room for concern when private parties aggregate personal data. Such aggregations can easily be abused, either by private parties or by the government itself (which tends to have all too easy access to it). But as this recent article in the New York Times suggests, a better way to construct the regulation might be to focus less on how private parties collect the data and more on the subsequent access to and use of the data once collected, since that is generally from where any possible harm could flow. The problem with privacy regulation that is too heavy-handed in how it allows technology to interact with data is that these regulations can choke further innovation, often undesirably. As a potential example, although mere speculation, this article suggests that Google discontinued its support for its popular Google Reader product due to the burdens of compliance with myriad privacy regulations. Assuming this suspicion is true — but even if it’s not — while perhaps some of this regulation vindicates important policy values, it is fair to question whether it does so in a sufficiently nuanced way that it doesn’t provide a disincentive for innovators to develop and support new products and technologies. If such regulation is having that chilling effect, we may reasonably want to question whether these enforcement mechanisms have gone too far.

Meanwhile public outcry has largely been ignoring much more obvious and dangerous incursions into their privacy rights done by government actors, a notable example of which will be discussed in the following post.

Follow, not lead
http://www.digitalagedefense.org/wp/2013/02/20/follow-not-lead/
Wed, 20 Feb 2013

At an event on CFAA reform last night I heard Brewster Kahle say what to my ears sounded like, “Law that follows technology tends to be ok. Law that tries to lead it is not.”

His comment came after an earlier tweet I’d made:

I think we need a per se rule that any law governing technology that was enacted more than 10 years ago is inherently invalid.

In posting that tweet I was thinking about two horrible laws in particular, the Computer Fraud and Abuse Act (CFAA) and the Electronic Communications Privacy Act (ECPA). The former attempts to forbid “hacking,” and the latter ostensibly tried to update 1968’s Wiretap Act to cover information technology. In both instances the laws as drafted generally incorporated the attitude that technology as understood then would be the technology the world would have forever hence, a prediction that has obviously proved false. But we are nonetheless left with laws like these on the books, laws that hobble further innovation by enshrining in our legal code what is right and wrong when it comes to our computer code as we understood it in 1986, regardless of whether, if we considered the question afresh and applied it to today’s technology, we would still think so.

To that tweet a friend posed a challenge, however: “What about Section 230?” (47 U.S.C. § 230). This is a law from 1996, and he has a point. Section 230 is a piece of legislation that largely immunizes Internet service providers from liability for content posted on their systems by their users – and let’s face it: the very operational essence of the Internet is all about people posting content on other people’s systems. However, unlike the CFAA and ECPA, Section 230 has enabled technology to flourish, mostly by purposefully getting the law itself out of the way of the technology.

The above are just a few examples of some laws that have either served technology well – or served to hamper it. There are certainly more, and some laws might ultimately do a bit of both. But the general point is sound: law that is too specific is often too stifling. Innovation needs to be able to happen however it needs to, without undue hindrance caused by legislators who could not even begin to imagine what that innovation might look like so many years before. After all, if they could imagine it then, it would not be so innovative now.

On regulating privacy and technology
http://www.digitalagedefense.org/wp/2012/05/19/on-regulating-privacy-and-technology/
Sun, 20 May 2012

There’s no discussing technology law without discussing how it implicates privacy. But privacy is such a broad concept; to discuss it in any meaningful way requires a more detailed definition.

I see there being (at least for purposes of the sort of discussion on this site) two main types of privacy: privacy from the government, and privacy from other individuals. And when it comes to regulating the intersection of privacy and technology, these two types of privacy require very different treatment.

It’s not that privacy is unimportant in either sphere. Knowledge is power, and knowledge of the details of people’s lives gives power over them. It therefore makes sense for law to regulate when and how that knowledge can be collected and used. But that regulation does not necessarily mean outright prohibition. It’s important to balance the reasons for and against information collection for both types of privacy — but that balance will be different for each.

Privacy from the government protects people from government intrusion in their affairs. This protection is important because information is the fuel the state uses to prosecute. Of course, sometimes the government does have a legitimate need to seek this information. Crimes do happen, and with probable cause to believe a particular person is culpable, the state may legitimately seek out informational evidence to prove it. But how we decide when those needs are legitimate, and how we allow technology to be deployed in furtherance of those needs, is something we need to carefully consider. Law should ensure that these exceptions are drawn no more expansively than necessary in order not to expose people to undue state scrutiny and prosecution.

Privacy from other individuals prevents those other individuals from leveraging the information they glean in a harmful way. But regulation of how this information is collected and used requires more nuance than the more absolute prohibition against government access to private data. For these issues we need to define such things as who is doing what data collection, under what pretense, for what purpose, with what notice, and for what benefit. We can’t regulate it all with a sledgehammer without inviting more problems.

While we may wish the technologies of today handled privacy better, we would certainly not want to wish them away entirely. We would not even be faced with these technologies’ downsides if we didn’t also benefit from their tremendous upsides, and if we don’t regulate carefully we risk destroying the latter while trying to deal with the former. We have only been able to get the benefits of these technologies because people were free to develop them as their imagination saw fit. But heavy-handed regulation will prevent innovators from developing the next exciting tools, better tools that might even be able to mitigate some of the privacy problems of the current generation. We need a regulatory regime that allows this future to develop, for who can innovate with the threat of liability hanging over their head? Privacy regulation of this type therefore needs a delicate touch, minimizing the harms technology may cause without undermining its promise.

Korean online portals and game companies to stop asking for user data
http://www.digitalagedefense.org/wp/2012/01/23/korean-online-portals-and-game-companies-to-stop-asking-for-user-data/
Mon, 23 Jan 2012 19:21:27 +0000

This article in the Korea Times reports that several large online presences in Korea have stopped asking for users’ resident registration numbers when users subscribe to their sites. The sites began asking in 2007 as a means of complying with the government’s requirement that users provide their real names. However, the government had no means of enforcing that rule on foreign websites, and the practice has led to instances of identity theft.

Nexon recently had the private data of 13 million users hacked. Nate and its sister social networking service Cyworld had 35 million users’ details compromised in a separate hack. After this series of private information leaks at large businesses like Nate, Nexon, Auction, and Hyundai Capital, virtually all Koreans’ resident registration numbers are now available.

Because these numbers hold the key to entering Internet sites, criminals can assemble almost anyone’s details by collecting information from just two or three websites, acquiring names, phone numbers, email addresses, home addresses, office addresses, shopping records, bank account numbers and even blood types.
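A minimal sketch (all records, site names, and field values below are invented for illustration) of why a single shared national identifier makes separate breaches compound: each site’s leaked dump is keyed by the same resident registration number, so combining them into one rich profile is a trivial dictionary merge.

```python
# Toy leaked dumps from two hypothetical sites, each keyed by the same
# resident registration number (RRN). All data here is made up.
shopping_site_leak = {
    "800101-1234567": {"name": "Hong Gildong", "shopping": ["laptop"]},
}
game_site_leak = {
    "800101-1234567": {"email": "gildong@example.com", "phone": "010-0000-0000"},
}

def merge_leaks(*leaks):
    """Fold per-site dumps into one combined profile per RRN."""
    profiles = {}
    for leak in leaks:
        for rrn, fields in leak.items():
            # The shared key does all the work: fields from every
            # breach accumulate under the same identifier.
            profiles.setdefault(rrn, {}).update(fields)
    return profiles

combined = merge_leaks(shopping_site_leak, game_site_leak)
print(combined["800101-1234567"])
```

Two small breaches, neither especially damaging alone, yield a single record holding a name, shopping history, email, and phone number. That is the structural hazard of a universal identifier.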

Some victims submitted a petition to the court last month, requesting they be allowed to change their registration number. “We are on the verge of suffering from more damage as we are forced to continuously use our leaked registration numbers with no countermeasures being taken so far,” the complainants said in their suit.

The Korea Communications Commission is now planning regulations that would prevent resident registration numbers from being held online.

Death by chocolate cupcakes
http://www.digitalagedefense.org/wp/2012/01/10/death-by-chocolate-cupcakes/
Tue, 10 Jan 2012 20:54:51 +0000

Yes, I do have other relevant things to blog about than more TSA antics. This isn’t supposed to be a TSA-only blog. But (a) some recent news is too outrageous/tempting to skip, and (b) there are relevant lessons to be extrapolated.

People in authority are very good at deeming things threats. They are very good at using their police power to exert control over what they deem threats. They are less good at exercising that authority in proportion to the actual problem, and as a consequence it’s very easy for innocent people to have their rights unduly affected.

These observations hold in many contexts, and technology regulation is no exception. Exercises of governmental power can easily be heavy-handed, imprecise, and ill-suited to the problems they purport to solve. The identification and definition of the underlying problems can be equally ham-fisted, and oftentimes ignorant of actual risk. Which is not to say that all government regulation is illegitimate. On the contrary, these examples illustrate why it’s important to question and discuss exactly when and how governments should be involved in technology use and development. They may well have important roles to play. But only if those roles are played with care.

Tracking suspects through “silent” SMS
http://www.digitalagedefense.org/wp/2012/01/06/tracking-suspects-through-silent-sms/
Fri, 06 Jan 2012 17:02:26 +0000

This post on F-Secure raises the specter of German authorities tracking suspects through clandestine use of the SMS system. (The post references an article on Heise Online whose headline translates to “Customs, Federal Police and Protection of the Constitution in 2010 sent more than 440,000 ‘silent SMS.’”)

So what exactly does this mean?

Well, basically, various German law enforcement agencies have been “pinging” mobile phones. Such pings reveal only whether the targeted device is online, just as an IP network ping from a computer would, and they trigger no notification on the handset itself.

But then, after making their pings, the agencies have been requesting network logs from the mobile network operators. The logs don’t reveal information from the phones themselves, but they can be used to identify the cell towers through which the pings traveled, and thus to track the targeted phone.
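To make the mechanics concrete, here is a toy sketch in Python (not any agency’s actual method; the tower coordinates and ping counts are invented) of how tower logs alone can yield a rough position estimate, using nothing fancier than a weighted centroid of the towers the pings passed through:

```python
from dataclasses import dataclass

@dataclass
class TowerSighting:
    lat: float     # cell tower latitude
    lon: float     # cell tower longitude
    pings: int     # how many silent-SMS pings this tower carried

def estimate_position(sightings):
    """Weighted centroid of the towers the pings traveled through:
    a crude but serviceable estimate of where the handset was."""
    total = sum(s.pings for s in sightings)
    lat = sum(s.lat * s.pings for s in sightings) / total
    lon = sum(s.lon * s.pings for s in sightings) / total
    return lat, lon

# Invented operator log: three towers, weighted by ping counts
log = [
    TowerSighting(52.520, 13.405, pings=5),
    TowerSighting(52.515, 13.390, pings=3),
    TowerSighting(52.525, 13.410, pings=2),
]
print(estimate_position(log))  # a point amid the three towers
```

The point of the sketch is that the phone itself never has to disclose anything: repeated pings plus the operator’s routing logs are enough to narrow a handset down to a neighborhood, and more pings sharpen the picture.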

TSA police power
http://www.digitalagedefense.org/wp/2012/01/05/tsa-police-power/
Thu, 05 Jan 2012 16:37:57 +0000

Maybe there’s more TSA news breaking these days. Or maybe it’s just that I’m noticing it more. Whatever the reason, on the heels of the last post I have some new items to add. But maybe it makes sense to begin by explaining what this topic is doing on this technology blog. Airline security is, after all, somewhat peripheral to the usual subject matter here, which ordinarily pertains to the law as applied to computing and communications. Airplanes are obviously a technology, but that’s not what makes air travel security salient to this blog; the law pertaining to air travel security is. For one thing, it frequently involves the government using technology for policing purposes, which is quite relevant to the questions raised here. Furthermore, the relevant law, and the underlying policy values motivating it, are often echoed in other aspects of technology law.

In any case, from a policy standpoint, here is an example where lawmakers decided to just throw a technology at a problem without first determining whether it was an appropriate or efficacious solution — or one that actually created a danger itself. Such unfortunate policymaking is certainly ripe for this blog.

The full-body scanning machines also perpetuate an illusion that policy makers have been reluctant to let go of: the idea that anything is knowable, and that anything SHOULD be knowable. Although the issue may have been somewhat ameliorated since they were first installed, the x-ray machines and the millimeter-wave full-body scanners also raise huge privacy concerns. It’s as if lawmakers envied Superman’s x-ray vision and decided that if they, too, could see through solid objects, like people’s clothes, they would be just as good at crime fighting. Superman is, of course, fiction, but with policies like this, so is the Fourth Amendment. The government does not get to know private details about people without probable cause of a crime, and wanting to board a plane is not a crime. Yet the government (or the federal government, at least) feels it should be entitled to search through passengers’ papers and effects, both literally and figuratively, in no small part, it seems, because it can. In the 2001 case Kyllo v. US the government had tried to use technology “to explore details of the home that would previously have been unknowable without physical intrusion” even though it had no warrant, and its efforts were stymied. But the restraint of the Kyllo decision seems to have less and less impact. The temptation to deploy technology that lets the government know everything about everyone is frequently too strong for the government to resist.

Which, returning to the TSA news, is why the proposed law to de-uniform the TSA confuses me. Some legislators are concerned that the TSA is acting like police without actually having police powers. The thing is, these people DO have police powers, whether they have been officially endowed with them or not. It’s not random people we let look through people’s luggage and under their clothes; it’s agents of the state using actual state power to deny the right to travel and to threaten people with arrest. Justly or unjustly, constitutionally or not, they are fulfilling a police function. It’s time to acknowledge it so that we can finally decide how much power we want these police to have.

Town council used surveillance law to spy on citizens and staff
http://www.digitalagedefense.org/wp/2011/12/28/town-council-used-surveillance-law-to-spy-on-citizens-and-staff/
Wed, 28 Dec 2011 14:25:12 +0000

This article in the Lancaster Telegraph suggests the practice may have ended in 2007, but between 2000 and then the Burnley Council used the Regulation of Investigatory Powers Act (RIPA) of 2000 to spy on its own staff.

The regulation was brought in in 2000 and allowed council bosses to carry out surveillance on residents they suspected of committing crimes.

The vast majority of uses of the act relate to offences such as benefit fraud, fly-tipping and anti-social behaviour.

…

A Burnley Council spokesman said: “The vast majority of cases where we have used RIPA authorisations were to tackle noise nuisance, anti-social behaviour, fly-tipping and benefit fraud – all things we know our residents want us to sort out.”

According to the article, local town councils used the law hundreds of times during this period to spy on residents. Ribble Valley Borough Council, for instance, used it for dog-fouling prosecutions. But Burnley also used it to spy on its staff.

In 2001 they used covert surveillance cameras to monitor instances of thefts within council buildings, and in the same year directly observed movements on and off the town hall car park to see whether the council’s flexi-time system was being abused.

In 2005 it was used three times to see if a council employee was using council facilities in work time, such as making personal phone calls, to avoid payment.

On two occasions they have snooped on staff near their homes to check whether they were genuinely off sick or were working for a third party.

And in 2006 they set up an observation to see if a council employee was using the gym and showers whilst clocked in.

The practice was halted in 2007 following a “clarification of the law by the courts elsewhere in the country,” said a Burnley Council rep.

“Next year, the law changes and councils will need to get the approval of a magistrate before carrying out surveillance. We are looking at our procedures to make sure we can respond to the new system effectively.”

Recent news in aviation safety – updated
http://www.digitalagedefense.org/wp/2011/12/27/recent-news-in-aviation-safety/
Tue, 27 Dec 2011 17:14:43 +0000

This upcoming week’s Quicklinks was starting to include quite a few items related to aviation safety, so I thought I’d distill them into one post. There is likely universal agreement that we want to be able to travel through the air safely. There is not, however, agreement on what sort of public policy is necessary to ensure that outcome.

Even regarding the same safety issues there’s no consensus. For example, passengers are forbidden from using certain portable electronics during takeoffs and landings for fear they’d cause electromagnetic interference that could disable the plane’s instruments. Unfortunately, whether that is a valid concern or a modern old wives’ tale is still subject to debate. This article in the New York Times Bits blog describes tests run on various devices, which did not seem to emit interference approaching dangerous levels, even in the aggregate. On the other hand, it’s worth reading this recent article from Salon.com’s Ask the Pilot columnist Patrick Smith as a counterpoint. He notes that even a minor blip in airplane instrument functionality could be risky, but, moreover, that the other reason to ban such devices during these periods is that they can become dangerous projectiles in an emergency. Sure, he observes, so can books, which aren’t banned, but if one is going to draw a line somewhere, this could be a reasonable place.