PayPal recently made news for implementing a policy denying its payment processing services to publications that include obscene content. There is much to object to in this policy, including the lack of any clear way to delineate what content would qualify as “obscene,” and its overall censorious impact.

But I’m not entirely sure that PayPal is necessarily the appropriate target for criticism of this policy. It may be, to the extent that this is a truly discretionary policy PayPal has voluntarily chosen to pursue. If it could just as easily have chosen not to pursue it, it can be fairly criticized for the choice it did make. For this policy is not as simple as banning certain objectively horrible content that 100% of all people would agree should be stricken from the face of the earth. After all, there *is* no objectively horrible content that 100% of all people would agree is objectionable. Instead this policy has the effect of denying market opportunities to all sorts of writers producing all sorts of valid content, even if some people may not happen to like it. And it does this not just by denying particular publications access to its services but by forcing electronic publishers to overcensor all the works they publish, lest PayPal services be shut off to their entire businesses.

“Online disappointment: Young Tunisian bloggers who promoted and recorded the events of the Arab Spring now find that, without a common enemy, the social media are just a cacophony of divided and conflicting views,” Smain Laacher and Cédric Terzi, LeMonde Diplomatique, Feb. 15, 2012.

“The U.N. Threat to Internet Freedom: Top-down, international regulation is antithetical to the Net, which has flourished under its current governance model,” Robert M. McDowell, Wall Street Journal, Feb. 21, 2012.

This article (in Dutch, translatable by Google) is about the Netherlands looking to amend its law to allow for government wiretapping. Apparently the government is currently allowed to intercept non-wired communications but not communications sent over a wire, and this law would change that. “Government investigates internettap for security,” Ot van Daalen, Bits of Freedom, Dec. 29, 2012.

The first relates to the need for a person to identify themselves when approached by police. (According to reports, upon finding the man walking his dogs off-leash she asked him his name and he gave a false one, an act that apparently had the effect of escalating the incident.) In some respects this aspect is slightly beyond the scope of this blog because it doesn’t directly involve a use of technology. But like the stories of the TSA, it does relate to the insistence of police authorities on being able to know everything about everyone, no matter what, and it runs headlong into constitutional protections that would otherwise shield people from that scrutiny.

The other relates to the use of technology by the state. This project generally takes the position that technology itself is neither good nor bad; it’s how it’s used that is either. And here we have a use of technology that seems extremely problematic.

When I was in first grade I got beaten up on the way home from school. It wasn’t too horrible as these things go: a kid came up from behind, grabbed the hood of my jacket, and swung me to the ground. He was in second grade and, as I look back on it, apparently having some issues with impulse control. But it was clearly unacceptable and I found it fairly traumatic (it was an absurdly safe neighborhood, so it wasn’t as if I was expecting trouble). So the school helped me identify the kid responsible and then addressed his behavior with him. At the time, and perhaps even in retrospect, all that seemed an appropriate role for the school to have played.

However, technically I wasn’t on school grounds anymore, and it didn’t take place while school was in session. The only connection to the school was that we had all just left it to walk home, and the kid was a fellow student there. And we were all so young, still learning how to get along with people as much as we were learning reading and math. These were life skills the school was trying to teach us too, and here was a very tangible teaching moment for the school to weigh in on.

But I do not find this logic compelling when it comes to the overreach some schools have engaged in with regard to student speech, including off-campus, online speech. Schools have been justifying their punishment of this speech with rationales similar to those my elementary school had for punishing my attacker: it’s disruptive to the school community, and people who attack others need to learn not to.

A week later he was arrested by five police officers, questioned for eight hours, and had his computers and phones seized. He was subsequently charged with, and convicted of, sending a menacing message under the Communications Act 2003.

Section 127 of the Communications Act 2003 provides that “[a] person is guilty of an offence if he sends by means of a public electronic communications network a message or other matter that is grossly offensive or of an indecent, obscene or menacing character.” This case appears to boil down to whether or not the tweet was truly menacing, and by whom, and under what standard, its menacing character should be judged.

Chambers insists he meant it as a joke. There is also no evidence that anyone actually took the tweet to be a credible threat. In fact, there is evidence that the authorities themselves did not take it to be a credible threat. However, because there was the possibility that it could have been taken as a threat, thus far the conviction has held.

In addition to the First Amendment, in the US free speech on the Internet is also advanced by 47 U.S.C. Section 230, an important law that generally serves to immunize web hosts from liability for user-generated content. (See Section 230(c)(1): “No provider . . . of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”). Note that this law doesn’t absolve the content of any intrinsic liability; it just means that the host can’t be held liable for it. Only the user who posted it can be.

This small rule has a big impact: if hosts could be held liable for everything their users posted, they would be forced to police and censor it. True, the effect of this immunity means that sometimes some vile content can make its way online and linger there, potentially harmfully. But it also means that, because hosts are not forced to be censorious middlemen, they are not tempted to kill innocuous, or even abjectly good, content. As a result all sorts of vibrant communities and useful information have been able to take root on the Web.

But for this immunity to really be meaningful, it’s not enough that it protect the host from a final award of damages. It’s extremely expensive to be dragged into court at all. If hosts legitimately fear having to go through the judicial process to answer for users’ content, they may find it more worth their while to become censorious middlemen with respect to that content, in order to ensure they never need go down this road.