The Nerfherder

Editorials on the cross-section where politics and culture meet cyberspace

Thursday, February 26, 2015

Blogger Will No Longer Allow Sexually Explicit Content. Here's Why This Is So Problematic...

Blogger - Google's blogging web service - just posted a message to all of its users announcing that on March 23rd it will "no longer allow certain sexually explicit content". The details state that if a blog does contain sexually explicit material, then on March 23rd the entire blog will be made private, visible only to individuals who have accepted an invitation from the administrator. It further states, "We'll still allow nudity if the content offers a substantial public benefit".

First, how exactly will Blogger define "sexually explicit"? The announcement offers no criteria, leaving users to guess where the line falls. Second, in a similar vein, how will they determine what "offers a substantial public benefit"? It's a definitional problem once again. And it's worth pointing out that the new policy doesn't indicate whether it will be human beings or an algorithm making the final judgment. Both methodologies have their flaws, so how comfortable should we be with either of them?

Third, the practical question for millions of Blogger users with a long posting history is: How will I know ahead of time if my entire blog will suddenly be taken offline? For example, The Nerfherder has been using Blogger since its inception in 2006. This is clearly not a blog dealing in sexually explicit content; however, this blog has occasionally reported on news events related to the regulation of such material. For instance, we once wrote a post about the outing of a troll on Reddit named "Violentacrez". In our post, we reported on how he had created forums titled "Jailbait" and "Rapebait", to name only a few. PLEASE, read the post for yourself and decide whether, in any way, shape, or form, you believe this should be considered "sexually explicit content". Should The Nerfherder now fear that this entire blog is about to be taken offline by Blogger because an algorithm might discover those phrases located in a post? It would at least be helpful to know if it was going to be taken offline ahead of time.
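To see why the algorithm question matters, here is a toy illustration (not Blogger's actual system, which is undisclosed) of how a naive keyword filter cannot distinguish journalism *about* explicit content from the content itself:

```python
# Toy sketch of naive keyword-based flagging. Blogger has not disclosed
# its method; this only illustrates the false-positive risk.
FLAGGED_TERMS = {"jailbait", "rapebait"}  # terms quoted in our Violentacrez news post

def naive_flag(post_text: str) -> bool:
    """Flag a post if any blacklisted term appears anywhere in its text."""
    text = post_text.lower()
    return any(term in text for term in FLAGGED_TERMS)

# A news report quoting the forum names gets flagged just like the forums themselves.
news_post = ("Reddit troll Violentacrez was outed as the creator of "
             "forums titled 'Jailbait' and 'Rapebait'.")
print(naive_flag(news_post))  # True: reporting is flagged as explicit content
```

A human reviewer would clear the news post instantly; a term-matching algorithm, as sketched here, cannot.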

Private companies certainly have the right to remove sexually explicit content from their service. There's no problem there. The problem is that Blogger should have 1) provided more detailed criteria for what would be deemed "sexually explicit", 2) offered additional criteria for what would be considered as having "substantial public benefit", 3) been transparent about whether this new policy was being implemented by algorithm or by human beings (in order to know who should be held accountable for egregious overreaches), and finally, 4) informed its users ahead of time if their blog was about to be taken offline so that they could take preemptive steps to avoid the takedown, as Blogger itself suggests.

Monday, February 23, 2015

Internet Governance and the New HTTP2 Protocol...

Proof that the Internet is, in fact, governed can most easily be found in the adoption of its technical standards and protocols. Think about it: despite the Internet's decentralization, certain protocols have to be designed and adopted by nearly everyone just to ensure that the Internet remains interoperable and functional. Not only does virtually everyone need to agree on these protocols, but clearly identifiable institutions have to make decisions, resolve conflicts, and maintain control over them. This authority is the very definition of governance.

Which brings us to last week's big news that the HTTP2 protocol has officially been completed. There is a single institution known as the Internet Engineering Task Force (IETF) - an international, non-profit organization - which is single-handedly responsible for making decisions over the Internet's standards and protocols. HTTP2, as its name suggests, is the next evolutionary leap forward for the classic HTTP protocol, which has been the Web's main standard for data communication since 1999 (HTTP/1.1), with its predecessor HTTP/1.0 dating to 1996.

So let's all celebrate! After all, this is the Web's open democratic process in action, right? Without the intervention of any national government, the Web has once again initiated an open participatory process, issued a Request for Comments (RFC), and ultimately built a rough consensus upon which it made a binding decision about its own future development. Is this not the self-governance and autonomy that early Internet evangelists predicted?

Well... There is one notable observation weakening the utopian self-governance argument. HTTP2 is based on SPDY, which was invented by Google, and later supported by Apple, Microsoft, Amazon, Facebook, and others. In fact, those companies pushed hard to get the IETF to formally adopt it. Some may argue that corporate influence has decreased the level of democratization in the process, rendering the IETF a mere agent of such corporations and institutionalizing their self-interested preferences. However, others will correctly point out that such corporate involvement has been a part of the IETF's standards-setting processes from the beginning, so it's really nothing new, and may even be considered crucial to a new protocol's widespread adoption.

Regardless of the power relationships involved in this aspect of Internet governance, the question many of you will undoubtedly have relates to relevance. How will this momentous development of the HTTP2 protocol affect your life? Mainly by speeding up your web browsing. And there's certainly not going to be a grassroots movement protesting that.
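Where does that speedup come from? One concrete change is that HTTP2 replaces HTTP/1.1's plain-text request lines with compact binary frames that are cheap to parse and can multiplex many streams over one connection. A minimal sketch, following the frame layout in the published spec (RFC 7540), of what the wire format looks like:

```python
import struct

# Every HTTP2 connection opens with this fixed 24-byte preface string
# (a deliberately invalid HTTP/1.x request, per RFC 7540).
CONNECTION_PREFACE = b"PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n"

def frame_header(length: int, frame_type: int, flags: int, stream_id: int) -> bytes:
    """Pack the fixed 9-byte HTTP2 frame header: 24-bit payload length,
    8-bit type, 8-bit flags, 1 reserved bit + 31-bit stream identifier."""
    return (struct.pack(">I", length)[1:]            # 24-bit length (drop high byte)
            + struct.pack(">BB", frame_type, flags)  # type and flags
            + struct.pack(">I", stream_id & 0x7FFFFFFF))  # reserved bit zeroed

# Example: an empty SETTINGS frame (type 0x4) on stream 0, which a
# client sends immediately after the preface.
settings = frame_header(length=0, frame_type=0x4, flags=0, stream_id=0)
print(len(CONNECTION_PREFACE + settings))  # 24-byte preface + 9-byte header = 33
```

Every HTTP2 message, however large, travels in frames with this same tiny fixed header, which is what makes multiplexing and prioritization practical.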

Wednesday, February 18, 2015

Why Google's Research Study on Data Localization and Cybersecurity Shouldn't Be Taken Seriously...

Earlier this week, Google announced the release of a research study - conducted by Leviathan Systems, but commissioned by Google - which sought to compare the security of cloud-based versus localized systems.

Many countries around the world have recently proposed laws that would require companies to keep the data about that country's users within national borders. For example, if a website in France was saving the personal data of French citizens, then the law would require the website to save that data somewhere within France's borders, as opposed to, say, California. The logic is two-fold: first, information about a country's citizens would stay out of the hands of spying foreign governments and, second, it would better enable countries to design and implement their own privacy laws (to that point, privacy laws are much stronger in the European Union than in the United States).

Predictably, Google and many other high-tech firms have come out against such laws requiring data localization. For them, it's an added expense. Google would need to back up and store user data within each such country in which it operates, rather than using Silicon Valley as its central hub for everything.

Because of this opposition, one has to be somewhat skeptical of a research study paid for by Google concluding that data localization is so clearly negative. Their argument is that cloud-based systems are more secure than localized ones, and that there would be a shortage of expertise within many countries to put stronger cybersecurity measures into effect.

It's not that there's no truth in that claim, it's just that we can be forgiven for being a little skeptical. This has become the modus operandi within the tech industry: lobby elected representatives, lobby regulatory agencies within the Executive Branch, and pay for-profit think-tanks to conduct research studies which, often, lead to predetermined results favorable to their sponsor.

From a purely economic point of view, of course Google wants to avoid data localization requirements. But there are non-economic arguments for why localization might be considered a positive - namely, the better protection of privacy rights. Google can hardly be considered unbiased, and thus, this study's conclusions shouldn't be considered authoritative, by any stretch.

Tuesday, February 10, 2015

Creating a Constitution with Open Data...

Most national or state constitutions aren't written from scratch, but rather are derivative works based on other national and state constitutions. For example, the constitution of Japan looks remarkably similar to that of the U.S. (largely because it was written in 1946 when the U.S. occupied Japan after World War II). In fact, on average, five new constitutions are written every year, and even more are amended.

Could modern data-driven technologies help in the constitution-drafting process? Furthermore, could any individual potentially create a constitution that would govern some type of entity using such tools as well? What would be the consequences of this?

Google Ideas launched a website called Constitute in 2013 which not only allows people to view and download every national constitution in the world, but also has a feature that enables easy comparisons between them. Furthermore, Constitute lets you mash up excerpts from different sources so that, in effect, you can embed your own constitutional ideas in a single document and share it on social media.
Going yet another step further, Constitute also makes all of their underlying data freely available through an open data portal, complete with its own API for programmers and research developers.
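What does structured constitutional data buy you? As an illustrative sketch only (Constitute's real API schema and endpoints aren't described here, so the data below is hand-made sample data in the kind of topic-tagged shape such a portal could return), comparison across constitutions becomes a one-line operation:

```python
# Hypothetical topic-tagged data; the topic labels and document names
# here are illustrative, not Constitute's actual schema.
sample_data = {
    "Japan 1946": {"free_speech", "due_process", "renunciation_of_war"},
    "United States 1789": {"free_speech", "due_process", "federalism"},
}

def shared_topics(data: dict, a: str, b: str) -> set:
    """Return the constitutional topics two documents have in common."""
    return data[a] & data[b]

print(sorted(shared_topics(sample_data, "Japan 1946", "United States 1789")))
# ['due_process', 'free_speech']
```

Once the texts are machine-readable and tagged, a drafter can ask "which rights do these twenty constitutions all share?" as easily as a set intersection, which is precisely the kind of comparison Constitute's interface exposes.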

It's an interesting exercise to think about what type of constitution you would create for governing Internet use in the United States. What ideas would it embody? What values and/or rights and liberties would it be designed to protect? This is not as hypothetical as you might imagine. Brazil actually passed such an Internet Constitution last year. How might an open data approach affect outcomes?