Today, the Congressional-Executive Commission on China conducted a hearing titled “Google and Internet Control in China: A Nexus Between Human Rights and Trade?” They had originally invited me to testify at a similarly titled hearing, “China, the Internet and Google,” which was postponed and rescheduled twice: the first attempt was foiled by the Great Snowcalypse; the second attempt, scheduled for March 1st, was postponed again at the last minute for some reason that isn’t entirely clear. Meanwhile I had already gone and written my testimony, improved by very helpful input from the CITP community. Unfortunately, when they rescheduled the hearing they said I was no longer invited. They wanted the hearing to have different witnesses from recent related hearings in both the House and Senate. Given that I appeared in both of those hearings, it seems reasonable that they’d want to hear from some other people.

Given the effort that went into my testimony, however, and since it drills down in a lot more detail on China than my testimony for the other hearings, I think there is some value in my sharing it with the world. Here is the PDF and here it is as a web page. Some highlights:

From the introduction:

China is pioneering a new kind of Internet-age authoritarianism. It is demonstrating how a non-democratic government can stay in power while simultaneously expanding domestic Internet and mobile phone use. In China today there is a lot more give-and-take between government and citizens than in the pre-Internet age, and this helps bolster the regime’s legitimacy with many Chinese Internet users who feel that they have a new channel for public discourse. Yet on the other hand, as this Commission’s 2009 Annual Report clearly outlined, Communist Party control over the bureaucracy and courts has strengthened over the past decade, while the regime’s institutional commitments to protect the universal rights and freedoms of all its citizens have weakened.

Google’s public complaint about Chinese cyber-attacks and censorship occurred against this backdrop. It reflects a recognition that China’s status quo – at least when it comes to censorship, regulation, and manipulation of the Internet – is unlikely to improve any time soon, and may in fact continue to get worse.

Overview of Chinese Internet controls

Chinese government attempts to control online speech began in the late 1990s with a focus on the filtering or “blocking” of Internet content. Today, the government deploys an expanding repertoire of tactics.

In other words, filtering is just one of many ways that the Chinese government limits and controls speech on the Internet. The full text then gives descriptions and explanations of the other tactics, but in brief they include:

deletion or removal of content at the source

device and local-level controls

domain name controls

localized disconnection or restriction

self-censorship due to surveillance

cyber-attacks

government “astro-turfing” and “outreach”

targeted police intimidation

I then describe a number of efforts by Chinese netizens to push back against these tactics, which include (see the full text for further explanation):

informal anti-censorship support networks

distributed web-hosting assistance networks

crowdsourced “opposition research”

preservation and redistribution of censored content

humorous “viral” protests

public persuasion efforts

I end with a set of recommendations. Once again, see the full text for explanations, but here is the basic list:

anti-censorship tools – including outreach and education in their use

anonymity and security tools – to help people better defend against cyber-attacks, spyware, and surveillance

platforms and networks for the capture, storage, and redistribution of content that gets deleted from domestic social networking and publishing services

support for “opposition research” – remember the Chinese netizens who deconstructed Green Dam?

corporate responsibility – see Global Network Initiative, but also appropriate legislation if American and other Western Internet companies fail to accept the idea that they have some obligations as far as free expression and privacy are concerned

private right of action – so that Chinese victims can sue U.S. companies in U.S. courts

incentives for innovation by the private sector that helps Chinese Internet users access blocked sites as well as protect themselves from attacks and surveillance.

My conclusion:

Many of China’s 384 million Internet users are engaged in passionate debates about their communities’ problems, public policy concerns, and their nation’s future. Unfortunately these public discussions are skewed, blinkered, and manipulated – thanks to political censorship and surveillance. The Chinese people are proud of their nation’s achievements and generally reject critiques by outsiders even if they agree with some of them. A democratic alternative to China’s Internet-age authoritarianism will only be viable if it is conceived and built by the Chinese people from within. In helping Chinese “netizens” conduct an un-manipulated and un-censored discourse about their future, the United States will not be imposing its will on the Chinese people, but rather helping the Chinese people to take ownership of their own future.

Over the past two weeks I’ve testified in both the Senate and the House on how the U.S. should advance “Internet freedom.” I submitted written testimony for both hearings, which can be downloaded in PDF form here and here. Full transcripts will become available eventually, but meanwhile you can click here to watch the Senate video and here to watch the House video. In both hearings I advocated a combination of corporate responsibility through the Global Network Initiative, backed up by appropriate legislation given that some companies seem reluctant to hold themselves accountable voluntarily; revision of export controls and sanctions; and finally, funding and support for tools, technologies, and activism platforms that will counteract suppression of online speech.

(1) award competitive, merit-reviewed grants, cooperative agreements, or contracts to private industry, universities, and other research and development organizations to develop deployable technologies to defeat Internet suppression and censorship; and (2) award incentive prizes to private industry, universities, and other research and development organizations to develop deployable technologies to defeat Internet suppression and censorship.

(b) LIMITATION ON AUTHORITY. – Nothing in this Act shall be interpreted to authorize any action by the United States to interfere with foreign national censorship in furtherance of law enforcement aims that are consistent with the International Covenant on Civil and Political Rights.

Whoever runs this foundation will have their work cut out for them in sorting out its strategies, goals, and priorities – and in dealing with a great deal of thorny politics. The Falun Gong-affiliated Global Internet Freedom Consortium has been arguing that it was unfairly passed over for recent State Department grants, which were given to other groups working on different tools that help you get around Internet blocking – “circumvention tools,” as the technical community calls them. For the past year it has been engaged in an aggressive campaign to lobby Congress and the media to ensure it will get a slice of future funds. (For examples of the fruits of this media lobbying effort see here, here, and here.)

But the unfortunate bickering over who deserves government funding more than whom has distracted attention from the larger question of whether circumvention on its own is sufficient to defeat Internet censorship and suppression of online speech. In his recent blog post, “Internet Freedom: Beyond Circumvention,” my friend and former colleague Ethan Zuckerman warns against an over-focus on circumvention: “We can’t circumvent our way around internet censorship.” In short, he summarizes his main points:

– Internet circumvention is hard. It’s expensive. It can make it easier for people to send spam and steal identities.

– Circumventing censorship through proxies just gives people access to international content – it doesn’t address domestic censorship, which likely affects the majority of people’s internet behavior.

– Circumventing censorship doesn’t offer a defense against DDoS or other attacks that target a publisher.
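The distinction between blocking and deletion at the source can be made concrete with a toy sketch. Everything below – the domain names, the blocklist, the return strings – is hypothetical illustration, not a real tool: a proxy can route a request around network-level blocking, but no proxy can recover content that the hosting platform itself has already removed.

```python
# Toy model (illustrative only, not a real circumvention tool).
BLOCKED_DOMAINS = {"example-blocked.org"}        # hypothetical firewall blocklist
DELETED_AT_SOURCE = {"example-host.cn/post/1"}   # content removed by the platform itself

def fetch(url, via_proxy=False):
    """Simulate fetching a URL from inside a filtered network."""
    domain = url.split("/")[0]
    if url in DELETED_AT_SOURCE:
        return "404: content removed by host"     # no proxy can fix this
    if domain in BLOCKED_DOMAINS and not via_proxy:
        return "connection reset by firewall"     # network-level blocking
    return "200: content delivered"

def fetch_with_circumvention(url):
    """Retry through a proxy only when the firewall, not the host, is the obstacle."""
    result = fetch(url)
    if result == "connection reset by firewall":
        result = fetch(url, via_proxy=True)       # circumvention works here
    return result
```

In this sketch, `fetch_with_circumvention("example-blocked.org/page")` succeeds, while the deleted post returns a 404 with or without a proxy – which is the shape of the problem Ethan describes: most Chinese censorship now happens at the source, where circumvention cannot reach.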

While circumvention tools remain worthy of support as part of a basket of strategies, I agree with Ethan that circumvention is never going to be the silver bullet that some people make it out to be, for all the reasons he outlines in his blog post, which deserves to be read in full. As Ethan points out, as I pointed out in my own testimony, and as my research on Chinese blog censorship published last year has demonstrated, circumvention does nothing to help you access content that has been removed from the Internet completely – which is the main way that the Chinese government now censors the Chinese-language Internet. In my testimony I suggested several other tools and activities that require an equal amount of focus:

Tools and training to help people evade surveillance, detect spyware, and guard against cyber-attacks.

Mechanisms to preserve and re-distribute censored content in various languages.

Platforms through which citizens around the world can share “opposition research” about what different governments are doing to suppress online speech, and collaborate across borders to defeat censorship, surveillance, and attacks in ad-hoc, flexible ways as new problems arise during times of crisis.

As Ethan puts it:

– We need to shift our thinking from helping users in closed societies access blocked content to helping publishers reach all audiences. In doing so, we may gain those publishers as a valuable new set of allies as well as opening a new class of technical solutions.

– If our goal is to allow people in closed societies to access an online public sphere, or to use online tools to organize protests, we need to bring the administrators of these tools into the dialog. Secretary Clinton suggests that we make free speech part of the American brand identity – let’s find ways to challenge companies to build blocking resistance into their platforms and to consider internet freedom to be a central part of their business mission. We need to address the fact that making their platforms unblockable has a cost for content hosts and that their business models currently don’t reward them for providing service to these users.

Which brings us to the issue of corporate responsibility for free expression and privacy on the Internet. I’ve been working with the Global Network Initiative for the past several years to develop a voluntary code of conduct centered on a set of basic principles for free expression and privacy based on U.N. documents like the Universal Declaration of Human Rights, the International Covenant on Civil and Political Rights, and other international legal conventions. It is bolstered by a set of implementation guidelines and evaluation and accountability mechanisms, supported by a multi-stakeholder community of human rights groups, investors, and academics all dedicated to helping companies do the right thing and avoid making mistakes that restrict free expression and privacy on the Internet.

So far, however, only Google, Microsoft, and Yahoo have joined. Senator Durbin’s March 2nd Senate hearing focused heavily on the question of why other companies have so far failed to join, what it would take to persuade them to join, and, if they don’t join, whether laws should be passed that induce greater public accountability by companies on free expression and privacy. He has written letters to 30 U.S. companies in the information and communications technology (ICT) sector. He expressed great displeasure in the hearing with most of their responses, and further disappointment that no company (other than Google, which is already in the GNI) even had the guts to send a representative to testify in the hearing. Durbin announced that he will “introduce legislation that would require Internet companies to take reasonable steps to protect human rights or face civil or criminal liability.” It is my understanding that his bill is still under construction, and it’s not clear when he will introduce it (he’s been rather preoccupied with healthcare and other domestic issues, after all). Congressman Howard Berman (D-CA), who convened Wednesday’s House Foreign Affairs Committee hearing, is also reported to be considering his own bill. Rep. Chris Smith (R-NJ), the ranking Republican at that hearing, made a plug for the Global Online Freedom Act of 2009, a somewhat revised version of a bill that he first introduced in 2006.

I said at the hearing that the GNI probably wouldn’t exist if it hadn’t been for the threat of Smith’s legislation. I was not, however, asked my opinion on GOFA’s specific content. Since GOFA’s 2006 introduction I have critiqued it a number of times (see for example here, here, and here). As the years have passed – especially in the past year as the GNI got up and running yet most companies have still failed to engage meaningfully with it – I have come to see the important role legislation could play in setting industry-wide standards and requirements, which companies can then tell governments they have no choice but to follow. That said, I continue to have concerns about parts of GOFA’s approach. Here is a summary of the current bill written by the Congressional Research Service (I have bolded the parts of concern):

5/6/2009–Introduced. Global Online Freedom Act of 2009 – Makes it U.S. policy to: (1) promote the freedom to seek, receive, and impart information and ideas through any media; (2) use all appropriate instruments of U.S. influence to support the free flow of information without interference or discrimination; and (3) deter U.S. businesses from cooperating with Internet-restricting countries in effecting online censorship. Expresses the sense of Congress that: (1) the President should seek international agreements to protect Internet freedom; and (2) some U.S. businesses, in assisting foreign governments to restrict online access to U.S.-supported websites and government reports and to identify individual Internet users, are working contrary to U.S. foreign policy interests. Amends the Foreign Assistance Act of 1961 to require assessments of electronic information freedom in each foreign country. Establishes in the Department of State the Office of Global Internet Freedom (OGIF). Directs the Secretary of State to annually designate Internet-restricting countries. Prohibits, subject to waiver, U.S. businesses that provide to the public a commercial Internet search engine, communications services, or hosting services from locating, in such countries, any personally identifiable information used to establish or maintain an Internet services account. Requires U.S. businesses that collect or obtain personally identifiable information through the Internet to notify the OGIF and the Attorney General before responding to a disclosure request from an Internet-restricting country. Authorizes the Attorney General to prohibit a business from complying with the request, except for legitimate foreign law enforcement purposes. Requires U.S. businesses to report to the OGIF certain Internet censorship information involving Internet-restricting countries. Prohibits U.S. businesses that maintain Internet content hosting services from jamming U.S.-supported websites or U.S.-supported content in Internet-restricting countries. Authorizes the President to waive provisions of this Act: (1) to further the purposes of this Act; (2) if a country ceases restrictive activity; or (3) if it is in the national interest of the United States.

My biggest concern has to do with the relationship GOFA would create between U.S. companies and the U.S. Attorney General. If the AG is made arbiter of whether content or account information requested by local law enforcement is for “legitimate law enforcement purposes” or not, that means the company has to share the information – which in the case of certain social networking services may include a great deal of non-public information about the user, who his or her friends are, and what they’re saying to each other in casual conversation. Letting the U.S. AG review the insides of this person’s account would certainly violate that user’s privacy. It also puts companies at a competitive disadvantage in markets where users – even those who don’t particularly like their own government – would consider an overly close relationship between a U.S. service provider and the U.S. government not to be in their interest. Take this hypothetical situation for example: An Egyptian college student decides to use a social networking site to set up a group protesting the arrest and torture of his brother. The Egyptian government demands the group be shut down and all account information associated with it handed over. In order to comply with GOFA, the company shares this student’s account information and all content associated with that protest group with the U.S. Attorney General. What is the oversight to ensure that this information is not retained and shared with other U.S. government agencies interested in going on a fishing expedition to explore friendships among members of different Egyptian opposition groups? Why should we expect that user to be ok with such a possibility?

Another difficult issue to get right – which gets even harder with the advent of cloud computing – is the question of where user data is physically housed. The Center for Democracy and Technology (PDF), Jonathan Zittrain, and others have discussed some of the regulatory difficulties of personally identifiable information and its location. In 2008 Zittrain wrote:

As Internet law rapidly evolves, countries have repeatedly and successfully demanded that information be controlled or monitored, even when that information is hosted outside their borders. Forcing US companies to locate their servers outside IRCs [Internet Restricting Countries] would only make their services less reliable; it would not make them less regulable.

If the goal of GOFA is to discourage US companies from violating human rights, then it will probably be successful. But if the goal of the Act is to make the Internet more free and more safe, and not just push rights violations on foreign companies, then more must be done.

Then there is the problem of the Internet Restricting Country designations themselves. I have long argued that it is problematic to divide the world into “internet restricting countries” and countries where we can assume everything is just fine, not to worry, no human rights concerns present. First of all, I think the list itself will quickly turn into a political and diplomatic football, subject to huge amounts of lobbying and politics, making it very difficult to add new countries to the list. Secondly, regimes can change fast: in between annual revisions of the list you can have a coup or a rigged election whose victors demand that companies hand over dissident account information and censor political information, but companies are off the hook – having “done nothing illegal.” Finally, while I am not drawing a moral equivalence between Italy and Iran, I do believe there is no country on earth, including the United States, where companies are not under pressure from government agencies to do things that arguably violate users’ civil rights. Policy that acknowledges this honestly is less likely to hurt U.S. companies in many parts of the world, where the last thing they need is for people to be able to provide “documentary proof” that they are extensions of the U.S. government’s geopolitical agendas.

Therefore a more effective, ethically consistent, and less hypocritical approach to the three problems I’ve described above would be to codify strict global privacy standards absolutely everywhere U.S. companies operate. Companies should be required by law to notify all users anywhere in the world – in a way that is clear and culturally and linguistically understandable not just to trained lawyers but to normal people – exactly how and where their personally identifying information is being stored and used, and who has access to it under what circumstances. If users are better informed about how their data is being used, they can exercise better judgment about how or whether to use different commercial services – and seek more secure alternatives when necessary, perhaps even using some of the new tools and platforms run by non-profit activist organizations that Congress is hoping to fund. Congress could further bolster the privacy of global users of U.S. services by adopting something akin to the Council of Europe Privacy Convention.

Regarding censorship: again, as the Internet evolves further with semi-private social networking sites and mobile services, we need to make sure that the information companies are required to share with the U.S. government doesn’t end up violating user privacy. I am doubtful that government agencies in some of the democracies unlikely to be put on the “internet restricting countries” list can really be trusted not to abuse the systems of censorship and intermediary liability that a growing number of democracies are implementing in the name of legitimate law enforcement purposes. Thus on censorship I also prefer global standards. There is real value in making companies retain internal records of the censorship requests they receive all around the world, in the event of a challenge in U.S. court regarding the lawfulness of a particular act of censorship – a private right of action in U.S. court which GOFA or its equivalent would potentially enable. It’s also good to make companies establish clear and uniform procedures for how they handle censorship requests, so that they can prove, if challenged in court, that they are only responding to requests made in writing through official legal channels, rather than responding to requests that have no basis even in local law while claiming vaguely to the public that “we are only following local law.” Companies should be required to exercise maximum transparency with users about what is being censored, at whose behest, and according to which law exactly. Congress could, for example, mandate that the Chilling Effects Clearinghouse mechanism or something similar be utilized globally for all content takedowns.
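As one illustration of what such internal record-keeping might involve – this is a sketch of my own, with made-up field names, not any actual industry or GNI schema – a company could log each censorship request with enough detail to show later, if challenged, whether the request arrived in writing through official legal channels and what law, if any, was cited:

```python
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class CensorshipRequest:
    """Hypothetical internal record of a government takedown request."""
    received: date
    requesting_authority: str   # which agency made the request
    legal_basis: str            # the specific statute cited, if any
    content_id: str             # what was asked to be removed
    in_writing: bool            # arrived through official legal channels?
    action_taken: str           # e.g. "removed", "geo-blocked", "refused"

# Example entry: an oral request citing no law, which the company refused.
req = CensorshipRequest(
    received=date(2010, 3, 1),
    requesting_authority="(hypothetical) municipal culture bureau",
    legal_basis="none cited",
    content_id="post/12345",
    in_writing=False,
    action_taken="refused",
)
record = asdict(req)  # serializable form, e.g. for audit or court discovery
```

The point of the sketch is simply that the fields a court or auditor would need – who asked, under what law, in what form, with what result – are cheap to capture at the moment a request arrives and nearly impossible to reconstruct afterward.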
(Originally posted at my blog, RConversation.)

The launch of Google Buzz, the new social networking service tied to GMail, was a fiasco to say the least. Its default settings exposed people’s e-mail contacts in frightening ways with serious privacy and human rights implications. Evgeny Morozov, who specializes in analyzing how authoritarian regimes use the Internet, put it bluntly last Friday in a blog post: “If I were working for the Iranian or the Chinese government, I would immediately dispatch my Internet geek squads to check on Google Buzz accounts for political activists and see if they have any connections that were previously unknown to the government.”

According to the BBC, the Buzz development team bypassed Google’s standard trial and testing procedures in order to launch the product quickly. Apparently, the company only tested it internally with Google employees and failed to test the product with a more diverse range of users, who would have been more likely to bring up the issues that were so glaringly obvious after launch. Google has apologized and moved to correct the most egregious privacy flaws, though problems – including security issues – continue to be raised. PC World has a good overview of Buzz’s evolution since launch.

Meanwhile, damage has been done not only to Google’s reputation but also to an unknown number of users who found themselves and their contacts exposed in ways they did not choose or want. Exposing vulnerable users without their knowledge or choice even for a few hours can potentially have irreversible consequences. While Google is scoring some points around the tech policy world for reacting quickly and earnestly to the strident user outcry, the Electronic Privacy Information Center (EPIC) has filed an official complaint with the FTC, and Canada’s Privacy Commissioner has expressed disappointment and asked Google to explain itself. (UPDATE: A class action complaint has been filed in San Jose, claiming that Google broke the law by sharing personal data without users’ consent.)

Earlier this week I asked people in my Twitter network how they’re feeling about Buzz after the fixes Google has made. Some are now reassured but others aren’t. Joe Hall wrote:

@rmack totally lost me for good.. I just can’t believe that they won’t do it again. It will have to be very useful/different to get me back

I was so concerned about exposing people in my GMail network during the first week after launch that I stayed off Buzz entirely until Monday afternoon. By then I felt that the worst privacy problems had been fixed, and I understood the settings well enough that I could at least go in without doing harm to others. After playing with it a bit and poking around, I posted some initial observations and invited the people in my network to respond. There are still plenty of issues – some people who claimed on Twitter that they had turned off Buzz are still there, and I think Buzz should make it easier for people to use pseudonyms or nicknames not tied to their email address if they prefer. From Beijing, Jeremy Goldkorn of the influential media blog Danwei responded: “I like the way Buzz works now, and it seems to me that the privacy concerns have been addressed.”

I’ve noticed that some Chinese Buzz users have been using it to post and discuss material that has been censored by Chinese blog-hosting platforms and social networking sites. If Buzz becomes useful as a way to preserve and spread censored information around quickly, it seems to me that’s a plus as long as people aren’t being exposed in ways they don’t want. My friend Isaac Mao wrote:

It’s more important to Chinese to make information flowing rather than privacy concern this moment. With more hibernating animals in cave, we can’t tell too much on the risks about identity, but more on how to wake up them.

Buzz has unleashed some potentials on sharing which just follows my Sharism theory, people actually have much more stuff to share before they realize them.

But I agree with any conerns on privacy, including the risks that authority may trace publishers in China. It’s very much possible to be targeted once they were notified how profound the new tool is.

The “Great Firewall” is already at work on Buzz, at least in Beijing. While most people seem to be able to access Buzz through GMail on Chinese Internet connections, numerous people report from Beijing that at least some Google profiles – including mine and Isaac’s – are blocked, though people in Shanghai and Guangzhou say they’re not blocked. Others in China report having trouble posting comments to Buzz, though it’s unclear whether this is a technical issue with Buzz or a Chinese network blocking issue, or some strange combination of the two.
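The ambiguity reported here – is a failed comment a Buzz bug or a network blocking issue? – can at least be narrowed with rough heuristics. The sketch below encodes my own assumptions, not an established diagnostic: connection resets and poisoned DNS answers are well-known firewall tactics and point toward network-level interference, while server errors that reproduce from unfiltered locations point toward the application itself.

```python
# Rough heuristic sketch (my assumptions, not a definitive diagnostic):
# classify a failure symptom as network blocking, application trouble,
# or genuinely ambiguous.

NETWORK_BLOCKING = {"tcp_reset", "dns_poisoned"}          # classic firewall signatures
APPLICATION_SIDE = {"http_500", "fails_everywhere"}        # reproduces outside China too

def likely_cause(symptom: str) -> str:
    """Map an observed failure symptom to its most plausible cause."""
    if symptom in NETWORK_BLOCKING:
        return "network blocking"
    if symptom in APPLICATION_SIDE:
        return "application issue"
    return "ambiguous"  # e.g. timeouts can be either blocking or congestion
```

Comparing symptoms from connections inside and outside the filtered network is the key move: a failure that occurs only on Chinese connections, and manifests as a reset, is far more likely to be the firewall than the product.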

It will be interesting to see how things evolve, and whether activists in various countries find Buzz to be a useful alternative to Facebook and other platforms – or not. Whatever happens, I do think that Google fully deserves the negative press it has gotten and continues to get for the thoughtless way in which Buzz was rolled out. There are senior people at Google whose job it is to focus on free expression issues, and others who work full time on privacy issues. Either the Buzz development team completely failed to consult with these people or was allowed to ignore them. I am inclined to believe the former rather than the latter, based on my interactions with the company through the Global Network Initiative and Google’s support for Global Voices. Call me biased or sympathetic if you want, but I don’t think that the company made a conscious decision to ignore the risks it was creating for human rights activists or people with abusive spouses – or anybody else with privacy concerns. However, if we do give Google the benefit of the doubt, then the only logical conclusion is that in this case, something about the company’s management and internal communications was so broken that the company was unable to prevent a new product from unintentionally doing evil. Nick Summers at Newsweek thinks the problem is broader:

Google is so convinced of the righteousness of its mission statement that it launches products heedlessly. Take Google Books—the company was so in thrall with its plan to make all hardbound knowledge searchable that it did not anticipate a $125 million legal challenge from publishers. With Google Wave, engineers got high on their own talk that they had invented a means of communication superior to e-mail—until Wave launched and users laughed at its baffling un-usability. Last week, with Buzz, Google seemed so bewitched by the possibilities of a Google-y take on social networking that it went live without thinking through the privacy implications.

Whatever the case may be in terms of Google’s internal thinking or intentions, we have a right to be concerned. Too many of us depend on Google for too many things. As I’ve written before, I believe Google has a responsibility to netizens around the world to develop more effective mechanisms to ensure that the concerns, interests, and rights of the world’s netizens are adequately incorporated into the development process.

I’d very much like to hear your ideas for how netizens’ concerns around the world – particularly from at-risk and marginalized communities who have the most to lose when Google gets things wrong – might be channeled to Google’s development teams and product managers. Rather than wait for Google to figure this out, are there mechanisms that we as netizens might be able to build? Are there things we can proactively do to help companies like Google avoid doing evil? Can we help them to avoid hurting us – and also help them to maximize the amount of good they can do?

Freedom to Tinker is hosted by Princeton's Center for Information Technology Policy, a research center that studies digital technologies in public life. Here you'll find comment and analysis from the digital frontier, written by the Center's faculty, students, and friends.