from the golden-goose-preservation dept

It's become quite fashionable these days to gripe about the Internet. Even some of its staunchest allies in Congress have been getting cranky. Naturally there are going to be growing pains as humanity adapts to the unprecedented ability for billions of people to communicate with each other easily, cheaply, and immediately for the first time in world history. But this communications revolution has also brought some extraordinary benefits, which we glibly risk when we forget about them and focus only on the challenges. This glass is way more than half full, but if we're not careful to protect it, soon it will be empty.

As the saying too often goes, you don't know what you've got till it's gone. But this time let's not wait to lose it; let's take the opportunity to appreciate all the good the Internet has given us, so we can hold on tight to it and resist efforts to take it away.

Towards that end, we want to encourage the sharing and collection of examples of how the Internet has made the world better: how it made it better for everyone, and how it even just made it better for you, and whether it made things better for good, or for even just one moment in one day when the Internet enabled some connection, discovery, or opportunity that could not have happened without it. It is unlikely that this list could be exhaustive: the Internet delivers its benefits too frequently and often too seamlessly to easily recognize them all. But that's why it's all the more important to go through the exercise of reflecting on as many as we can, because once they become less frequent and less seamless they will be much easier to miss and much harder to get back.

from the connecting-rights dept

In countries that put far less of an emphasis on expanding human rights and personal liberty, it's become somewhat common for governments to use strong-arm tactics to stifle dissent. One such tactic is the suspension or shutdown of mobile networks, the theory being that the messaging and social media apps dissenters use on their phones allow them to organize far better than they otherwise could, and therefore cause more trouble. Frankly, this has become something more expected out of Middle East authoritarian regimes than in other places, but they certainly do not have a monopoly on the practice.

However, there are governments with the ability to reverse course and head back in the right direction. One Pakistani court in Islamabad recently ruled that government shutdowns of mobile networks, even when carried out under claims of national security, are illegal. The news comes via a translation of a bytesforall.pk report. As a heads up, you will notice that the translation is imperfect.

Bytes for All, Pakistan welcomes the decision by the Honorable Judge of Islamabad High Court, Justice Athar Minallah whereby he declared the network shutdown as illegal and a disproportionate response to security threats.

The judgement reads as, "For what has been discussed above, the instant appeal and the connected petitions are allowed. Consequently, the actions, orders and directives issued by the Federal Government or the Authority, as the case may be, which are inconsistent with the provisions of section 54(3) are declared as illegal, ultra vires and without lawful authority and jurisdiction. The Federal Government or the Authority are, therefore, not vested with the power and jurisdiction to suspend or cause the suspension of mobile cellular services or operations on the ground of national security except as provided under section 54(3)."

If the government follows the directions of the ruling -- a very real question, to be sure -- no longer will federal authorities be able to respond to civil unrest or political activism by simply shutting off phone and data service to large swaths of citizens. It should be obvious that this is the wrong course of action anyway. Taking on a group decrying government oppression by further oppressing large numbers of people doesn't seem like a great formula for political tranquility. Still, it's good to see a government being held to account by the court system.

It seems that the folks at Bytes For All are hoping that this action in Pakistan will be the start of a wider shift in government policy across the region.

"This will set [a precedent], not only in the country but also for the external world wherever States use network disconnections as a tool to suppress fundamental rights in the name of security. Disconnecting people from communication networks is tantamount to denying a set of fundamental rights, including access to information, emergency services, expression and other associated rights," says Shahzad Ahmad, Country Director, Bytes For All, Pakistan.

Whether that actually happens remains to be seen, but if it can be done in Pakistan then surely it can be done elsewhere.

from the it's-not-that-easy dept

A few weeks back, following the DOJ's indictment of various Russians for interfering in the US election, we noted that the indictment showed just how silly it was to blame various internet platforms for not magically stopping these Russians because, in many cases, they bent over backwards to appear to be regular, everyday Americans. And now, with pressure coming from elected officials to regulate internet platforms if they somehow fail to catch Russian bots, it seems worth pointing out the flip side of the "why couldn't internet companies catch these guys" question: why couldn't the government?

In the bowels of Washington officialdom, despite billion-dollar intelligence budgets and a peerless global surveillance apparatus, very little appears to have been done. No Russian nationals associated with the disinformation campaign were deported from the United States. (Three were improvidently granted U.S. visas.) No official warnings appear to have been sent to social networks or payment processors. And no indictments were made until a few weeks ago.

Facebook notified the FBI about Russian activity in June 2016, but no U.S. law enforcement or intelligence officials visited the social media company to compare notes. During the 2016 presidential campaign, the State Department pulled the plug on a project to combat Russian disinformation. The New Yorker concluded that the FBI, despite its $9 billion budget and 35,000 employees, simply "is not up to the job of detecting and countering Russian disinformation." The Washington Post summarized the bureaucratic failures: "Top U.S. policymakers didn't appreciate the dangers, then scrambled to draw up options to fight back. In the end, big plans died of internal disagreement."

So it's a surprise to see senior members of the House and Senate intelligence committees, which are charged with providing "vigilant legislative oversight" of the nation's spy and counter-espionage agencies, pointing fingers approximately 2,800 miles westward instead.

Of course, you can argue that now, well after the fact, the DOJ has brought out this indictment. But, by the same token, most of the internet platforms have now been able to research and investigate what happened. Looking back retrospectively is quite different from proactively determining any of this on the fly.

McCullagh notes, correctly, that this doesn't mean internet platforms should do nothing. They obviously all are scrambling to figure out what to do going forward. But it does raise questions as to why the government seems to think the internet platforms can magically figure all of this out when they themselves could not. And, it's particularly telling that it's the two Congressional Intelligence Committees, which are supposed to oversee the intelligence community -- but usually just bolster or shield the intelligence community from criticism -- that are doing the most finger pointing. Perhaps it's more because they want to distract from the failures of the intelligence community.

I'm sure that some will argue some version of the "nerd harder" excuse for why internet companies should be better at detecting foreign influence than the NSA, but (1) any "nerd harder" argument is automatically void for being specious and (2) come on, the NSA has much greater ability to connect these threads than any internet platform, no matter what some people will tell you.

from the Winnie-the-Pooh-strikes-back dept

The Communist Party of China Central Committee proposed to remove the expression that the President and Vice-President of the People's Republic of China "shall serve no more than two consecutive terms" from the country's Constitution.

As analysts were quick to point out, that seemingly minor change effectively allows China's current leader, Xi Jinping, to become "dictator for life". It wasn't just Westerners who were taken aback by the unexpected boldness of the move. One of the leading China experts, Victor Mair, explained in a post on his Language Log blog:

for as long as I've been studying China and observing Chinese affairs, I've never witnessed so much opposition to the CCP [Chinese Communist Party] as what I've been seeing and hearing during the last couple of days -- except for the months leading up to the Tiananmen Massacre of June 4, 1989.

But as he points out, the big difference between then and now is that today people can turn to the Internet to express their feelings. Naturally, that has led China's huge censorship machine to go into overdrive, leading to some really weird bans:

since the Roman alphabet is part of the Chinese writing system, it's only fair for letters to be subject to censorship the way Sinographs are. Comments like this Twitter thread show the letter N being censored on Weibo (a microblogging website that is one of the most popular social media platforms in China). This is probably out of fear on the part of the government that "N" = "n terms in office", where possibly n > 2

China Digital Times notes that the block on the letter "N" was only temporary, but that doesn't mean the censorship was eased. On the contrary. The same post lists dozens of words and phrases that are forbidden because they contain direct references or even just allusions to the move to make Xi Jinping's rule life-long. Here are some of the less obvious ones:
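To get a feel for why blocking a single letter is such a blunt instrument, here is a minimal, purely illustrative sketch of a naive blocklist-style filter of the kind described above. The blocklist entries, function name, and matching rules are assumptions for illustration only, not Weibo's actual implementation:

```python
# Hypothetical sketch of a naive keyword-based censorship filter.
# The blocklist entries below are illustrative examples drawn from the
# reporting, not the platform's real rule set.
BLOCKLIST = {"n", "winnie the pooh", "the emperor's dream"}

def is_blocked(post: str) -> bool:
    """Return True if the post would be suppressed by the filter."""
    text = post.lower()
    # Single-character entries only match a post that IS that character,
    # which shows how blunt a ban on the bare letter "N" is.
    if text.strip() in BLOCKLIST:
        return True
    # Multi-character entries match anywhere in the post.
    return any(len(term) > 1 and term in text for term in BLOCKLIST)

print(is_blocked("N"))                       # True: the bare letter is banned
print(is_blocked("I love Winnie the Pooh"))  # True: phrase match
print(is_blocked("Nice weather today"))      # False: "n" only matches alone
```

Even this toy version makes the trade-off visible: widen the match rule for "n" and vast amounts of innocent text get swept up; narrow it and trivial variations slip through, which is why such bans tend to be temporary and ever-shifting.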

Winnie the Pooh -- Images of Winnie the Pooh have been used to mock Xi Jinping since as early as 2013. The animated bear continues to be sensitive in China. Weibo users shared a post from Disney's official account that showed Pooh hugging a large pot of honey along with the caption "find the thing you love and stick with it."

The Emperor's Dream -- The title of a 1947 animated puppet film.

trust this woman is willing to be a vegetarian for the rest of her life -- Allusion to a meme inspired by the popular historical drama Empresses in the Palace. A screenshot of this line, being said by an empress as she pledges lifelong vegetarianism in return for the imminent death of the emperor, has been shared online.

Durex in China reminding followers not to trust any advertising that isn't from their official accounts after "Two rounds just aren't enough" poster goes viral #XiJinping

The characters on the fake Durex ad are rather more explicit than "two rounds", which seems appropriate given what many people think Xi Jinping is doing to the Chinese people with this proposed change to the country's constitution.

from the it's-not-tech-v.-hollywood dept

Just last week we announced our new site EveryoneCreates.org, in which we showcase stories of people who rely on the open internet and various internet platforms to create artwork of all kinds -- from music to books to movies to photographs and more. It appears that we're not the only ones thinking about this. The Re:Create coalition has just released some fantastic economic research about the large and growing population of people who use internet platforms to create and to make money from their creations. It fits right in with the point that we made: contrary to the claims of the RIAA, the MPAA and their front groups like "Creative Future," the internet is not harming creators, it's enabling them by the millions (and allowing them to make much more money as well).

Indeed, the report almost certainly significantly undercounts the number of content creators making money on the internet these days, as it only explores nine platforms: Amazon Publishing, eBay, Etsy, Instagram, Shapeways, Tumblr, Twitch, WordPress and YouTube. Those are all great, and probably cover a decent subset of creators and how they make money -- but it leaves off tons of others, including Kickstarter, Patreon, IndieGogo, Wattpad, Bandcamp, Apple, Spotify and many other platforms that have increasingly become central to the way in which creators make their money. Still, even with this smaller subset of creative platforms, the study is impressive.

14.8 million people used those platforms to earn approximately $5.9 billion in 2016.

Let's repeat that. The internet -- which some legacy entertainment types keep insisting is "killing" content creators and making it "impossible" to make money -- enabled nearly 15 million people to earn nearly $6 billion in 2016. And, again, that doesn't even include things like Kickstarter or Patreon (in 2016 alone, Kickstarter had $580 million in pledges...). In short, just as we've been saying for years: for those who rely on the old legacy gatekeeper system of waiting until you're "discovered" by a label/studio/publisher and then hoping they'll do all the work to make you rich and famous, things may be a bit more difficult these days. But, for actual creators, today is an astounding, unprecedented period of opportunity.

This does not mean that everyone discussed here is making a full-time living. Indeed, the report notes clearly that many people are using these platforms to supplement their revenue. But they're still creating and they're still making money off of their creations -- something that would have been nearly impossible not too long ago. And, just as the report likely undercounts the size of this economy due to missing some key platforms, it also misses additional revenue streams even related to the platforms it did count:

It is impossible to determine an average income for members of the new creative economy, because earnings vary so widely for each platform. As previously stated, this analysis includes only a single source of income for each of the nine platforms. For instance, based on the current data, we include a YouTube star's earnings from YouTube but not revenues as influencers or advertisements on other social media platforms.

Also interesting is how the report found that creators are spread all over the US. While California, New York and Texas have the most creators, even those with the "smallest" numbers of creators (Wyoming and the Dakotas) still had tens of thousands of people using these platforms to make money. And, yes, in case you're wondering, the study excluded big time stars like Kim Kardashian using platforms like Instagram to make money, focusing instead on truly independent creators.

This is especially important, as it's coming at a time when the RIAA, MPAA and their friends continue their nonsensical claims that these very same internet platforms are somehow "harming" content creators, and that laws need to change to make it harder for everyday people to use these platforms to express their artwork and to make money off of it. It's almost as if those legacy gatekeepers don't like the competition or the fact that people are realizing they don't need to work with a gatekeeper to create and to make money these days.

So, once again, it's time to dump the ridiculous myth of "tech v. content." That's not true at all. As this report shows, these tech platforms have enabled many millions of people to earn billions of dollars, and that's only possible because they're open platforms that get past the old gatekeeper system.

from the pioneer dept

I was in a meeting yesterday, when the person I was meeting with mentioned that John Perry Barlow had died. While he had been sick for a while, and there had been warnings that the end might be near, it's still somewhat devastating to hear that he is gone. I had the pleasure of interacting with him both in person and online multiple times over the years, and each time was a joy. He was always insightful, thoughtful and deeply empathetic.

I can't remember for sure, but I believe the last time I saw him in person was a few years back at a conference (I don't even recall what conference), where he was on a panel that had no moderator, and literally seconds before the panel was to begin, I was asked to moderate the panel with zero preparation. Of course, it was easy to get Barlow to talk, and to make it interesting, even without preparation. But that day the Grateful Dead's Bob Weir (for whom Barlow wrote many songs -- after meeting as roommates at boarding school) was in the audience -- and while the two were close, they disagreed on issues related to copyright, leading to a public debate between the two (even though Weir was not on the panel). It was fascinating to observe the discussion, in part because of the way in which Barlow approached it. Despite disagreeing strongly with Weir, Barlow kept the discussion respectful, detailed and consistently insightful.

Lots of people are, quite understandably, pointing to Barlow's famous Declaration of the Independence of Cyberspace (which was published 22 years ago today). Barlow later admitted that he dashed most of that off in a bar during the World Economic Forum, without much thought. And that's why I'm going to separately suggest two other things by Barlow to read as well. The first was his Wired piece, The Economy of Ideas, from 1994, the second year of Wired's existence, when Barlow's wisdom could be found in every issue. Despite being written almost a quarter of a century ago, The Economy of Ideas is still fresh and relevant today. It is more thoughtful and detailed than his later "Declaration" and, if anything, I would imagine that Barlow was annoyed that the piece is still so relevant today. He'd think we should be way beyond the points he was making in 1994, but we are not.

The other piece, which I've seen a few people pointing to, is his Principles of Adult Behavior, a list of 25 rules to live by -- rules that we should be reminded of constantly, and that many of us (and I'm putting myself first on this list) fail to live up to all too frequently. Update: I stupidly assumed that it was a more recent writing by Barlow, but as noted in the comments (thanks!), it's actually from 1977, when Barlow turned 30.

Cindy Cohn, who is now the executive director of EFF, which Barlow co-founded, mentions in her writeup how unfair it is that Barlow (and, specifically, his Declaration) is often held up as a kind of prototype for the "techno-utopian" vision of the world that is so frequently mocked today. Yet, as Cohn points out, that's not at all how Barlow truly viewed the world. He saw the possibilities of that utopia, while recognizing the potential realities of something far less good. The utopianism that Barlow presented to the world was not -- as many assume -- a claim that these things were a sort of manifest destiny, but rather a hope that, by presenting such a utopia, we might all strive and push and fight to actually achieve it.

Barlow was sometimes held up as a straw man for a kind of naive techno-utopianism that believed that the Internet could solve all of humanity's problems without causing any more. As someone who spent the past 27 years working with him at EFF, I can say that nothing could be further from the truth. Barlow knew that new technology could create and empower evil as much as it could create and empower good. He made a conscious decision to focus on the latter: "I knew it’s also true that a good way to invent the future is to predict it. So I predicted Utopia, hoping to give Liberty a running start before the laws of Moore and Metcalfe delivered up what Ed Snowden now correctly calls 'turn-key totalitarianism.'”

Just yesterday, before I learned of Barlow's passing, we officially launched a new website, EveryoneCreates.org, which discusses just how ridiculous the myth -- pushed by the RIAA and MPAA and their friends -- of a "war" between "content and tech" really is. According to that narrative, the internet has done much to harm content creators. Yet everywhere we look, we see the opposite: content creators enabled by these technologies to create, to share, to distribute and, yes, to make money from their creations. Barlow was one of the first, if not the first, content creators from the "old" world to wholeheartedly see the promise of the internet, and he spent his life dedicated to making the internet such a powerful place for all of us content creators.

Either way, this is the end of an era. We're in an age now where the general narrative making the rounds is, once again, touching on the moral panic of how terrible everything in technology is. Barlow spent decades teaching us about the possibilities of a better world on the internet, and nudging us, sometimes gently, sometimes forcefully, in that direction. And, now, just at a point where that vision is most at risk, he's left us to continue that fight on our own. The internet world has many challenges ahead of it -- and we should all strive to be guided both by Barlow's principles and his vision of constantly pushing to mold the technology world into the world we want it to be -- not ignoring the negatives, but looking for ways to get beyond them and expand the opportunities for the good to come out. It will be harder without him being there to help guide us.

from the it's-not-a-broadcast-medium dept

One theme that we've covered on Techdirt since its earliest days is the power of the internet as an open platform for just about anyone to create and communicate. Simultaneously, one of our greatest fears has been how certain forces -- often those disrupted by the internet -- have pushed over and over again to restrict and contain the internet, and turn it into something more like a broadcast platform controlled by gatekeepers, where only the chosen few can use it to create and share. This is one of the reasons we've been so adamant over the years that in so many policy fights, "Silicon Valley v. Content" is a false narrative. It's almost never true -- because the two go hand in hand. The internet has made it so that everyone can be a creator. Internet platforms have made it so that anyone can create almost any kind of content they want, they can promote that content, they can distribute it, they can build a fan base, and they can even make money. That's in huge contrast to the old legacy way of needing a giant gatekeeper -- a record label, a movie studio, or a book publisher -- to let you into the exclusive club.

And yet, those legacy players continue to push to make the internet into more of a broadcast medium -- to restrict that competition, to limit the supply of creators and to push things back through their gates under their control. For example, just recently, the legacy recording and movie industries have been putting pressure on the Trump administration to undermine the internet and fair use in NAFTA negotiations. And, much of their positioning is that the internet is somehow "harming" artists, and needs to be put into check.

This is a false narrative. The internet has enabled so many more creators and artists than it has hurt. And to help make that point, today we're launching a new site, EveryoneCreates.org which features stories and quotes from a variety of different creators -- including bestselling authors, famous musicians, filmmakers, photographers and poets -- all discussing how important an open internet has been to building their careers and creating their art. On that same page, you can submit your own stories about how the internet has helped you create, and why it's important that we don't restrict it. Please add your own stories, and share the site with others too!

The myth that this is "internet companies v. creators" needs to be put to rest. Thanks to the internet, everyone creates. And let's keep it that way.

from the not-a-bandage dept

Last week, Santa Clara University hosted a gathering of tech platform companies to discuss how they actually handle content moderation questions. Many of the participants in the event have written essays about the questions that were discussed at the event, which we are publishing here. This one is excerpted from Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (forthcoming from Yale University Press, May 2018).

Content moderation is such a complex and laborious undertaking that, all things considered, it's amazing that it works at all, and as well as it does. Moderation is hard. This should be obvious, but it is easily forgotten. It is resource intensive and relentless; it requires making difficult and often untenable distinctions; it is wholly unclear what the standards should be, especially on a global scale; and one failure can incur enough public outrage to overshadow a million quiet successes. And we are partly to blame for having put platforms in this untenable situation, by asking way too much of them. We sometimes decry the intrusion of platform moderation, and sometimes decry its absence. Users probably should not expect platforms to be hands-off and expect them to solve problems perfectly and expect them to get with the times and expect them to be impartial and automatic.

Even so, as a society we have once again handed over to private companies the power to set and enforce the boundaries of appropriate public speech for us. That is an enormous cultural power, held by a few deeply invested stakeholders, and it is being done behind closed doors, making it difficult for anyone else to inspect or challenge. Platforms frequently, and conspicuously, fail to live up to our expectations—in fact, given the enormity of the undertaking, most platforms' own definition of success includes failing users on a regular basis.

The companies that have profited most from our commitment to platforms have done so by selling back to us the promises of the web and participatory culture. But as those promises have begun to sour, and the reality of their impact on public life has become more obvious and more complicated, these companies are now grappling with how best to be stewards of public culture, a responsibility that was not evident to them at the start.

It is time for the discussion about content moderation to shift, away from a focus on the harms users face and the missteps platforms sometimes make in response, to a more expansive examination of the responsibilities of platforms. For more than a decade, social media platforms have presented themselves as mere conduits, obscuring and disavowing the content moderation they do. Their instinct has been to dodge, dissemble, or deny every time it becomes clear that, in fact, they produce specific kinds of public discourse. The tools matter, and our public culture is in important ways a product of their design and oversight. While we cannot hold platforms responsible for the fact that some people want to post pornography, or mislead, or be hateful to others, we are now painfully aware of the ways in which platforms invite, facilitate, amplify, and exacerbate those tendencies: weaponized and coordinated harassment; misrepresentation and propaganda buoyed by its algorithmically-calculated popularity; polarization as a side effect of personalization; bots speaking as humans, humans speaking as bots; public participation emphatically figured as individual self-promotion; the tactical gaming of platforms in order to simulate genuine cultural participation and value. In all of these ways, and others, platforms invoke and amplify particular forms of discourse, and they moderate away others, all in the name of being impartial conduits of open participation. The controversies around content moderation over the last half decade have helped spur this slow recognition, that platforms now constitute powerful infrastructure for knowledge, participation, and public expression.

~ ~ ~

All this means that our thinking about platforms must change. It is not just that all platforms moderate, or that they have to moderate, or that they tend to disavow it while doing so. It is that moderation, far from being occasional or ancillary, is in fact an essential, constant, and definitional part of what platforms do. I mean this literally: moderation is the essence of platforms, it is the commodity they offer.

First, moderation is a surprisingly large part of what they do, in a practical, day-to-day sense, and in terms of the time, resources, and number of employees they devote to it. Thousands of people, from software engineers to corporate lawyers to temporary clickworkers scattered across the globe, all work to remove content, suspend users, craft the rules, and respond to complaints. Social media platforms have built a complex apparatus, with innovative workflows and problematic labor conditions, just to manage this—nearly all of it invisible to users. Moreover, moderation shapes how platforms conceive of their users—and not just the ones who break the rules or seek their help. By shifting some of the labor of moderation back to us, through flagging, platforms deputize users as amateur editors and police. From that moment, platform managers must in part think of, address, and manage users as such. This adds another layer to how users are conceived of, along with seeing them as customers, producers, free labor, and commodity. And it would not be this way if moderation were handled differently.

But in an even more fundamental way, content moderation is precisely what platforms offer. Anyone could make a website on which any user could post anything he pleased, without rules or guidelines. Such a website would, in all likelihood, quickly become a cesspool of hate and porn, and then be abandoned. But it would not be difficult to build, requiring little in the way of skill or financial backing. To produce and sustain an appealing platform requires moderation of some form. Content moderation is an elemental part of what makes social media platforms different, what distinguishes them from the open web. It is hiding inside every promise social media platforms make to their users, from the earliest invitations to "join a thriving community" or "broadcast yourself," to Mark Zuckerberg's promise to make Facebook "the social infrastructure to give people the power to build a global community that works for all of us."

Content moderation is part of how platforms shape user participation into a deliverable experience. Platforms moderate (removal, filtering, suspension), they recommend (news feeds, trending lists, personalized suggestions), and they curate (featured content, front page offerings). Platforms use these three levers together to, actively and dynamically, tune the participation of users in order to produce the "right" feed for each user, the "right" social exchanges, the "right" kind of community. ("Right" here may mean ethical, legal, and healthy; but it also means whatever will promote engagement, increase ad revenue, and facilitate data collection.)
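The three levers described above can be sketched as a simple pipeline. This is a minimal, hypothetical illustration of the idea, not any platform's actual system; all function names, fields, and scoring rules are assumptions made for the example:

```python
# Illustrative sketch of the three "levers" a platform pulls to tune a feed:
# moderation (removal), recommendation (ranking), and curation (featuring).
def moderate(posts, banned_terms):
    # Lever 1: remove content that violates the rules.
    return [p for p in posts
            if not any(t in p["text"].lower() for t in banned_terms)]

def recommend(posts):
    # Lever 2: rank what survives by predicted engagement.
    return sorted(posts, key=lambda p: p["engagement"], reverse=True)

def curate(posts, featured_ids):
    # Lever 3: pin editorially featured items to the top.
    featured = [p for p in posts if p["id"] in featured_ids]
    rest = [p for p in posts if p["id"] not in featured_ids]
    return featured + rest

posts = [
    {"id": 1, "text": "hello world", "engagement": 0.2},
    {"id": 2, "text": "spam spam spam", "engagement": 0.9},
    {"id": 3, "text": "local news update", "engagement": 0.5},
]
feed = curate(recommend(moderate(posts, {"spam"})), featured_ids={3})
print([p["id"] for p in feed])  # [3, 1]
```

Note how the "right" feed is entirely a product of the choices baked into each stage: what counts as banned, what counts as engaging, and what gets featured. None of those choices is neutral, which is the chapter's point.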

Too often, social media platforms discuss content moderation as a problem to be solved, and solved privately and reactively. In this "customer service" mindset, platform managers understand their responsibility primarily as protecting users from the offense or harm they are experiencing. But now platforms find they must answer also to users who find themselves implicated in and troubled by a system that facilitates the reprehensible—even if they never see it. Whether I ever saw, clicked on, or 'liked' a fake news item posted by Russian operatives, I am still worried that others have; I am troubled by the very fact of it and concerned for the sanctity of the political process as a result. Protecting users is no longer enough: the offense and harm in question is not just to individuals, but to the public itself, and to the institutions on which it depends. This, according to John Dewey, is the very nature of a public: "The public consists of all those who are affected by the indirect consequences of transactions to such an extent that it is deemed necessary to have those consequences systematically cared for." What makes something of concern to the public is the potential need for its inhibition.

So, despite the safe harbor provided by U.S. law and the indemnity enshrined in their terms of service contracts as private actors, social media platforms now inhabit a new position of responsibility—not only to individual users, but to the public they powerfully affect. When an intermediary grows this large, this entwined with the institutions of public discourse, this crucial, it has an implicit contract with the public that, whether platform management likes it or not, may be quite different from the contract it required users to click through. The primary and secondary effects these platforms have on essential aspects of public life, as they become apparent, now lie at their doorstep.

~ ~ ~

If content moderation is the commodity, if it is the essence of what platforms do, then it makes no sense for us to treat it as a bandage to be applied or a mess to be swept up. Rethinking content moderation might begin with this recognition: that content moderation is part of how platforms tune the public discourse they purport to host. Platforms could be held responsible, at least partially, for how they tend to that public discourse, and to what ends. The easy version of such an obligation would be to require platforms to moderate more, or more quickly, or more aggressively, or more thoughtfully, or to some accepted minimum standard. But I believe the answer is something more. Their implicit contract with the public requires that platforms share this responsibility with the public—not just the work of moderating, but the judgment as well. Social media platforms must be custodians, not in the sense of quietly sweeping up the mess, but in the sense of being responsible guardians of their own collective and public care.

Tarleton Gillespie is a Principal Researcher at Microsoft Research and an Adjunct Associate Professor in the Department of Communication at Cornell University.

from the just-a-blip-or-the-start-of-something-bigger? dept

We recently reported how China continues to turn the online world into the ultimate surveillance system, which hardly comes as a surprise, since China has been relentlessly moving in this direction for years. What is rather more surprising is that Chinese citizens are beginning to push back, at least in certain areas. For example, The New York Times reports on an "outcry" provoked by a division of the Alibaba behemoth when it assumed that its users wouldn't worry too much if they were enrolled automatically in one of China's commercially run tracking systems:

Ant Financial, an affiliate of the e-commerce giant Alibaba Group, apologized to users on Thursday after prompting an outcry by automatically enrolling in its social credit program those who wanted to see the breakdown [of their spending made via Ant Financial's online payment system]. The program, called Sesame Credit, tracks personal relationships and behavior patterns to help determine lending decisions.

When one of China's business leaders complained publicly about the lack of privacy in China, and about how Tencent's hugely popular WeChat program spied on users, the company's denials were met with another outcry:

Tencent said that the company did not store the chat history of users and that it would never use chat history for big data analytics. The comments were met with widespread disbelief: WeChat users have been arrested over what they've said on the app, conversations have turned up as evidence in court proceedings, and activists have reported being followed based on WeChat conversations.

Baidu, too, has been pulled into the privacy backlash, this time through the courts:

Baidu Inc., China’s largest search-engine operator, is being sued by a consumer-protection organization that claims it collected users' information without consent, in the latest privacy dispute involving the country's tech giants.

Two mobile apps operated by New York-listed Baidu, a search engine and a web browser, could access a user's calls, location data, messages and contacts without notifying the user, the Jiangsu Consumer Council, a government-backed consumer rights association, claimed in a statement on its website.

Alongside apparent government acquiescence in this pushback, there's another reason why Chinese companies may well start to take online privacy more seriously. An article in the South China Morning Post points out that if Chinese online giants want to move beyond their fast-saturating home market, and start operating in the US and EU, they will need to pay much more attention to privacy to satisfy local laws. As Techdirt reported, an important partnership between AT&T and Huawei, China's biggest hardware company, has just been blocked because of unproven accusations that data handled by Huawei's products might make its way back to the Chinese government.

from the probably-just-a-coincidence dept

We recently wrote about an interesting comment from Vladimir Putin's Press Secretary that Russia had no intention of cutting itself off from the rest of the Internet. But there's another side to the disconnection story, as this Guardian news item reveals:

Russia could pose a major threat to the UK and other Nato nations by cutting underwater cables essential for international commerce and the internet, the chief of the British defence staff, Sir Stuart Peach, has warned.

Russian ships have been regularly spotted close to the Atlantic cables that carry communications between the US and Europe and elsewhere around the world.

In other words, although Russia says it won't cut itself off from the Internet, it could probably cut off many NATO countries. A new report, entitled "Undersea Cables: Indispensable, insecure", emphasizes the importance and vulnerability of the underwater cables that provide much of the Internet's global wiring:

97% of global communications and $10 trillion in daily financial transactions are transmitted not by satellites in the skies, but by cables lying deep beneath the ocean. Undersea cables are the indispensable infrastructure of our time, essential to our modern life and digital economy, yet they are inadequately protected and highly vulnerable to attack at sea and on land, from both hostile states and terrorists.

US intelligence officials have spoken of Russian submarines "aggressively operating" near Atlantic cables as part of Russia's broader interest in unconventional methods of warfare. When Russia annexed Crimea, one of its first moves was to sever the main cable connection to the outside world.

Nor are physical cables the only concern; twice recently, Internet traffic has been mysteriously diverted through Russian networks:

Traffic sent to and from Google, Facebook, Apple, and Microsoft was briefly routed through a previously unknown Russian Internet provider Wednesday under circumstances researchers said was suspicious and intentional.

Large chunks of network traffic belonging to MasterCard, Visa, and more than two dozen other financial services companies were briefly routed through a Russian government-controlled telecom under unexplained circumstances that renew lingering questions about the trust and reliability of some of the most sensitive Internet communications.

These events are a reminder that the online world depends on technologies where trust is an important element. That approach is now looking increasingly shaky as nation states wage attacks not just by means of the Internet, but even against it. This may explain why Russia says it wants alternative DNS servers for the BRICS nations: they could come in quite handy if -- by any chance -- the rest of the Internet goes down.