Mike Godwin’s Techdirt Profile


from the deadline-is-november-15 dept

Australia's controversial and clumsy rollout of its "My Health Record" program this summer didn't cause the "spill" — what Australians call an abrupt turnover of party leadership in Parliament — that gave the country a new Prime Minister in August. But it didn't improve public trust in the government either. The program — which aims to create a massive nationally administered database of more or less every Australian's health care records — will pose massive privacy and security risks for the citizens it covers, with less-than-obvious benefits for patients, the medical establishment, and the government.

Citizen participation in the new program isn't quite mandatory, but it's nearly so, thanks to the government's recent shift of the program from purely voluntary to "opt-out." Months before the planned rollout, which began June 16, at least one poll suggested that a sizable minority of Australians don't want the government to keep their health information in a centralized health-records database.

In response to ongoing concern about the privacy impact of the program (check out #MyHealthRecord on Facebook and Twitter), the new government is pushing for legislative changes aimed at addressing the growing public criticism of the program. But many privacy advocates and health-policy experts say the proposed fixes, while representing some improvements on particular privacy issues, don't address the fundamental problem. Specifically, the My Health Record program, which originally was designed as a voluntary program, is becoming an all-but-mandatory health-record database for Australian citizens, held (and potentially exploited) by the government.

Australia's shifting of its electronic-health-records program to "opt-out" — which means citizens are automatically included in the program unless they take advantage of a short-term "window" to halt automatic creation of their government-held health records — is a textbook example of how to further undermine trust in a government that already has trust issues when it comes to privacy. Every government that imposes record-keeping requirements that impact citizen privacy should view Australia's abrupt shift to "opt-out" health-care records as an example of What Not To Do.

And yet: supporters of My Health Record have persisted in their commitment to "opt out" during the shift from Malcolm Turnbull's administration to that of his successor, Scott Morrison. This means that if an Australian doesn't invest time and energy into invoking her right not to be included in the database — within the less-than-one-month window that citizens currently have to make this choice — she will be included by default.

In other words, any citizen's health-care records in the program will be held by the government throughout that citizen's life and will persist for 30 years after that citizen's death. Even if an Australian chose later to opt out of the program, the record might still (theoretically) be accessible to health-care providers and government officials. Health Minister Greg Hunt introduced legislation last summer that would address some of these complaints about the program, but it's unclear whether the Australian Parliament, which has weathered several leadership shifts over the past decade, has the focus or will to implement the changes.

The fact is, the automatic creation of your My Health Record could still result in a permanent health-care record that's outside of any individual Australian's control because the government can always repeal any law or regulation requiring deletion or limiting access. In effect, "My Health Record" is a misnomer: a more accurate name for the program would be "The Government's Health Records About You."

A great deal of Australian media coverage of the rollout has been critical of the Turnbull government's — and later the Morrison government's — "full steam ahead" approach. The pushback against My Health Record has been immense. Worse, citizens who have rushed to opt out of the program have found the system less than easy to navigate — whether on the Web or through a government call center. The flood of Australians who attempted to opt out of the program on the first day they were allowed to do so found that they were unwitting beta testers, stress-testing the opt-out system. After the first-day opt-out numbers, the government has either declined or been unable to disclose how many Australians are opting out. But a Sydney Morning Herald report in July said the number of opt-outs might "run into the millions."

In kind of a weird mirror-universe adventure, Australia has managed to reproduce the same kind of public concern that sank a similar health-care effort in the United Kingdom just a few years ago. Phil Booth of the UK's Medconfidential privacy-advocacy group told the Guardian that "[t]he parallels are incredible" and that "this system seems to be the 2018 replica of the 2014 care.data." After a government-appointed commission underscored privacy and security concerns, the UK's "care.data" program was abandoned in 2016. Unfortunately for Australians, in the Australian version of the UK's "care.data" scheme, Spock has a beard.

The UK's experience suggests that the policy problem signaled by the opposition to the My Health Record initiative is bigger than Australia. That shouldn't be a surprise. After all, a developed country may provide a "universal health care" program like the United Kingdom's National Health Service, or a more "mixed" system (a public health care program supplemented by private insurers like that of Australia) or even an insurance-centric public-health program like Obamacare. But whatever the system, the appeal of "big data" approaches to create efficiencies in health care is broad, in the abstract.

But despite the theoretical appeal of #MyHealthRecord, there's a paucity of actual economic research showing that centralized health-care databases will actually provide benefits that recoup the costs of investment. (Australia's program has been estimated to cost more than $2 billion AUD so far, and it's not yet fully implemented.) No one, in or out of government, has made a business case for My Health Record that uses actual numbers. Instead, the chief argument in favor of MHR is that it will enable health-care providers to share patient data more easily — which supposedly will save money — but health-care workers, much as they hate the paperwork associated with it, mostly know that there's no substitute for taking a fresh patient history at the point of intake.

The push for a national database of personal health information has been a fairly recent development, even though the country's current health-care system has been in place in more or less its current form since 1984. The Australian Department of Health announced in 2010 that the government would be spending nearly half a billion Australian dollars to build a system of what then were called Personally Controlled Electronic Health Records. The primary idea was to make it more efficient to share critical patient information among health-care providers treating the same person.

Another purported benefit would be standardization. Like the United States (where proposals for a national health-records system have sometimes been promoted), Australia is a federal system of states and territories, each of which has its own government. The concern was that a failure to set national standards for digital health records would lead to the states and territories developing their own, possibly mutually incompatible systems. The distances among the states and territories — whose populations (now totaling 25 million) mostly cluster on the coasts surrounding Australia's dry, unpopulated Outback — make integration harder.

The 2010 announcement of the Personally Controlled Electronic Health Records program stated expressly "[a] personally controlled electronic health record will not be mandatory to receive health care." The basic model was opt-in — starting in 2012, Australians had to actively choose to create their shared digital health records. If you didn't register for the program, however, you didn't create a PCEHR. If you did register, you had the assurance that, under the government-promulgated Australian Privacy Principles, your personal health information would be strongly protected.

In practice, the PCEHR program, eventually rebranded as My Health Record, has never had much appeal to most citizens. The government burned somewhere near or past $2 billion AUD and yet, years into the program, the total number of citizens who had volunteered to "opt in" to have their health records shared and available in the program was only about 6 million. According to a March report in Australia's medical-news journal, the Medical Republic, Australia's physicians also seem less than sold on the value of the program.

Prior to the latest push for a shift to "opt-out," only a few citizens saw much benefit (much less any fun or personal return) of investing the time it takes to master producing a complete and useful health record, and even those who did only rarely ended up using its key features. (Some health-fashion-forward citizens who do want to share their health-care records easily have opted to invest in more private solutions rather than rely on a centralized database that may be less controllable and less complete.)

By 2014 it was clear that the Australian government (control of which had shifted to the more conservative of the two major parties) wanted to move in a closer-to-mandatory direction. It did so by announcing a wholesale conversion of the My Health Record database from opt-in to opt-out. This meant that, if you were an Australian citizen, a health record would be created automatically for you — unless you explicitly said you didn't want one. But the possibility of opting out hasn't quelled these ongoing complaints from the general public:

The still-too-short, too-limited opt-out window. Australians were originally given a three-month window, starting July 16, to opt out of My Health Record. (It was later extended to November 15. Of course, critics regard the one-month extension as something less than stellar.) If you don't opt out in the approved window, an electronic health record will be created for you. By default, the program provides that the government will keep the record for 30 years after your death. And the government will have the right to access the record — whether you've died or not — "for maintenance, audit and other purposes required or authorised[sic] by law."

This goes on your permanent record. The law already authorizes a lot of government access (for law-enforcement agencies, court proceedings, and other non-health-related purposes). And of course the laws can be amended to authorize even more access. Were you ever treated for alcohol poisoning? Did you ever have an abortion? You may be able to limit access somewhat by tweaking the privacy controls of "My Health Record," but (unless you take strong, affirmative steps otherwise) it's never erased. And it may be demanded by a range of government authorities for all sorts of reasons under current or future laws or regulations.

The disputed warrant requirement. The Australian Digital Health Agency, the relatively new government agency in charge of the program, said a warrant would be required — but that claim was contradicted by Australia's Parliamentary Library, whose analysis found that non-health government agencies could access records with few if any procedural or privacy safeguards. Disturbingly, the Parliamentary Library's report was abruptly removed and revised after pushback from the Turnbull government. (The removed report has been reproduced here.) A subsequent Senate inquiry — with a report issued October 12 — shows growing consensus behind adding a warrant requirement before law enforcement gets health record access, but the Australian Labor Party and the Australian Greens have dissented on the question of whether a warrant requirement fixes the problems: Per the Greens, the warrant requirement is "an improvement on the status quo, but it is an insufficient and disappointing one."

Then-Prime Minister Malcolm Turnbull was dismissive of privacy concerns early on, arguing that "there have been no privacy complaints or breaches with My Health Record in six years and there are over 6 million people with My Health Records." But many prominent health-care and privacy experts argue that the government's new promises to patch the system are inadequate. For example, requiring government agencies to get a warrant does nothing to protect patients from unauthorized access to their records by health-care workers with access to the My Health Record system. And the Labor members have argued that the new system needs a statutory provision that prevents health-care insurers from accessing My Health Record's data.

Typical of the external critics is former Australian Medical Association President Kerryn Phelps, who views the promises as "minor concessions" that are "woefully inadequate." Phelps, who cites a survey showing that 75 percent of doctors are themselves planning to opt out, called for "full parliamentary review" of the My Health Record program. Other critics have argued the government has painted itself into a corner due to the "sunk costs" of $2 billion AUD. Bernard Robertson-Dunn of the Australian Privacy Foundation argues that the whole problem, despite the fact that the government has spent those billions, is that Australia needs to reboot its digital-health initiative entirely.

But many of the critics of My Health Record in Parliament seem to be maneuvering to lessen the privacy harms likely to ensue from the shift to near-mandatory participation in My Health Record. In this, they may be driven by the fear that writing off the Australian health-care-records program may look too much like the abject failure that was the UK's "care.data" program. But Robertson-Dunn views the unwillingness of some members of Parliament to cut their losses as short-sighted, given the likely long-term harms the system poses to citizens' health privacy. Better to scrap My Health Record and write off the costs so far, he argues. Once that's done, he says, Australia can "[s]tart with a problem patients and doctors have and go from there."

from the the-list-keeps-growing dept

I've written two installments in this series (part 1 is here and part 2 is here). And while I could probably turn itemizing complaints about social-media companies into a perpetual gig somewhere — because there's always going to be new material — I think it's best to list only just a few more for now. After that, we ought to step back and weigh what reforms or other social responses we really need. The first six classes of complaints are detailed in Parts 1 and 2, so we begin here in Part 3 with Complaint Number 7.

(7) Social media are bad for us because they're so addictive to us that they add up to a kind of deliberate mind control.

As a source of that generalization we can do no better than to begin with Tristan Harris's July 28, 2017 TED talk, titled "How a handful of tech companies control billions of minds every day."

Harris, a former Google employee, left Google in 2015 to start a nonprofit organization called Time Well Spent. That effort has now been renamed the Center for Humane Technology ( http://www.timewellspent.io now resolves to https://humanetech.com). Harris says his new effort — which also has the support of former Mozilla interface designer Aza Raskin and early Facebook funder Roger McNamee — represents a social movement aimed at making us more aware of the ways in which technology, including social media and other internet offerings, as well as our personal devices, are continually designed and redesigned to make them more addictive.

Yes, there's that notion of addictiveness again — we looked in Part 2 at claims that smartphones are addictive and talked about how to address that problem. But regarding the "mind control" variation of this criticism, it's worth examining Harris's specific claims and arguments to see how they compare to other complaints about social media and big tech generally. In his TED talk, Harris begins with the observation that social-media notifications on your smart devices may lead you to have thoughts you otherwise wouldn't think:

"If you see a notification it schedules you to have thoughts that maybe you didn't intend to have. If you swipe over that notification, it schedules you into spending a little bit of time getting sucked into something that maybe you didn't intend to get sucked into."

But, as I've suggested earlier in this series, this feature of continually tweaking content to attract your attention isn't unique to internet content or to our digital devices. This is something every communications company has always done — it's why ratings services for traditional broadcast radio and TV exist. Market research, together with attempts to deploy that research and to persuade or manipulate audiences, has been at the heart of the advertising industry for far longer than the internet has existed, as Vance Packard's 1957 book THE HIDDEN PERSUADERS suggested decades ago.

One major theme of Packard's THE HIDDEN PERSUADERS is that advertisers came to rely less on consumer surveys (derisively labeled "nose-counting") and more on "motivational research" — often abbreviated by 1950s practitioners as "MR" — to look past what consumers say they want. Instead, the goal is to observe how consumers actually behave, and then gear advertising content to shape or leverage their unconscious desires. Packard's narratives in THE HIDDEN PERSUADERS are driven by revelations of the disturbing and even scandalous agendas of MR entrepreneurs and the advertising companies that hire them. Even so, Packard is careful in his book, in its penultimate chapter, to address what he calls "the question of validity" — that is, the question of whether "hidden persuaders'" strategies and tactics for manipulating consumers and voters are actually scientifically grounded. Quite properly, Packard acknowledges that the claims of the MR companies may have been oversold, or may have been adopted by companies who simply lack any other strategy for figuring out how to reach and engage consumers.

In spite of Packard's scrupulous efforts to make sure that no claims of advertising's superpowers to sway our thinking are accepted uncritically, our culture nevertheless has accepted, at least provisionally, the idea that advertising (and its political cousin, propaganda) affects human beings at pre-rational levels. It is this acceptance of the idea that content somehow takes us over that Tristan Harris invokes consistently in his writings and presentations about how social media, the Facebook newsfeed, and internet advertising work on us.

Harris prefers to describe how these online phenomena affect us in deterministic ways:

"Now, if this is making you feel a little bit of outrage, notice that that thought just comes over you. Outrage is a really good way also of getting your attention. Because we don't choose outrage — it happens to us."

"The race for attention [is] the race to the bottom of the brainstem."

Nothing Harris says about the Facebook newsfeed would have seemed foreign to a Madison Avenue advertising executive in, say, 1957. (Vance Packard includes commercial advertising as well as political advertising as centerpieces of what he calls "the large-scale efforts being made, often with impressive success, to channel our unthinking habits, our purchasing decisions, and our thought processes by the use of insights gleaned from psychiatry and the social sciences.") Harris describes Facebook and other social media in ways that reflect time-honored criticisms of advertising generally, and mass media generally.

But remember that what Harris says about internet advertising or Facebook notifications or the Facebook news feed is true of all communications. It is the very nature of communications among human beings that they give us thoughts we would not otherwise have. It is the very nature of hearing things or reading things or watching things that we can't unhear them, or unread them, or unwatch them. This is not something uniquely terrible about internet services. Instead it is something inherent in language and art and all communications. (You can find a good working definition of "communications" in Article 19 of the United Nations' Universal Declaration of Human Rights, which states that individuals have the right "to seek, receive and impart information.") That some people study and attempt to perfect the effectiveness of internet offerings — advertising or Facebook content or anything else — is not proof that they're up to no good. (They arguably are exercising their human rights!) Similarly, the fact that writers and editors, including me, try to study how words can be more effective when it comes to sticking in your brain is not an assault on your agency.

It should give us pause that so many complaints about Facebook, about social media generally, about internet information services, and about digital devices actively (if maybe also unconsciously) echo complaints that have been made about any new mass medium (or mass-media product). What's lacking in modern efforts to criticize social media in particular — and especially when it comes to big questions like whether social media are damaging to democracy — is skepticism: most critics fail to examine their own hypotheses skeptically, seeking falsification (which philosopher Karl Popper rightly notes is a better test of the robustness of a theory) rather than verification.

As for all the addictive harms that are caused by combining Facebook and Twitter and Instagram and other internet services with smartphones, isn't it worth asking critics whether they've considered turning notifications off for the social-media apps?

(8) Social media are bad for us because they get their money from advertising, and advertising — especially effective advertising — is inherently bad for us.

Harris's co-conspirator Roger McNamee, whose authority to make pronouncements on what Facebook and other services are doing wrong derives primarily from his having gotten richer from them, is blunter in his assessment of Facebook as a public-health menace:

"Relative to FB, the combination of an advertising model with 2.1 billion personalized Truman Shows on the ubiquitous smartphone is wildly more engaging than any previous platform ... and the ads have unprecedented effectiveness."

There's a lot to make fun of here--the presumption that 2.1 billion Facebook users are just creating "personalized Truman Shows," for example. Only someone who fancies himself part of an elite that's immune to what Harris calls "persuasion" would presume to draw that conclusion about the hoi polloi. But let me focus instead on the second part--the bit about the ads with "unprecedented effectiveness." Here the idea is, obviously, that advertising may be better for us when it's less effective.

Let's allow for a moment that maybe that claim is true! Even if that's so, advertising has played a central role in Western commerce for at least a couple of centuries, and in world commerce for at least a century, and the idea that we need to make advertising less effective is, I think fairly clearly, a criticism of capitalism generally. Now, capitalism may very well deserve that sort of criticism, but it seems like an odd critique coming from someone who's already profited immensely from that capitalism.

And it also seems odd that it's focused particularly on social media when, as we have the helpful example of THE HIDDEN PERSUADERS to remind us, we've been theoretically aware of the manipulations of advertising for all of this century and at least half of the previous one. If you're going to go after commercialism and capitalism and advertising, you need to go big--you can't just say that advertising suddenly became a threat to us because it's more clearly targeted to us based on our actual interests. (Arguably that's a feature rather than a bug.)

In responding to these criticisms, McNamee says "I have no interest in telling people how to live or what products to use." (I think the meat of his and Harris's criticisms suggests otherwise.) He explains his concerns this way:

"My focus is on two things: protecting the innocent (e.g., children) from technology that harms their emotion development and protecting democracy from interference. I do not believe that tech companies should have the right to undermine public health and democracy in the pursuit of profits."

As is so often the case with entrepreneurial moral panics, the issue ultimately devolves to "protecting the innocent" — some of whom surely are children but some other proportion of whom constitute the rest of us. In an earlier part of his exploration of these issues on the venerable online conferencing system The WELL, McNamee makes clear, in fact, that he really is talking about the rest of us (adults as well as children):

"Facebook has 2.1 billion Truman Shows ... each person lives in a bubble tuned to their emotions ... and FB pushes emotional buttons as needed. Once it identifies an issue that provokes your emotions, it works to get you into groups of like-minded people. Such filter bubbles intensify pre-existing beliefs, making them more rigid and extreme. In many cases, FB helps people get to a state where they are resistant to ideas that conflict with the pre-existing ones, even if the new ideas are demonstrably true."

These generalizations wouldn't need much editing to fit 20th-century criticisms of TV or advertising or comic books or 19th-century criticisms of dime novels or 17th-century criticisms of the theater. What's left unanswered is the question of why this new mass medium is going to doom us when none of the other ones managed to do it.

(9) Social media need to be reformed so they aren't trying to make us do anything or get anything out of us.

It's possible we ultimately may reach some consensus on how social media and big internet platforms generally need to be reformed. But it's important to look closely at each reform proposal to make sure we understand what we're asking for and also that we're clear on what the reforms might take away from us. Once Harris's TED talk gets past the let-me-scare-you-about-Facebook phase, it gets better — Harris has a program for reform in mind. Specifically, he calls for what he calls "three radical changes to our society," which I will paraphrase and summarize here.

First, Harris says, "we need to acknowledge that we are persuadable." Here, unfortunately, he elides the distinction between being persuaded (which involves evaluation and crediting of arguments or points of view) and being influenced or manipulated (which may happen at an unconscious level). (In fairness, Vance Packard's THE HIDDEN PERSUADERS is guilty of the same elision.) But this first proposition isn't radical at all — even if we're sticks-in-the-mud, we normally believe we are persuadable. It may be harder to believe that we are unconsciously swayed by how social media interact with us, but I don't think it's exactly a radical leap. We can take it as a given, I think, that internet advertising and Facebook's and Google's algorithms try to influence us in various ways, and that they sometimes succeed. The next question then becomes whether this influence is necessarily pernicious, but Harris passes quickly over this question, assuming the answer is yes.

Second, Harris argues, we need new models and accountability systems, guaranteeing accountability and transparency for the ways in which our internet services and digital devices try to influence us. Here there's very little to argue with. Transparency about user-experience design that makes us more self-aware is all to the good. So that doesn't seem like a particularly radical goal either.

It's in Harris's third proposal — "We need a design renaissance" — that you actually do find something radical. As Harris explains it, we need to redesign our interactions with services and devices so that we're never persuaded to do something that we may not initially want to do. He states, baldly, that "the only form of ethical persuasion that exists is when the goals of the persuader are aligned with the goals of the persuadee." This is a fascinating proposition that, so far as I know, is not particularly well-grounded in fact or in the history of rhetoric or in the history of ethics. It seems clear that sometimes it's necessary to persuade people of ideas that they may be predisposed not to believe, and that, in fact, they may be more comfortable not believing.

Given that fact, if we are worried about whether Facebook's algorithms lead to "filter bubbles," we should hesitate to call for (or design) a system built around the idea of never persuading anyone whose goals aren't already aligned with yours. Arguably, such a social-media platform might be more prone to filter bubbles rather than less so. One doesn't get the sense, reviewing Harris's presentations or other public writings and statements from his allies like Roger McNamee, either that they've compared current internet communications with previous revolutions driven by new mass-communications platforms, or analyzed their theories in light of the centuries of philosophical inquiry regarding human autonomy, agency, and ethics.

Moving past Harris's TED talk, we next must consider McNamee's recent suggestion that Facebook move from an advertising-supported to a for-pay model. In a February 21 Washington Post op-ed, McNamee wrote the following:

"The indictments brought by special counsel Robert S. Mueller III against 13 individuals and three organizations accused of interfering with the U.S. election offer perhaps the most powerful evidence yet that Facebook and its Instagram subsidiary are harming public health and democracy. The best option for the company — and for democracy — is for Facebook to change its business model from one based on advertising to a subscription service."

In a nutshell, the idea here is that the incentives of advertisers, who want to compete for your attention, will necessarily skew how even the most well-meaning version of advertising-supported Facebook interacts with you, and not for the better. So the fix, he argues, is for Facebook to get rid of advertising altogether. "Facebook's advertising business model is hugely profitable," he writes, "but the incentives are perverse."

It's hard to escape the conclusion that McNamee believes either (a) advertising is inherently bad, or (b) advertising made more effective by automated internet platforms is particularly bad. Or both. And maybe advertising is, in fact, bad for us. (That's certainly a theme of Vance Packard's THE HIDDEN PERSUADERS, as well as of more recent work such as Tim Wu's 2016 book THE ATTENTION MERCHANTS.) But it's hard to escape the conclusion that McNamee, troubled by Brexit and by President Trump's election, wants to kick the economic legs out from under Facebook's (and, incidentally, Google's and Bing's and Yahoo's) economic success. Algorithm-driven serving of ads is bad for you! It creates perverse incentives! And so on.

It's true, of course, that some advertising algorithms have created perverse incentives (so that Candidate Trump's provocative ads were seen as more "engaging" and therefore were sold cheaper — or, alternatively, more expensively — than Candidate Clinton's). I think the criticism of that particular algorithmic approach to pricing advertising is valid. But there are other ways to design algorithmic ad service, and it seems to me that the companies that have been subject to the criticisms are being responsive to them, even in the absence of regulation. This, I think, is the proper way to interpret Mark Zuckerberg's newfound reflection (and maybe contrition) over Facebook's previous approach to its users' experience, and his resolve — honoring without mentioning Tristan Harris's longstanding critique — that "[o]ne of our big focus areas for 2018 is making sure the time we all spend on Facebook is time well spent."

Some Alternative Suggestions for Reform and/or Investigation

It's not too difficult, upon reflection, to wonder whether the problem of "information cocoons" or "filter bubbles" is really as terrible as some critics have maintained. If hyper-addictive filter bubbles have historically unprecedented power to overcome our free will, they presumably would have this effect even on the most assertive, independently thinking, strong-minded individuals — like Tristan Harris or Roger McNamee. Even six-sigma-degree individualists might not escape! But the evidence that this is, in fact, the case is less than overwhelming. What seems more likely (especially in the United States and in the EU) is that people who are dismayed by the outcome of the Brexit referendum or the U.S. election are trying to find a Grand Unifying Theory to explain why things didn't work out the way they'd expected. And social media are new, and they seem to have been used by mischievous actors who want to skew political processes, so it follows that the problem is rooted in technology generally or in social media or in smartphones in particular.

But nothing I write here should be taken as arguing that social media definitely aren't causing or magnifying harms. I can't claim to know for certain. And it may well be the case, in fact, that some large subset of human beings create "filter bubbles" for themselves regardless of what media technologies they're using. That's not a good thing, and it's certainly worth figuring out how to fix that problem if it's happening, but treating it as a phenomenon specific to social media perhaps focuses on a symptom of the human condition rather than a disease grounded in technology.

In this context, then, the question is, what's the fix? There are some good suggestions for short-term fixes, such as the platforms' adopting transparency measures regarding political ads. That's an idea worth exploring. Earlier in this series I've written about other ideas as well (e.g., using grayscale on our iPhones).

There are, of course, more general reforms that aren't specific to any particular platform. To start with, we certainly need to address more fundamental problems — meta-platform problems, if you will — of democratic politics, such as teaching critical thinking. We actually do know how to teach critical thinking — thanks to the ancient Greeks we've got a few thousand years of work done already on that project — but we've lacked the social will to teach it universally. It seems to me that this is the only way by which a cranky individualist minority that's not easily manipulated by social media, or by traditional media, can become the majority. Approaching all media (including radio, TV, newspapers, and other traditional media — not just internet media, or social media) with appropriate skepticism has to be part of any reform policy that will lead to lasting results.

It's easy, however, to believe that education — even the rigorous kind of education that includes both traditional critical-thinking skills and awareness of the techniques that may be used in swaying our opinions — will not be enough. One may reasonably believe that education can never be enough, or that, even when education is sufficient to change behavior (consider the education campaigns that reduced smoking or led to increased use of seatbelts), education all by itself simply takes too long. So, in addition to education reforms, there probably are more specific reforms — or at least a consensus as to best practices — that Facebook, other platforms, advertisers, government, and citizens ought to consider. (It seems likely that, to the extent private companies don't strongly embrace public-spirited best-practices reforms, governments will be willing to impose such reforms in the absence of self-policing.)

One of the major issues that deserve more study is the control and aggregation of user information by social-media platforms and search services. It's indisputable that online platforms have potentiated a major advance in market research — it's trivially easy nowadays for the platforms to aggregate data as to which ads are effective (e.g., by inspiring users to click through to the advertisers' websites). Surely we should be able to opt out, right?

But there's an unsettled public-policy question about what opting out of Facebook means or could mean. In his testimony earlier this year at Senate and House hearings on Facebook, Mark Zuckerberg has consistently stressed that individual users do have some high degree of control over the data (pictures, words, videos, and so on) that they've contributed to Facebook, and that users can choose to remove the data they've contributed. Recent updates in Facebook's privacy policy seem to underscore users' rights in this regard.

It seems clear that Facebook is committing itself at least to what I call Level 1 Privacy: you can erase your contributions from Facebook altogether and "disappear," at least when it comes to information you have personally contributed to the platform. But does it also mean that even other people who've shared my stuff no longer can share it (in effect, allowing me to punch holes in other people's sharing of my stuff when I depart)?

If Level 1 Privacy relates to the information (text, pictures, video, etc., that I've posted), that's not the end of the inquiry. There's also what I have called Level 2 Privacy, centering on what Facebook knows about me, or can infer from my having been on the service, even after I've gone. Facebook has had a proprietary interest in drawing inferences from how we interact with their service and using that to inform what content (including but not limited to ads) that Facebook serves to us. That's Facebook's data, not mine, because FB generated it, not me. If I leave Facebook, surely Facebook retains some data about me based on my interactions on the platform. (We also know, in the aftermath of Zuckerberg's testimony before Congress, that Facebook manages to collect data about people who themselves are not users of the service.)

And then there's Level 3 Privacy, which is the question of what Facebook can and should do with this inferential data that it has generated. Should Facebook share it with third parties? What about sharing it with governments? If I depart and leave a resulting hole in Facebook content, are there still ways to connect the dots so that not just Facebook itself, but also third-party actors, including governments, can draw reliable inferences about the now-absent me? In the United States, there arguably may be Fourth Amendment issues involved, as I've pointed out in a different context elsewhere. We may reasonably conclude that there should be limits on how such data can be used and on what inferences can be drawn. This is a public-policy discussion that needs to happen sooner rather than later.

Apart from privacy and personal-data concerns, we ought to consider what we really think about targeted advertising. If the criticism of targeted advertising, "motivational research," and the like historically has been that the ads are pushing us, then the criticism of internet advertising seems to be that internet-based ads are pulling us or even seducing us, based on what can be inferred about our inclinations and preferences. Here I think the immediate task has to be to assess whether the claims made by marketers and advertisers regarding the manipulative effects ads have on us are scientifically rigorous and testable. If the claims stand up to testing, then we have some hard public-policy questions we need to ask about whether and how advertising should be regulated. But if they aren't — if, in fact, our individual intuitions that we retain freedom and autonomy even in the face of internet advertising and all the data that can be gathered about us are correct — then we need to assert that freedom and autonomy and acknowledge that, just maybe, there's nothing categorically oppressive about being invited to engage in commercial transactions or urged to vote for a particular candidate.

Both the privacy questions and the advertising questions are big, complex questions that don't easily devolve to traditional privacy talk. If in fact we need to tackle these questions proactively, I think we must begin by defining what the problems are in ways that all of us (or at least most of us) agree on. Singling out Facebook as the single root cause of what's wrong with our culture today may appeal to us as human beings — we all like straightforward storylines — but that doesn't mean it's correct. Other internet services harvest our data too. And non-internet companies have done so (albeit in more primitive ways) for generations. It is difficult to say they never should do so, and it's difficult to frame the contours of what best practices should be.

But if we're going to grapple with the question of regulating social-media platforms and other internet services, thinking seriously about what best practices should be, generally speaking, is the task that lies before us now. Offloading the public-policy questions to the platforms themselves — by calling on Facebook or Twitter or Google to censor antisocial content, for example — is the wrong approach, because it dodges the big questions that we need to answer. Plus, it would likely entrench today's well-moneyed internet incumbents.

Nobody elected Mark Zuckerberg or Jack Dorsey (or Tim Cook or Sundar Pichai) to do that for us. The theory of democracy is that we decide the public-policy questions ourselves, or we elect policymakers to do that for us. But that means we each have to do the heavy lifting of figuring out what kinds of reforms we think we want, and what kind of commitments we're willing to make to get the policies right.

from the everything's-turning-up-facebook dept

Imagine that you're a new-media entrepreneur in Europe a few centuries back, and you come up with the idea of using moveable type in your printing press to make it easier and cheaper to produce more copies of books. If there are any would-be media critics in Europe taking note of your technological innovation, some will be optimists. The optimists will predict that cheap books will hasten the spread of knowledge and maybe even fuel a Renaissance of intellectual inquiry. They'll predict the rise of newspapers, perhaps, and anticipate increased solidarity of the citizenry thanks to shared information and shared culture.

Others will be pessimists—they'll foresee that the cheap spread of printed information will undermine institutions, will lead to doubts about the expertise of secular and religious leaders (who are, after all, better educated and better trained to handle the information that's now finding its way into ordinary people's hands). The pessimists will guess, quite reasonably, that cheap printing will lead to more publication of false information, heretical theories, and disruptive doctrines, which in turn may lead, ultimately, to destructive revolutions and religious schisms. The gloomiest pessimists will see, in cheap printing and later in the cheapness of paper itself—making it possible for all sorts of "fake news" to be spread—the sources of centuries of strife and division. And because the pain of the bad outcomes of cheap books is sharper and more attention-grabbing than contemplation of the long-term benefits of having most of the population know how to read, the gloomiest pessimists will seem to many to possess the more clear-eyed vision of the present and of the future. (Spoiler alert: both the optimists and the pessimists were right.)

Fast-forward to the 21st century, and this is just where we're finding ourselves when we look at public discussion and public policy centering on the internet, digital technologies, and social media. Two recent books written in the aftermath of recent revelations about mischievous and malicious exploitation of social-media platforms—especially Facebook and Twitter—exemplify this zeitgeist in different ways. And although both of these books are filled with valuable information and insights, they also yield (in different ways) to the temptation to see social media as the source of more harm than good. Which leaves me wanting very much both to praise what's great in these two books (which I read back-to-back) and to criticize them where I think they've gone too far over to the Dark Side.

The first book is Clint Watts's MESSING WITH THE ENEMY: SURVIVING IN A SOCIAL MEDIA WORLD OF HACKERS, TERRORISTS, RUSSIANS, AND FAKE NEWS. Watts is a West Point graduate and former FBI agent who's an expert on today's information warfare, including efforts by state actors (notably Russia) and non-state actors (notably Al Qaeda and ISIS) to exploit social media both to confound enemies and to recruit and inspire allies. I first heard of the book when I attended a conference at Stanford this spring where Watts—who has testified several times on these issues—was a presenter. His presentation was an eye-opener, erasing whatever lingering doubt I might have had about the scope and organization of those who want to use today's social media for malicious or destructive ends.

In MESSING WITH THE ENEMY Watts relates in a bracing yet matter-of-fact tone not only his substantive knowledge as a researcher and expert in social-media information warfare but also his first-person experiences in engaging with foreign terrorists active on social-media platforms and in being harassed by terrorists (mostly virtually) for challenging them in public exchanges. "The internet brought people together," Watts writes, "but today social media is tearing everyone apart." He notes the irony of social media's receiving premature and overgenerous credit for democratic movements against various dictatorships but later being exploited as platforms for anti-democratic and terrorist initiatives:

"Not long after many across the world applauded Facebook for toppling dictators during the Arab Spring revolutions of 2010 and 2011, it proved to be a propaganda platform and operational communications network for the largest terrorist mobilization in world history, bringing tens of thousands of foreign fighters under the Islamic State's banner in Syria and Iraq."

And it wasn't just non-state terrorists who learned quickly how to leverage social-media platforms; an increasingly activist and ambitious Russia, under the direction of Russian President Vladimir Putin, did so as well. Watts argues persuasively that Russia not only assisted and sponsored relatively inexpensive disinformation and propaganda campaigns using the social-media platforms to encourage divisiveness and lack of faith in government institutions (most successfully with the Brexit vote and the 2016 American elections) but also actively supported the hacking of the Democratic National Committee computer network which led to email dumps (using Wikileaks as a cutout). The security breaches, together with "computational propaganda"—social-media "bots" that mimicked real users in spreading disinformation and dissension—played an important role in the U.S. election, Watts writes, helping "the race remain close at times when Trump might have fallen completely out of the running." Even so, Watts doesn't believe Russian propaganda efforts alone would have tilted the outcome of the election—what they did instead was hobble support for Clinton so much that when FBI Director James Comey announced, one week before the election, that the Clinton email-server investigation had reopened, the Clinton campaign couldn't recover. "Without the Comey letter," he writes, "I believe Clinton would have won the election." Later in the book he connects the dots more explicitly: "Without the Russian influence effort, I believe Trump would not have been within striking distance of Clinton on Election Day. Russian influence, the Clinton email investigation, and luck brought Trump a victory—all of these forces combined."

Where Watts's book focuses on bad actors who exploit the openness of social-media platforms for various malicious ends, Siva Vaidhyanathan's ANTISOCIAL MEDIA: HOW FACEBOOK DISCONNECTS US AND UNDERMINES DEMOCRACY argues that the platforms—and especially the Facebook platform—are inherently corrosive to democracy. (Full disclosure: I went to school with Vaidhyanathan, worked on our student newspaper with him, and I consider him a friend.) Acknowledging his intellectual debt to his mentor, the late social critic Neil Postman, Vaidhyanathan blames the negative impacts of various exploitations of Facebook and other platforms on the platforms themselves. Postman was a committed technopessimist, and Vaidhyanathan takes time to chart in ANTISOCIAL MEDIA how Postman's general skepticism about new information technologies ultimately led Vaidhyanathan, his younger colleague, to temper his originally optimistic view of the internet and digital technologies generally. If you read Vaidhyanathan's work over time, you find in his writing a progressively darker view of the internet and its ongoing evolution, taking a significantly more pessimistic turn around the time of his 2011 book, THE GOOGLIZATION OF EVERYTHING (AND WHY WE SHOULD WORRY). In that earlier book, Vaidhyanathan took pains to be as fair-minded as he could in raising questions about Google and whether it can or should be trusted to play such an outsized role in our culture as the mediator of so much of our informational resources. He was skeptical (not unreasonably) about whether Google's confidence in both its own good intentions and its own expertise is sufficient reason to trust the company—not least because a powerful company can stay around as a gatekeeper for the internet long past the time its well-intentioned founders depart or retire.

With ANTISOCIAL MEDIA, Vaidhyanathan cuts Mark Zuckerberg (and his COO, Sheryl Sandberg) rather less of a break. Facebook's leadership, as I read Vaidhyanathan's take, is both more arrogant than Google's and more heedless of the consequences of its commitment to connect everyone in the world through the platform. Synthesizing a full range of recent critiques of Facebook's design as a platform, he relentlessly characterizes Facebook as driving us to shallow, reactive responses to one another rather than promoting reflective discourse that might improve or promote our shared values. Facebook, in his view, distracts us instead of inspiring us to think. It's addictive for us in something like the same way gambling or potato chips can be addictive for us. Facebook privileges the visual (photographs, images, GIFs, and the like), he insists, over the verbal and discursive.

And of course even the verbal content is either filter-bubbly—as when we convene in private Facebook groups to share, say, our unhappiness about current politics—or divisive (so that we share and intensify our outrage about other people's bad behavior, maybe including screenshots of something awful someone has said elsewhere on Facebook or on Twitter). Vaidhyanathan suggests that at one point our political discourse as ordinary citizens was more rational and reflective, but now is more emotion- and rage-driven and divisive. Me, I think the emotionalism and rage were always there.

Even when Vaidhyanathan allows that there may be something positive about one's interactions on Facebook, he can't quite help himself from being reductive and dismissive about it:

"Nor is Facebook bad for everyone all the time. In fact, it's benefited millions individually. Facebook has also allowed people to find support and community despite being shunned by friends and family or being geographically isolated. Facebook is still our chief source of cute baby and puppy photos. Babies and puppies are among the things that make life worth living. We could all use more images of cuteness and sweetness to get us through our days. On Facebook babies and puppies run in the same column as serious personal appeals for financial help with medical care, advertisements for and against political candidates, bogus claims against science, and appeals to racism and violence."

In other words, Facebook may occasionally make us feel good for the right reasons (babies and puppies) but that's about the best most people can hope for from the platform. Vaidhyanathan has a particular antipathy towards Candy Crush, which you can connect to your Facebook account—a video game that certainly seems vacuous, but also seems innocuous to me. (I've never played it myself.)

Given his antipathy towards Facebook, you might think that Vaidhyanathan's book is just another reworking of the moral-panic tomes that we've seen a lot of in the last year or two, which decry the internet and social media much the same way previous generations of would-be social critics complained about television, or the movies, or rock music, or comic books. (Hi, Jonathan Taplin! Hi, Franklin Foer!) But that's a mistake, primarily because Vaidhyanathan digs deep into choices—some technical and some policy-driven—that Facebook has made that facilitated bad actors' using the platform maliciously and destructively. Plus, Vaidhyanathan, to his credit, gives attention to how oppressive governments have learned to use the platform to stifle dissent and mute political opposition. (Watts notes this as well.) I was particularly pleased to see his calling out how Facebook is used in India, in the Philippines, and in Cambodia—all countries where I've been privileged to work directly with pro-democracy NGOs.

What I find particularly valuable is Vaidhyanathan's exploration of Facebook's advertising policies and their effect on political ads—I learned plenty from ANTISOCIAL MEDIA about the company's "Custom Audiences from Customer Lists," including this disturbing bit:

"Facebook's Custom Audiences from Customer Lists also gives campaigns an additional power. By entering email addresses of those unlikely to support a candidate or those likely to support an opponent, a campaign can narrowly target groups as small as twenty people and dissuade them from voting at all. 'We have three major voter suppression operations under way,' a campaign official told Bloomberg News just weeks before the election. The campaign was working to convince white leftists and liberals who had supported socialist Bernie Sanders in his primary bid against Clinton, young women, and African American voters not to go to the polls on election day. The campaign carefully targeted messages on Facebook to each of these groups. Clinton's former support for international trade agreements would raise doubts among leftists. Her husband's documented affairs with other women might soften support for Clinton among young women...."

What one saw in Facebook's deployment of the Custom Audiences feature is something fundamentally new and disturbing:

"Custom Audiences is a powerful tool that was not available to President Barack Obama and Governor Mitt Romney when they ran for president in 2012. It was developed in 2014 to help Facebook reach the takeoff point in profits and revenue. Because Facebook develops advertising tools for firms that sell shoes and cosmetics and only later invites political campaigns to use them, 'they never worried about the worst-case abuse of this capability, unaccountable, unreviewable political ads,' said Professor David Carroll of the Parsons School of Design. Such ads are created on a massive scale, targeted at groups as small as twenty, and disappear, so they are never examined or debated."

Vaidhyanathan quite properly criticizes Mark Zuckerberg's late-to-the-party recognition that perhaps Facebook may be much more of a home to divisiveness and political mischief (and general unhappiness) than he previously had been willing to admit. And he's right to say that some of Zuckerberg's framing of new design directions for Facebook may be as likely to cause harm (e.g., more self-isolation in filter bubbles) as good. "The existence of hundreds of Facebook groups devoted to convincing others that the earth is flat should have raised some doubt among Facebook's leaders that empowering groups might not enhance the information ecosystem of Facebook," he writes. "Groups are as likely to divide us and make us dumber as any other aspect of Facebook."

But here I have to take issue with my friend Siva, because he overlooks or dismisses the possibility that Facebook's increasing support for "groups" of like-minded users may ultimately add up to a net social positive. For example, the #metoo groups seem to have enabled more women (and men) to come forward and talk frankly about their experiences with sexual assault and to begin to hold perpetrators of sexual assault and sexual harassment accountable. The fact that some folks also use Facebook groups for more frivolous or wrongheaded reasons (like promoting flat-earthism) strikes me as comparatively inconsequential.

Vaidhyanathan's also too quick, it seems to me, to dismiss the potential for Facebook and other platforms to facilitate political and social reform in transitional democracies and developing countries. Yes, bad governments can use social media to promote support for their regimes, and I don't think it's particularly remarkable that oppressive governments (or non-state actors like ISIS) learn to use new communications media maliciously. Governments may frequently be slow, but they're not invariably stupid—so it's no big surprise, for example, that Cambodian prime minister Hun Sen has figured out how to use his Facebook page to drum up support for his one-party rule, which has driven out opposition press and the opposition Cambodia National Rescue Party.

But Vaidhyanathan overlooks how some activists are using Facebook's private groups to organize reform or opposition activities. In researching this review, I reached out to friends and colleagues in Cambodia, the Philippines and elsewhere to confirm whether the platform is useful to them—certainly they're cautious about what they say in public on Facebook, but they definitely use private groups for some organizational purposes. What makes the platform useful to activists is that it's accessible, easy to use, and amenable to posting multimedia sources (like pictures and videos of police and soldiers acting brutally towards protestors). And it's not just images--when I worked with activists in Cambodia on developing a citizen-rights framework as a response to their government's abrupt initiation of "cybercrime" legislation (really an effort to suppress dissenting speech), I suggested they work collaboratively in the MediaWiki software that Wikipedia's editors use. But the Cambodian activists quickly discovered that Facebook was an easier platform for technically less proficient users to learn quickly and use to review draft texts together. I was surprised at this, but also encouraged. Even though I had my own doubts whether Facebook was the right tool for the job, I figured they didn't need yet another American trying to tell them how to manage their own collaborations.

Like Watts's book, Vaidhyanathan's is strongest where it's built on independent research that doesn't merely echo what other critics have said. And both books are weakest when they uncritically import notions like Eli Pariser's "filter bubble" hypothesis or the social-media-makes-us-depressed hypothesis. (Both these notions are echoes of previous moral panics about previous new media, including broadcasting in the 20th century and cheap paper in the 19th. And both have been challenged by researchers.) Vaidhyanathan's so certain of the meme that Facebook's Free Basics program is an assault on network neutrality that he mostly doesn't investigate the program itself in any detail. The result is that his book (to this reader, anyway) seems to conflate Free Basics (a collection of low-bandwidth resources that Facebook provided a zero-rated platform for) with Facebook Zero (a zero-rated low-bandwidth version of Facebook by itself). In contrast, the Wikipedia articles on Free Basics and Facebook Zero lead off with warnings not to confuse the two.

In addition to the strengths and weaknesses the two books share, they also have a certain rhetorical approach in common—largely, in my view, because both authors want to push for reform, and because they want to challenge the sunny-yet-unwarranted optimism with which Zuckerberg and Sandberg and other boosters have characterized social media. In effect, both authors seem to take the approach that, as we learn to be much more critical of social-media platforms, we don't need to worry about throwing out the baby with the bathwater—because, really, there is no baby. (If we bail on Facebook altogether, it's only the frequent baby pictures that we'd lose.)

Even so, both books also share an unwillingness to call for simple opposition to Facebook and other social-media platforms merely because they're misused. Watts argues persuasively instead for more coherent and effective positive messaging about American politics and culture—of the sort that used to be the province of the United States Information Agency. (I think he'd be happy if the USIA were revived; I would be too.) He also calls for an "equivalent of Consumer Reports" to "be created for social media feeds," which also strikes me as a fine idea.

Vaidhyanathan's reform agenda is less optimistic. For one thing, he's dismissive of "media literacy" as a solution because he doubts "we could even agree on what that term means and that there would be some way to train nearly two billion people to distinguish good from bad content." He has some near-term suggestions—for example, he'd like to see an antitrust-type initiative to break up Facebook, although it's unclear to me whether multiple competing Facebooks or a disassembled Facebook would be less hospitable to the kind of shallowness and abuses he sees in the platform's current incarnation. But mostly he calls for a kind of cultural shift driven by social critics and researchers like himself:

"This will be a long process. Those concerned about the degradation of public discourse and the erosion of trust in experts and institutions will have to mount a campaign to challenge the dominant techno-fundamentalist myth. The long, slow process of changing minds, cultures, and ideologies never yields results in the short term. It sometimes yields results over decades or centuries."

I agree that it frequently takes decades or even longer to truly assess how new media affect our culture for good or for ill. But as long as we're contemplating all those years of effort, I see no reason not to put media literacy on the agenda as well. I think there's plenty of evidence that people can learn to read what they see on the internet critically and do better than simply cherry-pick sources that agree with them—a vice that, it must be said, predates social media and the internet itself. The result of increasing skepticism about media platforms and the information we find in them may also lead (as Watts warns us) to more distrust of "experts" and "expertise," with the result that true expertise is more likely to be unfairly and unwisely devalued. But my own view is that skepticism and critical thinking—even about experts with expertise—is generally positive. For example, it may be annoying to today's physicians that patients increasingly resort to the internet about their real or imagined health problems—but engaged patients, even if they have to be walked back from foolish ideas again and again, are probably better off than the more passive health-care consumers of previous generations.

I think Vaidhyanathan is right, ultimately, to urge that we continue to think about social media critically and skeptically, over decades—and, you know, forever. But I think Watts offers the best near-term tactical solution:

"On social media, the most effective way to challenge a troll comes from a method that's taught in intelligence analysis. To sharpen an analyst's skills and judgment, a supervisor or instructor will ask the subordinate two questions when he or she provides an assessment: 'What do those who disagree with your assessment think, and why?' The analyst must articulate a competing viewpoint. The second question is even more important: 'Under what conditions, specifically, would your assessment be wrong?' [...] When I get a troll on Facebook, I'll inquire, 'Under what circumstance would you admit you were wrong?' or 'What evidence would convince you otherwise?' If they don't answer or can't articulate their answer, then I disregard them on that topic indefinitely."

Watts's heuristic strikes me as the perfect first entry in the syllabus for media literacy in particular and for criticism of social media in general.

In sum, I think both MESSING WITH THE ENEMY and ANTISOCIAL MEDIA deserve to be on every internet-focused policymaker's must-read list this season. I also think it's best that readers honor these books by reading them with the same clear-eyed skepticism that their authors preach.

from the not-with-the-fbi dept

When the FBI sued Apple a couple of years ago to compel Apple's help in cracking an iPhone 5c belonging to alleged terrorist Syed Rizwan Farook, the lines seemed clearly drawn. On the one hand, the U.S. government was asserting its right (under an 18th-century statutory provision called the All Writs Act) to force Apple to develop and implement technologies enabling the Bureau to gather all the evidence that might possibly be relevant in the San Bernardino terrorist-attack case. On the other, a leading tech company challenged the demand that it help crack the digital-security technologies it had painstakingly developed to protect users — a particularly pressing concern given that these days we often have more personal information on our handheld devices than we used to keep in our entire homes.

What a difference a couple of years has made. The Department of Justice's Office of Inspector General (OIG) released a report in March on the FBI's internal handling of the issue of whether the Bureau truly needed Apple's assistance. The report makes clear that, despite what the Bureau said in its court filings, the FBI hadn't explored every alternative, including consultation with outside technology vendors, in cracking the security of the iPhone in question. The report also seemed to suggest that some department heads in the government agency were less concerned with the information that might be on that particular device than they were with setting a general precedent in court. Their goal? To establish as a legal precedent that Apple and other vendors have a general obligation to develop and apply technologies to crack the very digital security measures they so painstakingly implemented to protect their users.

In the aftermath of that report, and in a heartening display of bipartisanship, Republican and Democratic members of Congress came together last week to introduce a new bill, the Secure Data Act of 2018, aimed at limiting the ability of federal agencies to seek court orders broadly requiring Apple and other technology vendors to help breach their own security technologies. (The bill would exclude court orders based on the comparatively narrow Communications Assistance for Law Enforcement Act—a.k.a. CALEA, passed in 1994—which requires telecommunications companies to assist federal agencies in implementing targeted wiretaps.)

This isn't the first time members of Congress in both parties have tried to limit the federal government's ability to demand that tech vendors build "backdoors" into their products. Bills similar to this year's Secure Data Act have been introduced a couple of times before in recent years. What makes this year's bill different, though, is the less-than-flattering light cast by the OIG report. (The bill's sponsors have expressly said as much.) At the very least the report makes clear that the FBI's own bureaucratic handling of the research into whether technical solutions were available to hack the locked iPhone led to both confusion as to what was possible and to delays in resolving that confusion.

But worse than that is the report's suggestion that some technologically challenged FBI department heads didn't even know how to frame (or parse) the questions about whether the agency possessed, or had access to, technical solutions to crack the iPhone's security. And even worse is the report's account that at least some Bureau leaders may not even have wanted to discover that such a technical solution was already available—because that discovery could undermine litigation they hoped would establish Apple's (and other vendors') general obligation to hack their own digital security if a court orders them to. As the report puts it:

After the outside vendor successfully demonstrated its technique to the FBI in late March, [Executive Assistant Director Amy] Hess learned of an alleged disagreement between the CEAU [Cryptographic and Electronic Analysis Unit] and ROU [Remote Operations Unit] Chiefs over the use of this technique to exploit the Farook iPhone – the ROU Chief wanted to use capabilities available to national security programs, and the CEAU Chief did not. She became concerned that the CEAU Chief did not seem to want to find a technical solution, and that perhaps he knew of a solution but remained silent in order to pursue his own agenda of obtaining a favorable court ruling against Apple. According to EAD Hess, the problem with the Farook iPhone encryption was the "poster child" case for the Going Dark challenge.

There's a lot to unpack here, and one key question is whether "capabilities available to national security programs" — that is, technologies used for the FBI's counterintelligence programs — can and should be used in pursuing criminal investigations and prosecutions. (If such technologies are used in criminal cases, the technologies may have to be revealed as part of court proceedings, which would bother the counterintelligence personnel in the FBI who don't want to publicize the tools they use.) But the case against Apple Inc. was based on a blanket assertion by the FBI that neither its technical divisions nor the vendors the agency works with had access to any technical measures to break into Farook's company-issued iPhone. (Farook had destroyed his personal iPhones, and the FBI's eventually successful unlocking of his employer-issued phone apparently produced no evidence relating to the terrorist plot.)

Was the problem just bureaucratic miscommunication? The OIG report concludes that this was the fundamental source of internal misunderstandings about whether the FBI did have access to technical solutions that didn't require drafting Apple into compelled cooperation to crack its own security. (The report recommends some structural reforms to address this.) And certainly there's evidence in the report that miscommunication plus the occasional lack of technical understanding did create problems within the Bureau.

But the OIG report also suggests that some individuals within the Bureau actually may have preferred to be able to argue that the FBI didn't have any alternative but to seek to compel Apple's technical assistance:

The CEAU Chief told the OIG that, after the outside vendor came forward [with a technical solution], he became frustrated that the case against Apple could no longer go forward, and he vented his frustration to the ROU Chief. He acknowledged that during this conversation between the two, he expressed disappointment that the ROU Chief had engaged an outside vendor to assist with the Farook iPhone, asking the ROU Chief, "Why did you do that for?" According to the CEAU Chief, his unit did not ask CEAU's partners to check with their outside vendors. CEAU was only interested in knowing what their partners had in hand – indicating that checking with "everybody" did not include OTD's trusted vendors, at least in the CEAU Chief's mind.

I have to note here, of course, that the FBI has consistently opposed strong encryption and other essential digital-security technologies since the "Crypto Wars" of the 1990s. This isn't due to any significant failures of the agency to acquire evidence it needs; instead, it's due to the FBI's fears that its ability to capture digital evidence of any sort may someday be significantly hindered by encryption and other security tech. That opposition to strong security tech has been baked into FBI culture for a while, and it's at the root of the agency's fears of "the Going Dark challenge."

Let's be real: it's not clear that encryption will ever be the problem the FBI thinks it is, given that we live in what law professor Peter Swire has called "The Golden Age of Surveillance." But if the day that digital-security technology significantly hinders criminal investigations ever does come, then it would be appropriate for Congress to consider whether CALEA should be updated, or whether a new CALEA-like framework for technology companies like Apple should be enacted.

But that day hasn't come yet. That's why I favor passage of the Secure Data Act of 2018 — it would limit federal agencies' ability to impose general-purpose technology mandates through the courts' interpretation of a two-century-old ambiguous statute. (Among other features, the Act also would effectively clarify that the All Writs Act, a general-purpose statutory provision from the 18th century, can't be invoked all by itself to compel technology companies to undermine the very digital security measures they've been working so hard to strengthen.) In the long term, our security (in both cyberspace and meatspace) is going to depend much more on whether we all have technical tools that protect our information and data than on whether the FBI has a legal mandate compelling Apple to hack into our iPhones.

Of course, I may be wrong about this. But I share Apple CEO Tim Cook's view that this public-policy issue ought to be fully debated by our lawmakers; Congress is a better venue for policy development than a lawsuit filed over a single dramatic incident like the terrorist attack in San Bernardino.

from the going-back dept

Mike Godwin (you know who he is) was recently going through some of his earlier writings, and came across an essay (really an outline) he had written to the Cypherpunks email list 25 years ago, in April of 1993, concerning the Clipper Chip and the early battles over encryption and civil liberties. If you don't recall, the Clipper Chip was an early attempt by the Clinton administration to establish a form of backdoored encryption, using a key escrow system. What became quite clear in reading through this 25-year-old email is just how little has changed in the past 25 years. As we are in the midst of a new crypto war, Godwin has suggested republishing this essay from so long ago to take a look back at what was said back then and compare it to today.

Note: These notes were a response to a question during Saturday's Cypherpunks meeting about the possible implications of the Clipper Chip initiative on Fourth Amendment rights. Forward to anyone else who might think these interesting.

--Mike

Notes on Cryptography, Digital Telephony, and the Bill of Rights By Mike Godwin

I. Introduction

A. The recent announcement of the federal government's "Clipper Chip" has started me thinking again about the principled "pure Constitutional" arguments a) opposed to Digital Telephony and b) in favor of the continuing legality of widespread powerful public-key encryption.

B. These notes do *not* include many of the complaints that have already been raised about the Clipper Chip initiative, such as:

(1) Failure of the Administration to conduct an inquiry before embracing a standard,
(2) Refusal to allow public scrutiny of the chosen encryption algorithm(s), which is the normal procedure for testing a cryptographic scheme, and
(3) Failure of the administration to address the policy questions raised by the Clipper Chip, such as whether the right balance between privacy and law-enforcement needs has been struck.

C. In other words, they do not address complaints about the federal government's *process* in embracing the Clipper Chip system. They do, however, attempt to address some of the substantive legal and Constitutional questions raised by the Clipper Chip and Digital Telephony initiatives.

II. Hard Questions from Law Enforcement

A. In trying to clarify my own thinking about the possible Constitutional issues raised by the government's efforts to guarantee access to public communications between individuals, I have spoken and argued with a number of individuals who are on the other side of the issues from me, including Dorothy Denning and various representatives of the FBI, including Alan McDonald.

B. McDonald, like Denning and other proponents both of Digital Telephony and of a standard key-escrow system for cryptography, is fond of asking hard questions: What if FBI had a wiretap authorization order and couldn't implement it, either because it was impossible to extract the right bits from a digital-telephony data stream, or because the communication was encrypted? Doesn't it make sense to have a law that requires the phone companies to be able to comply with a wiretap order?

C. Rather than respond to these questions, for now at least let's ask a different question. Suppose the FBI had an authorization order for a secret microphone at a public restaurant. Now suppose it planted the bug, but couldn't make out the conversation it was authorized to "seize" because of background noise at the restaurant. Wouldn't it make sense to have a law requiring everyone to speak more softly in restaurants and not to clatter the dishes so much?

D. This response is not entirely facetious. The Department of Justice and the FBI have consistently insisted that they are not seeking new authority under the federal wiretap statutes ("Title III"). The same statute that was drafted to outline the authority for law enforcement to tap telephonic conversations was also drafted to outline law enforcement's authority to capture normal spoken conversations with secret or remote microphones. (The statute was amended in the middle '80s by the Electronic Communications Privacy Act to protect "electronic communications," which includes e-mail, and a new chapter protecting _stored_ electronic communications was also added.)

E. Should we understand the law the way Digital Telephony proponents insist we do--as a law designed to mandate that the FBI (for example) be guaranteed access to telephonic communications? Digital Telephony supporters insist that it merely "clarifies" phone company obligations and governmental rights under Title III. If they're right, then I think we have to understand the provisions regarding "oral communications" the same way. Which is to say, it would make perfect sense to have a law requiring that people speak quietly in public places, so as to guarantee that the government can bug an oral conversation if it needs to.

F. But of course I don't really take Digital Telephony as an initiative to "clarify" governmental prerogatives. It seems clear to me that Digital Telephony, together with the "Clipper" initiative, prefigure a government strategy to set up an information regime that precludes truly private communications between individuals who are speaking in any way other than face-to-face. This I think is an expansion of government authority by almost any analysis.

III. Digital Telephony, Cryptography, and the Fourth Amendment

A. In talking with law enforcement representatives such as Gail Thackeray, one occasionally encounters the view that the Fourth Amendment is actually a _grant_ of a Constitutional entitlement to searches and seizures. This interpretation is jolting to those who have studied the history of the Fourth Amendment and who recognize that it was drafted as a limitation on government power, not as a grant of government power. But even if one doesn't know the history of this amendment, one can look at its language and draw certain conclusions.

B. The Fourth Amendment reads: "The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized."

C. Conspicuously missing from the language of this amendment is any guarantee that the government, with a properly obtained warrant in hand, will be _successful_ in finding the right place to be searched or persons or things to be seized. What the Fourth Amendment is about is _obtaining warrants_--similarly, what the wiretap statutes are about is _obtaining authorization_ for wiretaps and other interceptions. Neither the Fourth Amendment nor Title III nor the other protections of the ECPA constitutes an _entitlement_ for law enforcement.

D. It follows, then, that if digital telephony or widespread encryption were to create new burdens for law enforcement, this would not, as some law-enforcement representatives have argued, constitute an "effective repeal" of Title III. What it would constitute is a change in the environment in which law enforcement, along with the rest of us, has to work. Technology often creates changes in our social environment--some, such as the original innovation of the wiretap, may aid law enforcement, while others, such as powerful public-key cryptography, pose the risk of inhibiting law enforcement. Historically, law enforcement has responded to technological change by adapting. (Indeed, the original wiretaps were an adaptation to the widespread use of the telephone.) Does it make sense for law enforcement suddenly to be able to require that the rest of society adapt to its perceived needs?

IV. Cryptography and the First Amendment

A. Increasingly, I have come to see two strong links between the use of cryptography and the First Amendment. The two links are freedom of expression and freedom of association.

B. By "freedom of expression" I mean the traditionally understood freedoms of speech and the press, as well as freedom of inquiry, which has also long been understood to be protected by the First Amendment. It is hard to see how saying or publishing something that happens to be encrypted could not be protected under the First Amendment. It would be a very poor freedom of speech indeed that dictated that we could *never* choose the form in which we speak. Even the traditional limitations on freedom of speech have never reached so far. My decision to encrypt a communication should be no more illegal than my decision to speak in code. To take one example, suppose my mother and I agree that the code "777", when sent to me through my pager, means "I want you to call me and tell me how my grandchild is doing." Does the FBI have a right to complain because they don't know what "777" means? Should the FBI require pager services never to allow such codes to be used? The First Amendment, it seems to me, requires that both questions be answered "No."

C. "Freedom of association" is a First Amendment right that was first clearly articulated in a Supreme Court case in 1958: NAACP v. Alabama ex rel. Patterson. In that case, the Court held that Alabama could not require the NAACP to disclose a list of its members residing in Alabama. The Court accepted the NAACP's argument that disclosure of its list would lead to reprisals on its members; it held that such forced disclosures, by placing an undue burden on NAACP members' exercise of their freedoms of association and expression, effectively negate those freedoms. (It is also important to note here that the Supreme Court in effect recognized that anonymity might be closely associated with First Amendment rights.)

D. If a law guaranteeing disclosure of one's name is sufficiently "chilling" of First Amendment rights to be unconstitutional, surely a law requiring that the government be able to read any communications is also "chilling," not only of my right to speak, but also of my decisions on whom to speak to. Knowing that I cannot guarantee the privacy of my communications may mean that I don't conspire to arrange any drug deals or kidnapping-murders (or that I'll be detected if I do), but it also may mean that I choose not to use this medium to speak to a loved one, or my lawyer, or to my psychiatrist, or to an outspoken political activist. Given that computer-based communications are likely to become the dominant communications medium in the next century, isn't this chilling effect an awfully high price to pay in order to keep law enforcement from having to devise new solutions to new problems?

V. Rereading the Clipper Chip announcements

A. It is important to recognize that the Clipper Chip represents, among other things, an effort by the government to pre-empt certain criticisms. The language of the announcements makes clear that the government wants us to believe it has recognized all needs and come up with a credible solution to the dilemma many believe is posed by the ubiquity of powerful cryptography.

B. Because the government is attempting to appear to take a "moderate" or "balanced" position on the issue, its initiative will tend to pre-empt criticisms of the government's proposal on the grounds of *process* alone.

C. But there is more to complain about here than bad process. My rereading of the Clipper Chip announcements will reveal that the government hopes to develop a national policy that includes limitations on some kinds of cryptography. Take the following two statements, for example:

D. 'We need the "Clipper Chip" and other approaches that can both provide law-abiding citizens with access to the encryption they need and prevent criminals from using it to hide their illegal activities.'

E. 'The Administration is not saying, "since encryption threatens the public safety and effective law enforcement, we will prohibit it outright" (as some countries have effectively done); nor is the U.S. saying that "every American, as a matter of right, is entitled to an unbreakable commercial encryption product." '

F. It is clear that neither Digital Telephony nor the Clipper Chip makes any sense without restrictions on other kinds of encryption. Widespread powerful public-key encryption, for example, would render useless any improved wiretappability in the communications infrastructure, and would render superfluous any key-escrow scheme.
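The point above can be made concrete with a toy sketch. Below is textbook RSA with tiny illustrative primes (the values are the standard small-number teaching example, not real key sizes): anyone can encrypt with the public key, but only the holder of the private exponent can decrypt. Once such keys are generated privately, there is simply no escrowed copy for a third party to use, which is why key escrow only "works" if strong private encryption is restricted.

```python
# Toy illustration only -- NOT usable cryptography. Real RSA keys use
# primes hundreds of digits long plus padding; these tiny values just
# show the asymmetry that makes key escrow superfluous.

p, q = 61, 53            # secret primes, known only to the key's owner
n = p * q                # public modulus: 3233
phi = (p - 1) * (q - 1)  # 3120, used to derive the private exponent
e = 17                   # public exponent (coprime to phi)
d = pow(e, -1, phi)      # private exponent: modular inverse of e (2753)

m = 65                   # a message, encoded as a number smaller than n
c = pow(m, e, n)         # anyone can encrypt with the public pair (e, n)
assert pow(c, d, n) == m # but only the private exponent d recovers m
print(c)                 # the ciphertext reveals nothing useful about m
```

The three-argument `pow(e, -1, phi)` form for the modular inverse requires Python 3.8 or later; earlier versions would need an extended-Euclid helper.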

G. It follows, then, that we should anticipate, consistent with these two initiatives, an eventual effort to prevent or inhibit the use of powerful encryption schemes in private hands.

H. Together with the Digital Telephony and Clipper Chip initiatives, this effort would, in my opinion, constitute an attempt to shift the Constitutional balance of rights and responsibilities against private entities and individuals and in favor of law enforcement. They would, in effect, create _entitlements_ for law enforcement where none existed before.

I. As my notes here suggest, these initiatives may be, in their essence, inconsistent with Constitutional guarantees of expression, association, and privacy.

It’s the nature of having known John Perry Barlow, and having been his friend, that you can’t write about what it means to have lost him Wednesday morning (he died in his sleep at the too-young age of 70) without writing about how he changed your life. So, I ask your forgiveness in advance if I say too much about myself here on the way to saying more about John.

I can and will testify that I had a life before I met John Perry Barlow. At the beginning of 1990 I was finishing up law school in Texas (only one more semester and then the bar exam!) and was beginning to think about my professional future (how about being a prosecutor in Houston?) and my personal future (should my long-term girlfriend and I get married?).

That was the glide path I was on before Grateful Dead lyricist John Perry Barlow, together with software entrepreneur Mitch Kapor and Sun Microsystems pioneering programmer John Gilmore, decided to start what would shortly be known as the Electronic Frontier Foundation (EFF). EFF disrupted all my inertial, half-formed plans and changed my life forever. (I didn’t, for example, become a prosecutor.) And John Perry Barlow was the red-hot beating heart of EFF.

I’d been feeling tremors in the Force before EFF even had a name, though. For reasons I can’t quite explain, I’d found ways to persuade people, including my university, to give me access to internet-capable accounts and services so that I could see the rest of the digital world as it was then represented in Usenet. I’d been a BBS hobbyist in the 1980s, but I thought I’d exhausted the BBS scene in Austin and wanted to know more of the larger digital world. Thanks to Usenet, over the Christmas break before my last semester of law school I’d become friends online with Clifford Stoll, whose book “The Cuckoo’s Egg” detailed how he had detected and helped thwart a foreign plot to hack into U.S. academic and research computers. Cliff had included his email address in the book and, as we so often did in those days, I just fired off a note to him and got to know him.

At about the same time, at my girlfriend’s urging, we spent a couple of days in San Francisco at MacWorld Expo, where I first met Mitch Kapor, who wore a Hawaiian shirt and demo’d what became for years my favorite Mac application, On Location. Other things were happening as well, and my computer-hobbyist nature— never too far in the background during my law-student years—kept me attuned to what seemed to be happening in the larger world which, as I would have framed it back then, seemed to reflect a convergence of my interests in constitutional law and cyberspace.

Just a month or two later, I came across the March 1990 issue of Harper’s Magazine, and there on the cover was this colloquy edited by Jack Hitt and Paul Tough titled “Is Computer Hacking a Crime?” (Harper’s theoretically makes a download of that old article available, but the links don’t work. You can find a transcribed version here). I wasn’t a subscriber, but I knew I had to read this. And there was Barlow – whose name I didn’t recognize – along with luminaries like Stewart Brand (former Merry Prankster, later the founder of The Whole Earth Catalog and The Whole Earth Review), Richard Stallman (founder and chief visionary of the Free Software movement that gave birth to the Linux operating system) and my new friend Cliff Stoll. They all had lots of opinions about computer hacking, but the participant whose words spoke most clearly to me was Barlow:

“BARLOW [Day 1, 11:54 A.M.]: Hackers hack. Yeah, right, but what's more to the point is that humans hack and always have. Far more than just opposable thumbs, upright posture, or excess cranial capacity, human beings are set apart from all other species by an itch, a hard-wired dissatisfaction. Computer hacking is just the latest in a series of quests that started with fire hacking. Hacking is also a collective enterprise. It brings to our joint endeavors the simultaneity that other collective organisms -- ant colonies, Canada geese -- take for granted. This is important, because combined with our itch to probe is a need to connect. Humans miss the almost telepathic connectedness that I've observed in other herding mammals. And we want it back. Ironically, the solitary sociopath and his 3:00 A.M. endeavors hold the most promise for delivering species reunion.”

This was a guy who really got it! A guy who recognized the itchiness in my brain compelling me to stay up nights finding ways to get into campus mainframes back in the 1970s, that had me tinkering with Apple II computers, with PCs and with Macs in the 1980s, and that had driven me to join the global Usenet conversation in just the last few months. Barlow saw that what we were doing with computers now (that is, in the 1980s and 1990s, at the dawn of the public internet) was essentially human—that human beings, being what they are, couldn’t stop themselves from doing it. And look at the line Barlow draws in this contribution (his first in the public colloquy in Harper’s)—it’s a line connecting human beings’ invention/discovery of fire (or “fire hacking”) with our use of computers to communicate with one another. “This is important, because combined with our itch to probe is a need to connect.” We miss our “almost telepathic connectedness.” And, as Barlow wrote, “we want it back.”

During my law school years—as well as the year I took off to serve as editor of the University of Texas student newspaper, The Daily Texan—I’d relied on computer BBSes to stay connected with people outside my studies, outside my work. Yet I’d begun to recognize that computer communications were just the same kinds of speech that our Constitution and Bill of Rights were meant to protect. I tried to persuade a favorite professor to let me write a research paper, for credit, on the First Amendment and computer bulletin boards. The professor (an immensely well-regarded First Amendment scholar, and deservedly so) shut me down, essentially saying that First Amendment doctrine was all settled, and that computer bulletin-board systems didn’t really alter fundamental questions about, say, publisher liability or what counts as speech or the press. Barlow, speaking in the Harper’s-sponsored forum on the WELL’s conferencing system, had seen something in the nascent online world that my professor had missed, and that I’d already had inklings about.

You also see in Barlow’s participation in that Harper’s forum certain long-term traits that sometimes bugged those of us who loved him. Barlow frequently yielded to the temptation to utter oracular pronouncements, to jump to conclusions before he’d done the reading. In what started out as a minor contretemps with “Acid Phreak” and “Phiber Optik,” participants who championed the exploratory hacking of computer systems—especially those of corporate giants—Barlow wrote this:

“BARLOW [Day 19, 9:48 P.M.]: Let me define my terms. Using hacker in a midspectrum sense (with crackers on one end and Leonardo da Vinci on the other), I think it does take a kind of genius to be a truly productive hacker. I'm learning PASCAL now, and I am constantly amazed that people can string those prolix recursions into something like PageMaker. It fills me with the kind of awe I reserve for splendors such as the cathedral at Chartres. With crackers like Acid and Optik, the issue is less intelligence than alienation. Trade their modems for skateboards and only a slight conceptual shift would occur. Yet I'm glad they're wedging open the cracks. Let a thousand worms flourish.”

To which Phiber Optik responded with this:

“OPTIK [Day 10, 10:11 P.M.]: You have some pair of balls comparing my talent with that of a skateboarder. Hmm... This was indeed boring, but nonetheless: [Editor's note: At this point in the discussion, Optik -- apparently having hacked into TRW's computer records -- posted a copy of Mr. Barlow's credit history. In the interest of Mr. Barlow's privacy -- at least what's left of it -- Harper's Magazine has not printed it.] I'm not showing off. Any fool knowing the proper syntax and the proper passwords can look up credit history. I just find your high-and-mighty attitude annoying and, yes, infantile.”

Barlow was stunned, just as you or I would have been, to see TRW’s version of his credit history—including its errors—published online. But the next thing he did was brilliant, and it’s not something anyone else would necessarily do. As Barlow recounts it in an article he wrote later that spring:

“I've been in redneck bars wearing shoulder-length curls, police custody while on acid, and Harlem after midnight, but no one has ever put the spook in me quite as Phiber Optik did at that moment. I realized that we had problems which exceeded the human conductivity of the WELL's bandwidth. If someone were about to paralyze me with a spell, I wanted a more visceral sense of him than could fit through a modem.

“I e-mailed him asking him to give me a phone call. I told him I wouldn't insult his skills by giving him my phone number and, with the assurance conveyed by that challenge, I settled back and waited for the phone to ring. Which, directly, it did.

“In this conversation and the others that followed I encountered an intelligent, civilized, and surprisingly principled kid of 18 who sounded, and continues to sound, as though there's little harm in him to man or data. His cracking impulses seemed purely exploratory, and I've begun to wonder if we wouldn't also regard spelunkers as desperate criminals if AT&T owned all the caves.”

This is where you see one of Barlow’s great gifts, fully as much of a talent as his lyrical wordsmithing. Barlow saw past his own feelings of fear and uncertainty and reached out to the human being behind the hacker handle, and found, in Phiber Optik, someone who deserved, in Barlow’s view, more admiration than fear. As he wrote about it in 1990, “The terrifying poses which Optik and Acid had been striking on screen were a media-amplified example of a human adaptation I'd seen before: One becomes as he is beheld. They were simply living up to what they thought we and, more particularly, the editors of Harper's, expected of them. Like the televised tears of disaster victims, their snarls adapted easily to mass distribution.”

Barlow also wrote this:

“Months later, Harper's took Optik, Acid and me to dinner at a Manhattan restaurant which, though very fancy, was appropriately Chinese. Acid and Optik, as material beings, were well-scrubbed and fashionably-clad. They looked to be dangerous as ducks.”

They looked to be dangerous as ducks. I’d have given a toe, or maybe even a finger, to have written a sentence that apt.

Barlow’s larger insight—that maybe our sense of the threats of computers and the internet and the first generation of human beings to grow up with super-duper computer skills was just another iteration of our human fear of change and the new—informed his co-founding of a new civil-liberties organization, originally pitched as the “Computer Liberty Foundation.” The other co-founders—Mitch Kapor and John Gilmore, themselves breathtakingly remarkable people just as much as Barlow was (here I first typed “as Barlow is” because he still feels so present)—recognized that “Computer Liberty Foundation” was a bit clunky. Barlow, the poet who’d also been a rancher in Pinedale, Wyoming, coined the name that stuck: Electronic Frontier Foundation.

EFF, which was then primarily just Barlow, Kapor, and Gilmore, eventually decided they needed an in-house lawyer to help with the legal cases that were bubbling up with increasing frequency. I’d already been active in publicizing those cases, starting as a law student, then as a recent law graduate, even as I was studying for the Texas bar exam. Marc Rotenberg, then head of the Washington office of the Computer Professionals for Social Responsibility, had reached out to me as a possible staff member; CPSR was the recipient of EFF’s first grant for cyberspace legal research, and they needed to staff up. Rotenberg flew me to EFF’s first big press conference—this at the Washington Press Club—and it was there that I met Kapor (again) and Barlow for the first time. I got to hang out with these guys not just that day but also in the evening at a dinner meeting that included other people who’d later be EFF supporters and even board members. The main thing I remember from the dinner meeting is talking to Barlow—he’d called himself an “information mystic” (I think he was just trying out the term for size), and I piped up about Claude Shannon and information theory and my understanding of information as something more scientific than mystical. Of course, Barlow already knew about Shannon, about Teilhard de Chardin’s notion of the “noosphere,” about Aristotle’s precursor concept of “substantial form.” I knew instantly that I would get along with this guy.

I got recruited, not just by CPSR, but by EFF, and I became EFF’s first staff counsel (and, in fact, EFF’s first full-time employee). The nine years I spent at EFF were my first nine years as a lawyer, and every single one of those years was a year of revelation, always informed by Barlow’s openness, adventurousness and willingness to grapple with new problems and new ideas.

Ultimately, Barlow didn’t think every looming problem in cyberspace was no more dangerous than a duck. Like the rest of us at EFF (which began to expand in the following years), Barlow recognized the fear of encryption technology, the fear of computer-facilitated copyright infringement, the fear of “cyberporn” as the kind of neophobia so common in eras of technological change. When Congress passed the Communications Decency Act in 1996, which would have imposed massive censorship on the now-blooming internet, he channeled the anxiety all of us were feeling into his crafting of “A Declaration of the Independence of Cyberspace.”

I confess I didn’t much like this Declaration when Barlow shared it and later published it. With what Barlow admitted was “characteristic grandiosity,” the Declaration asserted that traditional, terrestrial governments “have no sovereignty where we gather” (that is, in cyberspace), and that “the global social space we are building” is “naturally independent of the tyrannies you seek to impose upon us.” By then I was already deep in my work for EFF on the constitutional challenge to the Communications Decency Act, and the hard fact that haunted my days was how fragile this new global social space was, and how little independence of the tyrannies it might ultimately have.

I was missing the forest for the trees. The simple fact is this: Barlow inspired a new generation of lawyers and activists to devote time and energy to preserving the great new world the internet and other digital technologies were giving us. As I wrote earlier this year in an essay for Cato Unbound:

“Here I must share some late-breaking news from the 1990s: the actual cyber-activists of that period (and here I must include myself) did not interpret Barlow’s cri de coeur as political philosophy. Barlow, best known prior to his co-founding of the Electronic Frontier Foundation as a songwriter for the Grateful Dead, was writing to inspire activism, not to prescribe a new world order, and his goal was to be lyrical and aspirational, not legislative. Barlow wrote and published his “Declaration” in the short days and weeks after Congress passed, and President Clinton signed into law, a telecommunications bill that aimed, in part, to censor the internet. No serious person – and certainly not the Electronic Frontier Foundation and other organizations that successfully challenged the Communications Decency Act provisions of that bill – believed that cyberspace would be “automagically” independent of the terrestrial world and its governments. Barlow’s “Declaration” is best understood, as Wired described it two decades later, as a “rallying cry.” Similarly, nobody thinks “The Star-Spangled Banner” or “America the Beautiful” or “This Land Is Your Land” is a constitution. (And of course the original Declaration of Independence isn’t one either.)”

Barlow had written his own inspirational anthem, and I’d like to think he’d particularly appreciate my comparing it to Woody Guthrie’s great song.

I can say one more thing about Barlow—about seeing him once again, for the last time in person, when a couple of friends and I visited him in spring of 2016 at John Gilmore’s house, where Barlow was continuing his long efforts at recovery from a heart attack and other problems that had reduced his mobility and energy but had not diminished his fundamentally optimistic outlook—optimism not just for himself and those he loved but for all of us. It was good to talk to John Perry Barlow that evening, to chat about nothing in particular, to reminisce a little. I had loved the man pretty much from the start and, circumstances being what they were, it was not the simple love of hero-worship from an adoring fan. Instead, it was the complicated, tricky love for someone with whom I got to share so many great moments of my life over many great (and not-so-great) years. It’s the love you end up having for lifelong friends, or for family members you’ve occasionally quarreled with over the years, but with whom you’ve shared so much, and with whom you’ve been able to do so much good work, that even when you disagree with them, you know ultimately all will be forgiven.

I can tell you what it felt like to sit down and catch up a bit with John Perry Barlow that last time. It felt like coming home.

from the the-list-is-growing dept

Late last year I published Part 1 of a project to map out all the complaints we hear about social media in particular and about internet companies generally. Now, here's Part 2.

This Part should have come earlier; Part 1 was published in November. I'd hubristically imagined that this was a project that might take a week or a month. But I didn't take into account the speed with which the landscape of the criticism is changing. For example, just as you're trying to do more research into whether Google really is making us dumber, another pundit (Farhad Manjoo at the New York Times) comes along and argues that Apple -- a tech giant no less driven by commercial motives than Google and its parent company, Alphabet -- ought to redesign its products to make us smarter (by making them less addictive). That is, it's Apple's job to save us from Gmail, Facebook, Twitter, Instagram, and other attention-demanding internet media — which we connect to through Apple's products, as well as many others.

In these same few weeks, Facebook has announced it's retooling the user experience for Facebook users in ways aimed at making the experience more personal and interactive and less passive. Is this an implicit admission that Facebook, up until now, has been bad for us? If so, is it responding to the charges that many observers have leveled at social-media companies — that they're bad for us and that they're bad for democracy?

And only this last week, social-media companies have responded to concerns about political extremists (foreign and domestic) in Senate testimony. Although the senators had broad concerns (ISIS recruitment, bomb-making information on YouTube), there was, of course, some time devoted to the ever-present question of Russian "misinformation campaigns," which may not have altered the outcome of 2016's elections but still may aim to affect 2018 mid-terms and beyond.

These are recent developments, but coloring them all is a more generalized social anxiety about social media and big internet companies that is nowhere better summarized than in Senator Al Franken's last major public policy address. Whatever you think of Senator Franken's tenure, I think his speech was a useful accumulation of the growing sentiment among commentators that there's something out of control with social media and internet companies that needs to be brought back into control.

Now, let's be clear: even if I'm skeptical here about some claims that social media and internet giants are bad for us, that doesn't mean these criticisms necessarily lack any merit at all. But it's always worth remembering that, historically, every new mass medium (and mass-medium platform) has been declared first to be wonderful for us, and then to be terrible for us. So it's always important to ask whether any particular claim about the harms of social media or internet companies is reactive, reflexive... or whether it's grounded in hard facts.

(4) Social media (and maybe some other internet services) are bad for us because they're super-addictive, especially on our sweet, slick handheld devices.

"It's Time for Apple to Build a Less Addictive iPhone," according to New York Times tech columnist Farhad Manjoo, who published a column to that effect recently. To be sure, although "Addictive" is in the headline, Manjoo is careful to say upfront that, although iPhone use may leave you feeling "enslaved," it's not "not Apple's fault" and it "isn't the same as [the addictiveness] of drugs or alcohol." Manjoo's column was inspired by an open letter from an ad-hoc advocacy group that included an investment-management firm and the California State Teachers Retirement System (both of which are Apple shareholders). The letter, available here at ThinkDifferentlyAboutKids.com (behind an irritating agree-to-these-terms dialog) calls for Apple to add more parental-control choices for its iPhones (and other internet-connected devices, one infers). After consulting with experts, the letter's signatories argue, "we note that Apple's current limited set of parental controls in fact dictate a more binary, all or nothing approach, with parental options limited largely to shutting down or allowing full access to various tools and functions." Per the letter's authors: "we have reviewed the evidence and we believe there is a clear need for Apple to offer parents more choices and tools to help them ensure that young consumers are using your products in an optimal manner."

Why Apple in particular? Obviously, the fact that two of the signatories own a couple of billion dollars' worth of Apple stock explains this choice to some extent. But one hard fact is that Apple's share of the smartphone market mostly stays in the 12-to-20-percent range. (Market leader Samsung has held 20-30 percent of the market since 2012.) Still, the implicit argument is that Apple's software and hardware designs for the iPhone will mostly lead the way for other phone-makers going forward, as they mostly have for the first decade of the iPhone era.

Still, why should Apple want to do this? The idea here is that Apple's primarily a hardware-and-devices company — which distinguishes Apple from Google, Facebook, Amazon, and Twitter, all of which primarily deliver an internet-based service. Of course, Apple's an internet company too (iTunes, Apple TV, iCloud, and so on), but the company's not hooked on the advertising revenue streams that are the primary fuel for Google, Facebook, and Twitter, or on the sales of other, non-digital merchandise (like Amazon). The ad revenue for the internet-service companies creates what Manjoo argues are "misaligned incentives" — because ad-driven businesses' economic interests lie in getting more users to click on advertisements, he reasons, he's "skeptical" that (for example) Facebook is going to offer any real solution to the "addiction" problem. Ultimately, Manjoo agrees with the ThinkDifferentlyAboutKids letter -- Apple's in the best position to fix iPhone "addiction" because of its design leadership and independence from ad revenue.

It's worth remembering that the idea that technology is addictive is itself an addictive idea — not that long ago, it was widely (although not universally) believed that television was addictive. This New York Times story from 1990 advances that argument, although the reporter does quote a psychiatrist who cautions that "the broad definition" of addiction "is still under debate." (Manjoo's "less addictive iPhone" column inoculates itself, you'll recall, by saying iPhone addiction is "not the same.")

"Addiction" of course is an attractive metaphor, and certainly those of us who like using our electronics to stay connected can see the appeal of the metaphor. And Apple, which historically has been super-aware of the degree to which its products are attractive to minors, may conclude—or already have concluded, as the ThinkDifferentlyAboutKids folks admit — that more parental controls are a fine idea.

But is it possible that smartphones maybe already incorporate a solution for addictiveness? Just the week before Manjoo's column, another Times writer, Nellie Bowles, asked whether we can make our phones less addictive just by playing with the settings. (The headline? "Is the Answer to Phone Addiction a Worse Phone?") Bowles argues, based on interviews with researchers, that simply setting your phone to use grayscale instead of color inclines users to respond less emotionally and impulsively—in other words, more mindfully—when deciding whether to respond to their phones. Bowles says she's trying the experiment herself: "I've gone gray, and it's great."

At first it seems odd to focus on the device's user interface (parental settings, or color palette) if the real problem of addictiveness is internet content (social media, YouTube and other video, news updates, messages). One can imagine a Times columnist in 1962—in the opening years of widespread color TV—responding to Newt Minow's famous "vast wasteland" speech by arguing that TV-set manufacturers should redesign sets so that they're somewhat more inconvenient—no remote controls, say—and less colorful to watch. (So much for NBC's iconic Peacock opening logo.)

In the interests of science, I'm experimenting with some of these solutions myself. For years already I've configured my iDevices not to bug me with every Facebook and Twitter update or new-email notice. Plus, I was worried about this grayscale thing on my iPhone X—one of the major features of which is a fantastic camera. But it turns out that you can toggle between grayscale and color easily once you've set gray as the default. I kind of like the novelty of all-gray—no addiction-withdrawal syndrome yet, but we'll see how that goes.

(5) Social media are bad for us because they make us feel bad, alienating us from one another and causing us to be upset much of the time.

Manjoo says he's skeptical whether Facebook is going to fix the addictiveness of its content and interactions with users, thanks to those "misaligned incentives." It should be said, of course, that Facebook's incentives—to use its free services to create an audience for paying advertisers—at least have the benefit of being straightforward. (Apple's not dependent on ads, but it still wants new products to be attractive enough for users to want to upgrade.) Still, Facebook's Mark Zuckerberg has announced that the company is redesigning Facebook's user experience (focusing first on its news feed) to emphasize quality time ("time well spent") over more "passive" consumption of the Facebook ads and video that may generate more hits for some advertisers. Zuckerberg maintains that Facebook, even as it has operated over the last decade-plus of general public access, has been good for many and maybe for most users:

"The research shows that when we use social media to connect with people we care about, it can be good for our well-being. We can feel more connected and less lonely, and that correlates with long term measures of happiness and health."

Even so, Zuckerberg writes (translating what Facebook has been hearing from some social-science researchers), "passively reading articles or watching videos -- even if they're entertaining or informative -- may not be as good." This is a gentler way of characterizing what some researchers have recently been arguing, which is that, for some people at least, using Facebook causes depression. This article, for example, relies on sociologist Erving Goffman's conceptions of how we distinguish between our public and private selves as we navigate social interactions. Facebook, it's argued, "collapses" our public and private presentations—the result is what social-media researcher danah boyd calls "context collapse." A central idea here is that, because what we publish on Facebook for our circle is also to some high degree public, we are stressed by the need (or inability) to switch between versions of how we present ourselves. In addition to context collapse, the highly curated pages we see from other people on Facebook may suggest that their lives are happy in ways that ours are not.

I think both Goffman's and boyd's contributions to our understanding of the sociology of identity (both focus on how we present ourselves in context) are extremely useful, but it's important to think clearly about any links between Facebook (and other social media) and depression. To cut to the chase: there may in fact be strong correlations between social-media use and depression, at least for some people. But it's unclear whether social media actually cause depression; it seems just as likely that causation may go in the other direction. Consider that depression has also been associated with internet use generally (prior to the rise of social-media platforms), with television watching, and even, if you go back far enough, with what is perceived to be excessive consumption of novels and other fiction. Books, of course, are now regarded as redemptive diversions that may actually cure your depression.

So here's a reasonable alternative hypothesis: when you're depressed you seek diversion from depression—which may be Facebook, Twitter, or something else, like novels or binge-watching quality TV. It may be things that are genuinely good for you (books! Or The Wire!) or things that are unequivocally bad for you. (Don't try curing your depression with drinking!) Or it may be social media, which at least some users will testify they find energizing and inspiring rather than enervating and dispiriting.

As a longtime skeptic regarding studies of internet usage (a couple of decades ago I helped expose a fraudulent article about "cyberporn" usage), I don't think the research on social media and its potential harmful side-effects is any more conclusive than Facebook's institutional belief that its social-media platforms are beneficial. But I do think Facebook as a dominant, highly profitable social-media platform is under the gun. And, as I've written here and elsewhere, its sheer novelty may be generating a moral panic. So it's no wonder—especially now that the U.S. Congress (as well as European regulators) are paying more attention to social media—that we're seeing so many Facebook announcements recently that are aimed at showing the company's responsiveness to public criticism.

Whether you think anxiety about social media is merited or otherwise, you may reasonably be cynical about whether a market-dominant for-profit company will refine itself to act more consistently in the public interest—even in the face of public criticism or governmental impulses to regulate. But such a move is not unprecedented. The key question is whether Facebook's course corrections -- steering us towards personal interactions over "passive" consumption of things like news reports -- really do help us. (For example, if you believe in the filter-bubble hypothesis, it seems possible that Facebook's privileging of personal interactions over news may make filter bubbles worse.) This brings us to Problem Number 6, below.

(6) Social media are bad for us because they're bad for democracy.

There are multiple arguments that Facebook and other social media (Twitter's another frequent target) are bad for democracy. The Verge provides a good beginning list here. The article notes that Facebook's own personnel—including its awesomely titled "global politics and government outreach director"—are acknowledging the criticisms by publishing a series of blog postings. The first one is from the leader of Facebook's "civic engagement team," and the others are from outside observers, including Harvard law professor Cass Sunstein (who's been a critic of "filter bubbles" since long before that term was invented—his preferred term is "information cocoons").

I briefly mentioned Sunstein's work in Part 1. Here in Part 2 I'll note mainly that Sunstein's essay for Facebook begins by listing ways in which social-media platforms are actually good for democracy. In fact, he writes, "they are not merely good; they are terrific." In spite of their goodness, Sunstein writes, they also exacerbate what he's discussed earlier (notably in a 1999 paper) as "group polarization." In short, he argues, the filter bubble makes like-minded people hold their shared opinions more extremely. The result? More extremism generally, unless deliberative forums are properly designed with appropriate "safeguards."

Perhaps unsurprisingly, given that Facebook is hosting his essay, Sunstein credits Facebook with taking steps to provide such safeguards, which in his view include Facebook chief Mark Zuckerberg's declaration that the company is working to fight misinformation in its news feed. But I like Sunstein's implicit recognition that political polarization, while bad, may be no worse as a result of social media in particular, or even this century's modern media environment as a whole:

"By emphasizing the problems posed by knowing falsehoods, polarization, and information cocoons, I do not mean to suggest that things are worse now than they were in 1960, 1860, 1560, 1260, or the year before or after the birth of Jesus Christ. Information cocoons are as old as human history."

Just as important, I think, is Sunstein's admission that we don't really have unequivocal data showing that social media are a particular problem even in relation to other modern media:

"Nor do I mean to suggest that with respect to polarization, social media are worse than newspapers, television stations, social clubs, sports teams, or neighborhoods. Empirical work continues to try to compare various sources of polarization, and it would be reckless to suggest that social media do the most damage. Countless people try to find diverse topics, and multiple points of view, and they use their Facebook pages and Twitter feeds for exactly that purpose. But still, countless people don't."

Complementing Sunstein's essay is a piece by Facebook's Samidh Chakrabarti, who underscores the company's new initiative to make News Feed contributions more transparent (so you can see who's funding a political ad or a seemingly authentic "news story"). Chakrabarti also expresses the company's hope that its "Trust Project for News On Facebook" will help users "sharpen their social media literacy." And Facebook's just announced its plan to use user rankings to rate media sources' credibility.

I'm all for more media literacy, and I love crowd-sourcing, and I support efforts to encourage both. But I share CUNY journalism professor Jeff Jarvis's concern that other components of Facebook's comprehensive response to public criticism may unintentionally undercut support, financial and otherwise, for trustworthy media sources.

Now, I'm aware that some critics are arguing that the data really are solidly showing that social media are undermining democracy. But I'm skeptical whether "fake news" on Facebook or elsewhere in social media changed the outcome of the 2016 election, not least because the Pew Research Center's study a year ago suggests that digital news sources weren't nearly as important as traditional media sources. (Notably, Fox News was hugely influential among Trump voters; there was no counterpart news source for Clinton voters.)

That said, there's no reason to dismiss concerns about social media, which may play an increasing role—as Facebook surely has—as an intermediary of the news. Facebook's Chakrabarti may want to promote "social media literacy," and the company has been forced to acknowledge that "Russian entities" tried to use Facebook as an "information weapon." But Facebook doesn't want in the least to play the role a social-media-literate citizenry should be playing for itself. Writes Chakrabarti:

"In the public debate over false news, many believe Facebook should use its own judgment to filter out misinformation. We've chosen not to do that because we don't want to be the arbiters of truth, nor do we imagine this is a role the world would want for us."

Of course some critics may disagree. As I've said above, the data are equivocal, but that hasn't made its interpreters equivocal. Take for example a couple of recent articles—one academic and another aimed at a popular audience—that cast doubt on whether the radical democratization of internet access is a good thing—or at least, whether it's as good a thing as we hoped for a couple of decades ago. One is UC Irvine professor Richard Hasen's law-review article, posted last year and set for formal publication in the First Amendment Law Review this year, which he helpfully distilled to an LA Times op-ed here. The other is Wired's February 2018 cover story: "It's the (Democracy-Poisoning) Golden Age of Free Speech." (The Wired article is also authored by an academic, UNC Chapel Hill sociology professor Zeynep Tufekci.)

Both Hasen's and Tufekci's articles underscore that internet access has inverted an assumption that long informed free-speech law—that the ability to reach mass audiences is necessarily going to be expensive and scarce. In the internet era, what we have instead is what UCLA professor Eugene Volokh memorably labeled, in a Yale Law Journal article more than 20 years ago, "cheap speech." Volokh correctly anticipated back then that internet-driven changes in the media landscape would lead some social critics to conclude that the First Amendment's broad protections for speech would need to be revised:

"As the new media arrive, they may likewise cause some popular sentiment for changes in the doctrine. Today, for instance, the First Amendment rules that give broad protection to extremist speakers-Klansmen, Communists, and the like-are relatively low-cost, because these groups are politically rather insignificant. Even without government regulation, they are in large measure silenced by lack of funds and by the disapproval of the media establishment. What will happen when the KKK becomes able to conveniently send its views to hundreds of thousands of supporters throughout the country, or create its own TV show that can be ordered from any infobahn-connected household?"

There, in a nutshell, is a prediction of the world we're living in now (except that we, fortunately, failed to adopt the term "infobahn"). Hasen believes "non-governmental actors"—that is, Facebook and Twitter and Google and the like—may be "best suited to counter the problems created by cheap speech." I think that's a bad idea, not least because corporate decision-making may be less accountable than public law and regulation, and because, as Manjoo puts it, there are "misaligned incentives." Tufekci, I think, has the better approach. "[I]n fairness to Facebook and Google and Twitter," she writes in Wired, "while there's a lot they could do better, the public outcry demanding that they fix all these problems is mistaken." Because there are "few solutions to the problems of digital discourse that don't involve huge trade-offs," Tufekci insists that deciding what those solutions may be is necessarily a "deeply political decision"—involving difficult discussions about what we ask the government to do... or not to do.

She's got that right. She's also right that we haven't had those discussions yet. And as we begin them, we need to remember that radically democratic empowerment (all that cheap speech) may be part of the problem, but it's also got to be part of the solution.

from the and-there's-more-to-come dept

Some of today's anxiety about social-media platforms is driven by the concern that Russian operatives somehow used Facebook and Twitter to affect our electoral process. Some of it's due to a general perception that big American social-media companies, amorally or immorally driven by the profit motive, are eroding our privacy and selling our data to other companies or turning it over to the government—or both. Some of it's due to the perception that Facebook, Twitter, Instagram, and other platforms are bad for us—that maybe even Google's or Microsoft's search engines are bad for us—and that they make us worse people or debase public discourse. Taken together, it's more than enough fodder for politicians or would-be pundits to stir up generalized anxiety about big tech.

But regardless of where this moral panic came from, the current wave of anxiety about internet intermediaries and social-media platforms has its own momentum now. So we can expect many more calls for regulation of these internet tools and platforms in the coming months and years. Which is why it's a good idea to itemize the criticisms we've already seen, or are likely to see, in current and future public-policy debates about regulating the internet. We need to chart the kinds of arguments for new internet regulation that are going to confront us, so I've been compiling a list of them. It's a work in progress, but here are three major claims that are driving recent expressions of concern about social media and internet companies generally.

(1) Social media are bad for you because they use algorithms to target you, based on the data they collect about you.

It's well-understood now that Facebook and other platforms gather data about what interests you in order to shape what kinds of advertising you see and what kind of news stories you see in your news feed (if you're using a service that provides one). Some part of the anxiety here is driven by the idea (more or less correct) that an internet company is gathering data about your likes, dislikes, interests, and usage patterns, which means it knows more about you in some ways than perhaps your friends (on social media and in what we now quaintly call "real life") know about you. Possibly more worrying than that, the companies are using algorithms—computerized procedures aimed at analyzing and interpreting data—to decide what ads and topics to show you.

It's worth noting, however, that commercial interests have been gathering data about you since long before the advent of the internet. In the 1980s and before in the United States, if you joined one book club or ordered one winter coat from Lands' End, you almost certainly ended up on mailing lists and received other offers and many, many mail-order catalogs. Your transactional information was marketed, packaged, and sold to other vendors (as was your payment and credit history). If false information was shared about you, you perhaps had some options ranging from writing remove-me-from-your-list letters to legal remedies under the federal Fair Credit Reporting Act. But the process was typically cumbersome, slow, and less-than-completely satisfactory (and still is when it comes to credit-bureau records). One advantage of some internet platforms is that (a) they give you options to quit seeing ads you don't like (and often to say just why you don't like them), and (b) the internet companies, anxious about regulation, don't exactly want to piss you off. (In that sense, they may be more responsive than TiVo could be.)

Of course it's fair—and, I think, prudent—to note that the combination of algorithms and "big data" may have real consequences for democracy and for freedom of speech. Yale's Jack Balkin has recently written an excellent law-review article that targets these issues. At the same time, it seems possible for internet platforms to anonymize data they collect in ways that pre-internet commercial enterprises never could.

(2) Social media are bad for you because they allow you to create a filter bubble where you see only (or mostly) opinions you agree with. (2)(a) Social media are bad for you because they foment heated arguments between you and those you disagree with.

To some extent, these two arguments run against each other—if you only hang out online with people who think like you, it seems unlikely that you'll have quite so many fierce arguments, right? (But maybe the arguments between people who share most opinions and backgrounds are fiercer?) In any case, it seems clear that both "filter bubbles" and "flames" can occur. But when they do, statistical research suggests, it's primarily because of user choice, not algorithms. In fact, as a study in Public Opinion Quarterly reported last year, the algorithmically driven social-media platforms may be both increasing polarization and increasing users' exposures to opposing views. The authors summarize their conclusions this way:

"We find that social networks and search engines are associated with an increase in the mean ideological distance between individuals. However, somewhat counterintuitively, these same channels also are associated with an increase in an individual's exposure to material from his or her less preferred side of the political spectrum."

In contrast, the case that "filter bubbles" are a particular, polarizing problem relies to a large degree not on statistics but on anecdotal evidence. That is, the people who don't like arguing or who can't bear too different a set of political opinions tend to curate their social-media feeds accordingly, while people who don't mind arguments (or even love them) have no difficulty encountering heterodox viewpoints on Facebook or Twitter. (At various times I've fallen into one or the other category on the internet, even before the invention of social media or the rise of Google's search engine.)

The argument about “filter bubbles”—people self-segregating and self-isolating into like-minded online groups—is an argument that predates modern social media and the dominance of modern search engines. Law professor Cass Sunstein advanced it in his 2001 book, Republic.com, and hosted a website forum to promote that book. I remember this well because I showed up in the forum to express my disagreement with his conclusions—hoping that my showing up as a dissenter would itself raise questions about Sunstein's version of the “filter bubble” hypothesis. I didn't imagine I'd change Sunstein's mind, though, so I was unsurprised to see that the professor has revised and refined his hypothesis, first in Republic.com 2.0 in 2007 and now in #Republic: Divided Democracy in the Age of Social Media, published just this year.

(3) Social media are bad for you because they are profit-centered, mostly (including the social media that don't generate profits).

"If you're not paying for the product, you're the product." That's a maxim with real memetic resonance, I have to admit. This argument is related to argument number 1 above, except that instead of focusing on one's privacy concerns, it's aimed at the even-more-disturbing idea that we're being commodified and sold by the companies who give us free services. This necessarily includes Google and Facebook, which provide users with free access but which gather data that is used primarily to target ads. Both of those companies are profitable. Twitter, which also serves ads to its users, isn't yet profitable, but of course aspires to be.

As a former employee of the Wikimedia Foundation—which is dedicated to providing Wikipedia and other informational resources to everyone in the world, for free—I don't quite know what to make of this. Certainly the accounts of the early days of Google or of Facebook suggest that advertising as a mission typically arose after the founders realized that their new internet services needed to make money. But once any new company starts making money by the yacht-load, it's easy to dismiss the whole enterprise as essentially mercenary.

But Wikipedia has steadfastly resisted even the temptation to sell ads—even though it could have become an internet commercial success just as IMDb.com has—because the Wikipedia volunteers and the Wikimedia Foundation see value in providing something useful and fun to everyone regardless of whether one gets rich doing so. So do the creators of free and open-source software. If creating free products and services doesn't always mean you're out to sell other people into data slavery, shouldn't we at least consider the possibility that social-media companies may really mean it when they declare their intentions to do well by doing good? (“Do Well By Doing Good” is a maxim commonly attributed to Benjamin Franklin—who of course sold advertising, and even wrote advertising copy, for his Pennsylvania Gazette.) I think it's a good idea to follow Mike Masnick's advice to stop repeating this “you're the product” slogan—unless you're ready to condemn all traditional journals that subsidize giving their content to you through advertising.

So those are the current top three on the Social-Media-Are-Bad-For-You Greatest Hits chart. But this is a crowded field—only the tip of the iceberg when it comes to trendy criticisms of social-media platforms, search engines, and unregulated mischievous speech on the internet—and we can expect to see many other competing criticisms of Facebook, Twitter, Google, etc. surface in the weeks and months to come. I'm already working on Part 2.

from the the-rule-of-law dept

Deputy Attorney General Rod Rosenstein wrote the disapproving memo that President Trump used as a pretext to fire FBI Director James Comey in May. But on at least one area of law-enforcement policy, Rosenstein and Comey remain on the same page—the Deputy AG set out earlier this month to revive the outgoing FBI director's efforts to limit encryption and other digital security technologies. In doing so, Rosenstein has drawn upon nearly a quarter century of the FBI's anti-encryption tradition. But it's a bad tradition.

Like many career prosecutors, Deputy Attorney General Rod Rosenstein is pretty sure he's more committed to upholding the U.S. Constitution and the rule of law than most of the rest of us are. This was the thrust of Rosenstein's October 10 remarks on encryption, delivered to an audience of midshipmen at the U.S. Naval Academy.

The most troubling aspect of Rosenstein's speech was his insistence that, while the government's purposes in defeating encryption are inherently noble, the motives of companies that provide routine encryption and other digital-security tools (the way Apple, Google and other successful companies now do) are inherently selfish and greedy.

At the same time, Rosenstein said those who disagree with him on encryption policy as a matter of principle—based on decades of grappling with the public-policy implications of using strong encryption versus weak encryption or no encryption—are "advocates of absolute privacy." (We all know that absolutism isn't good, right?)

In his address, Rosenstein implied that federal prosecutors are devoted to the U.S. Constitution in the same way that Naval Academy students are:

"Each Midshipman swears to 'support and defend the Constitution of the United States against all enemies, foreign and domestic.' Our federal prosecutors take the same oath."

Of course, he elides the fact that many who differ with his views on encryption—including yours truly, as a lawyer licensed in three jurisdictions—have also sworn, multiple times, to uphold the U.S. Constitution. What's more, many of the constitutional rights we now regard as sacrosanct, like the Fifth Amendment privilege against self-incrimination, were only vindicated over time under our rule of law—frequently in the face of overreaching by law-enforcement personnel and federal prosecutors, all of whom also swore to uphold the Constitution.

The differing sides of the encryption policy debate can't be reduced to supporting or opposing the rule of law and the Constitution. But Rosenstein chooses to characterize the debate this way because, as someone whose generally admirable career has been entirely within government, and almost entirely within the U.S. Justice Department, he has simply never attempted to put himself in the position of those with whom he disagrees.

As I've noted, Rosenstein's remarks draw on a long tradition. U.S. intelligence agencies, together with the DOJ and the FBI, have reflexively resorted to characterizing their opponents in the encryption debate as fundamentally mercenary (if they're companies) or fundamentally unrealistic (if they're privacy advocates). In his 2001 book Crypto, which documented the encryption policy debates of the 1980s and 1990s, Steven Levy details how the FBI framed the question for the Clinton administration:

"What if your child is kidnapped and the evidence necessary to find and rescue your child is unrecoverable because of 'warrant-proof' encryption?"

The Clinton administration's answer—deriving directly from George H.W. Bush-era intelligence initiatives—was to try to create a government standard built around a special combination of encryption hardware and software, labeled "the Clipper Chip" in policy shorthand. If the U.S. government endorsed a high-quality digital-security technology that also was guaranteed not to be "warrant-proof"—that allowed special access to government agents with a warrant—the administration asserted this would provide the appropriate "balance" between privacy guarantees and the rule of law.

But, as Levy documents, the government's approach in the 1990s raised just as many questions then as Rosenstein's speech raises now. Levy writes:

"If a crypto solution was not global, it would be useless. If buyers abroad did not trust U.S. products with the [Clipper Chip] scheme, they would eschew those products and buy instead from manufacturers in Switzerland, Germany, or even Russia."

The United States' commitment to rule of law also raised questions about how much our legal system should commit itself to enabling foreign governments to demand access to private communications and other data. As Levy asked at the time:

"Should the United States allow access to stored keys to free-speech-challenged nations like Singapore, or China? And would France, Egypt, Japan, and other countries be happy to let their citizens use products that allowed spooks in the United States to decipher conversations but not their own law enforcement and intelligence agencies?"

Rosenstein attempts to paint over this problem by pointing out that American-based technology companies have cooperated in some respects with other countries' government demands—typically over issues like copyright infringement or child pornography rather than digital-security technologies like encryption. "Surely those same companies and their engineers could help American law enforcement officers enforce court orders issued by American judges, pursuant to American rule of law principles," he says.

Sure, American companies, like companies everywhere, have complied as required with government demands designed to block content deemed illegal in the countries where they operate. But demanding that these companies meet content restrictions—which itself at times raises international rule-of-law issues—is a wholly separate question from requiring companies to enable law enforcement everywhere to obtain whatever information it wants regarding whatever you do on your phone or on the internet. This is particularly concerning when it comes to foreign governments' demands for private content and personal information, which might include providing private information about dissidents in unfree or "partly free" countries whose citizens must grapple with oppressive regimes.

Technology companies aren't just concerned about money—if they were, it would be cheaper to omit digital-security measures than to invent and install new ones (such as Apple's 3D face-recognition technology set to be deployed in its new iPhone X). Companies build these protections not just to achieve a better bottom line but also to earn the trust of citizens. That's why Apple resists pressure, both from foreign governments and from the U.S. government, to develop tools that governments—and criminals—could use to turn my iPhone against me. This matters even more in 2017 and beyond—because no matter how narrowly a warrant or wiretap order is written, access to my phone and other digital devices is access to more or less everything in my life. The same is true for most other Americans these days.

Rosenstein is certainly correct to have said "there is no constitutional right to sell warrant-proof encryption"—but there absolutely is a constitutional right to write computer software that encrypts my private information so strongly that government can't decrypt it easily. (Or at all.) Writing software is generally understood to be presumptively protected expression under the First Amendment. And, of course, one needn't sell it—many developers of encryption tools have given them away for free.
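The claim that encryption software is simply written expression can be made concrete. Here is a minimal sketch (my own illustration, not anything from Rosenstein's speech or Levy's book) of a one-time pad in Python: a few lines of code that, given a truly random key as long as the message and never reused, produce ciphertext that is information-theoretically impossible to decrypt without the key.

```python
import secrets


def otp_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    """Encrypt with a one-time pad: XOR the message against a fresh random key.

    If the key is truly random, as long as the message, and never reused,
    the ciphertext reveals nothing about the plaintext; no amount of
    computing power recovers the message without the key.
    """
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return ciphertext, key


def otp_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    """Decryption is the same XOR, applied with the saved key."""
    return bytes(c ^ k for c, k in zip(ciphertext, key))


message = b"a private note"
ciphertext, key = otp_encrypt(message)
assert otp_decrypt(ciphertext, key) == message
```

The practical catch, of course, is distributing and storing those keys, which is why real-world systems use ciphers like AES instead. But the sketch illustrates the legal point: this is a short piece of writing, not a munition.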

What's more, our government's prerogative to seek information pursuant to a court-issued order or warrant has never been understood to amount to a "constitutional right that every court order or search warrant be successful." It's common in our law-enforcement culture—of which Rosenstein is unquestionably a part and partisan—to invert the meaning of the Constitution's limits on what our government can do, so that law-enforcement procedures under the Fourth and Fifth Amendments are interpreted as a right to investigatory success.

We've known this aspect of the encryption debate for a long time, and you don't have to be a technologist to understand the principle involved. Levy quotes Jerry Berman, then of the Electronic Frontier Foundation and later the founder of the Center for Democracy and Technology, on the issue: "The idea that government holds the keys to all our locks, even before anyone has been accused of committing a crime, doesn't parse with the public."

As Berman bluntly sums it up, "It's not America."

Mike Godwin (@sfmnemonic) is a distinguished senior fellow at the R Street Institute.

from the please-comment dept

Today is the deadline for the first round of the FCC's comment period on its attempt to roll back the 2015 open internet "net neutrality" rules. The deadline is partly meaningless, because there's a second comment period that is technically to respond to earlier comments -- but allows you to just file more comments. However, it is still important to make your voice heard no matter which side you're on. We'll be posting our own comments later today, but first, we wanted to share Mike Godwin's thoughtful discussion on why you should comment and why you should provide a thoughtful, careful "quality" comment, which he first posted to the R Street blog, but which is being cross-posted here.

If you count just by numbers alone, net-neutrality activists have succeeded in their big July 12 push to get citizens to file comments with the Federal Communications Commission. As I write this, it looks as if 8 million or more comments have now been filed on FCC Chairman Ajit Pai's proposal to roll back the expansive network-neutrality authority the commission asserted under its previous chairman in 2015.

There's some debate, though, about whether the sheer number of comments—which is unprecedented not only for the FCC, but also for any other federal agency—is a thing that matters. I think it does, but not in any simple way. If you look at the legal framework under which the FCC is authorized to regulate, you see that the commission has an obligation to open its proposed rulemakings (or revisions or repeals of standing rules) for public comments. In the internet era, of course, this has meant enabling the public (and companies, public officials and other stakeholders) to file online. So naturally enough, given the comparative ease of filing comments online, controversial public issues are going to generate more and more public comments over time. Not impossibly, this FCC proceeding—centering as it does on our beloved public internet—marks a watershed moment, after which we'll see increasing flurries of public participation on agency rulemakings.

Columbia University law professor Tim Wu—who may fairly be considered the architect of net neutrality, thanks to his having spent a decade and a half building his case for it—tweeted July 12 that it would be "undemocratic" if the commission ends up "ignoring" the (as of then) 6.8 million comments filed in the proceeding.

There are now 6.8 million comments in the FCC's Net Neutrality docket. Ignoring that is just plain undemocratic

But a number of critics immediately pointed out, correctly, that the high volume of comments (presumed mostly to oppose Pai's proposal) doesn't entail that the commission bow to the will of any majority or plurality of the commenters.

I view the public comments as relevant, but not dispositive. I think Wu overreaches to suggest that ignoring the volume of comments is "undemocratic." We should keep in mind that there is nothing inherently or deeply democratic about the regulatory process – at least at the FCC. (In fairness to Wu, he could also mean that the comments need to be read and weighed substantively, not merely be tallied and dismissed.)

But I happen to agree with Wu that the volume of comments is relevant to regulators, and that it ought to be. Chairman Pai (whose views on the FCC's framing of net neutrality as a Title II function predate the Trump administration) has made it clear, I think, that quantity is not quality with regard to comments. The purpose of saying this upfront (as the chairman did when announcing the proposal) is reasonably interpreted by Wu (and by me and others) as indicating he believes the commission is at liberty to regulate in a different way from what a majority (or plurality) of commenters might want. Pai is right to think this, I strongly believe.

But the chairman also has said he wants (and will consider more deeply) substantive comments, ideally based on economic analysis. This seems to me to identify an opportunity for net-neutrality advocates to muster their own economists to argue for keeping the current Open Internet Order or modifying it more to their liking. And, of course, it's also an opportunity for opponents of the order to do the same.

But it's important for commenters not to miss the forest for the trees. The volume of comments both in 2014 and this year (we can call this "the John Oliver Effect") has in some sense put net-neutrality advocates in a bind. Certainly, if there were far fewer comments (in number alone) this year, it might be interpreted as showing declining public concern over net neutrality. Obviously, that's not how things turned out. So the net-neutrality activists had to get similar or better numbers this year.

At the same time, advocates on all sides shouldn't be blinded by the numbers game. Given that the chairman has said the sheer volume of comments won't be enough to make the case for Title II authority (or other strong interventions) from the commission, it seems clear to me that while racking up a volume of comments is a necessary condition to be heard, it is not a sufficient condition to ensure the policy outcome you want.

Ultimately, what will matter most, if you want to persuade the commissioners one way or another on the net-neutrality proposal, is how substantive, relevant, thoughtful and persuasive your individual comments prove to be. My former boss at Public Knowledge, Gigi Sohn, a net-neutrality advocate who played a major role in crafting the FCC's current Open Internet Order, has published helpful advice for anyone who wants to contribute to the debate. I think it ought to be required reading for anyone with a perspective to share on this or any other proposed federal regulation.

If you want to weigh in on net neutrality and the FCC's role in implementing it—whether you're for such regulation or against it, or if you think it can be improved—you should follow Sohn's advice and file your original comments no later than Monday, July 17, or reply comments no later than Aug. 16. If you miss the first deadline, don't panic—there's plenty of scope to raise your issues in the reply period.

My own feeling is, if you truly care about the net-neutrality issue, the most "undemocratic" reaction would be to miss this opportunity to be heard.

from the your-free-internet dept

Earlier this week, we wrote a little bit about the 20th anniversary of a key case in internet history, Reno v. ACLU, and its important place in internet history. Without that ruling, the internet today would be extraordinarily different -- perhaps even unrecognizable. Mike Godwin, while perhaps best known for making sure his own obituary will mention Hitler, also played an important role in that case, and wrote up the following about his experience with the case, and what it means for the internet.

The internet we have today could have been very different, more like the over-the-air broadcast networks that still labor under broad federal regulatory authority while facing declining relevance.

But 20 years ago this week, the United States made a different choice when the U.S. Supreme Court handed down its 9-0 opinion in Reno v. American Civil Liberties Union, the case that established how fundamental free-speech principles like the First Amendment apply to the internet.

I think of Reno as "my case" because I'd been working toward First Amendment protections for the internet since my first days as a lawyer—the first staff lawyer for the Electronic Frontier Foundation (EFF), which was founded in 1990 by software entrepreneur Mitch Kapor and Grateful Dead lyricist John Perry Barlow. There are other lawyers and activists who feel the same possessiveness about the Reno case, most with justification. What we all have in common is the sense that, with the Supreme Court's endorsement of our approach to the internet as a free-expression medium, we succeeded in getting the legal framework more or less right.

We had argued that the internet—a new, disruptive and, to some large extent, unpredictable medium—deserved not only the free-speech guarantees of the traditional press, but also the same freedom of speech that each of us has as an individual. The Reno decision established that our government has no presumptive right to regulate internet speech. The federal government and state governments can limit free speech on the internet only in narrow types of cases, consistent with our constitutional framework. As Chris Hanson, the brilliant ACLU lawyer and advocate who led our team, recently put it: "We wanted to be sure the internet had the same strong First Amendment standards as books, not the weaker standards of broadcast television."

The decision also focused on the positive benefits this new medium had already brought to Americans and to the world. As one of the strategists for the case, I'd worked to frame this part of the argument with some care. I'd been a member of the Whole Earth 'Lectronic Link (the WELL) for more than five years and of many hobbyist computer forums (we called them bulletin-board systems or "BBSes") for a dozen years. In these early online systems—the precursors of today's social media like Facebook and Twitter—I believed I saw something new, a new form of community that encompassed both shared values and diversity of opinion. A few years before Reno v. ACLU—when I was a relatively young, newly minted lawyer—I'd felt compelled to try to figure out how these new communities work and how they might interact with traditional legal understandings in American law, including the "community standards" relevant to obscenity law and broadcasting law.

When EFF, ACLU and other organizations, companies, and individuals came together to file a constitutional challenge to the Communications Decency Act that President Bill Clinton signed as part of the Telecommunications Act of 1996, not everyone on our team saw this issue the way I did, at the outset. Hanson freely admits that "[w]hen we decided to bring the case, none of [ACLU's lead lawyers] had been online, and the ACLU did not have a website." Hanson had been skeptical of the value of including testimony about what we now call "social media" but more frequently back then referred to as "virtual communities." As he puts it:

"I proposed we drop testimony about the WELL — the social media site — on the grounds that the internet was about the static websites, not social media platforms where people communicate with each other. I was persuaded not to do that, and since I was monumentally wrong, I'm glad I was persuaded."

Online communities turned out to be vastly more important than many of the lawyers first realized. The internet's potential to bring us together meant just as much as the internet's capacity to publish dissenting, clashing and troubling voices. Justice John Paul Stevens, who wrote the Reno opinion, came to understand that community values were at stake, as well. In early sections of his opinion, Justice Stevens dutifully reasons through traditional "community standards" law, as would be relevant to obscenity and broadcasting cases. He eventually arrives at a conclusion that acknowledges that a larger community is threatened by broad internet-censorship provisions:

"We agree with the District Court's conclusion that the CDA places an unacceptably heavy burden on protected speech, and that the defenses do not constitute the sort of 'narrow tailoring' that will save an otherwise patently invalid unconstitutional provision. In Sable, 492 U. S., at 127, we remarked that the speech restriction at issue there amounted to 'burn[ing] the house to roast the pig.' The CDA, casting a far darker shadow over free speech, threatens to torch a large segment of the Internet community."

The opinion's recognition of "the Internet community" paved the way for the rich and expressive, but also divergent and sometimes troubling internet speech and expression we have today.

Which leaves us with the question: now that we've had two decades of experience under a freedom-of-expression framework for the internet—one that has informed not just how we use the internet in the United States but also how other voices around the world use it—what do we now need to do to promote "the Internet community"?

In 2017, not everyone views the internet as an unalloyed blessing. Most recently, we've seen concern about whether Google facilitates copyright infringement, whether Twitter's political exchanges are little more than "outrage porn" and whether Facebook enables "hate speech." U.K. Prime Minister Theresa May, who is almost exactly the same age I am, seems to view the internet primarily as an enabler of terrorism.

Even though we're now a few decades into the internet revolution, my view is that it's still too early to make the call that the internet needs more censorship and government intervention. Instead, we need more protection of the free expression and online communities that we've come to expect. Part of that protection may come from some version of the network neutrality principles currently being debated at the Federal Communications Commission, although it may not be the version in place under today's FCC rules.

In my view, there are two additional things the internet community needs now. The first is both legal and technological guarantees of privacy, including through strong encryption. The second is universal access—including for lower-income demographics and populations in underserved areas and developing countries—that would enable everyone to participate fully, not just as consumers but as contributors to our shared internet. For me, the best way to honor the 40th anniversary of Reno v. ACLU will be to make sure everybody is here on the internet to celebrate it.

Mike Godwin (mnemonic@gmail.com) is a senior fellow at R Street Institute. He formerly served as staff counsel for the Electronic Frontier Foundation and as general counsel for the Wikimedia Foundation, which operates Wikipedia.

But the pain I feel is not grounded in Taplin's certainty that something amoral, libertarian and unregulated is undermining democracy. Instead, it's in Taplin's profound misunderstanding of both the innovations and social changes that have made these companies not merely successful but also—for most Americans—vastly useful in enabling people to stay connected, express themselves and find the goods and services (and, even more importantly, communities) they need.

"It is impossible to deny that Facebook, Google and Amazon have stymied innovation on a broad scale," Taplin argues in his screed. He wants Google to divest itself of DoubleClick, in theory because the search engine would be much better if it were unable to generate profits from digitized ad services. He wants Facebook to unload WhatsApp, because the world was much better when connected citizens in the developing world had to pay 10 cents for each SMS message they sent. None of this really amounts to reform and, of course, such "reforms" wouldn't touch companies like Apple or Microsoft in the least.

What Taplin really wants isn't to reform but to reframe. He wants us to understand current tech-company leaders as evil, or at least amoral and out of control. Toward this end, he begins his new book (a much more extended version of his Times screed) by ominously quoting Facebook's Mark Zuckerberg: "Move fast and break things. Unless you are breaking stuff, you aren't moving fast enough."

Despite his misreading of the underlying technologies shaping today's digital world, Taplin—founding director and now director emeritus of the University of Southern California's Annenberg Innovation Lab—is no dummy. He knows that if he asks ordinary internet users whether they hate or love Google or Amazon or Facebook (or whether they'll willingly part with their new iPhones) he's not going to get a lot of buy-in. Even under a hypothetical President Bernie Sanders, regulating Google as a monopoly wouldn't be a meat-and-potatoes issue.

Instead, Taplin creates a counter-narrative in which American technology successes (with the notable exception of Microsoft) represent the kind of rapacious octopus-like capitalism so often caricatured by cartoonists like Thomas Nast. Google and Facebook may not hurt me in particular, but the theory he offers is that they somehow hurt America in the abstract. Taplin essentially reframes American tech success as a retelling of the oil, railroad, banking and telegraph robber-baron trusts of the 19th and early 20th centuries.

But the very tech companies whose success Taplin is absolutely certain is anti-democratic were built on infrastructure and resources that, under federal law and regulation, have been highly regulated throughout his (and my) lifetime. We may disagree about what the regulations should be, but there's little disagreement that there's already a regulatory framework. The regulation of monopoly infrastructures—telephone and telegraph networks, in particular—was what made it possible to refrain from regulating what you said or did on those networks. Regulation at the "wire" level of the infrastructure—and at various technical levels above that—created the space for today's innovative services that provide near-instantaneous access to, potentially, all the information in the world and all the people with whom you would want to stay in touch.

Search engines and other digital tools are, of course, highly disruptive to industries whose traditional model involved having school-age kids hawking ink and wood pulp on street corners. Like Taplin, I still believe newspaper journalism is essential to democracy. Indeed, I read Taplin's op-ed early Sunday morning because I subscribe to the digital edition of The New York Times. We must continue to explore new ways to make this necessary journalism not merely survive, but thrive.

But it also bears mentioning that Taplin doesn't mention Craig Newmark or Craigslist in his screed against Google, even though, if you were to buy into the fundamentals of Taplin's argument, Craigslist clearly did more to erode daily newspapers' advertising revenue than Google has ever done. Yet it's worth noting that Newmark—like most of the other successful tech moguls Taplin lumps together into a sort of secret-handshake techno-libertarian fraternity—actually gives money to Poynter, ProPublica and other enterprises that actively respond to the very real problem of very fake news.

A little research into the history of scientific discovery puts even the scary Zuckerberg quote about "breaking stuff" in a different light. The philosopher Karl Popper opens his essential book Conjectures and Refutations with two quotations: "Experience is the name every one gives to their mistakes," from Oscar Wilde, and "Our whole problem is to make the mistakes as fast as possible," from the physicist John Archibald Wheeler.

That sentiment—to be adventurous, to risk things, to learn quickly from making mistakes quickly—is, I believe, exactly what Zuckerberg was getting at. It also extends to making mistakes in our search for a new business model for journalism. But this shouldn't include Jonathan Taplin's great big mistake of looking into the digital future and seeing only places we've been before.

Mike Godwin (@sfmnemonic) is a Senior Fellow at R Street Institute. Godwin was named as a Freedom Forum Fellow at the Freedom Forum Media Studies Center in 1997 and may have once said something about Nazis online for which he will always be remembered.

Mike Godwin’s Comments

Of course "authorised" is fine. My point, as I said earlier, is that anything I put in quotation marks has to be a precise quotation. I didn't want to run the risk of any reader or automated process changing "authorised" to "authorized" because I didn't want to be charged with misquoting, in any way, the actual legislative language. Not every [sic] is a derogation.

I know it's normal Aussie spelling, but I was worried that somewhere in the copy editing process it might get corrected. Since it's a direct quotation from the bill, my view is that one has to make sure no one yields to the temptation to "correct" the accurate spelling in a direct quotation.

These criticisms are mostly valid. I ended up juggling multiple versions in which I fixed a number of errors, including the one you complain about, but I see after reviewing the draft I submitted that some of my corrections didn't make it into the final version I had my hands on. So mea culpa.

I wouldn't dispute your ability to write better than I do. I haven't read your work. All I know is that I tried to get a handle on what I was feeling when I learned my friend died. I don't know whether anyone is gifted at dealing with that, but I'm pretty sure I'm not. If you have something to share about Barlow, please send me the link.

'For 100 years (and currently),print press are responsible for their users toxic bile, but the American public has to suffer for 20 years plus because it's the just internet?'

Is the argument here that the internet is precisely like the "print press"? That's a difficult equation to support without analysis. Section 230 is grounded in the ways in which the internet, even in 1996, was not just the "print press" transmitted in bits, but also functions analogously to (a) telephones and telegraphs, (b) common carriers like postal services and package services, and (c) bookstores and libraries. These other traditional means of content distribution are not governed by press law but by different legal and regulatory frameworks, and not just here, but also in "UK, Europe, Australia."

Mr. Anonymous omits facts that undercut whatever point he thinks he's making about me.

(1) I worked for EFF from 1990 to 1999. Google didn't exist then.

(2) I worked for CDT from 1999 to 2003. Google didn't fund CDT then.

(3) I worked for Public Knowledge from 2003 to 2005. Google didn't fund Public Knowledge.

(4) I worked for Yale University from 2005 to 2007. Google didn't fund my position at Yale.

(5) I worked for the Wikimedia Foundation from 2007 to 2012. Funded by individual donations, for the most part.

(6) I worked for Internews from 2013 to 2014. Funded by the U.S. government, for the most part.

My work for R Street certainly has benefited from Google funding, as well as funding by many other sources, but my work on Section 230, now more than two decades old, has no roots in Google funding (and certainly not in Backpage funding).

My views about Section 230 are a function of my work on internet-freedom issues dating back now more than a quarter century. Maybe they're incorrect views, but nobody whose sole argument is that I was paid to have those views is likely to be persuasive on that point.