from the public-policy-on-sale-now! dept

They say that laws are like sausages, and you should never watch either be made if you don't want to be sick. But some manufacturing processes are more disgusting than others, and if we don't want to suffer ill-effects, we need to keep an eye on the worst of them.

As others have discussed, the new California Consumer Privacy Act (CCPA) is at best a law with troubling aspects, if not one completely chilling for future Internet businesses and even non-commercial online expression. True, there may be an opportunity to amend it before it goes into effect to dull the worst of it, but the story of how we found ourselves stuck with a ticking time bomb of a law that we now need to fix is worth telling, because if it could happen once it could happen again. And already has.

Which is why I'm going to tell the story about how California just banned soda taxes (in fact, not coincidentally, right around the same time that it passed the CCPA).

To understand what happened, one first needs to understand a bit about the California Constitution. In addition to setting up the typical branches of government (legislative, executive, judicial), it also allows for a form of direct democracy through ballot initiatives. Ballot initiatives generally only need a simple majority to pass, but once passed, they can be very difficult, if not impossible, to un-pass or modify without another ballot measure. Even when ballot measures only amend statutory code, and not the Constitution itself, the legislature can be prevented from making any modifications to that new language, no matter how necessary those changes may be, unless the ballot initiative allows the legislature to act. And even if the initiative does permit changes, it may require a supermajority of the legislature to make them, which is much harder to attain than the simple majority typically required to pass legislation.

The upshot is that an awful lot of California law and policy can depend on the initiative process -- and thus a whole lot can depend on who is able to use it to push forth the policy they prefer. In one sense, it's hard to get a new initiative on the ballot: it requires hundreds of thousands of signatures to qualify. But it turns out that for people who have a lot of money, it's not all that hard. Some estimate that it may take only $3-4 million to acquire enough signatures to get any initiative on the ballot.

Of course, whether such an initiative would pass is a separate question, but there are a few factors that make the odds pretty good. One is that it's very difficult for the electorate to make informed choices, and I don't say that as any sort of insult to the average California voter. In the most recent election this past June I timed how long it took to figure out who and what to vote for and clocked it at a whole hour. And that's with me, a lawyer practiced in reading and evaluating law and policy, living in an unincorporated area of California, meaning that I was spared having to wade through any city candidate or ballot measure choices. I just had to vote on candidates for all county, state, and federal offices, and on all county and state ballot measures. And this was in June, when there were fewer choices all around than there will be in November, yet it still took an hour to make any sort of responsible decisions before I was prepared to head to the polls. Of course, not everyone has that hour, and for many it will likely take longer, which means that the electorate tends to be dependent on campaign advertising to help them make those choices. But if someone has a few million dollars to spend to get an initiative on the ballot, they may easily have a few more, or a lot more, to spend on that advertising, and their opponents, no matter how principled in their opposition, just as easily may not.

The reality is that anyone who can spend a few million dollars to get an initiative on the ballot can use that money to put an electoral gun to the head of policymakers and force them to legislate for their desired policy in exchange for withdrawing the initiative from the upcoming election. Because at least if the policy gets implemented via the legislature's hand, rather than through the initiative process, the legislature might be able to temper some of its language. Also, by being an ordinary bill, it would theoretically be more changeable in the future, subject only to ordinary legislative majorities and not dependent on someone funding a new initiative that could successfully override it.

As this article in the Sacramento Bee describes, the soda tax ban is a case study of this dynamic. A business group wrote a proposal that would have created some significant limitations in the state's ability to raise revenue. It then shopped around the proposed initiative until it found someone willing to underwrite the signature-gathering necessary to get it on the ballot. That someone turned out to be the beverage industry, which generally hates soda taxes.

The relative merits of soda taxes are beyond the scope of this post. Suffice it to say, certain California communities like them, often as a way of raising revenue for public health programs and deterring the over-consumption of unhealthy drinks, and several communities have already passed such taxes.

But after the beverage industry underwrote the effort to get enough signatures to qualify the tax-limiting initiative for the ballot, an initiative that did more than just ban soda taxes but instead affected the state's taxation ability more broadly, the legislature found itself having to play electoral roulette: perhaps the ballot measure might fail and everything would be fine, but if it passed, it risked messing up the fiscal health of the state and all the policies and programs the legislature wanted to fund. So it capitulated and did a deal with the initiative's sponsor to bar any other California communities from passing their own soda taxes for the next 12 years in exchange for having the ballot initiative withdrawn.

In fact, June was a busy month for legislative capitulation, because right around the same time that the legislature did that deal it also did a deal with the sponsors of the "Consumer Right to Privacy Act of 2018" initiative that had also qualified for the November ballot.* Because that initiative, if it passed, would definitely cripple the Internet, the legislature instead agreed to pass the CCPA, which will only probably cripple it, but at least has the potential for improvement.

And that's what this post is really about: the extortionate ability of basically anyone with $4 million to spend to blackmail the legislature into setting aside its own legislative judgment and building into California law whatever terrible policy the person with the money wants. Sure, for any policy that is so awful or unpopular there's always the chance that it might lose at the polls come Election Day, and from time to time ballot initiatives do get shot down. But it's very easy for garbage to get through, and wealthy minority voices count on that possibility when they try to ram through all sorts of policies that aren't necessarily good ones for Californians or their businesses – including on matters of tech policy.

On our best days these tech policy challenges require careful, nuanced treatment. We should look to the legislature, and legislators, to give them that careful, nuanced treatment before imposing drastic changes in the law. But they can't give these regulatory proposals the necessary attention they deserve if, for a mere $4 million or so, people can force them to rush through law that has been drafted without any of the care or transparency sound regulation requires.

And when the legislature is forced to pass a law like that, as it was just now with the CCPA, the result is unlikely to be something we should cheer.

* Also, per the Los Angeles Times article linked above, "A third proposal, asking taxpayers to subsidize lead paint cleanup projects, was withdrawn by paint companies in exchange for lawmakers scrapping a slate of bills designed to impose new rules on the industry."

from the everything's-turning-up-facebook dept

Imagine that you're a new-media entrepreneur in Europe a few centuries back, and you come up with the idea of using moveable type in your printing press to make it easier and cheaper to produce more copies of books. If there are any would-be media critics in Europe taking note of your technological innovation, some will be optimists. The optimists will predict that cheap books will hasten the spread of knowledge and maybe even fuel a Renaissance of intellectual inquiry. They'll predict the rise of newspapers, perhaps, and anticipate increased solidarity of the citizenry thanks to shared information and shared culture.

Others will be pessimists—they'll foresee that the cheap spread of printed information will undermine institutions, will lead to doubts about the expertise of secular and religious leaders (who are, after all, better educated and better trained to handle the information that's now finding its way into ordinary people's hands). The pessimists will guess, quite reasonably, that cheap printing will lead to more publication of false information, heretical theories, and disruptive doctrines, which in turn may lead, ultimately, to destructive revolutions and religious schisms. The gloomiest pessimists will see, in cheap printing and later in the cheapness of paper itself—making it possible for all sorts of "fake news" to be spread—the sources of centuries of strife and division. And because the pain of the bad outcomes of cheap books is sharper and more attention-grabbing than contemplation of the long-term benefits of having most of the population know how to read, the gloomiest pessimists will seem to many to possess the more clear-eyed vision of the present and of the future. (Spoiler alert: both the optimists and the pessimists were right.)

Fast-forward to the 21st century, and this is just where we're finding ourselves when we look at public discussion and public policy centering on the internet, digital technologies, and social media. Two recent books written in the aftermath of recent revelations about mischievous and malicious exploitation of social-media platforms—especially Facebook and Twitter—exemplify this zeitgeist in different ways. And although both of these books are filled with valuable information and insights, they also yield (in different ways) to the temptation to see social media as the source of more harm than good. Which leaves me wanting very much both to praise what's great in these two books (which I read back-to-back) and to criticize them where I think they've gone too far over to the Dark Side.

The first book is Clint Watts's MESSING WITH THE ENEMY: SURVIVING IN A SOCIAL MEDIA WORLD OF HACKERS, TERRORISTS, RUSSIANS, AND FAKE NEWS. Watts is a West Point graduate and former FBI agent who's an expert on today's information warfare, including efforts by state actors (notably Russia) and non-state actors (notably Al Qaeda and ISIS) to exploit social media both to confound enemies and to recruit and inspire allies. I first heard of the book when I attended a conference at Stanford this spring where Watts—who has testified several times on these issues—was a presenter. His presentation was an eye-opener, erasing whatever lingering doubt I might have had about the scope and organization of those who want to use today's social media for malicious or destructive ends.

In MESSING WITH THE ENEMY Watts relates in a bracing yet matter-of-fact tone not only his substantive knowledge as a researcher and expert in social-media information warfare but also his first-person experiences in engaging with foreign terrorists active on social-media platforms and in being harassed by terrorists (mostly virtually) for challenging them in public exchanges. "The internet brought people together," Watts writes, "but today social media is tearing everyone apart." He notes the irony of social media's receiving premature and overgenerous credit for democratic movements against various dictatorships but later being exploited as platforms for anti-democratic and terrorist initiatives:

"Not long after many across the world applauded Facebook for toppling dictators during the Arab Spring revolutions of 2010 and 2011, it proved to be a propaganda platform and operational communications network for the largest terrorist mobilization in world history, bringing tens of thousands of foreign fighters under the Islamic State's banner in Syria and Iraq."

And it wasn't just non-state terrorists who learned quickly how to leverage social-media platforms; an increasingly activist and ambitious Russia, under the direction of Russian President Vladimir Putin, did so as well. Watts argues persuasively that Russia not only assisted and sponsored relatively inexpensive disinformation and propaganda campaigns using the social-media platforms to encourage divisiveness and lack of faith in government institutions (most successfully with the Brexit vote and the 2016 American elections) but also actively supported the hacking of the Democratic National Committee computer network which led to email dumps (using Wikileaks as a cutout). The security breaches, together with "computational propaganda"—social-media "bots" that mimicked real users in spreading disinformation and dissension—played an important role in the U.S. election, Watts writes, helping "the race remain close at times when Trump might have fallen completely out of the running." Even so, Watts doesn't believe Russian propaganda efforts alone would have tilted the outcome of the election—what they did instead was hobble support for Clinton so much that when FBI Director James Comey announced, one week before the election, that the Clinton email-server investigation had been reopened, the Clinton campaign couldn't recover. "Without the Comey letter," he writes, "I believe Clinton would have won the election." Later in the book he connects the dots more explicitly: "Without the Russian influence effort, I believe Trump would not have been within striking distance of Clinton on Election Day. Russian influence, the Clinton email investigation, and luck brought Trump a victory—all of these forces combined."

Where Watts's book focuses on bad actors who exploit the openness of social-media platforms for various malicious ends, Siva Vaidhyanathan's ANTISOCIAL MEDIA: HOW FACEBOOK DISCONNECTS US AND UNDERMINES DEMOCRACY argues that the platforms—and especially the Facebook platform—are inherently corrosive to democracy. (Full disclosure: I went to school with Vaidhyanathan, worked on our student newspaper with him, and consider him a friend.) Acknowledging his intellectual debt to his mentor, the late social critic Neil Postman, Vaidhyanathan blames the negative impacts of various exploitations of Facebook and other platforms on the platforms themselves. Postman was a committed technopessimist, and Vaidhyanathan takes time to chart in ANTISOCIAL MEDIA how Postman's general skepticism about new information technologies ultimately led Vaidhyanathan to temper his originally optimistic view of the internet and digital technologies generally. If you read Vaidhyanathan's work over time, you find in his writing a progressively darker view of the internet and its ongoing evolution, taking a significantly more pessimistic turn around the time of his 2011 book, THE GOOGLIZATION OF EVERYTHING (AND WHY WE SHOULD WORRY). In that earlier book, Vaidhyanathan took pains to be as fair-minded as he could in raising questions about Google and whether it can or should be trusted to play such an outsized role in our culture as the mediator of so much of our informational resources. He was skeptical (not unreasonably) about whether Google's confidence in both its own good intentions and its own expertise is sufficient reason to trust the company—not least because a powerful company can stay around as a gatekeeper for the internet long past the time its well-intentioned founders depart or retire.

With ANTISOCIAL MEDIA, Vaidhyanathan cuts Mark Zuckerberg (and his COO, Sheryl Sandberg) rather less of a break. Facebook's leadership, as I read Vaidhyanathan's take, is both more arrogant than Google's and more heedless of the consequences of its commitment to connect everyone in the world through the platform. Synthesizing a full range of recent critiques of Facebook's design as a platform, he relentlessly characterizes Facebook as driving us to shallow, reactive responses to one another rather than promoting reflective discourse that might improve or promote our shared values. Facebook, in his view, distracts us instead of inspiring us to think. It's addictive for us in something like the same way gambling or potato chips can be addictive for us. Facebook privileges the visual (photographs, images, GIFs, and the like), he insists, over the verbal and discursive.

And of course even the verbal content is either filter-bubbly—as when we convene in private Facebook groups to share, say, our unhappiness about current politics—or divisive (so that we share and intensify our outrage about other people's bad behavior, maybe including screenshots of something awful someone has said elsewhere on Facebook or on Twitter). Vaidhyanathan suggests that at one point our political discourse as ordinary citizens was more rational and reflective, but now is more emotion- and rage-driven and divisive. Me, I think the emotionalism and rage were always there.

Even when Vaidhyanathan allows that there may be something positive about one's interactions on Facebook, he can't quite keep himself from being reductive and dismissive about it:

"Nor is Facebook bad for everyone all the time. In fact, it's benefited millions individually. Facebook has also allowed people to find support and community despite being shunned by friends and family or being geographically isolated. Facebook is still our chief source of cute baby and puppy photos. Babies and puppies are among the things that make life worth living. We could all use more images of cuteness and sweetness to get us through our days. On Facebook babies and puppies run in the same column as serious personal appeals for financial help with medical care, advertisements for and against political candidates, bogus claims against science, and appeals to racism and violence."

In other words, Facebook may occasionally make us feel good for the right reasons (babies and puppies) but that's about the best most people can hope for from the platform. Vaidhyanathan has a particular antipathy towards Candy Crush, which you can connect to your Facebook account—a video game that certainly seems vacuous, but also seems innocuous to me. (I've never played it myself.)

Given his antipathy towards Facebook, you might think that Vaidhyanathan's book is just another reworking of the moral-panic tomes that we've seen a lot of in the last year or two, which decry the internet and social media much the same way previous generations of would-be social critics complained about television, or the movies, or rock music, or comic books. (Hi, Jonathan Taplin! Hi, Franklin Foer!) But that's a mistake, primarily because Vaidhyanathan digs deep into choices—some technical and some policy-driven—that Facebook has made that facilitated bad actors' using the platform maliciously and destructively. Plus, Vaidhyanathan, to his credit, gives attention to how oppressive governments have learned to use the platform to stifle dissent and mute political opposition. (Watts notes this as well.) I was particularly pleased to see his calling out how Facebook is used in India, in the Philippines, and in Cambodia—all countries where I've been privileged to work directly with pro-democracy NGOs.

What I find particularly valuable is Vaidhyanathan's exploration of Facebook's advertising policies and their effect on political ads—I learned plenty from ANTISOCIAL MEDIA about the company's "Custom Audiences from Customer Lists," including this disturbing bit:

"Facebook's Custom Audiences from Customer Lists also gives campaigns an additional power. By entering email addresses of those unlikely to support a candidate or those likely to support an opponent, a campaign can narrowly target groups as small as twenty people and dissuade them from voting at all. 'We have three major voter suppression operations under way,' a campaign official told Bloomberg News just weeks before the election. The campaign was working to convince white leftists and liberals who had supported socialist Bernie Sanders in his primary bid against Clinton, young women, and African American voters not to go to the polls on election day. The campaign carefully targeted messages on Facebook to each of these groups. Clinton's former support for international trade agreements would raise doubts among leftists. Her husband's documented affairs with other women might soften support for Clinton among young women...."

What one saw in Facebook's deployment of the Custom Audiences feature was something fundamentally new and disturbing:

"Custom Audiences is a powerful tool that was not available to President Barack Obama and Governor Mitt Romney when they ran for president in 2012. It was developed in 2014 to help Facebook reach the takeoff point in profits and revenue. Because Facebook develops advertising tools for firms that sell shoes and cosmetics and only later invites political campaigns to use them, 'they never worried about the worst-case abuse of this capability, unaccountable, unreviewable political ads,' said Professor David Carroll of the Parsons School of Design. Such ads are created on a massive scale, targeted at groups as small as twenty, and disappear, so they are never examined or debated."

Vaidhyanathan quite properly criticizes Mark Zuckerberg's late-to-the-party recognition that perhaps Facebook may be much more of a home to divisiveness and political mischief (and general unhappiness) than he previously had been willing to admit. And he's right to say that some of Zuckerberg's framing of new design directions for Facebook may be as likely to cause harm (e.g., more self-isolation in filter bubbles) as good. "The existence of hundreds of Facebook groups devoted to convincing others that the earth is flat should have raised some doubt among Facebook's leaders that empowering groups might not enhance the information ecosystem of Facebook," he writes. "Groups are as likely to divide us and make us dumber as any other aspect of Facebook."

But here I have to take issue with my friend Siva, because he overlooks or dismisses the possibility that Facebook's increasing support for "groups" of like-minded users may ultimately add up to a net social positive. For example, the #metoo groups seem to have enabled more women (and men) to come forward and talk frankly about their experiences with sexual assault and to begin to hold perpetrators of sexual assault and sexual harassment accountable. The fact that some folks also use Facebook groups for more frivolous or wrongheaded reasons (like promoting flat-earthism) strikes me as comparatively inconsequential.

Vaidhyanathan's also too quick, it seems to me, to dismiss the potential for Facebook and other platforms to facilitate political and social reform in transitional democracies and developing countries. Yes, bad governments can use social media to promote support for their regimes, and I don't think it's particularly remarkable that oppressive governments (or non-state actors like ISIS) learn to use new communications media maliciously. Governments may frequently be slow, but they're not invariably stupid—so it's no big surprise, for example, that Cambodian prime minister Hun Sen has figured out how to use his Facebook page to drum up support for his one-party rule, which has driven out opposition press and the opposition Cambodia National Rescue Party.

But Vaidhyanathan overlooks how some activists are using Facebook's private groups to organize reform or opposition activities. In researching this review, I reached out to friends and colleagues in Cambodia, the Philippines and elsewhere to ask whether the platform is useful to them—certainly they're cautious about what they say in public on Facebook, but they definitely use private groups for some organizational purposes. What makes the platform useful to activists is that it's accessible, easy to use, and amenable to posting multimedia sources (like pictures and videos of police and soldiers acting brutally towards protestors). And it's not just images—when I worked with activists in Cambodia on developing a citizen-rights framework as a response to their government's abrupt initiation of "cybercrime" legislation (really an effort to suppress dissenting speech), I suggested they work collaboratively in the MediaWiki software that Wikipedia's editors use. But the Cambodian activists quickly discovered that Facebook was an easier platform for technically less proficient users to learn quickly and use to review draft texts together. I was surprised at this, but also encouraged. Even though I had my own doubts whether Facebook was the right tool for the job, I figured they didn't need yet another American trying to tell them how to manage their own collaborations.

Like Watts's book, Vaidhyanathan's is strongest where it's built on independent research that doesn't merely echo what other critics have said. And both books are weakest when they uncritically import notions like Eli Pariser's "filter bubble" hypothesis or the social-media-makes-us-depressed hypothesis. (Both these notions are echoes of previous moral panics about previous new media, including broadcasting in the 20th century and cheap paper in the 19th. And both have been challenged by researchers.) Vaidhyanathan's so certain of the meme that Facebook's Free Basics program is an assault on network neutrality that he mostly doesn't investigate the program itself in any detail. The result is that his book (to this reader, anyway) seems to conflate Free Basics (a collection of low-bandwidth resources that Facebook provided a zero-rated platform for) with Facebook Zero (a zero-rated low-bandwidth version of Facebook by itself). In contrast, the Wikipedia articles on Free Basics and Facebook Zero lead off with warnings not to confuse the two.

In addition to the strengths and weaknesses the two books share, they also have a certain rhetorical approach in common—largely, in my view, because both authors want to push for reform, and because they want to challenge the sunny-yet-unwarranted optimism with which Zuckerberg and Sandberg and other boosters have characterized social media. In effect, both authors seem to take the approach that, as we learn to be much more critical of social-media platforms, we don't need to worry about throwing out the baby with the bathwater—because, really, there is no baby. (If we bail on Facebook altogether, it's only the frequent baby pictures that we'd lose.)

Even so, both books also share an unwillingness to call for simple opposition to Facebook and other social-media platforms merely because they're misused. Watts argues persuasively instead for more coherent and effective positive messaging about American politics and culture—of the sort that used to be the province of the United States Information Agency. (I think he'd be happy if the USIA were revived; I would be too.) He also calls for an "equivalent of Consumer Reports" to "be created for social media feeds," which also strikes me as a fine idea.

Vaidhyanathan's reform agenda is less optimistic. For one thing, he's dismissive of "media literacy" as a solution because he doubts "we could even agree on what that term means and that there would be some way to train nearly two billion people to distinguish good from bad content." He has some near-term suggestions—for example, he'd like to see an antitrust-type initiative to break up Facebook, although it's unclear to me whether multiple competing Facebooks or a disassembled Facebook would be less hospitable to the kind of shallowness and abuses he sees in the platform's current incarnation. But mostly he calls for a kind of cultural shift driven by social critics and researchers like himself:

"This will be a long process. Those concerned about the degradation of public discourse and the erosion of trust in experts and institutions will have to mount a campaign to challenge the dominant techno-fundamentalist myth. The long, slow process of changing minds, cultures, and ideologies never yields results in the short term. It sometimes yields results over decades or centuries."

I agree that it frequently takes decades or even longer to truly assess how new media affect our culture for good or for ill. But as long as we're contemplating all those years of effort, I see no reason not to put media literacy on the agenda as well. I think there's plenty of evidence that people can learn to read what they see on the internet critically and do better than simply cherry-pick sources that agree with them—a vice that, it must be said, predates social media and the internet itself. Increasing skepticism about media platforms and the information we find in them may also lead (as Watts warns us) to more distrust of "experts" and "expertise," with the result that true expertise is more likely to be unfairly and unwisely devalued. But my own view is that skepticism and critical thinking—even about experts with expertise—is generally positive. For example, it may be annoying to today's physicians that patients increasingly resort to the internet about their real or imagined health problems—but engaged patients, even if they have to be walked back from foolish ideas again and again, are probably better off than the more passive health-care consumers of previous generations.

I think Vaidhyanathan is right, ultimately, to urge that we continue to think about social media critically and skeptically, over decades—and, you know, forever. But I think Watts offers the best near-term tactical solution:

"On social media, the most effective way to challenge a troll comes from a method that's taught in intelligence analysis. To sharpen an analyst's skills and judgment, a supervisor or instructor will ask the subordinate two questions when he or she provides an assessment: 'What do those who disagree with your assessment think, and why?' The analyst must articulate a competing viewpoint. The second question is even more important: 'Under what conditions, specifically, would your assessment be wrong?' [...] When I get a troll on Facebook, I'll inquire, 'Under what circumstance would you admit you were wrong?' or 'What evidence would convince you otherwise?' If they don't answer or can't articulate their answer, then I disregard them on that topic indefinitely."

Watts's heuristic strikes me as the perfect first entry in the syllabus for media literacy in particular and for criticism of social media in general.

In sum, I think both MESSING WITH THE ENEMY and ANTISOCIAL MEDIA deserve to be on every internet-focused policymaker's must-read list this season. I also think it's best that readers honor these books by reading them with the same clear-eyed skepticism that their authors preach.

from the this-seems-problematic dept

Last month, we wrote about the crazy situation in Spain, where the government was so totally freaked out about a Catalonian referendum on independence that it shut down the operators of the .cat domain, arrested the company's head of IT for "sedition" and basically shut down a ton of websites about the referendum. The Washington Post now has an article with even more details about the digital attacks in both directions around the Catalonian independence referendum, including hack attacks and DDoS attacks. But one thing caught my eye. Apparently, the supporters of the referendum had created an app called "On Votar 1-Oct." The app had a bunch of the expected functions:

The app, available on Google Play until just before 7 p.m. on Friday, helps people to find their polling station via their address and shows the closest polling stations on Google Maps via GPS, the name of the town or keywords.

It also allows users to share links to polling station locations.

But the Spanish government was so freaked out by the referendum and anything related to it, that it ran and got a court order demanding Google take the app out of Google's app store:

The court order told Google Inc—at 1600 Amphitheatre Parkway Mountain View CA 94043 (USA)—to take down the app located at that URL and also to block or eliminate any future apps submitted by the user with e-mail address "onvotar1oct@gmail.com" or identifying as "Catalonia Voting Software".

The judge says in her ruling that the tweet with the app link is "only a continuation of the actions of the [Catalan government] to block" Constitutional Court and High Court orders "repeatedly".

In the Washington Post article, the CTO of the Catalonian government explains why this is so disappointing:

“I’m a tech guy,” says Jordi Puigneró, chief technology officer of the Catalonian government. “So I’ve always been a great fan of Google and its principles of respect for digital rights. But now I’m really disappointed with the company.” (Puigneró’s office was also occupied by police during the referendum, he says.)

And you can understand why he's disappointed. But the real problem here seems to be the same problem we keep identifying over and over again: deep centralization of the digital world. Part of the very promise of Android was that it was supposed to be open, and people weren't supposed to be locked into just Google's app store. And, indeed, there are competing app stores -- but the general argument around them (with the possible exception of Amazon's competing Android app store) is that if you want to keep your device secure, you'll only download via Google's app store.

And then we're back to a problem where there's a centralized choke point for censorship -- one which the Spanish government is able to exploit to make that app much more difficult to access. Google, for its part, said it took the app down because it had received a valid court order. And, that's true, but it's also opening up yet another path to widespread censorship. Google has stood up against similar situations in the past, but the decision of whether or not a movement should be stifled should never come down to whether or not a giant company like Google decides it's worth taking a moral stand against a legal court order. The problem is much more systemic, and it's built into this world where we've started to build back up gatekeepers.

For nearly two decades, I've argued that the real power of the internet was not -- as many people initially argued -- that it got rid of "middlemen," but rather that the middlemen turned into enablers rather than gatekeepers. In the old world, when only some content could get released/published/sold/etc., you had to rely on gatekeepers to choose which tiny percentage would get blessed. The power of internet platforms was that they became enablers, allowing anyone to use those platforms and to publish/release/sell/distribute things themselves, often to a much wider audience. But there's always a risk that over time, former enablers become gatekeepers. And it's a fear we should be very conscious of -- even if it's not done on purpose.

To be clear, I don't think Google wants to be a gatekeeper around things like apps. It would prefer not to be. But because the marketplace has become so important, and because Google's role is so central, it almost has no choice. And when governments start issuing court orders to take down apps, suddenly Google is left with few good options. Either it censors or it picks fights with a government. And even if many of us would probably support and cheer on the latter as a choice, we should be concerned that this is even an issue at all. The solution has to be less reliance on centralized platforms and centralized choke points. Catalonians shouldn't have to rely on Google to get a simple voting app out to the public. The next big breakthroughs need to be towards getting past such bottlenecks.

from the or-vice-versa dept

The economy is important — very important. But is that because it matters in and of itself, or because it's the engine for achieving the things we really do care about? Here at Techdirt we've always been strong advocates of the free market, but we've never been absolutists about things like regulation, and we believe it's very important to explore these issues in detail. This week on the podcast we're joined by James Allworth, co-host of the Exponent podcast and author of a recent post entitled Prioritizing Economics is Crippling the U.S. Economy, to discuss entrepreneurship, democracy, the economy and more.

from the take-that dept

Alfred de Zayas, who is the UN's "Independent Expert on the promotion of a democratic and international order," has put out quite a statement in support of President Obama's decision to commute Chelsea Manning's sentence. But de Zayas didn't stop there. He went on to point out that the US government and other governments have been persecuting many other whistleblowers around the world, including Ed Snowden and Julian Assange, and that should stop:

I welcome the commutation of sentence of Chelsea Manning and her forthcoming release in May. There are, however, many whistleblowers who have served the cause of human rights and who are still in prison in many countries throughout the world. It is time to recognize the contribution of whistleblowers to democracy and the rule of law and to stop persecuting them.

I call on Governments worldwide to put an end to multiple campaigns of defamation, mobbing and even prosecution of whistleblowers like Julian Assange, Edward Snowden, the Luxleakers Antoine Deltour and Raphael Halet and the tax corruption leaker Rafi Rotem, who have acted in good faith and who have given meaning to article 19 of the International Covenant on Civil and Political Rights on freedom of expression. Whistleblowers who are serving prison sentence in many countries should be pardoned.

Whistleblowers are human rights defenders whose contribution to democracy and the rule of law cannot be overestimated. They serve democracy and human rights by revealing information that all persons are entitled to receive. A culture of secrecy is frequently also a culture of impunity. Because the right to know proclaimed in article 19 of the International Covenant on Civil and Political Rights is absolutely crucial to every democracy, whistleblowers should be protected, not persecuted.

The statement goes on for a few more paragraphs and concludes:

It is time for this abnormal and inhuman situation to end.

Of course, this kind of statement will mostly be ignored by those in power -- and where not ignored, it will likely be mocked or attacked. But it is an important and useful statement. In the hype and buzz around these individuals, the underlying facts often get buried. But it remains the case that the individuals de Zayas named have focused on revealing to the public important information that powerful people have tried to keep secret, often exposing massive government overreach or outright lies.

from the that's-a-good-point dept

So lots of people have been discussing the story claiming that some e-voting experts believe the Clinton campaign should be asking for a recount in certain battleground states, where it's possible there were some e-voting irregularities. As we noted in our post, the story would barely be worth mentioning if one of the people involved wasn't Alex Halderman, a computer science professor we've been talking about for nearly a decade and a half, going back to when he was a student. Halderman is basically the expert on e-voting security -- so when he says something, it's worth paying attention.

Halderman has now posted something of a follow-up to the NY Magazine article clarifying his views and what he's suggesting. He's not saying there's evidence of a hack, but basically saying that no one knows if there was a hack or not, and because of that, there should be a recount as a way to audit the results to see if there were any irregularities.

After the election, human beings can examine the paper to make sure the results from the voting machines accurately determined who won. Just as you want the brakes in your car to keep working even if the car’s computer goes haywire, accurate vote counts must remain available even if the machines are malfunctioning or attacked. In both cases, common sense tells us we need some kind of physical backup system. I and other election security experts have been advocating for paper ballots for years, and today, about 70% of American voters live in jurisdictions that keep a paper record of every vote.

There’s just one problem, and it might come as a surprise even to many security experts: no state is planning to actually check the paper in a way that would reliably detect that the computer-based outcome was wrong. About half the states have no laws that require a manual examination of paper ballots, and most other states perform only superficial spot checks. If nobody looks at the paper, it might as well not be there. A clever attacker would exploit this.

There’s still one way that some of this year’s paper ballots could be examined. In many states, candidates can petition for a recount.

So, in effect, Halderman isn't saying that he's got evidence of e-voting fraud, but is simply arguing that if no one checks, no one will ever know. So we should check in order to be sure that there wasn't hacking. That's... pretty sensible.

Examining the physical evidence in these states — even if it finds nothing amiss — will help allay doubt and give voters justified confidence that the results are accurate. It will also set a precedent for routinely examining paper ballots, which will provide an important deterrent against cyberattacks on future elections. Recounting the ballots now can only lead to strengthened electoral integrity, but the window for candidates to act is closing fast.

Basically, the only way we can actually get an effective audit to see if there were any voting irregularities is to ask for a recount. The problem, of course, is a political one. If the Clinton campaign does call for a recount, it will immediately be seen as a political play, and lead to a ton of negative publicity. My guess is that the campaign won't want to go there. If we lived in a time where people were intellectually honest, the campaign could present it exactly the way Halderman has framed it -- not as a claim that they believe fraud happened, but rather as a way to ensure that the e-voting machines were accurate and not manipulated -- but does anyone think that the press (either those that supported or those that opposed Clinton) would treat it that way? It would become a complete mess in about two-and-a-half seconds.
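Setting the politics aside, the check Halderman describes is conceptually simple: draw a random sample of paper ballots and compare each one against what the machine recorded. A toy sketch of that comparison (with entirely invented ballot data, not any state's actual audit procedure) might look like:

```python
import random

def audit_sample(machine_records, paper_records, sample_size, seed=0):
    """Toy ballot audit: compare a random sample of paper ballots
    against the machine-recorded votes and report how many disagree."""
    rng = random.Random(seed)
    # Pick which ballots to pull and hand-examine.
    indices = rng.sample(range(len(machine_records)), sample_size)
    # A mismatch means the paper record contradicts the machine record.
    mismatches = [i for i in indices if machine_records[i] != paper_records[i]]
    return len(mismatches), sample_size

# Hypothetical data: the machine disagrees with the paper on ballot 3.
machine = ["A", "B", "A", "A", "B", "A"]
paper   = ["A", "B", "A", "B", "B", "A"]
print(audit_sample(machine, paper, sample_size=6))  # → (1, 6)
```

The hard parts in practice are everything this sketch leaves out: choosing a sample size large enough to catch a margin-shifting manipulation with high confidence, and actually having paper ballots to pull -- which is exactly Halderman's point about states that never look at the paper at all.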

And, that's unfortunate. Because as Halderman points out (and, like us, has been pointing out for over a decade), it absolutely is possible to hack most e-voting machines. Especially if the attacker is determined enough to do so:

Here’s one possible scenario. First, the attackers would probe election offices well in advance in order to find ways to break into their computers. Closer to the election, when it was clear from polling data which states would have close electoral margins, the attackers might spread malware into voting machines in some of these states, rigging the machines to shift a few percent of the vote to favor their desired candidate. This malware would likely be designed to remain inactive during pre-election tests, do its dirty business during the election, then erase itself when the polls close. A skilled attacker’s work might leave no visible signs — though the country might be surprised when results in several close states were off from pre-election polls.

Could anyone be brazen enough to try such an attack? A few years ago, I might have said that sounds like science fiction, but 2016 has seen unprecedented cyberattacks aimed at interfering with the election. This summer, attackers broke into the email system of the Democratic National Committee and, separately, into the email account of John Podesta, Hillary Clinton’s campaign chairman, and leaked private messages. Attackers infiltrated the voter registration systems of two states, Illinois and Arizona, and stole voter data. And there’s evidence that hackers attempted to breach election offices in several other states.

In all these cases, Federal agencies publicly asserted that senior officials in the Russian government commissioned these attacks. Russia has sophisticated cyber-offensive capabilities, and has shown a willingness to use them to hack elections. In 2014, during the presidential election in Ukraine, attackers linked to Russia sabotaged the country's vote-counting infrastructure and, according to published reports, Ukrainian officials succeeded only at the last minute in defusing vote-stealing malware that was primed to cause the wrong winner to be announced. Russia is not the only country with the ability to pull off such an attack on American systems — most of the world's military powers now have sophisticated cyberwarfare capabilities.

So, yes, it would be good if the votes here were reviewed, if only as an opportunity to explore the potential problems of e-voting machines, rather than as a political ploy. The only problem is that everyone would see it as a political ploy, and with political ploys come general dumpster fires of idiocy.

from the there's-real-anger-at-the-status-quo dept

Yeah, okay, I know there are a million and one "hot takes" going on across the media about what happened yesterday and "what went wrong." I already wrote about what the election means for tech policy and civil liberties, but the trite setup of the blame game is getting really stupid, really fast. I had already started writing up a response to this silly Vox article about how "Facebook is harming our democracy" before the election (the story came out over the weekend), but now that I'm seeing more and more people (especially in the media) blaming Facebook and "algorithms" for the results of the election, I'm turning it into this post: if you're blaming Facebook for the results of this election, you're an idiot.

Facebook's algorithm and whatever "echo chamber" or "filter bubble" or whatever it may have created did not lead to this result. This was the result of a very large group of people who are quite clearly -- and reasonably -- pissed off at the status quo. Politics has been a really corrupt game for basically ever, and for the past few decades, lots of people have been trying to pretend it wasn't as corrupt as it really is. The fact that Trump is likely to be as corrupt -- if not more so -- than those who came before him didn't matter. People were upset and voted against a candidate who, to them, basically defined the status quo and the problems with the system. This was a "throw the bums out" vote, and many of the bums deserved to be thrown out. That they voted in someone likely to be worse (especially given who he's surrounded himself with so far) wasn't the point. Just as with Brexit, this was a vote of "what we have now ain't working, let's try something different."

It's no surprise many people argued that Clinton was the wrong candidate to go against Trump. She absolutely was. She was the status quo candidate in a time when lots and lots of people didn't want the status quo.

But that's not Facebook's fault. And the idea that a better or different algorithm on Facebook would have made the results any different is just as ridiculous as the idea that newspaper endorsements or "fact checking" mattered one bit. People are angry because the system has failed them in many, many ways, and it's not because they're idiots who believed all the fake news Facebook pushed on them (even if some of them did believe it). Many people don't think Trump will be any good, but they voted for him anyway, because the status quo is broken.

There is a large slice of voters who told exit pollsters they thought Trump was dishonest, had a bad temperament, etc.--but voted for him.

The idea that people are just such suckers they believe whatever Facebook puts in front of them is silly. That's not how it works:

The fundamental problem here is that Facebook’s leadership is in denial about the kind of organization it has become. “We are a tech company, not a media company,” Zuckerberg has said repeatedly over the last few years. In the mind of the Facebook CEO, Facebook is just a “platform,” a neutral conduit for helping users share information with one another.

But that’s wrong. Facebook makes billions of editorial decisions every day. And often they are bad editorial decisions — steering people to sensational, one-sided, or just plain inaccurate stories. The fact that these decisions are being made by algorithms rather than human editors doesn’t make Facebook any less responsible for the harmful effect on its users and the broader society.

Yes, many people are falling for fake or bogus or sensationalized news -- and the Trump campaign expertly took a kernel of truth (that many mainstream media sources didn't want him to win) and spun it into the idea that no media story highlighting his flaws, lies or corruption (no matter how carefully and factually reported) could be believed. But people are believing those stories because they match with their real world experience of seeing how the system has worked (or not worked) for too long.

I've already expressed my concerns about what a Trump presidency will do for the issues that I spend my days focused on -- and it's not good. But as loyal readers here at Techdirt should know well, we've never been particularly supportive of the way things have been running in the government all along -- and that's through 10 years under Democratic presidencies and 8 years under GOP presidencies. The federal government has a long history of doing bad stuff: stomping on free speech and expanding surveillance (who cares about the 1st or 4th Amendments?), pushing policies that will harm innovators in favor of legacy industries (including in both the copyright and patent spaces) and generally disregarding what's best for the public. I fear that Trump will make things significantly worse, but I certainly recognize the need to change the status quo overall. And not because of Facebook's stupid algorithm.

from the election-fever dept

Well, today's the day. By tomorrow there will be a new President of the United States, and a large segment of the population claiming that they were robbed by the system. But immediate anger aside, that system is hardly above criticism: the Electoral College has had all sorts of unanticipated and often undesirable effects on democracy, and a wide variety of alternatives have been proposed. This week we discuss the question: is there a better way to pick the president?

from the buy-into-the-system,-people! dept

Trust and respect aren't things someone (or something) holds in an infinite, uninterrupted supply. They're gained and lost due to the actions of the entity holding this extremely liquid supply of trust. Oddly, some people -- like the Washington Post's Chris Cillizza -- seem to believe trust and respect should be given to certain "venerated institutions," because to do otherwise is to surrender to something approaching nihilism.

Cillizza starts out with an obvious conclusion:

The FBI has long been an iconic institution in American life. After last week's announcement by FBI Director James Comey that the investigation into Hillary Clinton's private email server continues, it's hard to see it staying that way.

The problem is more Comey's than the FBI's as a whole, but neither has done much over the years to raise its level of trust and respect. Cillizza notes that several other venerated institutions -- from the Supreme Court to the presidency to public schools -- have all seen steady declines in public trust, according to polls.

This is to be expected. Trust is easy to lose, but much harder to earn. Our government institutions have done very little to maintain the level of trust and plenty to squander it. If that had been the end of it -- an examination of continually-diminishing trust levels -- it would have been fine.

But Cillizza somehow feels that failing to hand over trust and respect these institutions haven't earned -- or haven't protected -- is damaging to the fabric of society… and democracy itself.

Nothing has cropped up to replace these fallen idols. The foundational pieces of society — the things we always knew we could rely on — are no longer foundational. But, with nothing to replace them, we are left rootless, casting about for a new set of institutions on which we can rely. That casting around causes fear and anxiety — and sometimes even anger.

None of those emotions are conducive to a functioning, healthy democracy.

This is far more conducive to a functioning democracy than Cillizza thinks. This democracy (although actually a Constitutional republic, but pedantry) rose from the ashes of venerated, foundational pieces of society. The system was burned to the ground and rebuilt to better serve the constituents, rather than those governing them. What's not conducive to a functioning, healthy democracy are government institutions continually and casually destroying the trust they once had. Venerated institutions shouldn't always remain venerated. They should be questioned aggressively and held accountable for their actions.

Seriously, there's a long list of "venerated institutions" once present in this "healthy, functioning democracy" that almost no one agrees should be granted the respect they once were. Like slavery. Or voting being limited to white males. Or aggressive land-grabs that displaced the native population when we weren't actually just straight up killing them. Or how about the draft? It was once respected as well, but back-to-back failures in wars fought more against ideals than enemies turned it into a sick joke that further proved the notion that a supposed nation of equals was just more of the same old multi-tiered favoritism.

If the FBI doesn't have our trust anymore, it's because it threw it all away. Decades of shady, if not downright abusive, behavior preceded Comey's "lone gunslinger" approach to heading the agency's unofficial political warfare operations.

The decline in veneration for other institutions roughly tracks the increase in transparency and accountability. Freedom of information laws. Citizens with cellphones. A worldwide platform for instant dissemination of information. If trust is at an all-time low, it's because more people are better informed than they ever have been in the history of this nation. And that's exactly the sort of thing that keeps a democracy functioning and healthy.

[Final note: the original version of Cillizza's post contained some rather hilarious inaccuracies about the FBI to buttress his arguments about the agency's venerability. They've since been excised, but the supposed paragons of Bureau virtue included a fictional character and a US Treasury agent who never worked for the FBI.]